We can infer, solely on the basis of WordNet [2], that T entails H, as there is a synonym pelt#v2 for pepper#v2 and bird_shot#n1 for buckshot#n1.
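As an illustration, this kind of synonym check can be sketched with NLTK's WordNet interface; the snippet below is a minimal, hypothetical example and assumes the same synonymy holds in the installed WordNet version:

```python
from nltk.corpus import wordnet as wn

def shared_synsets(word_a, word_b, pos):
    """Return the WordNet synsets to which both words belong."""
    return set(wn.synsets(word_a, pos=pos)) & set(wn.synsets(word_b, pos=pos))

# If two lemmas share a synset, one may be substituted for the other
# (in the right sense) when testing whether T entails H.
print(shared_synsets('pelt', 'pepper', wn.VERB))
print(shared_synsets('bird_shot', 'buckshot', wn.NOUN))
```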
We can also use WordNet to infer that T entails H in this case:
T: Reporters tossed accusations at Cheney.
H: Reporters threw charges at Cheney.
However, “toss” is polysemous; we must select the context-appropriate sense, toss#v1, which applies to physical objects, and extend it to apply to communicative objects such as questions, insults, and requests.
Currently, there is no verb sense of toss in WordNet that applies to communication acts. The most appropriate senses of “toss” for the above context, toss#v1 and toss#v3, both refer to tossing physical objects. Their hypernyms do not include any communicative senses of ‘throw’, such as throw#v5 or throw#v10; instead, they include only throw#v1, which clearly refers to physical objects, as is made clear by its accompanying definition and by its immediate hypernym propel#v1.
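This can be checked directly. The following minimal sketch, using NLTK's WordNet interface, prints every verb sense of “toss” together with its definition and hypernym chains (sense numbering may vary with the WordNet version installed):

```python
from nltk.corpus import wordnet as wn

# List every verb sense of "toss" with its hypernym chains, so we can
# confirm that no sense (and no ancestor) is a communication sense.
for syn in wn.synsets('toss', pos=wn.VERB):
    print(syn.name(), '-', syn.definition())
    for path in syn.hypernym_paths():
        print('   ' + ' -> '.join(s.name() for s in path))
```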
Humans easily interpret these verbs metaphorically via a conceptual metaphor that
communicative acts such as questions, requests, and statements can be viewed and
talked about as if they were physical objects. Questions can be thrown and dodged.
Accusations can be launched and torpedoed. Requests can be fired and deflected. A computational system that checks that verbs are applied according to semantic preferences will reject the second inference as a violation of semantic type constraints, expecting physical objects where communicative acts are found instead. One that enforces no constraints at all would not flag the following sentences as anomalous (we will use *T to indicate anomalous T sentences and *H to indicate anomalous entailments that humans do not make, such as those made by a system that takes a literal interpretation of sentences when a non-literal interpretation is more appropriate):
*T: Reporters tossed air at Cheney.
*H: Reporters threw a gas at Cheney.
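A minimal sketch of such a semantic type check, using the WordNet noun hierarchy via NLTK, might look as follows; the choice of physical_entity.n.01 as the required object type for literal toss/throw is our assumption, not a full selectional-preference model:

```python
from nltk.corpus import wordnet as wn

# Assumed constraint: literal toss/throw requires a physical object as theme.
PHYSICAL = wn.synset('physical_entity.n.01')

def can_be_physical_object(noun):
    """True if at least one sense of the noun is a kind of physical entity."""
    for syn in wn.synsets(noun, pos=wn.NOUN):
        for path in syn.hypernym_paths():
            if PHYSICAL in path:
                return True
    return False

# A strict checker blocks the literal reading of "tossed accusations",
# while an unconstrained one accepts anomalous literal readings as well.
print(can_be_physical_object('ball'))        # True: type constraint satisfied
print(can_be_physical_object('accusation'))  # False: type constraint violated
```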
Ideally, we would like a system that knows when conceptual metaphors sanction relaxing semantic type constraints, what parts of the source domain (in the example above, physical objects viewed as projectiles) map to what parts of a target domain (in the example above, communicative acts or ideas), and, furthermore, what entailments we can derive from the source domain that apply correctly to the target domain.
In this paper we look at a mechanism for extending RTE inferences to cover T-H pairs such as those above, and consider how world knowledge represented as scripts might also allow us to predict some metaphoric entailments, e.g., that questions may be dodged, ducked, or deflected just as thrown physical objects may be.
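For concreteness, one might picture such script-based knowledge as a small data structure pairing a source-domain script with a conceptual-metaphor mapping; the sketch below is purely illustrative, and its event names, roles, and mapping are our assumptions rather than an implemented resource:

```python
# Hypothetical "thrown projectile" script in the physical source domain.
PROJECTILE_SCRIPT = {
    'roles': {'agent': 'thrower', 'theme': 'projectile', 'goal': 'target'},
    'events': ['throw', 'fly', 'dodge', 'deflect', 'hit'],
}

# Hypothetical mapping of source-domain events to the communication domain.
IDEAS_ARE_PROJECTILES = {
    'throw':   'pose or assert (a question, accusation, request)',
    'dodge':   'avoid answering',
    'deflect': 'redirect the issue elsewhere',
    'hit':     'have the intended effect on the addressee',
}

# Each source-domain event licensed by the script yields a candidate
# metaphoric entailment in the target domain.
for event in PROJECTILE_SCRIPT['events']:
    if event in IDEAS_ARE_PROJECTILES:
        print(event, '->', IDEAS_ARE_PROJECTILES[event])
```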
1.1 Conceptual versus Linguistic Metaphor
Both Kövecses [4] and Lakoff & Johnson [5] make a clear distinction between conceptual metaphors and their linguistic expression. The former are partial mappings, typically from simpler, more concrete concepts onto more abstract concepts, e.g., LIFE IS A GAME.