========axiom 1=========
VERB:live
THEME:nsubj->INHABITANT
THEME:prep_in->LOCATION
ab(live,lost):-nsubj(lost,INHABITANT),
dobj(lost,life).
========axiom 2=========
VERB:kill
THEME:nsubj->KILLER
THEME:dobj->PREY
isGoal(PREY,not(kill)):-aspect(kill,start).
Axiom 1: If the fluent ‘live’ holds in the current situation, it will no longer hold (i.e., an abnormality arises) when the subject of ‘live’ loses its life. In the narrative, the fluent ‘live’ holds in the initial situation. Axiom 1 adds the abnormality condition that entails ‘not(live)’ when the action ‘the lion lost its life’ occurs in the narrative.
Axiom 2: If the action ‘kill’ occurs in the current situation with the aspect ‘start’, then the PREY (a theme role) acquires the goal of stopping the killing, ‘not(kill)’. The action ‘kill’ occurs in the initial situation of the story, so the axiom generates the goal ‘not(kill)’ for the ‘animals’.
As can be seen, we have used theme roles in the domain-dependent axioms (belief update rules). The system answers factual queries, which include WH-questions and decision questions, using pattern matching and synonym information. We used the predicate ‘causeOf(X,Y)’ to reason about various actions and fluents. Some example causal queries are shown below; the answers were generated in Prolog form, and different scripts are used to generate the natural language answers, depending upon the semantics of the query:
1. Why did the animals decide to approach the lion?
A: To have an agreement.
2. Why did the lion start killing animals indiscriminately in the forest?
A: Because the lion was greedy.
3. Why was the lion getting impatient?
A: Because it did not see any animal coming.
These queries are easily answered using the causal laws, sensing actions and the possible-action axioms; a minimal sketch of how such a causal query might be posed using ‘causeOf’ is given below.
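In this sketch, the causeOf/2 facts and the why/2 helper are illustrative assumptions rather than the actual knowledge base of the system:

% Illustrative causal facts extracted from the narrative.
causeOf(greedy(lion), occurs(kill(lion, animals))).
causeOf(not_seen(animal), impatient(lion)).
causeOf(goal(animals, not(kill)), occurs(approach(animals, lion))).

% A "Why did X happen?" query is answered by finding a cause of X.
why(Event, Cause) :- causeOf(Cause, Event).

% ?- why(impatient(lion), Cause).
% Cause = not_seen(animal).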
The queries answered using the causal model of plans are as follows:
4. Why did the rabbit show the reflection of the lion?
A: It wanted the lion to assume that there is another
lion.
The answer is obtained since ‘show ∈ (act_A)^H’, where A refers to the rabbit.
5. Why did the rabbit stride to the lion by sunset?
A: As evidence to show that there was another lion.
The fact that ‘the rabbit came late’ is a precondition (or constraint) for the lion to believe that there was another lion. Thus ‘stride ∈ (act_A)^E’.
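To make the plan-recognition reasoning concrete, here is a minimal sketch; the plan_action/3 and serves_goal/2 predicates are hypothetical names introduced only to mirror the (act_A)^H and (act_A)^E sets, and the facts are illustrative:

% Hypothetical and evidence actions of the rabbit's recognized plan.
plan_action(rabbit, hypothetical, show(rabbit, reflection(lion))).
plan_action(rabbit, evidence,     stride(rabbit, lion, sunset)).

% The goal that the recognized plan serves (illustrative).
serves_goal(rabbit, believes(lion, exists(another_lion))).

% An action in an agent's plan is explained by the plan's goal.
whyAction(Agent, Action, Goal) :-
    plan_action(Agent, _, Action),
    serves_goal(Agent, Goal).

% ?- whyAction(rabbit, stride(rabbit, lion, sunset), G).
% G = believes(lion, exists(another_lion)).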
While the first three questions can be answered using the causal model, questions 4 and 5 need deeper understanding, which is accomplished in our formalism using plan recognition, hypothetical actions and evidence actions. Clearly, the category of questions that can be answered is limited by the chosen model.
7 CONCLUSIONS AND FUTURE WORK
This work addresses the problem of devising a theoretical formalism to answer causal queries about a real-world narrative. The main contribution is the use of plan recognition for reasoning about cause. However, the notion of causality (Pearl, 2000) still has to be incorporated into the formalism to reason about actual cause. Future work will extend the system to answer counterfactual queries. Another important direction for future work is to translate queries into a Prolog representation to generate goals, and to use the answers given by Prolog to generate natural language answers.
Substantial additional research is needed to build a fully automated system that can answer causal queries. A probabilistic extension of the model is required to handle incomplete domain knowledge and uncertainty. The belief update model has to be built on the VerbNet (Schuler, 2005) and FrameNet (Baker and Sato, 2003) representations. Semantic entailment will need to be used, but the soundness and completeness of the representation need to be investigated.
REFERENCES

Baker, C. F. and Sato, H. (2003). The FrameNet data and software. In ACL '03: Proceedings of the 41st Annual Meeting on Association for Computational Linguistics, pages 161–164, Morristown, NJ, USA. Association for Computational Linguistics.
Baral, C. and Gelfond, M. (2005). Reasoning about intended actions. In AAAI, pages 689–694.
Baral, C., Gelfond, M., and Provetti, A. (1997). Representing actions: Laws, observations and hypothesis. Journal of Logic Programming, 31:31–1.
Baral, C., McIlraith, S., and Son, T. C. (2000). Formulating diagnostic problem solving using an action language with narratives and sensing. In KR 2000, pages 311–322.
Bethard, S. and Martin, J. H. (2008). Learning semantic links from a corpus of parallel temporal and causal