Require: The mind-graph g_p with its set of weighted nodes Ñ = {ω_1, ..., ω_m} and their weighted connections C̃.
Ensure: G̃ ← ∅
 1: Compute the graph potential δ_p of the input graph g_p
 2: for r = 2 to m do
 3:   Ñ ← {ω_i}, where |Ñ| = r and i ∈ {1, ..., m}
 4:   C̃ ← {ω_ij}, where i, j ∈ {1, ..., m} (i ≠ j)
 5:   δ_r ← Compute the graph potential for all subgraphs formed with Ñ and C̃
 6:   if δ_r ≥ δ_p then
 7:     DECOMPOSITION
 8:     JOIN
 9:     Get the candidate graph g_r
10:     G̃ ← g_r
11:   end if
12: end for
13: SELECTION (of the candidate graph(s) (≤ δ_p) from G̃)
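To make the flow of the enumeration concrete, the following Python sketch illustrates steps 1 to 6 and the collection of candidate subgraphs; it is an illustration only, not the implementation used in this work. The function graph_potential is a stand-in (a plain sum of node and connection weights), since the actual definition of the potential δ is given earlier in the paper, and the DECOMPOSITION, JOIN and SELECTION operations are only marked as comments.

from itertools import combinations

def graph_potential(nodes, connections):
    # Stand-in for the graph potential delta: here simply the sum of all
    # node weights and connection weights (not the paper's definition).
    return sum(nodes.values()) + sum(connections.values())

def extract_candidates(nodes, connections):
    # nodes:       {node_label: weight}          -- the weighted nodes
    # connections: {(label_i, label_j): weight}  -- the weighted connections
    delta_p = graph_potential(nodes, connections)        # step 1
    candidates = []                                      # G-tilde, initially empty
    labels = list(nodes)
    for r in range(2, len(labels) + 1):                  # step 2: r = 2 .. m
        for subset in combinations(labels, r):           # step 3: |N-tilde| = r
            sub_nodes = {i: nodes[i] for i in subset}
            sub_conns = {(i, j): w for (i, j), w in connections.items()
                         if i in subset and j in subset} # step 4: induced connections
            delta_r = graph_potential(sub_nodes, sub_conns)  # step 5
            if delta_r >= delta_p:                       # step 6
                # steps 7-10: DECOMPOSITION and JOIN would refine the
                # candidate graph g_r before it is added to G-tilde
                candidates.append((sub_nodes, sub_conns, delta_r))
    return candidates                                    # step 13: SELECTION chooses from these

Note that enumerating all node subsets grows exponentially with m, so such a direct sketch is only practical for small mind-graphs.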
These two techniques are motivated by the need to extract the skeleton mind-graph and to manage the mind-graph complexity when connections become complex. Other techniques also exist, or can be formalized, to meet further specific needs of the mind-graphs.
4 CONCLUSIONS
In this work we used graphs (mind-graphs) to represent the coupling between knowledge in the course of a textual conversation. Similarity measures between mind-graphs have been considered for information representation. Also, the algorithms associated with mind-graph extraction and normalization have been formalized. An initial experimental framework has been established. It works with test sentences, where the extracted word cells and their associated neighbor cells (which form the mind-graphs) are explicitly defined. Currently, we are continuing the tests with a larger corpus.
ACKNOWLEDGEMENTS
The current work has been performed at the University of Luxembourg within the project EAMM. We thank all project members for their support and engagement.