keywords it contains, and most search engines employ this kind of strategy when ranking their results: they usually look for an exact occurrence of the query in a document before looking for occurrences of the individual query keywords.
5 CONCLUSION AND FUTURE WORK
In this paper, we presented a novel result merging strategy that combines two techniques: computing a similarity score for each retrieved result using its title, description, and local rank instead of the full document; and including users' satisfaction with the underlying search engines in the computation of the final ranking scores.
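The combination described above can be sketched as follows. This is only an illustrative reconstruction, not the paper's actual formula: the similarity measure (here, simple keyword overlap over title and description), the way local rank is combined with it, and the satisfaction weights are all assumptions made for the example.

```python
# Illustrative sketch of the described merging strategy. The exact score
# function and weights used in the paper are NOT reproduced here; all
# function names and the combination formula are hypothetical.

def keyword_overlap(query, text):
    """Fraction of query keywords found in the text (stand-in similarity)."""
    q = set(query.lower().split())
    t = set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def merge_results(query, results_by_engine, satisfaction):
    """Merge per-engine result lists into one ranked list.

    results_by_engine: {engine: [(title, description), ...]} in local rank order.
    satisfaction: {engine: weight} reflecting users' satisfaction per engine.
    """
    merged = []
    for engine, results in results_by_engine.items():
        for local_rank, (title, desc) in enumerate(results, start=1):
            sim = keyword_overlap(query, title + " " + desc)
            # Combine similarity, local rank, and the engine's satisfaction
            # weight into one score (an assumed combination for illustration).
            score = satisfaction[engine] * sim / local_rank
            merged.append((score, engine, title))
    merged.sort(key=lambda r: r[0], reverse=True)
    return merged
```

In this sketch, a result from a highly trusted engine with a strong title/description match and a good local rank rises to the top of the merged list, which mirrors the behavior reported in the experiments below.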
The experimental results show that the system produces a well-merged list in which the participation of the three search engines used reflects the users' satisfaction weights introduced into the score function.
We also observed during the experiments that the top 3 results of each search engine are always ranked in the top 20 of the merged list.
Although the preliminary results we obtained are satisfying, this work is a first proposal for querying multiple search engines and requires improvements and further experimentation. As future work, we plan to integrate into the score computation more information about the user's information needs, so that the ranking better matches those needs. This information can be taken, for example, from a user profile or from the user's interests.
We also plan to test our metasearch engine with a user community to obtain more exhaustive results.
This metasearch engine will be part of a personalized information retrieval system whose main goal is to retrieve the documents most relevant to the user's information needs. The system will first build a user profile and then use it to reformulate the user's queries in order to obtain results that best match those needs. By using the metasearch engine, the system will be able to cover a larger proportion of documents on the Web and thus return more relevant documents to the user.