be said that the project was far ahead of its time.
In another important artificial intelligence initiative, Cyc (Randall and Lenat, 1982), named after the word encyclopedia, an attempt was made to assemble a comprehensive ontology and knowledge base of everyday common-sense knowledge. The idea was to enter as much information as possible into a computerized store, which would establish a common vocabulary for automatic reasoning. The goal was to enable artificial intelligence applications to perform human-like reasoning, or even to make a computer smarter than a human being. Cyc has been considered a controversial endeavor and has suffered its share of criticism. Among other things, a large number of gaps in the ontology of ordinary objects, as well as an almost complete lack of relevant assertions describing such objects, have contributed to the fading interest in Cyc.
During the first decade of the new millennium, the debate over whether artificial intelligence measuring up to human intelligence can be achieved has no longer been about a single, or even a few, supercomputers. It is rather a question of what can be done by collective, collaborative computing efforts. Can, and if so how, a collective intelligence arise through the interaction between men and machines? The question is whether the appropriate precondition for this is the Internet with all its connections, i.e., men to men, men to machines, and machines to machines.
The article is organized as follows. First we
discuss the evolution of user-driven collaboration on
the Web with respect to a common platform for
artificial intelligence. Next, we compare computer
intelligence to the human brain. Collective
intelligence with respect to men and machines is
then discussed. Finally, the concept of memes is
debated, and the paper is concluded with some
observations and points for further discussion.
2 COMPUTERS WITH COLLECTIVE INTELLIGENCE
With the introduction of the commercial Internet, i.e., the World Wide Web, or simply the Web, in the mid-1990s, companies realized that the content in this environment could actually be developed by the users, i.e., the customers, themselves. Customers shared reviews of items that they had purchased, software manufacturers used customers as product support in the development phase, and cooperating users built an entire encyclopedia of knowledge. Google became one of the world’s most successful companies by utilizing Web search content provided by the users, and Facebook conquered the social side of the Web by providing means to link people, and their personal information, together.
In “We are the Web” (2005), Kelly described this development. The massive input of information provided by the users into the World Wide Web was referred to as “The Machine”, i.e., a large artificial brain with a capacity comparable to that of a human brain. The Web, like the brain, has hundreds of billions of neurons (Web pages) joined by multiple synapses (hyperlinks), in turn running on the billions of transistors available in our regular computers.
Together, said Kelly, this structure, connected to sensors in virtually all electronic equipment, will have sufficient complexity to start learning things on its own. Smart algorithms in combination with a global database will be able to register (in theory) almost unlimited amounts of information, which can be processed in the universal cloud of computers. Every time a user clicks on a link, a node becomes a little bit better. As Kelly (2005) concluded:
“We will live inside the Machine and, by that,
head towards superior intelligence.”
Gelernter (1993) envisioned a Mirror World in which people would interact and transact with digital representations of the real world, described as:
“A true-to-life mirror image trapped inside a
computer. […] The whole point of a mirror
world is that it is wired in real time and place – it
is supposed to mirror reality rather than being a
parallel reality or cyber world.”
Put another way, reality is mirrored in the eyes of the user, composed, for example, of the billions and billions of “hits” that pass through Google’s search engine. This engine, in turn, can be described as an instance of evolutionary development in which capabilities gradually, almost imperceptibly, improve: our spelling mistakes are corrected, the engine determines whether personal names or places are used, it suggests translations, etc. As such, it determines the connections between multiple keywords and combines different media and languages.
Among other things, Google improves its search engine by analyzing short clicks, i.e., those of users who did not immediately find what they were looking for. Google also tries to find patterns in the massive amounts of data that the users feed into the search engine. This is achieved by using machine-learning techniques, training algorithms, and ideas
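To make the notion of short-click analysis concrete, the following is a minimal, hypothetical sketch (in Python) of how such a signal might be aggregated. The click log, the 10-second dwell-time threshold, and the URLs are all invented for illustration and are not a description of Google’s actual methods.

from collections import defaultdict

# Toy click log of (query, result_url, dwell_time_seconds); all values are invented.
click_log = [
    ("jaguar", "example.com/cars", 4),      # short click: user bounced back quickly
    ("jaguar", "example.com/cars", 120),
    ("jaguar", "example.com/animals", 95),
    ("jaguar", "example.com/animals", 3),   # short click
    ("jaguar", "example.com/animals", 200),
]

SHORT_CLICK_THRESHOLD = 10  # seconds; an assumed cut-off for "did not find it"

clicks = defaultdict(int)
short_clicks = defaultdict(int)

for query, url, dwell in click_log:
    clicks[(query, url)] += 1
    if dwell < SHORT_CLICK_THRESHOLD:
        short_clicks[(query, url)] += 1

# A high short-click rate is one (noisy) signal that a result may not satisfy the
# query; real ranking systems would combine many such signals in learned models.
for (query, url), total in clicks.items():
    rate = short_clicks[(query, url)] / total
    print(f"{query!r} -> {url}: short-click rate = {rate:.2f}")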