An Evolutionary View of Collective Intelligence
Bengt Carlsson¹ and Andreas Jacobsson²
¹School of Computing, Blekinge Institute of Technology, 371 79 Karlskrona, Sweden
²Faculty of Technology and Society, Malmö University, 205 06 Malmö, Sweden
Keywords: Collective Intelligence, Artificial Intelligence, Evolution, Meme.
Abstract: Based on the question “How can people and computers be connected so that – collectively – they act more
intelligently than any individuals, groups, or computers have ever done before?” we propose an
evolutionary approach. From this point of view, there are of course fundamental differences between man
and machine. Where one is artificial, the other is natural, and where the computer needs to process, the brain
must adapt. We propose the use of culturally inherited units, i.e., memes, for describing collective
knowledge storage. Like the genes, memes have the ability to be inherited to the next generation. Genes
appear independently of our society while memes are a result of our cultural development. The concept of
collective intelligence may involve a new kind of meme, entirely emerging within the intersection between
man and machine, i.e., outside the scope of human control. The challenge is to model this behavior without
overriding constraints within basic evolutionary vs. machine settings.
1 INTRODUCTION
Artificial intelligence touches upon a popular philosophical question stemming from the early days of computer science: how much of human intelligence can actually be emulated on a computer? Initially, this was a matter of humans vs. machines or, to be specific, the entire human species on one side and a single, often presumed to be gigantic, computer with ultimately superior intelligence capabilities on the other.
In some areas, e.g., board games and quiz contests, the computer has proved to be at least as clever as man. In the American TV quiz show Jeopardy!, a single supercomputer, named Watson, succeeded in winning over several human champions. The performance was impressive because the questions in part consisted of puns, irony, and other sorts of information that are difficult for a computer to interpret, calculate, and perceive. The engineers behind Watson had to use a combination of a huge database (also referred to as a knowledge base) and a rule-based machine-learning system in order to estimate the probability of providing a correct response.
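As an illustration only, and not a description of Watson’s actual architecture, the general idea of estimating the probability of a correct response can be sketched as a weighted combination of evidence scores for a candidate answer, where the weights are assumed to come from a trained model. All features, weights, and thresholds below are invented for the example.

```python
import math

# Hypothetical sketch: combine evidence scores for one candidate answer into a
# confidence estimate with a logistic function, and only "buzz in" when the
# confidence clears a threshold. Features, weights, bias, and threshold are
# illustrative assumptions, not Watson's actual model.
def confidence(evidence_scores, weights, bias=-2.0):
    z = bias + sum(w * s for w, s in zip(weights, evidence_scores))
    return 1.0 / (1.0 + math.exp(-z))

scores = [0.9, 0.4, 0.7]    # e.g., passage match, answer-type match, popularity
weights = [2.5, 1.0, 1.5]   # assumed to be learned from training data
c = confidence(scores, weights)
if c > 0.5:
    print(f"Answer with confidence {c:.2f}")
else:
    print("Stay silent")
```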
Does this mean that the intelligence of a computer, such as Watson, can be seen as equivalent to that of a human being? This question dates back to the early 1950s, when Alan Turing introduced what would later become known as the Turing Test (1956). Briefly, in the Turing Test, a human judge engages in a natural language conversation with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give the correct answer; it checks how closely the answer resembles typical human answers. So far, no machine has passed this test, apart from within (too) limited domains or applications such as, for instance, chess (see the current status at http://www.loebner.net/Prizef/loebner-prize.html).
Another related topic is the Fifth Generation Computer Systems project (FGCS), an effort spanning hundreds of millions of dollars, in which information was massively parallel-processed using logic programming languages (Fuchi, 1984). A mainframe-like environment was created where a large number of processors collaborated in order to achieve a hitherto unprecedented processing power, and where “smart” program analyses were performed. In the early 1980s, FGCS was virtually the dream of artificial intelligence. Even if this technology is outdated – at the time, the Internet had not yet had its breakthrough, and today’s powerful multi-processor machines were still distant – it must
be said that the project was far ahead of its time.
In another important artificial intelligence initiative, Cyc (Davis and Lenat, 1982), named after the word encyclopedia, an attempt was made to assemble a comprehensive ontology and knowledge base of everyday common sense knowledge. The idea was to enter as much information as possible into a computerized storage, which would establish a common vocabulary for automatic reasoning. The goal was to enable artificial intelligence applications to perform human-like reasoning, or even to make a computer smarter than a human being. Cyc has been considered a controversial endeavor and has suffered its share of criticism. Among other things, a large number of gaps in the ontology of ordinary objects, as well as an almost complete lack of relevant assertions describing such objects, have contributed to the fading interest in Cyc.
During the first decade of the new millennium, the debate over whether artificial intelligence that measures up to human intelligence can be achieved is no longer about a single, or even a few, supercomputers. It is more a question of what can be done by collective, collaborative computing efforts. Thus, can a collective intelligence arise through the interaction between men and machines, and if so, how? The question is whether the appropriate precondition for this is the Internet with all its connections, i.e., men to men, men to machines, and machines to machines.
The article is organized as follows. First, we discuss the evolution of user-driven collaboration on the Web with respect to a common platform for artificial intelligence. Next, we compare computer intelligence to the human brain. Collective intelligence with respect to men and machines is then discussed. Finally, the concept of memes is debated, and the paper is concluded with some observations and points for further discussion.
2 COMPUTERS WITH
COLLECTIVE INTELLIGENCE
With the introduction of the commercial Internet, i.e., the World Wide Web, or simply the Web, in the mid-1990s, companies realized that the content in this environment could actually be developed by the users, i.e., the customers, themselves. Customers shared reviews of items that they had purchased, software manufacturers used customers as product support in the development phase, and cooperating users built an entire encyclopedia of knowledge. Google became one of the world’s most successful companies by utilizing Web search content provided by the users, and Facebook conquered the social side of the Web by providing means to link people, and their personal information, together.
In the essay “We Are the Web” (2005), Kelly described this development. The massive input of information provided by the users into the World Wide Web was referred to as “The Machine”, i.e., a large artificial brain with a capacity comparable to a human brain. The Web, like the brain, has hundreds of billions of neurons (or Web pages), joined by multiple synapses (or hyperlinks), and is in turn made up of billions of transistors available in our regular computers.
Together, said Kelly, this structure, connected to sensors in virtually all electronic equipment, will have sufficient complexity to start learning things on its own. Smart algorithms in combination with a global database will be able to register (in theory) almost unlimited amounts of information that can be processed in the universal cloud of computers. Every time a user clicks on a link, a node becomes a little bit better. As Kelly concluded (2005):
“We will live inside the Machine and, by that, head towards superior intelligence.”
Gelernter (1993) described a Mirror World where people would interact and transact with digital representations of the real world, something like:
“A true-to-life mirror image trapped inside a
computer. […] The whole point of a mirror
world is that it is wired in real time and place – it
is supposed to mirror reality rather than being a
parallel reality or cyber world.”
Put another way, reality is mirrored in the eyes of the user, e.g., composed of the billions and billions of “hits” that pass through, for instance, Google’s search engine. This engine, in turn, can be described as an instance of evolutionary development where capabilities are gradually, almost imperceptibly, improved: our spelling mistakes are corrected, the engine determines whether personal names or places are used, suggests translations, etc. As such, it determines the connection between multiple keywords and combines different media and languages.
Among other things, Google improves its search engine by analyzing short clicks, i.e., those of users who did not find what they were looking for immediately. Google also tries to find patterns in the massive amounts of data that the users feed to the search engine. This is achieved by using machine-learning techniques, training algorithms, and ideas
ICAART2013-InternationalConferenceonAgentsandArtificialIntelligence
572
from control and game theory. Large amounts of text string examples are analyzed, and in this case, size matters: according to Google, a doubling of the sample size means an improvement of 0.5 percent (Levy, 2011).
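As a minimal illustration of the short-click idea, the sketch below separates “short clicks” from “long clicks” by dwell time. The 30-second threshold and the log format are assumptions made for the example and do not reflect Google’s actual pipeline.

```python
# Hypothetical click log; dwell_seconds is how long the user stayed on the
# clicked result before returning to the result list.
click_log = [
    {"query": "jaguar", "result_rank": 3, "dwell_seconds": 4},
    {"query": "jaguar", "result_rank": 1, "dwell_seconds": 95},
    {"query": "jaguar speed", "result_rank": 1, "dwell_seconds": 120},
]

SHORT_CLICK_THRESHOLD = 30  # assumed cut-off in seconds

short_clicks = [c for c in click_log if c["dwell_seconds"] < SHORT_CLICK_THRESHOLD]
print(f"{len(short_clicks)} of {len(click_log)} clicks were short clicks")
```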
Half a percent may not sound like much, but this small portion indicates something much larger than finding a more efficient search algorithm. If it is possible to capture not only the syntax but also the semantics, this half percent may represent a step towards what Tim Berners-Lee (2000) described as a Web of data that can be processed directly and indirectly by machines, which is an important step towards artificial intelligence. Ultimately, this can enable us to reach beyond human intelligence, i.e., computers as the cleverest “beings” on earth. But is this realistic? What about the constraints that are built into the silicon cover, ultimately depending on the binaries of a simple 0 and 1, conveyed through a programming language? It must be pointed out that this question is of course only valid presuming that all computers are digital. Much effort is spent on wetware computers, quantum computers, and chemical computers, but that is a discussion outside the scope of this paper. Before we can take on such a challenge, we must first investigate how a computer’s memory and processing differ from those of a human brain.
3 COMPUTER INTELLIGENCE
VS. THE HUMAN BRAIN
Obviously, the brain is not a search engine; there are
significant differences in how information is both
stored and processed in a brain compared to a
computerized setting. While a huge storage capacity,
where data is never forgotten, is a benefit for the
computer, the opposite can be said for the brain. The
information processing within a computer changes the flow of binaries, whereas the brain alters its anatomy, resulting in a whole chain of processes that needs to be activated.
To clarify this, four different topics are introduced in this section: interaction, memories, processing, and environment.
3.1 Interaction
When submitting a search query to Google’s search engine, we often do not know precisely what we are looking for, but we can extract an answer by refining the search (based on the information in the first search results). We ask the question, and the program, in this case Google, responds. Computers have become better and better at providing users with answers, but only to questions that a program is able to return an answer to. Humans, on the other hand, rely on sense rather than on calculation, i.e., we give answers without always realizing the underlying problem. Imagine now that the process is reversed: the computer asks a human to give the answer to a question set by a program.
Google’s search engine, with its modified PageRank algorithm, is a system that investigates and saves the judgments a person makes when he/she links to or looks at a specific page (Levy, 2011). What the user clicks on depends on how well the search phrase that the user fed into the system matches the information that the user is searching for, and that Google suggests. The system learns both from experienced users (who are skilled in formulating search phrases and identifying matches) and from inexperienced users (who are less capable of, or unused to, formulating search phrases and locating matches). So, a person has indirectly answered the question regarding the importance of a specific Web site (in relation to other Web sites), and thus a small part of “human intelligence” is integrated into the artificial computer brain of Google’s search engine. The rest is actually rather trivial; all searches are combined, and the result gets increasingly adequate.
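For readers unfamiliar with PageRank, the sketch below shows the classic power-iteration version of the idea on a toy link graph: a page is important if important pages link to it. It illustrates the underlying principle only, not Google’s modified, click-aware ranking described above; the damping factor and the toy graph are assumptions.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: spread its rank evenly over all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Toy Web of four pages; C is linked to by everyone and ends up ranked highest.
toy_web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print(pagerank(toy_web))
```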
The vision of the intelligent interconnected Web is based on what we as humans do inside the World Wide Computer through both professional and personal relationships. We are nodes on the Web, where our intelligence becomes a part of the global computer with its ubiquitous “intelligence” embedded in software code, databases, and microchips. The intelligence is in the eye of the beholder; it is actually about how the interaction with the Internet-connected computer will affect our lives.
3.2 Memories
Digitally stored information, or let us call it memory, is becoming easier to access. Google’s ambition is, by using smart algorithms and massive server power, to process all stored information in order to complement and surpass human capacity. By freeing up unnecessary memory storage in our brains, humans may use them for better and more creative purposes. Instead of recalling
AnEvolutionaryViewofCollectiveIntelligence
573
information, memory can be stored digitally and recreated when needed. In the end, according to these visionaries, our memory capacity increases, paving the way for a more valuable, more human, processing of information. In this context, the crucial question to ask is how human memory differs from the computer’s information storage.
A full backup, i.e., making a complete copy of data, is a common activity between computers. This is the simplistic equivalent of a transplant of human memory to a computer: transfer all the original contents of one computer to an exact replica. However, other things are also included in the transmission. An initially systematic storage becomes increasingly fragmented, and hidden among the accessible information is undefined information, e.g., deleted and cached files. This is an equivalent of “age ailments” among humans, demonstrated by computer performance degradation, memory insufficiencies, hard drive crashes, etc.
To put it mildly, it is not a simple task to transfer information between humans and computers; we use our memory in the interaction with the environment, where there is no storage of “dead information”. This is a major difference in the storage structure between a computer and a human, i.e., where the computer needs to process, the brain must adapt. Human memories are adapted through evolutionary development to be a resource for making the right decisions in dynamic and changing surroundings.
Man’s combination of being able to understand a complex world and an imperfect memory is probably an excellent adaptation to our environment. By way of contrast, imagine that we remembered everything we have done in the past. If our brain were more like a computer, everything would be recorded: visual inputs, sounds, and smells. An ordinary walk would result in gigabytes of stored information; fixation points for the eye, a shoe touching the ground, and all meta-information in the form of the thoughts we had during the walk. Instead, we need to pick out important information, evaluate and restructure the old information into something new, and sort out the non-essentials. Our intelligence is based more on being able to dismiss information than on storing only the useful information.
So, despite the computer being more powerful when it comes to making decisions, it has a weakness in the way memories and information are stored and processed. This is also supported by what we know about the brain’s memory storage (Kandel, 2006).
3.3 Processing
As new computer materials, processors, algorithms, etc., are introduced, the computer is increasingly often compared to a human brain. A deficiency in this analogy is that all the processed data are located in the main memory or storage memory of the computer, and thus look the same the next time they are used. This is hardly the case with human memory. Re-evaluating and restructuring old information in humanlike ways are tasks that are not plausible for a computer. So how does the brain’s own process work?
Our brain consists of a short-term memory,
which holds a continuous throughput of information,
and a long-term memory that holds the capacity to
maintain information for a lifetime. These two
memory capacities represent two diverse biological
processes. Short-term memory strengthens or
weakens existing connections in the brain, through
synapses, and between brain cells. Long-term
memory alters the anatomy of the brain; new
synapses are formed, which require the production
of proteins that in turn need to activate dormant
genes (LeDoux, 2002). A whole chain of processes
has to be activated in the formation of new synapses.
This is a time-consuming process, far removed from the distributed storage model of a computer or a cloud setting.
Instead of just storing bits and bytes, human
brains have a continuous, undetermined, organic
growth. The brain continues to process information long after it has been received, and the quality of a memory depends on this outcome. The human brain thus holds the capacity of a vivid memory instead of the “dead” artificial computer equivalent. Unlike in a computer, when a long-term memory is returned to working memory it looks different from the initially stored data. A new context is thus formed in a constant process of renewal (Carr, 2010).
3.4 Environment
If we see the human brain as being involved in an evolutionary process, this means that we store things we find useful, and thus reduce the amount of information that we perceive as useless. All new branches and rearrangements of memory routes in the brain are developed to make us better prepared both to meet external dangers and to take advantage of opportunities. Instead of processing stored data from other stored data, the brain adapts to the surrounding environment and reacts accordingly. Despite the otherwise enormous capacities of computer storage,
ICAART2013-InternationalConferenceonAgentsandArtificialIntelligence
574
in this area it cannot compete with human
capabilities. Why? The computer does not evolve
according to evolutionary principles.
This inconsistency between the human brain and its computerized counterpart is present during human-computer interaction. It takes hours to transmit information to the long-term memory in the brain, which also may require repetition, i.e., learning time to transfer knowledge into long-term memory. In addition, the brain is less available for other tasks while long-term memory elastically expands and contracts, i.e., due to adjustments in the number of synapses.
Humans do not limit, but rather reinforce, their mental strength when new information is stored in long-term memory (whereas computationally stored information is more limited). If we let a computer store our memories and provide us with a stream of competing messages, we get an overload of working memory, i.e., we get quantity instead of quality. This means that our frontal lobe cannot focus on any particular task. In turn, this means that the hippocampus, a part of the limbic system in the brain responsible for the formation of new memories, is unable to consolidate information and therefore cannot transfer external stimuli into long-term memory. Using the computer approach, our minds are trained to be confused; information is processed quickly and efficiently, but without sustained attention. Using the evolutionary approach, on the other hand, the brain becomes skilled at forgetting but unapt at remembering, i.e., it gets more room to think instead of relying too much on computers’ artificial memory.
4 COLLECTIVE INTELLIGENCE
The Center for Collective Intelligence at MIT
(http://cci.mit.edu/) asks the question: “How can
people and computers be connected so that –
collectively – they act more intelligently than any
individuals, groups, or computers have ever done
before?” This is an ambitious project that introduces
the need for new programming metaphors, e.g.,
creating social operating systems, defining new
programming languages, and promulgating new
software engineering skills (Bernstein et al., 2012). This may, however, be too optimistic a project, since it focuses on reciprocation between man and computer on an equal footing, or on an offering beyond human collective intelligence.
4.1 Computers
Computers are useful when it comes to supplying stored information, and humans are good at processing intellectual impressions and communicating different emotions. A computer-centered vision of future opportunities suffers from two serious flaws: overconfidence and an inadequate approach to technology. Nothing indicates that the computer, or the data cloud it is connected to, will in the future store and process memories in a more biological way, i.e., humans will continue to be responsible for the more intuitive intelligent choices. As a tool, or an intellectual sidekick, the computer is of enormous importance, but we should not overestimate what it actually does. If we limit ourselves to what the computer can do, we restrict ourselves. It is this distinction between an extremely effective tool and the way we act on our basic biological conditions that is a truly exciting challenge of the digital revolution.
4.2 People
Human progress and technological innovation can fundamentally change our lives. In evolutionary terms, this course of events takes much longer than the Internet has been around. The mental change, our way of living and using computers, can develop much faster. The critical issue is not to keep all the elements we now associate with our cognitive abilities to ourselves. Intelligence and technological progress are instead based on humans being extremely adaptable to new environments, i.e., using all available tools, including computerized artifacts, for collective intelligence.
4.3 Collective Intelligence
Basically, we contrast, rather than compare, human and computer skills, as they are inherently different from one another. We need to combine humans and computers in a more indirect way; we call this Cause and Effect.
The cause is humans living, i.e., reacting, in a natural environment with a vivid memory adapting to evolutionary principles. The effect is (solitary) computers calculating possible options and simulating preferred outcomes. The synthesis may be called collective intelligence, connected to individuals, groups, and interlinked computers.
As seen in Section 3, there are three main skills separating humans from computers, namely memories, processing, and environment. Together
AnEvolutionaryViewofCollectiveIntelligence
575
they form a dividing line that a computer cannot cross single-handedly. The remaining skill, interaction, is the area where progress is being made; so how do we evaluate this development compared to other major transitions in human evolution?
5 DISCUSSION
In their book “The Major Transitions in Evolution” (1995), Maynard Smith and Szathmáry mention the origin of language as the last transition that had a genetic basis. The invention of writing and of electronically storing or processing data are major transitions without genetics involved, i.e., they may exhibit a much faster growth.
Knowledge storage includes our ability to both
store and process knowledge. While this is
absolutely necessary for our cultural development, it
is also an area where our evolutionary background is
an important feature. A meme is a culturally inherited unit, comparable to a gene, which has its own survival and reproduction in a cultural environment (Dawkins, 1982). Like genes, memes can be passed on to the next generation. Genes appear independently of our society, while memes are a result of our cultural development.
A connection to memes is the large-scale collection of keywords that Google uses. Selecting the spelling of a word that occurs most frequently usually yields the correct word, i.e., the correct spelling spreads at the expense of misspelled variants. New ideas, fashion trends, and so on can be described as more complex memes that can increase or decrease within the Google meme pool.
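A minimal sketch of the frequency-based idea, assuming a small, invented sample of observed spellings, might look as follows; the grouping of variants is taken as given rather than computed.

```python
from collections import Counter

# Invented sample of observed spellings of the same word.
observed = ["definitely", "definately", "definitely", "definitly", "definitely"]

counts = Counter(observed)
correct, frequency = counts.most_common(1)[0]
print(correct, frequency)  # -> definitely 3: the most frequent variant "wins"
```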
Google’s vision is to make the search engine into a system that is as smart as, or smarter than, man. But the intelligence is in the eye of the beholder; it is the person behind the keyboard who makes the informed decisions. Unlike the computer, humans live in the real world, where decisions are assessed directly and not through a meta-level of externally achieved decisions.
Having said this, the available meme pool may virtually provide information on the real world, both locally and on a global scale. We may even track how successful new memes arise, how less successful competitors go extinct, and how stable a meme is over a longer period. This will be an indicator of how robust the society is, or an indicator of changes in progress.
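The kind of tracking sketched here could, in a very simplified form, amount to counting how often each meme (here, a keyword) appears in successive time windows of a query stream; the data and the weekly windows below are invented for illustration.

```python
from collections import Counter

# Invented weekly samples of queries mentioning two competing "memes".
weekly_queries = [
    ["planking", "planking", "selfie"],
    ["selfie", "planking", "selfie"],
    ["selfie", "selfie", "selfie"],
]

for week, queries in enumerate(weekly_queries):
    counts = Counter(queries)
    print(f"week {week}: {dict(counts)}")
# A meme whose count drops to zero over the windows can be read as extinct,
# while one whose share stays roughly constant can be read as stable.
```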
Conceptually, collective intelligence is judged by humans, who originally serve the meme pool with new concepts, by computers, which process the information, and by the overall Web, which stores, develops, and merges memes. In a feedback loop, memes are stored not only within humans, but also within computers and (indirectly) the Web.
So, nothing in the discussion above contradicts
the concept of collective intelligence. We may
speculate on a new kind of meme emerging entirely within the intersection between man and machine, i.e., outside the scope of individual control. This
may lead to more independent decision-making, i.e.,
computers may act more intelligently with respect to
humans, but it is not the same as replacing human
reasoning with a self-replicating artificial entity.
6 CONCLUSIONS
We propose the use of an evolutionary setting when
analyzing the question of how people and computers
can be connected so that – collectively – they act
more intelligently than any individuals, groups, or
computers have ever done before. Basically, from an evolutionary point of view, the computer processes whereas the brain adapts, i.e., in this respect there is a fundamental difference between man and machine. Knowledge storage is represented by memes, culturally inherited units that have a much faster growth than genetically inherited units. The concept of collective intelligence may involve a new kind of meme emerging entirely within the intersection between man and machine, i.e., outside the scope of human control. This development needs to progress within the evolution vs. machine constraints, i.e., human reasoning is not equaled by a self-replicating artificial entity.
So, collective intelligence is judged by humans and processed by computers, while the overall Web stores, develops, and merges memes.
The question of acting more intelligently than any
individuals, groups, or computers have ever done
before may therefore end up being an issue of how
robust the society is, or an issue of changes in
progress.
REFERENCES
Berners-Lee, T., 2000. Weaving the Web: The Original
Design and Ultimate Destiny of the World Wide Web,
HarperBusiness.
Bernstein, A., Klein, M., and Malone, T. W., 2012. Programming the global brain, Communications of the ACM, vol. 55, no. 5.
Carr, N., 2010. The Shallows – How the Internet is
Changing the Way We Think, Read and Remember,
Atlantic Books.
Dawkins, R., 1982. The Extended Phenotype, W. H. Freeman and Company.
Fuchi, K., 1984. Revisiting Original Philosophy of Fifth
Generation Computer Systems Project, In Proceedings
of FGCS.
Gelernter, D., 1993. Mirror Worlds: or the Day Software
Puts the Universe in a Shoebox...How It Will Happen
and What It Will Mean, Oxford University Press.
Kelly, K., 2005. We are the Web, Wired, Issue Aug. 13.
Kandel, E. R., 2006. In Search of Memory: The Emergence of a New Science of Mind, Norton.
LeDoux, J., 2002. Synaptic Self: How Our Brains Become
Who We Are, Penguin.
Levy, S., 2011. In the Plex – How Google Thinks, Works,
and Shapes Our Lives, Simon & Schuster.
Maynard Smith, J., and Szathmáry, E., 1995. The Major Transitions in Evolution, W. H. Freeman and Company.
Turing, A., 2009 (1956). Computing Machinery and Intelligence, in Epstein, R., Roberts, G., and Beber, G. (eds.), Parsing the Turing Test – Philosophical and Methodological Issues in the Quest for the Thinking Computer, Springer Verlag.
Davis, R., and Lenat, D. B., 1982. Knowledge-Based Systems in Artificial Intelligence, McGraw-Hill.
AnEvolutionaryViewofCollectiveIntelligence
577