These tasks, commonly carried out by a knowledge engineer, are often referred to as the bottleneck in expert system development (Feigenbaum & McCorduck, 1983).
The first and main obstacle is the knowledge engineering paradox (Liebowitz, 1993). The knowledge and skills that constitute expertise in a particular domain are tacit. Furthermore, the more competent experts become, the less able they are to describe how they solve problems. Another contribution to the bottleneck is the lack of willingness to share knowledge. It is often said that knowledge is power, and people can be reluctant to give up what makes them indispensable (Verkasalo, 1995). Finally, knowledge availability constitutes another obstacle, as experts are not always known and have little time to spare. Additionally, today's global working conditions make it hard to reach experts located at the other end of the world, or across the street at the subcontractor's plant. Distributed decision making therefore becomes a major issue (Verkasalo, 1995).
Currently, face-to-face discussions are still the most widely used way of transferring knowledge, as they can make tacit knowledge more explicit by allowing the expert to provide a context for his actions. But expert interviews and other
manual techniques are not always possible and
depend very much on the knowledge engineer’s own
understanding of the domain. The challenge in a
global company is therefore to develop tools and
methods that enable experts to be their own
knowledge engineers. Three topics are commented
upon here: knowledge representation, automatic
knowledge extraction and the user interface.
2.1 Knowledge Representation
Experts' reasoning is often incomplete and not
suitable for machine processing. Creating the proper
ontology is therefore an essential aspect of sharing
and manipulating knowledge. Based on the notion
that different problems can require similar tasks, a
number of generic knowledge representations have
been constructed, each having application across a
number of domains (Holsapple et al., 1989).
Common classes of knowledge representations are
logic, semantic networks, and production rules.
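As an illustration of the last of these classes, the Python sketch below shows how a few defect-related heuristics might be encoded as production rules and fired by simple forward chaining; the attribute names, thresholds and conclusions are invented for the example and are not taken from any of the cited systems.

    # Illustrative production rules for hypothetical assembly-defect data;
    # attributes, thresholds and conclusions are invented for this sketch.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Rule:
        name: str
        condition: Callable[[Dict[str, float]], bool]   # IF part
        conclusion: str                                  # THEN part

    RULES: List[Rule] = [
        Rule("low_paste_volume",
             lambda d: d["paste_volume"] < 0.7,
             "suspect insufficient solder paste deposition"),
        Rule("large_offset",
             lambda d: d["placement_offset"] > 0.2,
             "suspect component placement misalignment"),
    ]

    def forward_chain(observation: Dict[str, float]) -> List[str]:
        """Fire every rule whose condition holds for the observed data."""
        return [r.conclusion for r in RULES if r.condition(observation)]

    print(forward_chain({"paste_volume": 0.5, "placement_offset": 0.05}))
    # -> ['suspect insufficient solder paste deposition']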
Computer programs can use forms of concept
learning to extract from examples structural
descriptions that can support different kinds of
reasoning (MacDonald & Witten, 1989). More
generally, automatic elicitation of knowledge, where possible, offers great advantages in terms of knowledge base generation.
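A minimal sketch of such concept learning from examples is given below, assuming scikit-learn is available; the features, defect classes and example values are invented, and the induced tree is only meant to show how a structural description readable as rules can be obtained automatically.

    # Inducing a structural description (a decision tree, readable as rules)
    # from labelled examples; data and feature names are invented.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Each example: [paste_volume, placement_offset]; label: defect class.
    X = [[0.50, 0.05], [0.60, 0.02], [0.90, 0.30], [1.00, 0.25], [0.95, 0.03]]
    y = ["open_joint", "open_joint", "bridging", "bridging", "good"]

    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=["paste_volume", "placement_offset"]))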
2.2 Automatic Knowledge Extraction
Automatic knowledge extraction methods make it
possible to build a knowledge base with no need for
a knowledge engineer and only very little need for
an expert, for example by using case-based
reasoning. This poses, however, a knowledge acquisition dilemma: if the system is ignorant, it cannot ask good questions; if it is knowledgeable enough, it does not need to ask them. Scalable acquisition techniques such as interview metasystems (Kawaguchi et al., 1991) or interviewing techniques using graphical data entry (Gaines, 1993) can help overcome this difficulty.
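As an illustration of the case-based idea mentioned above, the sketch below retrieves the remedy attached to the most similar stored defect case; the case attributes, stored cases and remedies are invented, and a plain Euclidean distance stands in for whatever similarity measure a real system would use.

    # Nearest-neighbour retrieval over a (tiny, invented) base of past cases.
    from math import sqrt
    from typing import Dict, List, Tuple

    Case = Dict[str, float]   # numeric defect features

    CASE_BASE: List[Tuple[Case, str]] = [
        ({"paste_volume": 0.5, "placement_offset": 0.05}, "reprint stencil"),
        ({"paste_volume": 0.9, "placement_offset": 0.30}, "recalibrate placement machine"),
    ]

    def distance(a: Case, b: Case) -> float:
        """Euclidean distance over the features of the new case."""
        return sqrt(sum((a[k] - b[k]) ** 2 for k in a))

    def retrieve(new_case: Case) -> str:
        """Return the remedy stored with the most similar past case."""
        _, remedy = min(CASE_BASE, key=lambda entry: distance(new_case, entry[0]))
        return remedy

    print(retrieve({"paste_volume": 0.55, "placement_offset": 0.10}))
    # -> 'reprint stencil'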
Because the domain knowledge is often very
specific, knowledge acquisition is a labor-intensive
task. For that reason, generic acquisition shells have
been developed (Chien & Ho, 1992) and extended
with methods for updating incomplete or partially
incorrect knowledge bases (Tecuci, 1992; Su et al., 2002). The work has also been facilitated by studies
on the automatic acquisition of shallow knowledge,
which is the experience acquired heuristically while
solving problems (Okamura et al., 1991), or by
compensating for the knowledge engineer’s lack of
domain knowledge, so that the resulting knowledge
base is accurate and complete (Fujihara et al., 1997).
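The sketch below illustrates one simple form of such refinement: when a stored rule fires on a case that the expert labels differently, the rule is specialised with an exception. It is only meant to convey the idea and does not reproduce the algorithms of the works cited above; all names and data are invented.

    # Specialising an over-general rule with an exception when it misfires
    # on an expert-labelled counterexample; all names and data are invented.
    from typing import Callable, Dict, List

    Case = Dict[str, float]

    class Rule:
        def __init__(self, condition: Callable[[Case], bool], conclusion: str):
            self.condition = condition
            self.conclusion = conclusion
            self.exceptions: List[Callable[[Case], bool]] = []

        def applies(self, case: Case) -> bool:
            return self.condition(case) and not any(e(case) for e in self.exceptions)

    def refine(rule: Rule, case: Case, expert_label: str) -> None:
        """Add an exception when the rule fires but the expert disagrees."""
        if rule.applies(case) and rule.conclusion != expert_label:
            snapshot = dict(case)
            rule.exceptions.append(lambda c, s=snapshot: c == s)

    # Over-general rule: any low paste volume signals an open joint.
    r = Rule(lambda c: c["paste_volume"] < 0.7, "open_joint")
    counterexample = {"paste_volume": 0.65, "placement_offset": 0.0}
    refine(r, counterexample, "good")      # expert says this case is fine
    print(r.applies(counterexample))       # -> False after refinement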
2.3 User Interface
In a DSS, users are often presented with an overwhelming amount of data upon which they have to make decisions, without necessarily having the proper understanding or knowledge to do so. The user
interface (UI) is the dialogue component of a DSS
that facilitates information exchange between the
system and its users (Bálint, 1995).
The choice of an interface depends on many factors, but there are only a few reasons for its inadequacy (Norcio & Stanley, 1989). Mainly, the UI is often seen as an incidental part of the system. Consequently, it is not well suited to the system or to the user, and more often than not to neither. Usability can be
seen as the degree to which the design of a particular
UI takes into account the psychology and physiology
of the users, and makes the process of using the
system effective, efficient and satisfying.
For its responses to be understandable, a DSS should be able to tailor them to the needs of the individual user. UI adaptability can be achieved by mapping users' actions to what they intend to do (Eberts, 1991) or need (Lind et al., 1994).
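As a purely illustrative sketch of this action-to-need mapping, the code below infers the dominant information need from a user's recent actions, which could then drive the level of detail of the system's responses; the action names and the mapping itself are invented.

    # Inferring an information need from recent user actions (invented mapping).
    from collections import Counter
    from typing import List

    ACTION_TO_NEED = {
        "open_process_trace": "detailed",     # drilling into raw data
        "view_summary": "overview",           # glancing at dashboards
        "request_explanation": "rationale",   # asking why a recommendation was made
    }

    def inferred_need(recent_actions: List[str]) -> str:
        """Return the most frequent need implied by the recent actions."""
        needs = Counter(ACTION_TO_NEED.get(a, "overview") for a in recent_actions)
        return needs.most_common(1)[0][0]

    print(inferred_need(["view_summary", "open_process_trace", "open_process_trace"]))
    # -> 'detailed'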
Such adaptation can, however, undermine the user's confidence in the information given to him. Adaptability can therefore