structured knowledge and deposit it into the
knowledge graph. At present, knowledge acquisition is carried out mainly on text data, and the extraction problems to be solved include entity extraction, relation extraction, attribute extraction, and event extraction.
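As an illustration only, the four extraction tasks can be thought of as populating records of the following kinds before they are deposited into the knowledge graph; the record layout, field names, and example values below are assumptions for exposition, not the system's actual schema.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Entity:                       # entity extraction
    name: str
    entity_type: str                # e.g. "herb", "symptom" (hypothetical types)

@dataclass
class Relation:                     # relationship extraction
    head: str
    relation: str                   # e.g. "treats"
    tail: str

@dataclass
class Attribute:                    # attribute extraction
    entity: str
    attribute: str                  # e.g. "dosage"
    value: str

@dataclass
class Event:                        # event extraction
    trigger: str
    arguments: List[Tuple[str, str]] = field(default_factory=list)  # (role, filler)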
3.2 Domain-Specific Ontology Construction
Chinese medicine is a large and complex system with thousands of types of entities, attributes, and relationships, so relying on expert manpower alone is clearly far from enough to build a complete knowledge system. For this reason, automatic ontology discovery needs to be studied intensively. In each iteration, the project uses the existing ontology as a guide and applies weakly supervised and unsupervised learning, such as remote supervision and clustering, to explore the general rules of generalization and classification between factual knowledge (entities and their attributes and relationships) and conceptual knowledge (concepts and their attributes and relationships), so as to discover new ontologies and concepts.
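A minimal sketch of the clustering half of this idea is given below. It assumes that entities have already been described by attribute/relationship features; the feature names, example entities, and the use of scikit-learn's DictVectorizer and KMeans are illustrative assumptions rather than the project's actual pipeline. Each resulting cluster is a candidate new concept that can then be evaluated and named.

from sklearn.cluster import KMeans
from sklearn.feature_extraction import DictVectorizer

# Factual knowledge: each entity described by its attributes/relationships.
# These example entities and features are hypothetical.
entity_features = {
    "ginseng":    {"category": "herb", "property": "warm", "treats:fatigue": 1},
    "astragalus": {"category": "herb", "property": "warm", "treats:fatigue": 1},
    "mint":       {"category": "herb", "property": "cool", "treats:headache": 1},
}

vec = DictVectorizer(sparse=False)
X = vec.fit_transform(list(entity_features.values()))

# Cluster entities with similar attribute/relationship profiles;
# each cluster is a candidate concept for evaluation and naming.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for entity, label in zip(entity_features, labels):
    print(label, entity)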
3.3 Evaluation and Naming of Basic Ontological Concepts
The formation of basic ontological concepts is influenced by a number of factors, the most important of which are the type and quantity constraints on conceptual connotations. By connotation we mean the attributes of a concept and their values. Connotation constraints, in turn, refer to constraints on the ranges of attribute values, and have properties such as commonality and ease of understanding. Connotation constraints and their evaluation rules can be learned from the mapping between existing ontological concepts and facts. The maximum entropy regression formula for concept evaluation can be expressed as:
Eval(C) = (1/Z) exp( Σp∈C λp f(p) )                                   (1)
C is the target concept to be evaluated, which consists of multiple feature cluster constraints p connected by "or" relationships. Each feature cluster constraint p in turn consists of multiple sub-feature constraints connected by "and" relationships. A sub-feature constraint is a pair (attribute, attribute value range). If an attribute is constrained to take only one value, the attribute value range is that value. If the attribute matters as a measurement perspective but its value does not (i.e., the attribute needs to be considered qualitatively in the formation of the concept, but no specific measure is needed), the value range is recorded as NULL. f(p) takes the value 1 only if all sub-feature constraints in p are satisfied, and 0 otherwise. Alternatively, p can be an overall measure of the feature constraints, e.g., the number of constraints or the ease of comprehension resulting from their structure. Z is a normalization factor that keeps the evaluation value in the interval (0,1); it can be omitted when only the best concept is being selected (Qu, 2023).
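The following is a minimal sketch of this evaluation, under the assumption that Eq. (1) takes the standard log-linear form with per-constraint weights λp; the weights, the example attributes, and the candidate concepts below are all illustrative assumptions, not values from the paper.

import math

def f(p, facts):
    # Indicator feature: 1 only if every (attribute, value_range) sub-constraint
    # in feature cluster p is satisfied; a NULL (None) range only requires the
    # attribute to be present, mirroring the qualitative case described above.
    for attribute, value_range in p:
        if attribute not in facts:
            return 0
        if value_range is not None and facts[attribute] not in value_range:
            return 0
    return 1

def score(concept, facts, weights):
    # Unnormalized log-linear score exp(sum_p lambda_p * f(p)).
    return math.exp(sum(w * f(p, facts) for w, p in zip(weights, concept)))

# Hypothetical candidate concepts: lists of "or"-related feature clusters,
# each cluster an "and" of (attribute, value range) pairs.
concepts = {
    "warm tonic herb": [
        [("property", {"warm"}), ("function", {"tonify qi"})],
        [("property", {"warm"}), ("meridian", None)],
    ],
    "cool clearing herb": [
        [("property", {"cool"}), ("function", {"clear heat"})],
    ],
}
weights = {"warm tonic herb": [1.2, 0.5], "cool clearing herb": [1.0]}  # assumed

facts = {"property": "warm", "function": "tonify qi", "meridian": "spleen"}
raw = {name: score(c, facts, weights[name]) for name, c in concepts.items()}
Z = sum(raw.values())          # normalization; optional if only the argmax is needed
print({name: round(v / Z, 3) for name, v in raw.items()})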
4 REMOTE SUPERVISED AUTOMATIC LABELING ALGORITHM
The lower layer of the model is common across all
datasets, while the upper layer (specifically, CRF)
produces outputs that are specific to each dataset.
The character-level layer receives sentences from
the dataset as input and captures contextual
information at the character level using a BiLSTM,
which produces representation vectors for the
characters. These character-level vectors are then
combined with word-level vectors and passed
through a word-level BiLSTM. This generates a
contextual representation that encompasses both
word-level and character-level information. This
shared representation is trained using our multi-task
objective function. Finally, the CRF component of the model produces annotations for the input utterances according to the dataset they belong to. We train a separate multi-task learning model for each dataset.
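The following is a minimal PyTorch sketch of this architecture: a character-level BiLSTM whose summary is concatenated with word embeddings, a shared word-level BiLSTM, and dataset-specific output heads. The layer sizes and names are assumptions, the character BiLSTM is run per word rather than over the whole sentence for brevity, and a plain linear emission layer stands in for the full CRF; this is an illustration, not the authors' implementation.

import torch
import torch.nn as nn

class SharedTagger(nn.Module):
    def __init__(self, n_chars, n_words, dataset_label_sizes,
                 char_dim=32, word_dim=100, hidden=128):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_lstm = nn.LSTM(char_dim, char_dim, batch_first=True,
                                 bidirectional=True)
        self.word_emb = nn.Embedding(n_words, word_dim)
        # Shared word-level BiLSTM over [word embedding ; char-level summary].
        self.word_lstm = nn.LSTM(word_dim + 2 * char_dim, hidden,
                                 batch_first=True, bidirectional=True)
        # Dataset-specific heads; a CRF layer would normally sit on top of
        # these emission scores (omitted here for brevity).
        self.heads = nn.ModuleDict({
            name: nn.Linear(2 * hidden, n_labels)
            for name, n_labels in dataset_label_sizes.items()
        })

    def forward(self, word_ids, char_ids, dataset):
        # char_ids: (batch, n_words, n_chars_per_word)
        b, n, m = char_ids.shape
        char_vecs = self.char_emb(char_ids.view(b * n, m))
        _, (h, _) = self.char_lstm(char_vecs)         # final states, both directions
        char_summary = torch.cat([h[0], h[1]], dim=-1).view(b, n, -1)
        shared_in = torch.cat([self.word_emb(word_ids), char_summary], dim=-1)
        shared_out, _ = self.word_lstm(shared_in)     # shared contextual representation
        return self.heads[dataset](shared_out)        # dataset-specific emission scores

# Hypothetical usage with two datasets sharing the lower layers.
model = SharedTagger(n_chars=100, n_words=5000,
                     dataset_label_sizes={"herbs": 9, "symptoms": 7})
words = torch.randint(0, 5000, (2, 6))      # batch of 2 sentences, 6 words each
chars = torch.randint(0, 100, (2, 6, 8))    # up to 8 characters per word
scores = model(words, chars, dataset="herbs")   # (2, 6, 9) emission scores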
4.1 Shared Layer
The input data of our dataset is represented as
s={w1, w2, ⋯, wn}, where wi represents the ith
word. To obtain word embeddings, we utilize a
word-level embedding layer that takes the input
sentence s and produces embeddings X = {x1, x2,
⋯, xn}. For character-level embeddings, we introduce a space character on both sides of each word, so that the character input is c = {c0,_, c1,0, ⋯, c1,_, c2,0, ⋯, cn,_}, where ci,j denotes the jth character of word wi and _ represents the space character.
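A minimal sketch of how such a space-delimited character sequence might be built from a tokenized sentence is shown below; the function name, the "_" marker, and the example sentence are assumptions for illustration.

def build_char_input(words, space="_"):
    # Surround every word with the space marker so the character-level BiLSTM
    # can see word boundaries: _ chars(w1) _ chars(w2) _ ... chars(wn) _
    chars = [space]
    for word in words:
        chars.extend(list(word))
        chars.append(space)
    return chars

sentence = ["ginseng", "tonifies", "qi"]     # hypothetical tokenized sentence
print(build_char_input(sentence))
# ['_', 'g', 'i', 'n', 's', 'e', 'n', 'g', '_', 't', ...]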