search engine. For instance, with a treatment search
engine, we may offer a local explanation such as “the
queried disease is mentioned in the retrieved clinical
trial’s title”. Hence, we use a generic explanation
method that can be tailored to individual search en-
gines’ features.
In this paper, an explainability method is developed that provides medical practitioners with tailored explanations for retrieved items. To that end, meaningful features are extracted from clinical trials across different data sources, and the preferences of different users are elicited using a crowdsourcing-based methodology. We then put forward a method to translate these preferences into importance levels of features. Based on the features' importance levels, tailored explanations are generated for each specific query and presented to users through a sentence template. In addition, we introduce explainability scores, according to which we order the retrieved items. The results suggest that the use of local explainability in clinical search engines promotes HCPs' trust, search experience, and satisfaction with result ordering.
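For concreteness, the sketch below shows one way the elicited preferences could be translated into feature importance levels and aggregated into a per-trial explainability score; the feature names, rating scale, and normalization are illustrative assumptions rather than the exact formulation used in this paper.

from typing import Dict

def importance_levels(ratings: Dict[str, float]) -> Dict[str, float]:
    # Normalize crowdsourced preference ratings into importance weights that sum to 1.
    total = sum(ratings.values())
    return {feature: rating / total for feature, rating in ratings.items()}

def explainability_score(trial_features: Dict[str, bool], weights: Dict[str, float]) -> float:
    # Sum the weights of the features the search engine can explain for this trial.
    return sum(w for feature, w in weights.items() if trial_features.get(feature, False))

# Hypothetical ratings on a 1-5 scale for three clinical-trial features.
weights = importance_levels({"disease_in_title": 5, "trial_phase": 3, "recruiting_status": 2})
score = explainability_score(
    {"disease_in_title": True, "trial_phase": False, "recruiting_status": True}, weights
)
print(round(score, 2))  # 0.7 under these illustrative weights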
2 RELATED WORK
To clarify our vision of explainability, we identified three main dimensions of explainability that can be observed throughout researchers' definitions: audience, understanding, and transparency. Understanding refers to the user's ability to understand the model's results. However, not all users can interpret every model's explanations, as models can be domain specific. For example, users without a background in biology would struggle to understand the highly specialized biological terms produced by the explanations of a model attempting to diagnose a certain type of lung cancer. Conversely, explainable AI (XAI) could use overly simple terms, leading to a lack of detail for the doctor assessing the diagnosis. The user is therefore required to have a certain amount of knowledge to understand the explanation itself, making it crucial for developers using explainability to target their audience (Rosenfeld and Richardson, 2019). Lastly, an explainability method should increase the model's transparency by making it more interpretable for its users, and should not generate seemingly arbitrary explanations that do not fit how the model works (Dimanov et al., 2020).
Current state-of-the-art local explainability techniques do not produce user-friendly explanations. These techniques are based on feature importance, such as LIME (Das and Rad, 2020) and SHAP (Lundberg and Lee, 2017), rules (Verma and Ganguly, 2019), saliency maps (Mundhenk et al., 2019), prototypes (Gee et al., 2019), examples (Dave et al., 2020), or counterfactual explanations (Dave et al., 2020). To date, feature importance (Zhang et al., 2019) and rule-based techniques (Verma and Ganguly, 2019) have been applied to search engines, but they do not meet the criterion that explanations should be user friendly.
LIME is a local explainability method aiming to increase transparency for specific decisions given by an opaque model. It explains a single result by letting users know why they are getting this specific result over another (Verma and Ganguly, 2019). Although LIME offers one way to address the black-box problem, it has a few limitations. The first is that it is most commonly used for linear or classification models (Arrieta et al., 2020). This limits the types of models to which it can be meaningfully applied, and it is restricted to non-user-friendly explanations. Consequently, this research does not use LIME, but instead develops a local explainability method that orders results and generates user-friendly explanations.
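As an illustration of the kind of feature-importance output LIME produces, the snippet below explains a single prediction of a generic scikit-learn classifier on toy tabular data; the feature names, labels, and model are placeholders and not part of the proposed system.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Toy data: the feature names are placeholders, not real clinical-trial features.
rng = np.random.default_rng(0)
X_train = rng.random((200, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 1).astype(int)
feature_names = ["disease_match", "phase", "recruiting", "age_overlap"]

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["not relevant", "relevant"],
    mode="classification",
)

# LIME perturbs the instance, fits a local linear surrogate model, and reports
# per-feature weights that apply only to this single prediction.
explanation = explainer.explain_instance(X_train[0], model.predict_proba, num_features=4)
print(explanation.as_list())  # e.g. [("disease_match <= 0.25", -0.21), ...]

The raw output is a list of weighted feature conditions, which illustrates why such explanations are not directly user friendly for HCPs.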
3 EXPLAINABLE SEARCH
ENGINE
This section presents the proposed model that pro-
vides explanations for its users, as well as how it
orders a clinical search engine’s results. This en-
ables users to efficiently find potentially relevant clin-
ical trials while understanding the underlying pro-
cesses of the model. The proposed method also gen-
erates local explainability scores for each clinical trial
and uses these scores to order the search engine’s re-
sults. Moreover, users are provided with user-friendly
explanations delivering descriptions of the features
available in each clinical trial.
Figure 1 shows the pipeline of steps conducted
for the proposed methodology. The search engine
takes as input the user’s query, and returns an out-
put with explainability-based ordered results with ex-
planations. Figure 1 shows that the local explainabil-
ity search engine combines resources with the HCP’s
query to engineer features. These features are then assigned local explainability scores, which are used to order the list of clinical trials. In addition, the engineered features' outputs fill template sentences. These explanations inform the user of how much of each clinical trial the search engine can explain. In the following sub-sections, each module in Figure 1 is discussed in more detail.
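As a rough sketch of the last two modules, the code below orders retrieved trials by a precomputed local explainability score and fills a simple sentence template for the features detected in each trial; the template wording, field names, and scores are hypothetical.

from typing import Dict, List

TEMPLATE = "The queried disease '{disease}' is mentioned in this trial's {field}."

def explain(trial: Dict, query_disease: str) -> List[str]:
    # Fill the sentence template for each field in which the queried disease appears.
    return [
        TEMPLATE.format(disease=query_disease, field=field)
        for field in ("title", "summary")
        if query_disease.lower() in trial.get(field, "").lower()
    ]

def order_results(trials: List[Dict]) -> List[Dict]:
    # Order the result list by the (precomputed) local explainability score.
    return sorted(trials, key=lambda t: t["explainability_score"], reverse=True)

trials = [
    {"title": "A phase II trial in melanoma", "summary": "...", "explainability_score": 0.4},
    {"title": "Melanoma immunotherapy study", "summary": "Advanced melanoma patients", "explainability_score": 0.9},
]
for trial in order_results(trials):
    print(trial["title"], explain(trial, "melanoma"))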