Fact-in-a-Box: Hiding Educational Facts in Short Stories for Implicit Learning
Alia El Bolock¹ᵃ, Caroline Sabty¹, Nour Eldin Awad² and Slim Abdennadher¹,²
¹Informatics and Computer Science, German International University, Cairo, Egypt
²Media Engineering and Technology, German University in Cairo, Cairo, Egypt
ᵃhttps://orcid.org/0000-0002-5841-1692
Keywords: NLG, Education, Implicit Learning, User Study.
Abstract: Generating stories on-demand is one of the tasks covered by Natural Language Generation. Stories are used in every culture and by every age group. They have been used for different purposes, such as entertainment and the education of children. They are an effective way of indirectly providing students with valuable facts that are more easily embedded in their memory. We propose an approach for embedding facts into existing or automatically generated stories, given a specific target audience and a story context. As proof of concept, we implemented a framework called Fact-in-a-Box that hides facts in existing stories or human-like generated text through a customized user interface. The framework is based on a fine-tuned model for children as the target audience and fixes the story context to animals. Instructors can apply the approach to deliver facts to the learner in an exciting yet informative way. The framework is composed of two modules, one for selecting the most relevant story and the other for embedding the fact in it. We evaluated the proposed approach through an experiment measuring the learning gain of children and a survey in which adults assessed the language of the resulting stories and the concept itself. The performance was relatively good in hiding facts inside an existing story: children could correctly re-convey 50% of the complex facts and 80% of the simpler ones.
1 INTRODUCTION
Natural Language Generation (NLG) covers a wide
range of diverse tasks, including generating stories
on-demand, providing endless possibilities for enter-
tainment and education. Intelligent tutoring systems
can provide students with instant feedback, increasing
education quality. Stories are an effective way of indirectly providing students with valuable facts that are more easily embedded in their memory. They are used in
classrooms to promote critical thinking and enhance
learning (Alhussain and Azmi, 2021). NLG can be a
more cost-effective tool than hiring authors to write
informative yet interesting tales for kids.
We propose an approach for embedding pre-
selected facts into existing or automatically gener-
ated stories, given a specific target audience and a
story context. Our main goal was to be able to im-
plicitly hide and deliver knowledge or facts with-
out the conscious awareness of the target audience.
Pre-trained models like Generative Pre-trained Transformer (GPT) 2 (Radford et al., 2019) and 3 (Brown et al., 2020) can be fine-tuned for the involved downstream NLG tasks to obtain accurate results for the intended context. We rely on pre-trained transformer-based models to overcome the shortcomings of Seq2Seq, RNN, and LSTM models in maintaining
the coherence and story flow. As proof of concept, we
implement a fine-tuned model for children as the tar-
get audience and fix the story context to animals. Fur-
ther modifications were done to enhance the model to
make conditional text predictions based on the user’s
selection. We implement our approach in a framework called Fact-in-a-Box to hide facts in existing stories or human-like generated text through a customized user interface based on the target audience and the specified context. The framework contains two interaction modes: one for the educator, responsible for content creation, and one for the student, for receiving implicit learning.
To embed a fact in a story, the framework is divided into two modules. The first module is the Base Story Selector; it contains existing stories and can also generate new stories (the latter task is not the focus of this paper). It takes the facts from the user as input and selects the most relevant of the available or generated stories. The selection approach is
applied using the TF-IDF (Salton, 1983) and Cosine Similarity (Pang-Ning et al., 2005) methods. The second
module is the Fact Embedder, which selects the best
position for adding the fact in the selected story. Co-
sine Similarity is used to calculate the similarity be-
tween the story sentences and the fact.
Given the fact list and their intended context as
input, the framework produces the output story text
with the information embedded. This work demon-
strates how beneficial combining NLG and fact em-
bedding may be in the education domain. The pro-
posed project goes beyond animal-based stories di-
rected at children and expands to educational and
training systems. Instructors can apply the approach
to deliver facts to the learner in an exciting yet infor-
mative way. We tested the proposed approach in two
phases: 1) an experiment to test the learning gain of
5 children and 2) a survey for 40 adults to evaluate
the language of the resulting stories and the concept
itself. The proposed framework performance was rel-
atively good in hiding facts inside an existing story
where children could correctly re-convey 50% of the
complex facts and 80% of the simpler tasks. The pro-
posed approach can be expanded to a broader domain
by using more complex stories than children’s stories
to reach a broader audience.
Several works on story generation exist and are surveyed in (Alhussain and Azmi, 2021). On the other hand, there are many approaches for uncon-
ventional learning mediums to accompany traditional
face-to-face classes and computer-assisted learning
(Akturk, 2022). These include gamification (Nadi-
Ravandi and Batooli, 2022), online learning (Mas-
tan et al., 2022), immersive learning (Bizami et al.,
2022), and indirect learning. Indirect learning is
achieved when knowledge is acquired by watching a
movie, reading a book, or doing a seemingly daily
life activity. While indirect learning can happen un-
intentionally, it can be harnessed by developing ed-
ucational systems that intentionally convey implicit
knowledge through the chosen mediums. Different
use-cases for this implicit learning have been inves-
tigated. For example, movies are used for STEM
education (Kangas et al., 2017), language education
(Obloberdiyevna and Odilkhonovna, 2022), soft skills
(Belda-Medina, 2022), and physical education (Fu
et al., 2022), to name a few. (Faidley, 2021) ex-
plores education through different pop culture medi-
ums, e.g., movies, tv shows, and memes. Storytelling
has always been used as a tool for teaching language
and morals by both caregivers and educators (Abdal-
rahman, 2022; Purnama et al., 2022; Nicolaou, 2023;
Ratih et al., 2022; Hofman-Bergholm, 2023; Quah
and Ng, 2022).
Figure 1: Overview of the Fact-in-a-Box Story Generation.
In this work, we focus on providing a generic tool
for automatically generating indirect learning mate-
rial to intentionally convey information to learners in
an implicit manner in the form of short stories.
2 FACT-IN-A-BOX OVERVIEW
Hiding information inside an existing or generated text is challenging, as many variables must be taken care of; otherwise, the text would not make sense. Our proposed framework addresses the problem of embedding words or hiding information within an already-existing story or a newly generated one.
Fig. 1 gives an overview of the proposed architecture.
Given the chosen input facts to be learned, a story is
selected. The facts are then embedded in an appro-
priate location inside the story based on two scoring
criteria. We will go into the details of each of the two
main modules in the following.
2.1 Base Story Selector
There are two approaches to choosing a story to use
as a basis for embedding the facts. We can either gen-
erate a story from scratch based on the input facts or
choose existing stories to embed the facts. The for-
mer is a pure NLG approach requiring heavy com-
putational power and story tuning. The latter bene-
fits from relying on existing stories adhering to the
plot generation rules of stories. When depending on
existing stories, the challenge is choosing a suitable
story to match the facts. While the authors also investigate generating stories using GPT-2, in the presented work, we focus on choosing a fitting story from a database of existing stories. We categorize the global dataset into smaller topically-clustered and difficulty-clustered datasets. We have different datasets for stories about animals, crimes, fairytales, etc., and other datasets for young children, teenagers, and adults¹. We rely on two metrics to choose a matching story from the story database based on the input prompt: TF-IDF and cosine similarity.
Before applying the TF-IDF technique, we clean the text by removing extra white spaces, converting all text to lowercase, removing digits, and removing stop words. Lastly, we apply stemming, which reduces related word forms to a single root, e.g., "running" and "runs" are both converted to "run". This lets the TF-IDF calculation focus on the essential terms only, rather than on misleading variations, and prevents it from diverging from the critical information. Each input fact is then scored with the TF-IDF function against each story, and the maximum score is chosen; this score indicates the most likely fitting story for that input fact. The process is repeated for the remaining inputs, and the story scoring highest across the inputs is chosen as the story to embed the facts in.
approach is similar to TF-IDF but applies cosine sim-
ilarity instead. We are trying to find a matching story
that would make our input fit in the sentence with-
out being placed incorrectly and without affecting the
story’s flow and the text’s coherence. For this reason,
we propose another way to find the matching story by
calculating the cosine similarity scores for each input
with every sentence in the story. We first split our
story into a list of sentences. The function is then cal-
culated by encoding each sentence to have them as
vectors so that the function can be applied. Accord-
ingly, the score is calculated for each input n times
where n is the number of sentences in a story. We
calculate a list of probabilities that we sum for each
story. We pick the highest probability scores to find
the most fitting stories in our dataset. Then, we cal-
culate the total score for each fact from our input and
store it for each story, respectively. The final stage for
picking the base story is choosing a story randomly
from the list of highest probabilities. This ensures
different outputs every time the story generator is run
and avoids producing repetitive, boring results.
¹https://libguides.stcc.edu/c.php?g=886516&p=6370592
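To make the selection step concrete, the following Python sketch shows one plausible realization of the TF-IDF-based matching described above, using scikit-learn and NLTK. The library choices, function names, and the top-k cut-off for the final random pick are illustrative assumptions rather than the exact implementation.

```python
# Sketch of the Base Story Selector (illustrative only; libraries, names,
# and the top-k cut-off are assumptions, not the exact implementation).
import random
import re

import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

nltk.download("stopwords", quiet=True)
STOP_WORDS = set(stopwords.words("english"))
STEMMER = PorterStemmer()


def preprocess(text):
    """Lowercase, strip digits and extra spaces, drop stop words, and stem."""
    text = re.sub(r"\d+", " ", text.lower())
    tokens = [STEMMER.stem(tok) for tok in text.split() if tok not in STOP_WORDS]
    return " ".join(tokens)


def rank_stories(facts, stories):
    """Return one aggregated TF-IDF similarity score per story."""
    vectorizer = TfidfVectorizer()
    story_matrix = vectorizer.fit_transform([preprocess(s) for s in stories])
    fact_matrix = vectorizer.transform([preprocess(f) for f in facts])
    # Similarity of every fact to every story, summed over the facts.
    sims = cosine_similarity(fact_matrix, story_matrix)
    return sims.sum(axis=0)


def select_base_story(facts, stories, top_k=3):
    """Randomly pick one of the top-k matching stories for varied output."""
    scores = rank_stories(facts, stories)
    ranked = sorted(range(len(stories)), key=lambda i: scores[i], reverse=True)
    return stories[random.choice(ranked[:top_k])]
```

A call such as select_base_story(facts, story_dataset) then returns one of the best-matching stories, with the random choice among the top candidates providing the desired variety.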
2.2 Fact Embedder
The Fact Embedder is responsible for choosing ap-
propriate locations within the base story to embed the
different facts. This is done by relying on the cosine
similarity lists already calculated by the Base Story
Selector. Each sentence in the story is encoded, and
we generate a combined list of cosine similarities be-
tween every sentence in the story and each input fact.
This yields a list of probability scores for each input
fact. We then randomly choose among the three highest-scoring positions for each fact. Again, this is to ensure
non-determinism and enable different generated sto-
ries. The same fact-embedding approach is applied
when inserting facts into auto-generated new stories.
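A minimal sketch of this embedding step is given below. It assumes a sentence-transformers encoder and NLTK sentence splitting; the paper does not prescribe a specific encoder, so the model name and function signature are illustrative only.

```python
# Sketch of the Fact Embedder (assumed encoder and names, for illustration).
import random

import nltk
from nltk.tokenize import sent_tokenize
from sentence_transformers import SentenceTransformer, util

nltk.download("punkt", quiet=True)
ENCODER = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder model


def embed_facts(story_text, facts):
    """Insert each fact right after one of its top-3 most similar sentences."""
    sentences = sent_tokenize(story_text)
    sentence_vecs = ENCODER.encode(sentences, convert_to_tensor=True)
    insertions = {i: [] for i in range(len(sentences))}
    for fact in facts:
        fact_vec = ENCODER.encode(fact, convert_to_tensor=True)
        # Cosine similarity between the fact and every story sentence.
        scores = util.cos_sim(fact_vec, sentence_vecs)[0]
        top3 = scores.argsort(descending=True)[:3].tolist()
        # Random pick among the best positions keeps the output non-deterministic.
        insertions[random.choice(top3)].append(fact)
    output = []
    for i, sentence in enumerate(sentences):
        output.append(sentence)
        output.extend(insertions[i])
    return " ".join(output)
```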
3 FACT-IN-A-BOX DESIGN
Fact-in-a-Box was designed in a simple manner. The aim of the initial prototype was not to in-
troduce any design elements that might affect the re-
sults of measuring the learning gain from the embed-
ded facts.
3.1 User Interface and Features
The developed user interface is a web application with
two different interaction modes: an educator mode for
content creation and a student mode for indirect or
implicit learning. In the following, we will discuss
each mode’s different features/views.
Educator Mode - Content Creation
The Educator Mode consists of three main views:
Story Selection, Fact Input, and Story Browsing. Story
Selection is an optional feature. Educators can use it
to choose a specific story from the dataset or upload
their own story as the base story for the fact embed-
ding. This step can be skipped if a story should be au-
tomatically selected. The Fact Input view of Fact-in-a-Box is shown in Fig. 2. Educators can input the four facts
they want to teach learners. The educator should also
specify the protagonist animal to improve the base
story selection process. After confirming, the facts are
used as a basis for selecting the base story or the ed-
ucator’s input story. The Fact Embedder then inserts
the facts in the suitable locations in the story before
displaying the generated story in the Story Browsing
view. In this view, the facts are highlighted for the
educator to evaluate their location. The educator can
then confirm the generated story or rerun the process.
Figure 2: The Fact Input View of Fact-in-a-Box.
Student Mode - Indirect Learning
The Student Mode currently consists of the Story
Reading view, where learners are presented with dif-
ferent stories the educator assigns. Students can
switch between the different stories.
3.2 Implementation Details
The main aim was to implement a rapid prototype as a proof of concept for the proposed idea of fact hiding for implicit learning. Thus, we implemented a basic architecture for the proposed approach, relying on readily available and easy-to-use tools. We used Python and Google Colaboratory to build the back-end of Fact-in-a-Box and the Flask API² to build the web app. Flask is a Python-based micro web framework; it is classified as a micro-framework since it does not require specific tools or libraries and lacks a database abstraction layer, form validation, or other components, relying instead on pre-existing third-party libraries to perform common operations. Thus, Flask is used for building the UI that links to the back-end code, rather than other frameworks that might slow down the story generation process in Google Colab's environment; we use it both for generation and for running the server. We used Ngrok³ to expose the local web server to the internet.
²https://flask.palletsprojects.com/en/2.2.x/
³https://ngrok.com/
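As a rough illustration of the glue code, a single Flask route could connect the Fact Input view to the two modules as sketched below. The route, template, and helper names (select_base_story, embed_facts, STORY_DATABASE, fact_in_a_box) are assumptions that reuse the sketches from Section 2, not the actual application code.

```python
# Illustrative Flask glue code; route, template, and module names are assumed.
from flask import Flask, render_template, request

# Hypothetical module collecting the sketches from Section 2.
from fact_in_a_box import select_base_story, embed_facts

app = Flask(__name__)
STORY_DATABASE = []  # assumed to hold the scraped animal stories at startup


@app.route("/generate", methods=["POST"])
def generate():
    # Facts and protagonist animal entered by the educator in the Fact Input view.
    facts = [request.form[f"fact{i}"] for i in range(1, 5)]
    animal = request.form["animal"]
    # Module 1: pick the base story; Module 2: embed the facts into it.
    story = select_base_story(facts + [animal], STORY_DATABASE)
    generated = embed_facts(story, facts)
    # Story Browsing view, where the embedded facts are highlighted.
    return render_template("story.html", story=generated, facts=facts)


if __name__ == "__main__":
    app.run()  # Ngrok can then tunnel this local server to the internet
```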
4 EVALUATION AND EXPERIMENT
Our goal was to hide facts regarding a specific topic in a story to help a specific group of individuals better retain the information. We wanted to evaluate whether our approach yields the intended result of conveying information to individuals without explicitly mentioning it and without giving users the feeling that they are on a learning endeavor. As
proof of concept, we chose to test our Fact-in-a-Box
approach on the target group of young children, i.e.,
generating stories with easy difficulty. Children were
chosen as they are the most obvious target group for
such a tool, especially in its initial phases. Children have an amazing capacity for retaining knowledge and, at the same time, are the age group most averse to sitting and receiving it (Chau, 2008). As we are dealing with children, we decided to constrain the target information to be learned to four facts. We restrict the global story dataset to stories topically related to animals. The existing stories were scraped from "Folklore and Mythology Electronic Texts" (Ashliman, 1996); they were all related to animals, in order to test the model on only one domain. All scraped books were cleaned, filtered, and processed.
4.1 Pilot Experiment
We experimented with measuring the children’s abil-
ity to recall the information they read in the story and
how much information they could retain. We also
tested their ability to explicitly state the facts we em-
bedded inside the story.
4.1.1 Experiment Design and Setup
The experiment aimed to give clear insights into chil-
dren’s perceptions of the application. Accordingly,
we choose a small sample size for this initial evalua-
tion. We invited 9 children ages seven and nine. We
needed to evaluate their reading ability before start-
ing the experiment, and this was crucial to ensure they
could extract and understand knowledge from reading
texts to conduct our experiment. Four children did not
meet this criterion and were thus excluded from the
experiment.
After that, each child was given the same three
generated stories to read. Each of the stories had four
facts, but they had variable lengths. The order of the
three stories was counterbalanced in a Latin square
manner. The experiment consisted of three question groups that serve as evaluation metrics. (1) A regu-
lar post-test to measure the learning gain of the chil-
dren from the story with respect to the intended learn-
ing facts. Each child is given the same two questions
about the facts embedded in the three stories. (2) Chil-
dren were asked for three facts they learned from the
story about the protagonist’s animal to evaluate how
well they recognized the included facts. (3) Misleading information contradicting the story's context was
embedded in one of the cases to see whether the chil-
dren would retain it or not. This helped us understand
whether the proposed model can influence the chil-
dren’s judgment of the story.
4.1.2 Results
We will present the three generated stories, S1, S2, and S3, with their learning facts, the post-test questions, and the results of the experiment on each story.
The input facts are highlighted in the generated stories
to display where the model predicted and embedded
each fact. The model outputs a story that revolves
around a specific protagonist animal: elephants, kan-
garoos, and lions, respectively.
Figure 3: S1 with the input facts highlighted.
1. The generated story S1 is shown in Fig. 3. The following post-test questions were asked:
"Do you find elephants gentle creatures?": The reason behind asking this question is that the context of the story implies that elephants were causing harm to rabbits. However, we mentioned in our facts that elephants were kind, so we wanted to see whether our input, placed by the model early at the beginning of the story, would influence the children's understanding.
"Do elephants feed on grass?": This question was clear, as we needed to measure how well the children retained the relatively easy fact or whether they would encounter difficulty.
"Mention three facts about the elephant from the story you have just read.": Here, we are trying to estimate how much information the child can understand and receive from our input.
Figure 4: S2 with the input facts highlighted.
To analyze the answers to the survey questions of S1, we investigated the children who said "No" to specific questions. We found that the reason for answering "No" in the first story was the story's context, which describes the elephants as harming another animal (a rabbit). Hence, the answer was "No": the elephant hurts another animal, so it cannot be gentle. The second question was easy, and all the kids scored correctly except for one. When we investigated this incorrect answer, we found that the story had already mentioned that "elephants eat leaves" before we stated that "elephants eat grass". Almost every child stated that "elephants are huge" and that "the elephants drink a lot of water". Still, although four facts were embedded in the story, none of the children could state three facts; they only recalled two.
2. The generated story S2 is shown in Fig. 4. The following post-test questions were asked:
"Are kangaroos adapted to hot environments?": This question was phrased fairly indirectly for the children; however, we needed to see how well they answered it to measure their ability to capture information stated at a higher linguistic level.
"What do kangaroos feed on?": This question was clear, as we needed to measure how well the children retained the relatively easy fact or whether they would encounter difficulty.
"Mention three facts about the kangaroos from the story you have just read.": Here, we are trying to estimate how much information the child can understand and receive from our input.
As expected, in the answers to the survey questions of the second story, no child could answer
the first question, but all children easily managed
to answer the second.
Figure 5: S3 with the input facts highlighted.
3. The generated story S3 is shown in Fig. 5. The following post-test questions were asked:
"Is the lion the king of the forest?": We wanted to investigate the children's ability to receive information from our model when they received a relatively straightforward fact.
"Is the lion powerful?": This question contradicts the story's context, where the lion was defeated by another animal. Still, we wanted to check whether the children could still see the lion as powerful and whether the context would affect their judgment.
”Mention three facts about the lion from the
story you have just read.”: Here, we are trying
to estimate how much information the child can
understand and receive from our input.
The question about the lion's power got three "No" answers. The reasoning behind Child 2's answer was that the lion was defeated in the story by the Gnat; hence, it is not powerful. Similarly, Children 4 and 5 mentioned that the lion could not stop the Gnat. This left only two children answering "Yes", supporting our claim that implicitly influencing the children's judgment can also lead them to take in information that is not accurate. All the children could identify that "the lion is the king of the forest" in the other question we asked them. Finally, we received an average of two facts out of the three requested from all the children.
The pilot experiment results showed that all the children could identify at least two out of the four facts in each story, and they picked up clearly stated facts at a rate of up to 80 percent. Accordingly, we found that we could introduce false information to the children, and around 50% were able to verify its authenticity even though it conflicted with the actual storyline. Finally, the linguistic difficulty of the sentences greatly affected the children's ability to answer the questions. This finding emerged from the open-ended discussion conducted with the children after the experiment, where they pinpointed some sentences as too complicated.
4.2 Human Evaluation of Stories
Following the proposed approaches presented in (Sai
et al., 2022), we also evaluated the generated stories
using human evaluation. To test the quality of the gen-
erated stories, we surveyed a group of 40 adults. They
were mainly asked to (Q1) evaluate the story's grammar (Likert scale from 1 (lowest) to 5), (Q2) state whether there are repetitions in some places (yes/no), (Q3) state whether a child would be able to identify the facts, and (Q4) state whether they would read it to a child as a good source for learning facts. The summary of the results can be found in Table 1.

Table 1: Summary of the results of the subjective evaluation.

      (Q1)                            (Q2)       (Q3)       (Q4)
S1    40% very good (5), avg = 3.9    yes = 75%  yes = 85%  yes = 75%
S2    42.5% very good (5), avg = 4.1  yes = 75%  yes = 85%  yes = 75%
S3    42.5% very good (5), avg = 4.1  yes = 75%  yes = 85%  yes = 87.5%

S3 surprisingly scored better in (Q4). The results strengthen the position of our approach and the claim that it can convey hidden facts within the text, making it unnoticeable to the reader that something is off or misplaced in the text they are reading, without them being able to identify that a certain part of the text was explicitly inserted.
5 CONCLUSION
We have shown that combining NLG and story generation with fact embedding may be beneficial in domains like education. We implemented an approach to embed facts in related stories. We proposed a framework called Fact-in-a-Box as a proof of concept that contains two different interaction modes: one for educators and one for students. It contains animal-based stories directed at children. The process starts by taking the facts as input from the educator and passing them to the story selector module, which selects the most relevant story. The selection is done using two methods: TF-IDF and Cosine Similarity. After the story is selected, the fact embedder module adds each fact between the existing sentences by selecting the best position. In the end, the generated story can be shown to the student. We tested our proposed approach by evaluating the learning gain of a small group of children and by conducting a survey among adults to evaluate the language of the resulting stories and the concept itself.
In the future, we want to conduct more testing of the different modules of the application. In addition, we
want to enhance the performance of the story gener-
ation module and include different domains. We will
also add a quizzing module, where educators can de-
fine quizzes in the Educator Mode. In the Student
Mode, students will then be able to take a quiz after
reading a specific story, if a quiz is defined for it.
REFERENCES
Abdalrahman, K. K. (2022). Teaching and learning vocab-
ulary through short stories. Canadian Journal of Lan-
guage and Literature Studies, 2(2):7–15.
Akturk, A. O. (2022). Thirty-five years of the journal of
computer assisted learning: A bibliometric overview.
Journal of Computer Assisted Learning, 38(5):1220–
1253.
Alhussain, A. I. and Azmi, A. M. (2021). Automatic story
generation: a survey of approaches. ACM Computing
Surveys (CSUR), 54(5):1–38.
Ashliman, D. (1996). Folklore and mythology electronic
texts. DL Ashliman.
Belda-Medina, J. (2022). Promoting inclusiveness, creativ-
ity and critical thinking through digital storytelling
among efl teacher candidates. International Journal
of Inclusive Education, 26(2):109–123.
Bizami, N. A., Tasir, Z., and Kew, S. N. (2022). Innovative
pedagogical principles and technological tools capa-
bilities for immersive blended learning: a systematic
literature review. Education and Information Tech-
nologies, pages 1–53.
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D.,
Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G.,
Askell, A., et al. (2020). Language models are few-
shot learners. Advances in neural information pro-
cessing systems, 33:1877–1901.
Chau, M. (2008). The effects of electronic books designed
for children in education.
Faidley, E. W. (2021). “movies, tv shows, and memes... oh
my!”: An honors education through popular culture
and critical pedagogy.
Fu, H. S., Silva, P. H. B. d., Silva, A. P. d., Souza Junior,
M. B. M. d., and Melo, M. S. T. d. (2022). Movies
as strategies for physical education classes at school.
Movimento, 28.
Hofman-Bergholm, M. (2023). Storytelling: The ancient
tool of using stories to communicate knowledge for
a sustainable future. In Integrated Education and
Learning, pages 237–253. Springer.
Kangas, T. C., Cook, M., and Rule, A. C. (2017). Cine-
matherapy in gifted education identity development:
Integrating the arts through stem-themed movies.
Journal of STEM Arts, Crafts, and Constructions,
2(2):3.
Mastan, I. A., Sensuse, D. I., Suryono, R. R., and Kautsa-
rina, K. (2022). Evaluation of distance learning sys-
tem (e-learning): a systematic literature review. Jurnal
Teknoinfo, 16(1):132–137.
Nadi-Ravandi, S. and Batooli, Z. (2022). Gamifica-
tion in education: A scientometric, content and co-
occurrence analysis of systematic review and meta-
analysis articles. Education and Information Tech-
nologies, 27(7):10207–10238.
Nicolaou, C. (2023). The secret power of digital story-
telling methodology: Technology-enhanced learning
utilizing audiovisual educational content. In Enhanc-
ing Education Through Multidisciplinary Film Teach-
ing Methodologies, pages 235–246. IGI Global.
Obloberdiyevna, D. S. and Odilkhonovna, K. U. (2022).
Teaching languages using modern educational meth-
ods. International Journal of Intellectual Cultural
Heritage, 2(3):105–111.
Pang-Ning, T., Steinbach, M., and Kumar, V. (2005). Introduction to data mining. Addison-Wesley.
Purnama, S., Ulfah, M., Ramadani, L., Rahmatullah, B.,
and Ahmad, I. F. (2022). Digital storytelling trends
in early childhood education in indonesia: A system-
atic literature review. Jurnal Pendidikan Usia Dini,
16(1):17–31.
Quah, C. Y. and Ng, K. H. (2022). A systematic literature
review on digital storytelling authoring tool in edu-
cation: January 2010 to january 2020. International
Journal of Human–Computer Interaction, 38(9):851–
867.
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D.,
Sutskever, I., et al. (2019). Language models are un-
supervised multitask learners. OpenAI blog, 1(8):9.
Ratih, G. K., Iriani, A., and Dwikurnaningsih, Y. (2022).
Kindergarten teachers training in integrating anti-
corruption education through storytelling and game.
Jurnal Obsesi: Jurnal Pendidikan Anak Usia Dini,
6(03):1628–1639.
Sai, A. B., Mohankumar, A. K., and Khapra, M. M. (2022).
A survey of evaluation metrics used for nlg systems.
ACM Computing Surveys (CSUR), 55(2):1–39.
Salton, G. (1983). Introduction to modern information re-
trieval. McGraw-Hill.