
development company system. However, none of the participants using our approach omitted them. Thus, flagging “internal processing” events in our approach was effective. Software engineers are likely to miss internal processing in the requirements definition. By placing a flag on every event and checking each flag individually, software engineers can reduce the number of internal processing omissions.
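The flag-and-check step described above can be sketched as a small data structure. The event records, field names, and review workflow here are illustrative assumptions, not the paper's actual tool:

```python
from dataclasses import dataclass

# Hypothetical event record: the "internal" flag marks internal-processing
# events, which engineers tend to omit from requirements definitions.
@dataclass
class Event:
    description: str
    internal: bool = False  # flag for internal-processing events
    checked: bool = False   # set once an engineer has reviewed the event

def unchecked_internal_events(events):
    """Return internal-processing events that have not yet been reviewed."""
    return [e for e in events if e.internal and not e.checked]

events = [
    Event("The System outputs the name of the Administrator"),
    Event("The System calculates the total sales number", internal=True),
]
events[0].checked = True  # reviewing a non-internal event does not clear the list

for e in unchecked_internal_events(events):
    print(e.description)
# prints "The System calculates the total sales number"
```

Checking each flag individually, rather than scanning the scenario as a whole, is what surfaces the easily missed internal-processing events.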
Both approaches produced some differing points in the sentence compositions (Tables 3 – 6). As an example of a differing point in manual creation, participants mistook events in the software development company system, such as:
Participant’s Scenario: 
The System outputs the name. 
Correct Scenario: 
The System outputs the name of the Administrator 
to the Administrator. 
In this example, “of the Administrator to the Administrator” was omitted in the manual approach but not in ours, because our approach automatically generates the necessary base of events, which can be extracted from classes in conceptual models. As an example of a differing point in our approach, a participant’s event in the CD sales management system was incomplete, such as:
Participant’s Scenario: 
The System calculates the total sales number. 
Correct Scenario: 
The System calculates the total sales number by 
subtracting the number of the stock from the 
number of the arrival. 
In this example, “by subtracting the number of the stock from the number of the arrival” was not described because this event could not be extracted automatically from conceptual models. Overall, our approach produced fewer differing points than manual creation. Thus, by generating the base of scenarios automatically, our system helps prevent incorrect sentence compositions.
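The kind of base-event generation discussed above can be sketched as a simple template fill over conceptual-model classes. The model encoding and template string are assumptions for illustration, not the paper's actual generation rules:

```python
# Hypothetical conceptual model: class name -> list of attributes.
classes = {"Administrator": ["name"]}

def base_events(model):
    """Fill an output-event template for every class attribute in the model."""
    events = []
    for cls, attrs in model.items():
        for attr in attrs:
            # Template keeps the source ("of the X") and target ("to the X")
            # explicit, which is exactly what was omitted in manual creation.
            events.append(
                f"The System outputs the {attr} of the {cls} to the {cls}."
            )
    return events

print(base_events(classes))
# → ['The System outputs the name of the Administrator to the Administrator.']
```

Because the template always spells out the attribute's owner and recipient, the generated base cannot drop phrases like “of the Administrator to the Administrator”, while events with no counterpart in the model (such as the subtraction above) still require manual completion.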
Although a clear difference in the time required to create scenarios could not be confirmed, the results and questionnaire responses indicate that our approach supports creating scenarios with the necessary requirements from conceptual models more efficiently than manual creation.
The experiment validated our approach, but there are some threats to validity. The number of participants and the range of domains in this evaluation were limited, so increasing them may yield different results. In addition, the evaluation of scenario creation may depend on the judgment of the evaluator because scenarios are written in natural language, which is ambiguous. Therefore, a future evaluation based on stricter rules is needed.
6 RELATED WORK 
Many studies have sought to improve the quality of requirements. Kamalrudin et al. improved requirements quality using essential use cases (EUCs), which are shorter and simpler to describe than conventional use cases (Kamalrudin et al., 2011). To verify and improve the quality of requirements, they compared EUCs to templates in an EUC interaction patterns library. Additionally, Kof proposed an approach to identify missing information, such as messages, using message sequence charts (MSCs) (Kof, 2007). By translating textual scenarios into MSCs, missing information such as necessary objects and actions in scenarios is identified. These approaches identify omissions of requirements through an automatic transformation process, whereas our approach reduces omissions through interactions with clients.
Other works have focused on generating natural language from class diagrams. Burden and Heldal proposed a two-step process to generate natural language (Burden and Heldal, 2011). First, a class diagram is transformed into an intermediate linguistic model; next, that model is transformed into natural language text, using class diagrams as a Platform-Independent Model (PIM) in the MDA process. Meziane et al. also generated natural language specifications from class diagrams (Meziane et al., 2008), aiming to remove ambiguities in the natural language used in class diagrams. These approaches are similar to ours, but they aim to check consistency between models, whereas we strive to reduce missing requirements.
In the requirements definition phase, conceptual models are important artifacts that help software engineers and clients understand a software domain. Sagar et al. automatically created conceptual models from functional requirements written in natural language (Sagar and Abirami, 2014). They extracted design elements of conceptual models from functional requirements using part-of-speech (POS) tags, which classify words into linguistic categories called parts of speech, and classified their relationships by relation type. Then conceptual
MODELSWARD 2015 - 3rd International Conference on Model-Driven Engineering and Software Development