applied rotation, also the order of rooms and connec-
tions is different within the samples generated from
one floor plan.
4.3 Extension of Floor Plans
The two different approaches need different strategies
to generate new floor plan aspects, as outlined below:
4.3.1 Block Generation Sequencers
In this approach, a new block is generated by feeding
a concatenation of the previous blocks and a sequence
of blank vectors into the model and reading the predicted
block from the model's output. The blank vector sequence
must be longer than the expected length of the predicted
block; a suitable length can be obtained by determining
the upper limit of block lengths in the training database.
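As a minimal sketch of this padding scheme, the following assumes a hypothetical sequence-to-sequence `model` callable (the paper's actual LSTM interface is not specified here); the predicted block is read off at the positions of the blank vectors:

```python
import numpy as np

def predict_next_block(model, previous_blocks, max_block_len):
    """Predict the next block by padding the input with blank vectors.

    `model` is a hypothetical callable mapping an input of shape
    (1, T, dim) to an output of the same shape. `max_block_len` is the
    upper limit of block length determined from the training database.
    """
    dim = previous_blocks.shape[1]
    blanks = np.zeros((max_block_len, dim))               # blank vector sequence
    sequence = np.concatenate([previous_blocks, blanks])  # previous blocks + blanks
    output = model(sequence[np.newaxis, ...])             # run the model once
    # Read the predicted block at the blank-vector positions.
    return output[0, len(previous_blocks):, :]
```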
4.3.2 Vector Prediction Sequencers
Following a metaphor by Alex Graves, in which se-
quence predictors used for sequence generation are
compared to a dreaming person (who iteratively treats
their own outputs as if they were real (Graves, 2013)),
this structure works like a dreaming person who gets
inspiration from outside and who combines that out-
side information with their flow of dreaming.
Because of that, we refer to this technique as the
shallowDream structure (see Figure 7).
Basically, this structure operates in two different
phases. During the first phase, the existing floor plan
(in this context also referred to as the concept) is fed
into the LSTM. During this phase of concept injection,
all outputs of the LSTM are ignored. After the concept
has been injected completely, the LSTM takes over
the generation of the structure's output, which also
serves as its own input. This phase is referred to
as the generation phase.
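The two phases can be sketched as follows, assuming a hypothetical `step` function that advances a stateful LSTM by one input vector and returns its output (the names `step` and `regenerate` are illustrative, not from the paper):

```python
import numpy as np

def shallow_dream(step, concept, stop_symbol, regenerate, max_steps=1000):
    """Sketch of the two shallowDream phases."""
    for vector in concept:        # phase 1: concept injection;
        y = step(vector)          # all outputs but the last are ignored
    generated = []
    for _ in range(max_steps):    # phase 2: generation
        y = regenerate(y)         # clean up the approximate output
        if np.array_equal(y, stop_symbol):
            break                 # stop symbol terminates generation
        generated.append(y)
        y = step(y)               # own output serves as next input
    return generated
```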
Using the shallowDream structure, we are able to
implement multiple different functions simply by al-
tering the concept type and the stop symbol. E.g., in
order to predict room connections, the concatenation
of block 1 and the control vector of a block 2 tag
(connection tag) is used as the concept, and the control
vector of a room layout tag is used as the stop symbol.
The control vector at the end of a concept instructs
the LSTM to generate the desired tag type
(and hence to start the new block).
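For illustration, the concept and stop symbol for connection prediction might be assembled as below; the one-hot control vectors are an assumed encoding, not the paper's actual feature layout:

```python
import numpy as np

# Hypothetical control vectors marking the tag types (assumed encoding).
connection_tag = np.array([0., 1., 0.])   # block 2 (connection) tag
layout_tag     = np.array([0., 0., 1.])   # room layout tag

def connection_prediction_concept(block1):
    """Concept = block 1 followed by the connection tag's control vector;
    the layout tag's control vector serves as the stop symbol."""
    concept = np.vstack([block1, connection_tag])
    return concept, layout_tag
```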
Even after intensive training, the output produced
by the LSTM only approximates the intended feature
vectors. Consequently, these feature vectors have to
be regenerated during the generation phase. For that
purpose, three different strategies are proposed.
No Regeneration In this primitive approach, the
current feature vector is reinserted into the model's
input without any modifications.
Vector-Based Regeneration This strategy solely
utilizes knowledge about the feature vector's struc-
ture. All boolean components are recovered by map-
ping them to whichever of 1.0 or 0.0 they are closer
to, while the real-valued components remain unaltered.
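A minimal sketch of this strategy, assuming the positions of the boolean components are known from the feature vector's structure (the `boolean_mask` parameter is illustrative):

```python
import numpy as np

def regenerate_vector(vector, boolean_mask):
    """Vector-based regeneration: boolean components are snapped to
    whichever of 0.0/1.0 they are closer to; real-valued components
    are left unaltered."""
    out = np.array(vector, dtype=float)
    out[boolean_mask] = (out[boolean_mask] >= 0.5).astype(float)
    return out
```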
Sequence-Based Regeneration In this approach, a
state machine keeps track of the block and tag the
sequence is currently in (thereby utilizing knowl-
edge about the sequence structure). Based on that in-
formation, a vector is regenerated by calculating the
most likely valid vector.
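One way such a state machine could look is sketched below; the tag order, control vectors, and the 0.5 distance threshold are all assumptions for illustration:

```python
import numpy as np

# Hypothetical control vectors for the tag types (assumed encoding).
CONTROL = {"rooms": np.array([1., 0., 0.]),
           "connections": np.array([0., 1., 0.]),
           "layout": np.array([0., 0., 1.])}
TAG_ORDER = ["rooms", "connections", "layout"]

class SequenceRegenerator:
    """Tracks which tag the sequence is currently in and regenerates each
    raw output as the most likely vector that is valid at that point."""

    def __init__(self):
        self.state = 0  # index of the current tag in TAG_ORDER

    def regenerate(self, raw):
        # Only tags at or after the current state are valid next tags.
        candidates = TAG_ORDER[self.state:]
        nearest = min(candidates,
                      key=lambda t: np.linalg.norm(raw - CONTROL[t]))
        if np.linalg.norm(raw - CONTROL[nearest]) < 0.5:
            self.state = TAG_ORDER.index(nearest)   # advance the state machine
            return CONTROL[nearest]                 # snap to the control vector
        # Otherwise treat it as an ordinary (here all-boolean) feature vector.
        return (np.asarray(raw) >= 0.5).astype(float)
```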
5 INTEGRATION OF THE
PROPOSED MECHANISM INTO
ARCHISTANT
In this work, we restrict ourselves to the two follow-
ing functions:
• Room Connection Generation. Given a set of
rooms (each described by a center position coor-
dinate and a room function), connections are gen-
erated between them, turning the set of rooms into
a room graph.
• Room Layout Generation. Given a room graph, a
layout (i.e. a polygon describing its surrounding
walls) is generated for each room.
For the sake of simplicity, we added only a single but-
ton to the WebUI, which we labeled "Creativity".
Based on the current state of the user's work, the ap-
propriate function is selected automatically.
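The dispatch behind the button might look like the following; the state representation and function names are hypothetical, as the paper does not specify the selection logic:

```python
def select_function(rooms, connections, layouts):
    """Hypothetical dispatch for the single 'Creativity' button: the
    function is chosen from the current state of the user's work."""
    if rooms and not connections:
        return "room_connection_generation"  # rooms only: build the graph
    if connections and not layouts:
        return "room_layout_generation"      # graph present: generate layouts
    return None                              # nothing applicable
```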
6 EXPERIMENTS
We trained LSTMs based on our two sequencing ap-
proaches. In all cases, we used a training database
with 200 entries, a test database of 40 entries, a
blowup factor of 30, 500 LSTM cells and a learning
rate of 0.01.
6.1 Quantitative Analysis
In order to compare the performances of our differ-
ent approaches, we calculated the room connection
ICPRAM 2018 - 7th International Conference on Pattern Recognition Applications and Methods