of images to work with, it is possible to rewrite the
equations exactly for the specific number of selected
images (five in the example, line 2, Figure 5). Equa-
tions 3 and 4 may thus be rewritten as follows:
q_1 = (p_1^2) + (p_2 + p_3 + p_4 + p_5)                      (5)
q_2 = ((p_1^2 + p_2^2)/4) + ((p_3 + p_4 + p_5)/2)            (6)
q_3 = ((p_1^2 + p_2^2 + p_3^2)/9) + ((p_4 + p_5)/3)          (7)
q_4 = ((p_1^2 + p_2^2 + p_3^2 + p_4^2)/16) + (p_5/4)         (8)
i = max(q_1, q_2, q_3, q_4)                                  (9)
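As a quick sanity check, the tailored equations can be evaluated per pixel in plain JavaScript. This is only a sketch of the arithmetic, not GEE code; p1 to p5 stand for the pixel values of the five selected images:

```javascript
// Per-pixel evaluation of equations 5-9 for five images (illustrative sketch).
function tailoredMax(p1, p2, p3, p4, p5) {
  const q1 = (p1 * p1) + (p2 + p3 + p4 + p5);                          // eq. 5
  const q2 = ((p1 * p1 + p2 * p2) / 4) + ((p3 + p4 + p5) / 2);         // eq. 6
  const q3 = ((p1 * p1 + p2 * p2 + p3 * p3) / 9) + ((p4 + p5) / 3);    // eq. 7
  const q4 = ((p1 * p1 + p2 * p2 + p3 * p3 + p4 * p4) / 16) + (p5 / 4); // eq. 8
  return Math.max(q1, q2, q3, q4);                                     // eq. 9
}
```

Adding a sixth image would require a fifth equation and new divisors, which is precisely the maintainability concern raised below.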
Obviously, these new equations change depending on the actual number of selected images, which has direct implications for the maintainability of the code. However, this modification makes it possible to implement the algorithm using the so-called GEE image expressions (Google, 2016d). An expression is a method of the ee.Image class that parses a textual representation of a mathematical operation and applies it to the channels of the image. The limitation is that an expression can only involve a single image, not several.
Since the algorithm has stored the single-channel max images in an image collection, expressions cannot be used directly to compute the equations: expressions work with the channels of a single image only. To solve this problem, the algorithm was adapted again to transform the only1BandCollection collection into a single image composed of these single-channel images. This transformation made the use of expressions possible.
The solution is to iterate through the whole only1BandCollection image collection, calling a function once for each image it contains. The function appends the current single-channel image in the collection (first parameter) as a new channel to an output, results image (second parameter). Once the iterator has invoked the function for each image in the collection, the result is the sought multichannel, merged image. Image collections provide the method iterate for this purpose. It takes two parameters: the name of the aforementioned function (mergeChannels in the example) and the results image, which must initially be empty. The function mergeChannels is defined in lines 24-31 of Figure 5. The iterator itself is invoked in lines 33-36, and the result is assigned to mergedImage.
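The accumulation pattern behind iterate can be sketched in plain JavaScript, with Array.prototype.reduce playing the role of ee.ImageCollection.iterate. The image objects here are simplified stand-ins for illustration, not the GEE API:

```javascript
// Each "image" is modeled as an object holding a list of channel names.
// mergeChannels appends the current image's channel to the accumulated image,
// mirroring the (current, accumulated) parameter order described in the text.
function mergeChannels(current, accumulated) {
  return { channels: accumulated.channels.concat(current.channels) };
}

const only1BandCollection = [
  { channels: ['max1'] },
  { channels: ['max2'] },
  { channels: ['max3'] },
];

// reduce invokes mergeChannels once per image, starting from an empty image.
const mergedImage = only1BandCollection.reduce(
  (acc, img) => mergeChannels(img, acc),
  { channels: [] }
);
```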
Line 37 changes the type of the values stored in mergedImage to double to avoid precision problems when computing the equations. In lines 38-40, the logarithmic scale affecting Sentinel-1 imagery (Google, 2015b) is removed using a simple arithmetic operation, so the original values are restored. The resulting image is logRemovedImage.
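Assuming the usual decibel convention for the log-scaled Sentinel-1 values, the arithmetic operation in question would amount to the following per-pixel conversion (a sketch of the formula, not the GEE code itself):

```javascript
// Convert a log-scaled (dB) value back to linear scale: linear = 10^(dB / 10).
function dbToLinear(db) {
  return Math.pow(10, db / 10);
}
```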
3.3 The Tailored Equations
Now it is possible to implement the tailored version of the algorithm, as shown in equations 5 to 9, using expressions. The first step is to rename the channels in logRemovedImage (lines 41-43 of Figure 5) so that they can be referred to easily; the previous operations gave the channels rather unwieldy names. Note that this command is fragile, since it depends on the number of images selected at the beginning of the algorithm.
Expressions refer to the different channels of an image using labels. A dictionary (bandMap) is defined in lines 44-51 of the example; it reduces the amount of code required for each expression. It labels channels 1 to 5 of image logRemovedImage as a, b, c, d and e respectively, the actual labels used in the expressions. Equations 5, 6, 7 and 8 are then implemented as expressions in lines 52-59, 62-69, 72-79 and 82-89 respectively. Equation 9 is implemented sequentially, computing the partial maximum just after the evaluation of each expression (lines 70-71, 80-81 and 90-91). Note how the expressions mirror the equations. The result of this process is stored in an image, the result. Finally, it is worth remarking that the expressions used by the algorithm depend directly on the number of images to be processed, set at the beginning of the code, which again compromises the maintainability of the code.
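The band labels and the running-maximum evaluation of equation 9 can be mimicked in plain JavaScript. The pixel values below are hypothetical, and the functions stand in for the textual GEE expressions:

```javascript
// Hypothetical pixel values under the labels a-e used by the expressions.
const bands = { a: 2, b: 1, c: 3, d: 0, e: 1 };

// One function per expression, mirroring equations 5-8.
const exprs = [
  ({ a, b, c, d, e }) => (a * a) + (b + c + d + e),                       // eq. 5
  ({ a, b, c, d, e }) => ((a * a + b * b) / 4) + ((c + d + e) / 2),       // eq. 6
  ({ a, b, c, d, e }) => ((a * a + b * b + c * c) / 9) + ((d + e) / 3),   // eq. 7
  ({ a, b, c, d, e }) => ((a * a + b * b + c * c + d * d) / 16) + (e / 4) // eq. 8
];

// Equation 9, evaluated sequentially: the partial maximum is updated
// right after each expression, as in lines 70-71, 80-81 and 90-91.
let result = -Infinity;
for (const f of exprs) result = Math.max(result, f(bands));
```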
3.4 Clipping
Lines 92-94 in the example define a rectangle of interest. It is used to clip the results image, so that it covers only the area delimited by this rectangle. Clipping is essential; otherwise, the algorithm would try to compute a result for a very large area, as large as a full Sentinel-1 image, taking about 20 hours of elapsed time to complete for this example. Considering that the algorithm runs in parallel on several Google servers, this amount of time is far from negligible. The clipping operation takes place in line 95. Note that the algorithm has neither displayed nor stored the result yet, so, according to the deferred processing approach (section 2.6), no computations have taken place either; that is why it is possible to clip the results image after evaluating the expressions at no computational cost.
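The reason the late clip is free can be sketched with a minimal deferred-processing model: operations only record themselves in a pending computation description, and no work happens until the result is actually requested. The object and method names here are illustrative, not the GEE API:

```javascript
// Minimal deferred-processing sketch: calls build up a description of the
// computation; nothing would actually run until evaluate() is invoked.
const pending = [];
const lazyImage = {
  expression(name) { pending.push('expression:' + name); return lazyImage; },
  clip(region)     { pending.push('clip:' + region);     return lazyImage; },
  evaluate()       { return pending.slice(); } // only here would work happen
};

// The clip is appended to the pending description, so applying it after the
// expressions costs nothing: no pixels have been computed yet.
lazyImage.expression('q1').clip('rectangle');
```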
3.5 Normalizing, Visualizing, Exporting
Although not shown in equations 5 to 9, the result had
to be interpreted as a probability map; thus, the results
image had to be rescaled to store values in the range
GISTAM 2017 - 3rd International Conference on Geographical Information Systems Theory, Applications and Management