is used, then one axis is calculated first and saved to
a texture, after which the other axis is calculated. It
is worth noting that saving to the render texture one
more time than strictly necessary implies extra com-
putation time. If a "one step" approach is used, the
number of texture writes is reduced, but one must then
account for the added calculation (and logic) needed
to find the correct pixel in a single step. The two-step
approach was used in this project for debugging
purposes: errors were easy to locate because the
different stages of the program could be inspected
separately.
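
The two-pass dataflow can be sketched as follows; this is a minimal Python illustration of the structure, not the project's actual shader code, and pass_x/pass_y are hypothetical placeholder names:

```python
import numpy as np

def pass_x(corners):
    """Hypothetical stand-in for the x-axis reprojection; it simply
    averages the corner images so that the sketch runs."""
    return corners.mean(axis=0)

def pass_y(intermediate):
    """Hypothetical stand-in for the y-axis reprojection."""
    return intermediate.copy()

def reproject_two_step(corners):
    # Pass 1: compute one axis and save the result to a texture.
    # This intermediate stage can be inspected on its own, which is
    # what eased debugging -- at the cost of one extra texture write.
    intermediate = pass_x(corners)
    # Pass 2: compute the other axis from the saved texture.
    return pass_y(intermediate)

corners = np.zeros((4, 8, 8, 4), dtype=np.float32)  # four corner cameras, RGBA
print(reproject_two_step(corners).shape)            # (8, 8, 4)
```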
A sub-pixel correction is needed since the subim-
ages are computed from (up to) all four corner cameras.
One pixel from one of the corner cameras will be the
best match for any given subimage pixel, but this pre-
rendered pixel will not necessarily line up exactly
with the reprojected pixel being calculated. The off-
set differs slightly for each pre-rendered camera
because of their different corner positions. The closest
pixel match is up to 0.5 pixels off, and the offset can
be calculated, resulting in a position that lies between
pixels in the pre-rendered subimage. Linear interpo-
lation between these pixels then gives a colour value
for the reprojected subimage pixel. In our setup this
position can lie between two pixels on either the
x-axis or the y-axis.
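
A minimal sketch of this sub-pixel lookup along the x-axis, written in Python rather than the shader language used in the project (the function name is ours):

```python
import numpy as np

def sample_subpixel_x(image, x, y):
    """Linearly interpolate between the two horizontal neighbours of a
    fractional x position.  The fractional offset t is at most 0.5 to
    the nearest pixel, matching the sub-pixel correction above."""
    x0 = int(np.floor(x))
    x1 = min(x0 + 1, image.shape[1] - 1)
    t = x - x0                          # fractional offset in [0, 1)
    yi = int(round(y))
    return (1.0 - t) * image[yi, x0] + t * image[yi, x1]

img = np.arange(16, dtype=np.float32).reshape(4, 4)
print(sample_subpixel_x(img, 1.3, 2))   # 0.7 * img[2, 1] + 0.3 * img[2, 2] = 9.3
```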
The pixel values lend themselves quite well to in-
terpolation, but this is not the case for the depth map.
The depth map can easily be interpolated across sur-
faces; at the edges of objects, however, it is smudged
if the difference between neighbouring pixel depths is
large. An example would be a scene with an object
relatively close to the camera in front of a far more
distant background. Interpolation at the edge would
then produce three neighbouring pixels where one has
the depth of the background, one has the depth of the
object, and the middle pixel has a depth somewhere in
between, even though no surface exists at that depth.
The simple solution is not to interpolate the depth
while interpolating the colour values. This results in
an image where the pixel values and depth values do
not match completely but are quite close to correct.
The downside is that, since the pixel values are inter-
polated while the depth is not, the edge of an object
can extend beyond the edge in the depth map, effec-
tively spilling colour onto the neighbouring objects.
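
The resulting sampling rule, interpolated colour but un-interpolated depth, can be sketched as follows (again an illustration in Python, with names of our own choosing):

```python
import numpy as np

def sample_colour_nearest_depth(colour, depth, x, y):
    """Interpolate the colour linearly between the two horizontal
    neighbours, but take the depth from the nearest pixel instead of
    blending it, so object edges keep a real (un-smudged) depth."""
    x0 = int(np.floor(x))
    x1 = min(x0 + 1, colour.shape[1] - 1)
    t = x - x0
    yi = int(round(y))
    c = (1.0 - t) * colour[yi, x0] + t * colour[yi, x1]   # interpolated colour
    d = depth[yi, x0] if t < 0.5 else depth[yi, x1]       # un-interpolated depth
    return c, d

colour = np.random.rand(4, 4, 3).astype(np.float32)
depth  = np.array([[1.0, 1.0, 50.0, 50.0]] * 4, dtype=np.float32)  # object vs. background
print(sample_colour_nearest_depth(colour, depth, 1.6, 0))  # depth 50.0, never a blend like 30.4
```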
The computation of the pixel reprojection was
split into five steps (see Section 3.1). The first step
saves the depth to the alpha channel, and the second
computes the reprojected subimages on the x-axis be-
tween the pre-rendered corner images (see Figure 3).
The result was an image where the top and bottom
rows were filled with reprojected subimages. The rest
of the image was filled with the mean of the colour
values from the corner cameras, since the mean value
is a better guess for the pixel colours than missing the
information completely. The next step computed the
values from these two rows of subimages and thus
filled the remainder of the image with subimages re-
projected along the y-axis. Because the x-axis and
y-axis are computed in two steps, special care had
to be taken to avoid looking beyond the boundaries
of the reprojected images. First we check that the
position is within the image; after this, the sub-pixel
correction is performed. This correction utilises lin-
ear interpolation of two pixels, and if it is performed
sufficiently close to the edge of the image, the pixel
that resides inside the image is interpolated with a
pixel outside the image boundary. The problem was
solved by clamping all textures: clamping means that
the edge pixels are repeated beyond the boundaries of
the image. The complete image was rendered at four
times the resolution of the screen and then downsam-
pled to achieve anti-aliasing. The last step, after
downsampling, was scaling. The reason for this step
is that the previous steps rely on each subimage hav-
ing a resolution in whole pixels. This is, however,
not the case for our use, as one millimetre on the
screen of the HMD
occupies ≈ 83⅓ pixels.
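
The boundary handling can be illustrated with a small sketch; the project relies on the texture hardware's clamped addressing, while the Python below (our own illustration) makes the clamping explicit:

```python
import numpy as np

def sample_clamped_x(image, x, y):
    """Clamped texture lookup: if an interpolation tap falls outside
    the image, the edge pixel is repeated instead, mimicking texture
    clamping so the lookup never reads beyond the boundary."""
    h, w = image.shape[:2]
    x0 = int(np.floor(x))
    t = x - x0
    xa = min(max(x0, 0), w - 1)        # clamp both taps to the image
    xb = min(max(x0 + 1, 0), w - 1)
    yi = min(max(int(round(y)), 0), h - 1)
    return (1.0 - t) * image[yi, xa] + t * image[yi, xb]

img = np.arange(16, dtype=np.float32).reshape(4, 4)
print(sample_clamped_x(img, 3.4, 1))   # second tap clamps to the edge pixel img[1, 3]
```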
4 EXPERIMENT
When looking at the image difference, a complete
pixel match is shown as black (0), and since the
pixel differences are normalised, the image differ-
ence lies in the range [0;1]. We can see that our
method has a small image difference; the difference
is largest around the edges of objects (see Figure 4)
and where occlusion means that data is simply not
available. We can also see small pixel value differ-
ences in textures, but in general there are many black
or dark pixels, and thereby a good pixel match.
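
The normalisation is simply a division into the [0;1] range; assuming 8-bit images (the paper does not state more than the range), the difference image can be sketched as:

```python
import numpy as np

def difference_image(a, b):
    """Per-pixel absolute difference, normalised to [0, 1]: a complete
    match is black (0).  We assume 8-bit RGB input and average over the
    colour channels to get a grey difference map."""
    d = np.abs(a.astype(np.float32) - b.astype(np.float32)) / 255.0
    return d.mean(axis=-1)

a = np.full((2, 2, 3), 200, dtype=np.uint8)
b = a.copy(); b[0, 0] = 0               # one mismatching pixel
print(difference_image(a, b))           # ~0.784 at the mismatch, 0 elsewhere
```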
4.1 User Test
This experiment aims to test statistically whether
subjects can discriminate between the images cre-
ated with 120 virtual cameras (VC) in the Unity en-
gine and the image created with our pixel reprojec-
tion method (PR). The 120-camera image was created
by capturing the camera views to individual render
textures and combining them into a larger render tex-
ture that fits 15 × 8 subimages. These were down-
sampled and scaled to the appropriate screen size,
essentially the same method as when using pixel
reprojection.