Because an image stores each pixel as a single discrete value, the expression shown in Equation 1 has to be approximated. The kernel of the Gaussian blur determines the radius around a pixel within which this approximation is carried out (Gedraite E., 2011) (Deng G., 1993). Canny edge detection was developed by John F. Canny in 1986 and is a widely used edge detection operation that works by detecting discontinuities in an image's brightness. Edge detection algorithms are used for segmentation and data extraction by tracing the boundaries of objects inside an image (Canny J., 1986). The algorithm is composed of five steps and can only be applied to greyscale images. In the first step, the Canny edge detection algorithm smooths the given image with the Gaussian-blur function. This results in a smoother image with less noise, which could otherwise be interpreted as false edges. In the second step, four different filters are used to allow the detection of horizontal, vertical, and diagonal edges. This is done by applying an edge detection operator to the image, which calculates the first derivative in the horizontal or vertical direction. In the next step, the edges of the image are thinned out so that smaller, insignificant edges disappear. This is done by comparing the intensity of each pixel with that of its neighbors in the positive and negative gradient directions. If the current pixel's intensity is higher than its neighbors', it is kept as an edge; otherwise, it is suppressed. The edges remaining after this step are categorized as either "strong" or "weak" edges based on the value of their derivatives. To separate the remaining noise from the actual edges, in the last step of the algorithm, pixels marked as weak are promoted to strong ones, provided they neighbor at least one strong pixel in the image (Canny J., 1986).
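As an illustration only, the steps above could be chained with a general-purpose library such as OpenCV; the following sketch is not PySpot's actual implementation, and the kernel size and hysteresis thresholds are assumed example values.

```python
import cv2
import numpy as np

def detect_edges(grey: np.ndarray) -> np.ndarray:
    """Return a binary edge map for an 8-bit, single-channel greyscale image."""
    # Step 1: Gaussian blur suppresses noise that could be mistaken for edges.
    blurred = cv2.GaussianBlur(grey, (5, 5), sigmaX=1.4)
    # Steps 2-5 (gradient filtering, non-maximum suppression, double
    # thresholding and hysteresis) are bundled inside cv2.Canny; the two
    # limits separate "weak" from "strong" edge candidates.
    return cv2.Canny(blurred, threshold1=50, threshold2=150)
```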
2.5 Using Thresholding for Improved
ROI Detection
Even with the automatic ROI detection feature or the visual boundaries provided by edge detection, finding the ideal shape and size for an ROI can be quite difficult. The automatic detection might not find the exact ROIs desired by the user due to an unsuitable set of parameters or too much noise in the image. The edge detection functionality cannot always provide a full outline of the objects because of intensity fluctuations along the border. One way to improve the detection step and narrow down the number of potential ROIs is thresholding. A thresholding functionality sets every pixel of an image to either foreground or background based on a limit given by the user. This creates a sharp edge between the contained elements and their background, which facilitates finding an ROI's boundary when the result is used as the base image for automatic detection.
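A minimal sketch of such a thresholding step, assuming the image is held as an 8-bit greyscale NumPy array; the function name and the chosen foreground/background values (255 and 0) are illustrative, not taken from PySpot's code:

```python
import numpy as np

def apply_threshold(image: np.ndarray, limit: int) -> np.ndarray:
    """Set every pixel to foreground (255) or background (0) based on a user-defined limit."""
    # Pixels at or above the limit become foreground, all others background.
    return np.where(image >= limit, 255, 0).astype(np.uint8)
```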
2.6 Image Rescaling
Images acquired by the workflow can vary in size, depending on the camera settings and the data to be analyzed. Using a fixed display dimension for the images shown in PySpot would therefore be problematic, as rescaling them to fit it can degrade their quality. Instead, images in PySpot are
displayed in their original size. While this works per-
fectly for most images, some images’ original size is
too small to see the elements clearly or to select ROIs
with the utmost precision. Therefore, PySpot includes
a feature to change the size of the images by a specific
factor between one and ten. The user can freely change the scaling of the image and zoom in and out as desired. Resizing can be performed at any point during the analysis process; when it happens, all elements contained in the image are adapted to the new size as well.
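A possible sketch of such an integer rescaling step, assuming Pillow is used for the displayed image; the function name, the factor check, and the choice of nearest-neighbour resampling are assumptions for illustration:

```python
from PIL import Image

def rescale(image: Image.Image, factor: int) -> Image.Image:
    """Rescale an image by an integer factor between one and ten."""
    if not 1 <= factor <= 10:
        raise ValueError("scaling factor must lie between 1 and 10")
    width, height = image.size
    # Nearest-neighbour resampling keeps individual pixels visible when zooming in.
    return image.resize((width * factor, height * factor), resample=Image.NEAREST)
```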
2.7 Axes Visualization
A 3D image in PySpot is displayed layer by layer, using a scrollbar to move up and down through the individual sections of the image along one axis. This provides an easy way to modify and analyze the currently displayed axis. As a side effect, it is not possible to look at any axis besides the one currently displayed, which calls for a dedicated feature to view all axes simultaneously. The axes visualized are
the XY-axis (which most analysis processes use), the
XZ-axis, and the YZ-axis. When the feature is started, a new window opens to prevent the main window from becoming overcrowded; this required the implementation of an entirely new application. The new application contains three individual graphical views, one for each axis to be displayed, scrollbars to allow scrolling, and a checkbox to switch between a colored and a greyscale image. Upon opening the application, the pixel values of each image along an axis are extracted from the NumPy array. The extracted values are converted into an image and appended to an array of images that holds all images of that axis. This structure provides an easy way to scroll through the layers of an axis. Once every image has been converted and saved into its respective array, the application opens, visualizing all axes in a single window.
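The per-axis extraction described above can be sketched as follows, assuming the 3D image is stored as a NumPy array in (Z, Y, X) order; the axis order and the function name are assumptions for illustration:

```python
import numpy as np

def slices_along_axes(volume: np.ndarray):
    """Split a 3D image stack into three lists of 2D sections (XY, XZ, YZ)."""
    xy = [volume[z, :, :] for z in range(volume.shape[0])]  # scroll along Z
    xz = [volume[:, y, :] for y in range(volume.shape[1])]  # scroll along Y
    yz = [volume[:, :, x] for x in range(volume.shape[2])]  # scroll along X
    return xy, xz, yz
```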