Hence, we illustrate which functions can be employed for low-level processing on the Edge TPU. Figure 1 compares the extensive function set of TensorFlow and TensorFlow Lite with the highly limited set supported by the Edge TPU Compiler. From this comparison, we identify functions suitable for our targeted low-level image processing.
Regrettably, some promising func-
tions, such as MatMul, BatchMatMul, or
SparseTensorDenseMatMul for direct matrix opera-
tions, are not available for the Edge TPU. TensorFlow
provides the Conv2d operation, commonly used in
Convolutional Neural Networks (CNNs). However,
the Edge TPU only supports the separable version,
DepthwiseConv2D, which should suffice for most
filtering tasks.
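For illustration, a fixed two-dimensional kernel can be wrapped in a DepthwiseConv2D layer, one of the operations accepted for the Edge TPU; the helper name, kernel, and input size below are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np
import tensorflow as tf

def make_filter_model(kernel: np.ndarray, h: int = 480, w: int = 640) -> tf.keras.Model:
    """Wrap a single fixed 2-D kernel as a depthwise convolution on a grayscale image."""
    inp = tf.keras.Input(shape=(h, w, 1))
    layer = tf.keras.layers.DepthwiseConv2D(
        kernel_size=kernel.shape, padding="same", use_bias=False)
    out = layer(inp)
    model = tf.keras.Model(inp, out)
    # Depthwise kernel shape: (kh, kw, in_channels=1, depth_multiplier=1).
    layer.set_weights([kernel.reshape(*kernel.shape, 1, 1)])
    return model

# Example: a 5x5 box (averaging) filter.
box = np.full((5, 5), 1.0 / 25.0, dtype=np.float32)
model = make_filter_model(box)
```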
In low-level image processing, convolution serves as the mathematical foundation of linear filtering: a one- or multi-dimensional image is convolved with a filter kernel of matching dimensionality. If the filter kernel is separable, the filtering can be carried out as successive one-dimensional convolutions along each dimension, which saves arithmetic operations and thus accelerates processing.
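As a brief illustration of this standard separability identity (notation ours, not the paper's later formulation): a separable kernel factors into an outer product of one-dimensional kernels, so the two-dimensional convolution can be applied as two one-dimensional passes,

$$
h(x, y) = h_x(x)\, h_y(y)
\;\Rightarrow\;
(I * h)(x, y) = \bigl((I * h_x) * h_y\bigr)(x, y),
$$

reducing the cost for an $N \times N$ kernel from $N^2$ to $2N$ multiplications per output pixel.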
To this end, Deep Learning (DL) models are developed that implement deconvolution filters. Their parameters are estimated in a training process using original and degraded images, which also captures otherwise unknown properties of the image acquisition system.
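As a rough sketch of this idea (layer choice, sizes, and training setup are our assumptions, not necessarily the paper's), such a deconvolution filter can be expressed as a single learnable DepthwiseConv2D layer trained on pairs of degraded and original images:

```python
import numpy as np
import tensorflow as tf

FILTER_SIZE = 15          # hypothetical FIR filter size
H, W = 480, 640           # hypothetical frame size

# Restoration model: one learnable depthwise FIR filter over a grayscale image.
inp = tf.keras.Input(shape=(H, W, 1))
out = tf.keras.layers.DepthwiseConv2D(
    kernel_size=FILTER_SIZE, padding="same", use_bias=False)(inp)
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse")

# Stand-ins for real training data: pairs of degraded and original images.
degraded = np.random.rand(8, H, W, 1).astype("float32")
original = np.random.rand(8, H, W, 1).astype("float32")
model.fit(degraded, original, epochs=1, batch_size=2)
```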
Our research demonstrates that these deconvolu-
tion models can be effectively implemented as fi-
nite impulse response (FIR) filters on Edge TPU
hardware, as provided by the Coral USB accelerator
(Coral, 2022).
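The usual deployment path to the Coral Edge TPU is full-integer quantization of the trained Keras model followed by the Edge TPU compiler; the sketch below assumes the restoration model from above and illustrative file names and calibration data:

```python
import numpy as np
import tensorflow as tf

# Minimal stand-in for the trained restoration model (see sketch above).
inp = tf.keras.Input(shape=(480, 640, 1))
out = tf.keras.layers.DepthwiseConv2D(15, padding="same", use_bias=False)(inp)
model = tf.keras.Model(inp, out)

def representative_dataset():
    # Calibration samples for full-integer quantization; real frames belong here.
    for _ in range(10):
        yield [np.random.rand(1, 480, 640, 1).astype("float32")]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("deconv_filter.tflite", "wb") as f:
    f.write(converter.convert())

# Afterwards, on the host:
#   edgetpu_compiler deconv_filter.tflite
```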
The paper is structured as follows:
First, it presents the tools employed to apply the Edge TPU to preprocessing video streams. This is followed by an examination of the processing power available in the matrix multiplication unit of the Edge TPU in comparison with the requirements of the targeted preprocessing model.
Next, the paper delves into the mathematical for-
mulation of convolution to model the image degrada-
tion process. It introduces the deconvolution model,
which is utilized to estimate the restoration param-
eters necessary for low-level processing. The paper
also elucidates how model parameters for restoration
can be derived from undistorted and distorted exam-
ple images.
The final section of the paper presents experiments on model estimation and its execution on the Edge TPU using sample data. It also evaluates the results of the restoration process in relation to the model parameters and demonstrates the advantages of large filter sizes in several respects.
2 RELATED WORK
Low-level processing stages such as filtering and noise suppression, although often not mentioned explicitly, are of paramount importance for most video stream analysis applications. To address this, (Basler, 2021a) has incorporated an image signal processor into the digital camera video pipeline for low-level processing. This processor can be implemented either as a customized chip directly behind the camera sensor or as an additional component within a processor specially designed for machine learning and computer vision tasks (Basler, 2021b).
In their work, (Sun and Kist, 2022) provide a comprehensive overview of the properties and application areas of Edge TPUs and also discuss their general limitations.
Additionally, (Abeykoon et al., 2019) successfully
ported networks for image restoration to the Edge
TPU, showcasing its versatility.
Moreover, (Zeiler et al., 2010) introduce Deconvolutional Networks, an approach for the unsupervised construction of hierarchical image representations with applications in low-level tasks such as denoising and in feature extraction for object recognition.
Furthermore, (Lecun et al., 1998) demonstrates that hand-crafted feature extraction can advantageously be replaced by carefully designed learning machines that operate directly on pixel images, as exemplified by character recognition.
In their study, (Markovtsev, 2019) delves into the interaction between TensorFlow and the Edge TPU.
They discuss the availability of convolutional oper-
ations and fully connected neural inference opera-
tions on the Edge TPU, taking advantage of the robust
arithmetic hardware of the TPU. Furthermore, they
demonstrate how motion blur can be simulated using
the Edge TPU with DepthwiseConv2d.
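A minimal sketch of that idea (our own illustrative kernel and sizes, not the cited implementation): a uniform horizontal smear applied with a depthwise convolution:

```python
import numpy as np
import tensorflow as tf

LENGTH = 9                                   # hypothetical blur length in pixels
kernel = np.zeros((1, LENGTH, 1, 1), dtype=np.float32)
kernel[0, :, 0, 0] = 1.0 / LENGTH            # uniform horizontal motion-blur kernel

frame = tf.random.uniform((1, 480, 640, 1))  # stand-in grayscale frame
blurred = tf.nn.depthwise_conv2d(
    frame, kernel, strides=[1, 1, 1, 1], padding="SAME")
```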
In a different research effort, (Yazdanbakhsh et al., 2021) reported on the inference of 234K distinct convolutional neural networks on three categories of Edge TPU accelerators. The study measured latency while varying the depth and width of the computation graph and revealed high inference accuracy.