frames. This can be done, for example, by computing
the optical flow, as in the UltraTrack software (Far-
ris and Lichtwark, 2016). A disadvantage of these
methods is the cumulative error, which requires man-
ual correction after several frames. In addition, mis-
alignments may arise from significant changes in
the appearance and intensity of the structures between
successive frames. These problems occur particularly
with large displacement fields due to fast motion and
insufficient sampling rates of most currently available
commercial devices.
Methods based on texture feature detection form
the second category, which includes the Hough transform
(Zhou and Zheng, 2008), the Radon transform (Zhao and
Zhang, 2011), and the vesselness filter (Rana et al., 2009).
The disadvantage of these methods is that the result
of the angle estimation may be distorted by speckle
noise and intramuscular blood vessels, which modify
the characteristics of the muscle fascicles.
The third category includes deep learning ap-
proaches. Cunningham et al. proposed deep residual
networks (Cunningham et al., 2017) and convolutional
neural networks (Cunningham et al., 2018) to estimate the
muscle fascicle orientation. One problem with using deep
learning methods is that they require a large amount
of manually measured image data to achieve good re-
sults. Another difficulty is that the image acquisition
and the image distortions depend on the ultrasound
transducer, so that data sets adapted to the device are
required.
In the present article, we compare two estab-
lished methods from the literature with two new ap-
proaches to determine the orientation of textures. As
established methods, we consider vesselness filtering
(Rana et al., 2009) and Radon transform (Zhao and
Zhang, 2011). We compare these with the recently
proposed gray value co-occurrence matrix based
texture orientation estimation (Zheng et al., 2018) and
the calculation of the angle using the projection pro-
file (Dalitz et al., 2008). The latter method has been
used for some time in document image analysis for
estimating the rotation of binary documents. Here we
demonstrate that it can be used for gray level images,
too.
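The projection profile idea carries over to gray level images in a straightforward way: rotate the image over a grid of candidate angles and pick the angle at which the row-wise projection profile is "sharpest". The following is a minimal sketch of this idea; the variance criterion, the angle grid, and the linear interpolation order are our assumptions, not the exact configuration of Dalitz et al. (2008).

```python
import numpy as np
from scipy.ndimage import rotate

def projection_profile_angle(img, angles=np.arange(-45.0, 45.5, 0.5)):
    """Estimate texture orientation by maximizing the variance of the
    row-wise projection profile over candidate rotation angles.
    (Sketch only: criterion and angle grid are assumptions.)"""
    best_angle, best_score = 0.0, -np.inf
    for a in angles:
        rotated = rotate(img.astype(float), a, reshape=False, order=1)
        profile = rotated.sum(axis=1)   # row-wise sums of gray values
        score = profile.var()           # aligned stripes -> peaky profile
        if score > best_score:
            best_angle, best_score = a, score
    return best_angle
```

When the texture stripes are aligned with the image rows, each row sum is nearly constant within a stripe but varies strongly across stripes, so the profile variance peaks at the correct rotation.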
In order to evaluate the quality of the different al-
gorithms, we have compared their results with man-
ual estimations of the pennation angle by different ex-
pert observers. As evaluation criteria, we utilized the
intra-class correlation, the mean absolute percentage
error with respect to the inter-observer average, and the
percentage of results within the inter-observer range.
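The latter two criteria can be sketched as follows; the function name and interface are illustrative, not taken from the paper.

```python
import numpy as np

def evaluate(algo, observers):
    """algo: (n_frames,) algorithm angles; observers: (n_frames, n_obs)
    manual angles. Returns the mean absolute percentage error w.r.t. the
    inter-observer average and the percentage of frames where the result
    lies inside the inter-observer range. (Illustrative helper.)"""
    mean = observers.mean(axis=1)
    mape = 100.0 * np.mean(np.abs(algo - mean) / np.abs(mean))
    lo, hi = observers.min(axis=1), observers.max(axis=1)
    in_range = 100.0 * np.mean((algo >= lo) & (algo <= hi))
    return mape, in_range
```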
This article is organized as follows: in sections 2
and 3 we describe the implemented algorithms, section 4
describes the evaluation method, section 5 discusses
the results and compares the algorithm performances,
and in section 6 we draw some conclusions and give
recommendations for a practical utilization of the al-
gorithms.
2 REGION OF INTEREST
EXTRACTION
To determine the region of interest (ROI), each video
frame is evaluated separately. First, the two black
areas (see Fig. 1) are removed. Then, to enhance
the aponeuroses, vesselness filtering (see
section 3.2) is applied. Next, Otsu's threshold-
ing method is used to generate a binary image from the
filtered image. In this binary image, the two largest seg-
ments, which correspond to the two aponeuroses, are
selected. Straight lines are fitted to the lower segment
border of the superficial aponeurosis and to the up-
per segment border of the deep aponeurosis using the
least squares method. The height of the ROI is given by
the difference between the smallest y-value of
the lower aponeurosis minus 10 pixels and the largest
y-value of the upper aponeurosis plus 10 pixels. The
width of the ROI is calculated from the width of the
image minus a safety area of 10 pixels to the left and
right borders. This ensures that the ROI is always po-
sitioned within the muscle. As the noise level or the
orientation angle may vary over the entire ROI, we
additionally subdivided the entire region horizontally
into eight overlapping subregions. For a fully auto-
mated process, it would be necessary to automatically
pick the subregion with the “best” image quality. To
characterize this quality, we computed, for every
subregion, the gray value variance as a measure of
contrast, and the mean gradient value and the maximum
value of the gradient histogram as measures of
edge sharpness.
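These three quality measures can be sketched as follows; the number of histogram bins is an assumption.

```python
import numpy as np

def subregion_quality(roi):
    """Quality measures for one subregion: gray value variance (contrast),
    mean gradient magnitude and the maximum of the gradient histogram
    (edge sharpness). (Sketch; the bin count is an assumption.)"""
    roi = roi.astype(float)
    gy, gx = np.gradient(roi)       # per-axis finite differences
    grad = np.hypot(gx, gy)         # gradient magnitude
    hist, _ = np.histogram(grad, bins=64)
    return roi.var(), grad.mean(), hist.max()
```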
3 FASCICLE DIRECTION
ESTIMATION
For the determination of the fiber orientation we used
different methods, which are described in the follow-
ing. These methods were applied either directly to the
ROI or after a pre-processing step for fascicle en-
hancement, in which a vesselness filter or a Radon
transformation was applied. Tbl. 1 shows the investigated com-
binations for pre-processing and fascicle orientation
estimation.
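For illustration, a single-scale Frangi-style vesselness filter can be sketched with Gaussian derivatives; bright, elongated structures such as fascicles and aponeuroses receive high responses. The parameters sigma, beta and c below are assumed values, not the configuration of Rana et al. (2009).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness2d(img, sigma=2.0, beta=0.5, c=15.0):
    """Single-scale 2-D Frangi-style vesselness (sketch with assumed
    parameters): high response on bright, line-like structures."""
    img = img.astype(float)
    # Hessian entries from Gaussian derivatives (axis 0 = y, axis 1 = x)
    hxx = gaussian_filter(img, sigma, order=(0, 2))
    hyy = gaussian_filter(img, sigma, order=(2, 0))
    hxy = gaussian_filter(img, sigma, order=(1, 1))
    # eigenvalues of the 2x2 symmetric Hessian
    tmp = np.sqrt((hxx - hyy) ** 2 + 4.0 * hxy ** 2)
    l1 = 0.5 * (hxx + hyy + tmp)
    l2 = 0.5 * (hxx + hyy - tmp)
    # sort so that |l1| <= |l2|
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    rb = np.abs(l1) / (np.abs(l2) + 1e-10)   # blob vs. line measure
    s = np.hypot(l1, l2)                     # second-order structureness
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s ** 2 / (2 * c ** 2)))
    v[l2 > 0] = 0.0                          # keep bright ridges only
    return v
```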
VISAPP 2020 - 15th International Conference on Computer Vision Theory and Applications