tage of recent advances in deep neural networks for object detection, semantic segmentation, and edge detection. We developed deep learning algorithms that used transfer learning to determine whether an image frame contained a truck and, if so, to localize the region of the frame where the truck was most likely present. We utilized a hybrid truck classification approach that integrated deep learning models with geometric truck features to recognize and classify truck attributes, such as tractor type, trailer type, and refrigeration units, that are useful for commodity prediction. Using logo text detection and recognition, we developed state-of-the-art techniques for extracting the vendor information corresponding to a truck. All this information is used for the final commodity classification.
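The final stage described above, mapping recognized logo text to a commodity class, can be sketched as a fuzzy vendor lookup. This is a minimal illustration, not the paper's implementation: the vendor names and commodity labels below are hypothetical placeholders, not the 26 logo classes used in the evaluation, and the fuzzy matching via `difflib` stands in for whatever matching the deployed system uses to absorb OCR errors.

```python
import difflib

# Hypothetical vendor-to-commodity table (illustrative entries only;
# not the actual logo classes from the paper's benchmark).
VENDOR_COMMODITY = {
    "walmart": "mixed retail goods",
    "fedex": "parcels",
    "sysco": "food service products",
}

def classify_commodity(recognized_text: str, cutoff: float = 0.6) -> str:
    """Map OCR output from a truck logo to a commodity class.

    Fuzzy string matching absorbs small recognition errors
    (e.g. 'Walmrt' still resolves to 'walmart').
    """
    query = recognized_text.strip().lower()
    matches = difflib.get_close_matches(query, VENDOR_COMMODITY, n=1, cutoff=cutoff)
    return VENDOR_COMMODITY[matches[0]] if matches else "unknown"
```

A lookup like this is cheap enough to run on an edge device after the heavier detection and recognition networks have produced the logo text.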
5 SUMMARY AND CONCLUSION
We have presented a novel end-to-end road video processing system that provides real-time, dynamic commodity information (indispensable for downstream tracking of commodity movements) by deploying sensors and edge devices at locations of interest. In addition, we have developed a new commodity classification benchmark based on logo data. To the best of our knowledge, it is the first dataset collected to evaluate commodity classification based on logo data, and it gives traffic engineers and researchers a means to systematically evaluate their freight classification models. Our results for 26 predominant logos derived from highway videos are very promising. A visualization system was developed to illustrate the concept of commodity classification. We believe that this accuracy can be further improved both by adding more annotated images to the dataset and by proposing an integrated technique that takes image-based matching into account.
ACKNOWLEDGEMENTS
This paper is based upon work supported by NSF CNS 1922782 and FDOT (BDV31-977-81, Truck Taxonomy and Classification Using Video and Weigh-In-Motion (WIM) Technology). The opinions, findings, and conclusions expressed in this publication are those of the author(s) and not necessarily those of the Florida Department of Transportation or the U.S. Department of Transportation.
VEHITS 2020 - 6th International Conference on Vehicle Technology and Intelligent Transport Systems