Figure 4: Accuracy curves of the model (Picture credit: Original).
Nevertheless, it is worth noting that the validation accuracy, at 55%, did not reach the levels of the training metrics. Such discrepancies between training and validation performance are common in machine learning and indicate that the model's generalization to unseen data can still improve. In summary, the DenseNet model demonstrated substantial learning and achieved high accuracy in classifying emotions in the training data, but its validation accuracy fell short of the training metrics. This gap suggests that the model requires additional fine-tuning to generalize better to unseen data. Additional training epochs, fine-tuning, and further analysis could narrow the gap between training and validation performance, and this iterative process would improve the model's ability to recognize pet facial expressions in real-world scenarios.
While the DenseNet model exhibited promising results and a strong learning capability, there is still room for refinement. With continued optimization, it has the potential to classify emotions accurately and to capture pet facial expressions in diverse, previously unseen contexts. Patient, iterative improvement will be crucial to maximizing the model's effectiveness; the sketch below illustrates one possible form such optimization could take.
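As a concrete illustration of these optimization directions, the following is a minimal sketch of how on-the-fly data augmentation, dropout, and early stopping could be combined around a DenseNet backbone to narrow the train-validation gap. It assumes a TensorFlow/Keras implementation, a DenseNet121 backbone, 224x224 RGB inputs, and a hypothetical number of emotion classes; none of these details are specified in the paper.

# Minimal sketch (TensorFlow/Keras assumed; the paper does not name a
# framework). NUM_CLASSES and the input size are hypothetical placeholders.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # hypothetical number of emotion classes

# Light augmentation applied only at training time.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

# Pretrained DenseNet backbone, frozen for the first training stage.
base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False

inputs = layers.Input(shape=(224, 224, 3))
x = augment(inputs)
x = tf.keras.applications.densenet.preprocess_input(x)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)  # extra regularization against overfitting
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = models.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Stop once validation accuracy plateaus and keep the best weights.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_accuracy", patience=5, restore_best_weights=True)

# model.fit(train_ds, validation_data=val_ds, epochs=50,
#           callbacks=[early_stop])  # train_ds/val_ds are hypothetical

Freezing the backbone and training only the classification head first, then unfreezing the top layers at a lower learning rate, is a standard fine-tuning recipe that tends to reduce overfitting on small datasets.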
4 CONCLUSION
This study takes a substantial step toward understanding and interpreting pet emotions through their facial expressions. Building on the premise, established in the introduction, that pet facial expression recognition matters, the paper introduced and examined in detail its main methodology and contribution, the proposed model. Extensive experiments demonstrate the model's potential for recognizing pet emotions; notably, it makes substantial progress in extracting meaningful features from diverse pet facial expressions. While the training accuracy reaches impressive levels, a key limitation emerges in the performance gap between training and validation data, suggesting room for further fine-tuning and for exploring data augmentation strategies. Future research will expand the study to a wider range of animal species beyond traditional pets, a promising direction.
Practical applications extend to veterinary care,
animal well-being assessments, and even the
burgeoning field of animal-human communication.
Furthermore, the research aims to analyze pet behaviors and their correlation with facial expressions in greater depth, thereby enriching our understanding of pet emotions. This study underscores the importance of understanding pet emotions, not only for the welfare of animal companions but also for strengthening the bond between humans and animals.