
6 CONCLUSIONS
This paper presents Forge, which builds upon Anvil to standardise evaluation processes in computer graphics pipelines. The system addresses major challenges in the field, including maintaining consistent configurations across renderers, reducing human error, and improving experimental reproducibility. Through these features, Forge enables researchers and developers to conduct reliable and accurate comparative studies of rendering techniques.
Forge’s modular architecture offers flexibility and adaptability for diverse evaluation needs, allowing users to integrate new systems and tools without disrupting existing workflows. Its synchronisation interface supports decentralised operation across instances, as demonstrated by its WebSocket System for remote evaluations, which coordinates systems running in separate processes or on separate machines. This design enhances scalability and versatility, making Forge suitable for a range of research contexts, from simple experiments to complex distributed rendering evaluations in environments such as render farms.
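The coordination role of such a synchronisation interface can be illustrated with a minimal sketch of a JSON message envelope exchanged between instances over a WebSocket connection. The field names (`instance`, `step`, `payload`) and the `render_done` event are hypothetical, chosen for illustration only; they are not Forge's actual wire format.

```python
import json

def make_sync_message(instance_id, step, payload):
    """Serialise a synchronisation event for transmission to a remote instance.
    Hypothetical envelope: field names are illustrative, not Forge's protocol."""
    return json.dumps({
        "instance": instance_id,  # which instance emitted the event
        "step": step,             # pipeline stage, e.g. "render_done"
        "payload": payload,       # stage-specific data (frame ids, parameters)
    })

def handle_sync_message(raw, handlers):
    """Dispatch an incoming event to the handler registered for its step."""
    msg = json.loads(raw)
    return handlers[msg["step"]](msg["instance"], msg["payload"])

# Example: a coordinator collecting render completions from two workers.
completed = []
handlers = {"render_done": lambda inst, p: completed.append((inst, p["frame"]))}

for raw in (make_sync_message("worker-1", "render_done", {"frame": 17}),
            make_sync_message("worker-2", "render_done", {"frame": 18})):
    handle_sync_message(raw, handlers)
```

A dispatch table keyed on the event's step keeps the coordinator agnostic to which evaluation systems are attached, mirroring the decentralised design described above.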
The paper’s case studies demonstrate Forge’s effectiveness across a variety of applications, such as validating rendering techniques by verifying algorithm consistency and correctness, and measuring performance and visual differences. The framework is a valuable research tool, offering a standardised environment to compare metrics such as image quality, computational efficiency, and noise reduction. By automating evaluation processes, Forge reduces configuration inconsistencies and human error, leading to more reliable results.
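As an example of the kind of image-quality comparison such a standardised environment supports, the peak signal-to-noise ratio (PSNR) between a test render and a converged reference can be computed directly. This is a generic metric sketch, not Forge's own implementation.

```python
import math

def psnr(reference, test, max_value=1.0):
    """Peak signal-to-noise ratio between two equally sized images,
    given as flat lists of channel values in [0, max_value]."""
    if len(reference) != len(test):
        raise ValueError("images must have the same number of samples")
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_value ** 2 / mse)

# Example: compare a noisy render against a converged reference.
ref   = [0.2, 0.4, 0.6, 0.8]
noisy = [0.25, 0.35, 0.65, 0.75]
score = psnr(ref, noisy)  # higher is better; roughly 26 dB here
```

Running the same metric code against every renderer under test, rather than each project reimplementing it, is precisely the configuration consistency the framework aims for.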
A key limitation of Forge is its steep learning curve for setup and usage, especially for new users. Until wider adoption leads to more community-contributed tools, users may struggle to integrate it smoothly into their workflows and need to invest time developing custom systems. Furthermore, the WebSocket System for remote evaluations, while enabling distributed operations, may introduce network latency that could affect timing-sensitive measurements. This is particularly relevant for real-time rendering scenarios where precise performance analysis is crucial.
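One common mitigation for such latency, sketched below, is to estimate the transport delay from a few round-trip probes and subtract it from remotely reported durations. Whether this suffices depends on the measurement, and the helper names here are illustrative rather than part of Forge.

```python
import time

def half_rtt(ping, samples=5):
    """Estimate one-way network delay as half the median round-trip time.
    `ping` is any callable performing one request/response exchange."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        ping()
        rtts.append(time.perf_counter() - start)
    rtts.sort()
    return rtts[len(rtts) // 2] / 2.0

def corrected_duration(measured, one_way_delay):
    """Remove transport overhead from a duration that was timed from
    request dispatch to response receipt (one trip each way)."""
    return max(0.0, measured - 2.0 * one_way_delay)
```

Median-of-samples is preferred over the mean because a single congested probe would otherwise skew the estimate; for sub-millisecond real-time measurements, timestamping on the remote machine itself remains the more robust option.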
Future development of Forge should prioritise three key areas: improving accessibility through comprehensive documentation, tutorials, and example projects to ease adoption; optimising the WebSocket System for time-sensitive evaluations; and expanding evaluation capabilities through new metrics and machine-learning-based analysis techniques. Active community participation will be crucial for contributing additional evaluation systems and metrics, ultimately enhancing the framework’s versatility across different research applications. Furthermore, a comparative analysis should be performed to assess CG workflows with and without the use of Forge, employing alternative tools for comparison.
GRAPP 2025 - 20th International Conference on Computer Graphics Theory and Applications