algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback. While these principles do not specifically target education, they lay out fundamental protections for AI systems to adhere to.
The U.S. Department of Education’s report,
Designing for Education with Artificial Intelligence
(DoE, 2024), outlines five core recommendations for
developers. The first recommendation, Designing for
Teaching and Learning, urges developers to embed
educational values in AI tools, focusing on “key
ethical concepts such as transparency, justice and
fairness, non-discrimination, non-
maleficence/beneficence, privacy, pedagogical
appropriateness, students’ and teachers’ rights, and
well-being” to foster ethical, learner-centered
environments (p. 12). The second recommendation,
Providing Evidence for Rationale and Impact, calls
on developers to establish clear, research-based
rationales for AI designs or, if using new approaches,
to transparently explain their underlying logic.
Developers are encouraged to analyze data to make
improvements and address risks, ensuring AI tools
support diverse student outcomes and are rigorously
evaluated. The report’s third recommendation,
Advancing Equity and Protecting Civil Rights,
reminds developers to safeguard against bias and
promote equitable access, while the fourth, Ensuring
Safety and Security, calls for robust protections of
student privacy and data security. Lastly, the fifth
recommendation, Promoting Transparency and
Earning Trust, emphasizes the importance of trust-
building through open communication and clear
information-sharing with educators.
In addition to governmental guidelines,
standards organizations and corporate leaders have
outlined specific principles to support responsible AI
practices. The AI Risk Management Framework by
the National Institute of Standards and Technology
(NIST) offers a structured approach to addressing AI
risks. This framework articulates characteristics of
trustworthy AI: valid and reliable, safe, secure and
resilient, accountable and transparent, explainable
and interpretable, privacy-enhanced, and fair with
harmful bias managed (NIST, 2023). NIST’s
framework highlights the importance of transparency
and reliability, stating that responsible AI “involves
not only minimizing risk but maximizing benefit and
accountability.” The NIST framework provides detailed definitions and descriptions of each characteristic that can help guide other organizations in developing their own AI principles. Furthermore, some corporate leaders recognize that there is more to do than simply defining principles; developers also need to put them into practice. Microsoft’s Responsible AI Standard v2 (2022) operationalizes the company’s principles into concrete, actionable guidance for its development teams. While not an education-specific document, it demonstrates the need to consider deeply how to apply AI principles during development.
This section does not provide an exhaustive review of the work being done on AI principles and frameworks; rather, it highlights key examples across sectors that offer guidance. These works, among others, were consulted as we developed AI principles for our context.
2.2 Developing Our AI Principles
At VitalSource, our approach to responsible AI is
rooted in a commitment to creating impactful,
scalable educational tools grounded in rigorous
learning science. The advent of powerful, open generative AI tools has significantly shifted the educational landscape, and we view this shift as a means of amplifying the reach of proven learning methods.
We recognize the profound responsibility involved in
using AI thoughtfully and with rigorous evaluation to
improve educational experiences for learners
worldwide. In developing our AI Principles, we
started with the values that have long guided our work
and aligned them with our core mission. From our prior development and research (including existing AI systems), we identified common themes such as transparency, accountability, and rigorous evaluation. We began synthesizing our AI principles from our internal values because we agree with the sentiment, expressed in the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, that “In the end, AI reflects the principles of the people who build it, the people who use it, and the data upon which it is built” (White House, 2023). We intended the resulting principles to be both a reflection of our own values and a guide for future change, informed by AI guidance from leading governmental and standards organizations. By distilling these resources into our
educational technology context, we developed six
principles (data privacy and corporate governance
omitted for brevity):
1. Accountability: VitalSource is accountable
for its use of AI, from decisions on how to
apply AI to ensuring quality, validity, and
reliability of the output. VitalSource
maintains oversight of the output through
human review, automated monitoring