- External Collaborations: Partner with external AI
experts or institutions that have already
integrated AI technologies. Their experiences
can provide valuable insights into potential
challenges.
- Continuous Learning Workshops: Organise
sessions where the latest findings, research, and
anomalies related to AI LLMs like ChatGPT are
discussed with staff. This creates a dynamic
environment where new risks can be identified in
real time and fed into the ongoing review
process outlined below.
- Technology Audits: Periodically review the
technology's performance and integration within
the educational process. Such audits can identify
any misalignments or areas of potential concern.
This relates to privacy and technology risks
identified in our risk assessment (see appendix).
4.3 Analysis and Evaluation
To evaluate the impact of the risks identified, they need to
be fully analysed and understood. The authors
recognise that further conversations and review are
necessary (see Ongoing Monitoring and Review below). This
means involving students themselves in the risk
analysis process, alongside educators, and reviewing
assessment methods. For the second risk area, labour market
skills, working closely with employers and enabling
them to inform curriculum developments around the
use of AI tools can help prepare students for employment.
Risk evaluation involves categorising the impact
of risks; this can be done using a simple RAG (Red,
Amber, Green) rating or a risk assessment matrix with
numerical scoring. For broader strategic
developments, managers may wish to use scenario
planning methodologies to build on risk areas and
consider outcomes if risks are realised. This can enrich
the risk analysis and evaluation process and provide
broader mitigation strategies.
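To make the numerical scoring concrete, the sketch below illustrates one way a risk assessment matrix could be operationalised. The five-point scales, RAG thresholds, and example scores are assumptions for illustration only, not values prescribed by this framework.

```python
# Illustrative risk matrix: score = likelihood x impact on assumed 1-5 scales.
# Thresholds and example scores below are placeholders, not assessed values.

def rag_rating(likelihood: int, impact: int) -> str:
    """Map a likelihood/impact pair (each 1-5) to a RAG rating."""
    score = likelihood * impact
    if score >= 15:
        return "Red"    # high risk: mitigation strategies required
    if score >= 8:
        return "Amber"  # medium risk: monitor and plan treatment
    return "Green"      # low risk: accept and review periodically

# Example risk areas discussed in this paper, with placeholder scores.
example_risks = {
    "Labour market skills misalignment": (3, 4),
    "Privacy and technology integration": (2, 3),
    "Unethical use of AI LLMs in assessment": (4, 4),
}

for risk, (likelihood, impact) in example_risks.items():
    print(f"{risk}: {rag_rating(likelihood, impact)}")
```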
4.4 Risk Treatment
Risk mitigation and management measures are finally
considered to address the identified risks. Following
the above RAG rating or risk matrix, designing
specific strategies for each risk can include mitigation,
transfer, acceptance, or avoidance – this will depend
on the severity of the risk. High-risk areas require
mitigation strategies to reduce their impact, which
can then be monitored. It is the authors' view that
avoidance, or banning AI LLMs outright, is an
ineffective strategy given that new tools are on the
rise, many of which will be fully integrated into what
we already do; for example, word processing and web
search tools already embed GPT, Bard or similar
models (Yu, 2023).
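Extending the scoring sketch above, each rating could be linked to a default treatment option. The pairings shown are illustrative assumptions and would need to reflect each institution's own risk appetite.

```python
# Illustrative mapping from RAG rating to a default treatment option.
# The pairings are assumptions; institutions would define their own.
DEFAULT_TREATMENT = {
    "Red": "mitigate",                # act now to reduce likelihood/impact
    "Amber": "mitigate or transfer",  # e.g. share risk via vendors or policy
    "Green": "accept",                # tolerate and keep under review
}

def treatment_for(rating: str) -> str:
    """Return the assumed default treatment for a RAG rating."""
    return DEFAULT_TREATMENT.get(rating, "review")

print(treatment_for("Red"))  # -> mitigate
```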
Building on the above risk identification steps,
ongoing stakeholder management (part of
communication and consultation below) will leverage
a breadth of expertise from students, external
collaborators, managers, educators, and technical
staff.
Finally, given the dynamic nature of AI and
unknown risks related to future developments,
ongoing education is imperative. This can build on
continuous learning workshops mentioned above,
development of working committees (internally and
externally) and regularly reviewing policy
communications from HE regulators and the
government.
Proactive risk mitigation strategies include:
- Assigning a specific task force or working group
whose sole responsibility is to track and respond
to developments in AI LLMs and their
implementation.
- If the institution is already incorporating AI
LLMs such as ChatGPT (including integrated
versions like Bing AI), digital service teams
should support their operation, train staff, and
monitor and resolve technical faults.
- To ensure students build the right skills to
use ChatGPT, offering ‘how to use ChatGPT’
workshops within or outside the curriculum can
help minimise unethical use of the tool while
encouraging appropriate application.
Institutions may also want to provide learning
materials and resources that students (and staff)
can access independently.
4.5 Ongoing Communication and
Consultation
Communication and consultation must occur at all
stages of the risk management framework in order to
apply risk management strategies effectively.
Practitioners can benefit from wider input of
colleagues and other stakeholders (like students and
employers) to fully understand and manage the risks.
It is also likely that this process will highlight areas of
concern that are less obvious.
4.6 Ongoing Monitoring and Review
Risks can be volatile, in the sense that their risk level
and impact can change as underlying variables shift.
For example, as AI LLMs become more
sophisticated and more creative, it becomes more