Abstract
Background: In higher education, artificial intelligence (AI) refers to the application of AI technologies to support instruction, learning and decision-making in educational environments. One potential benefit is reducing burdens for educators and students while improving learning outcomes. Nonetheless, AI-driven environments raise several paradoxical ethical concerns.
Objectives: This study explores ethical concerns for an AI-driven environment in higher education to provide a feasible strategy to address ethical concerns.
Method: A systematic literature review methodology was applied. The IEEE Xplore Digital Library database was used, with keywords such as ‘artificial intelligence OR AI’ AND ‘education’ AND/OR ‘ethics’. Only conference and journal papers published between the years 2022 and 2024 were included. Thirteen papers were identified as relevant after full-text review.
Results: The findings provide insight into the ethical concerns arising in AI-driven higher education environments. Utilising AI in education raises serious ethical problems concerning equity, privacy, autonomy and the importance of human contact in education.
Conclusion: Artificial intelligence-driven environments for higher education must be aligned appropriately with educational contexts, protecting all learners’ fundamental rights and well-being. Institutions should use AI’s ability to enhance learning opportunities by tackling the above-mentioned ethical concerns to safeguard all learners’ fundamental rights.
Contribution: An AI ethical considerations strategy that guides and prioritises learners’ well-being and rights while responsibly using AI’s potential is outlined. Significantly, a methodological contribution to AI in education, particularly by informing policymakers on how to prioritise learners’ well-being and rights effectively, is formulated.
Keywords: artificial intelligence; ethics; higher education; systematic literature review; learners; South Africa.
Introduction
In the field of education, artificial intelligence (AI) is continuously redefining teaching and learning standards, altering the higher education landscape, and significantly contributing towards the Sustainable Development Goals (SDGs) (Opesemowo & Adekomaya 2024). South Africa is committed to achieving the SDGs, particularly SDG 4, which is about guaranteeing inclusive and fair quality education that supports lifelong learning for everyone (Saini et al. 2023). The SDGs have focused worldwide governmental education efforts on achieving the most important goals of the SDG framework, which include ensuring students receive an education of exceptional quality (Saini et al. 2023).
The SDGs were set by the United Nations in 2015 as a worldwide endeavour to eradicate poverty, protect the earth and guarantee that everyone enjoys freedom and peace by 2030 (De Villiers, Kuruppu & Dissanayake 2021). In support of SDG 4, Chapter 9 of the National Development Plan (NDP) Vision 2030 is enhancing access to and quality of education in South Africa (National Planning Commission 2013). Higher education is the major driver of South Africa’s NDP Vision 2030 through rules and standards that support the country’s intellectual capital, ranging from curricula and languages to rules and philosophy (Ahmad et al. 2022).
In South Africa, the higher education system faces challenges in providing equal access to excellent education (Chigova 2021). There is often a lack of equal opportunities for quality education. However, AI may offer a solution. Current literature suggests that there are inadequate detailed guidelines and best practices for South African universities and other institutions of higher education on how to use AI appropriately to achieve sustainable development in accordance with SDG 4 (Opesemowo & Adekomaya 2024).
Artificial intelligence in education
With the use of virtual AI platforms, teachers could ease access to learning resources and online courses for learners, helping to overcome geographical barriers to education (Chakroun et al. 2019). This is especially relevant in areas where access to schools is restricted, such as underdeveloped regions.
Adaptive learning systems may change instructional material and pace to match every learner’s requirements and skills, making education accessible to a diverse range of learners with different learning styles and paces (Okewu et al. 2021). Artificial intelligence can also identify learners at risk, enabling prompt actions to help them get back on track. The numerous benefits of AI-powered learning analytics include the ability for teachers to track learner progress in real time. Artificial intelligence can adjust the pace and level of difficulty of the lessons to each learner’s individual needs so that students can realise their full capabilities (Chakroun et al. 2019).
Artificial intelligence in institutions of higher education
The role of AI in education is evolving, driven by new applications that make AI easier to integrate into physical and virtual lecture environments (Bozkurt et al. 2021). Artificial intelligence technologies in education are expected to expand significantly, revealing possibilities for enhancing the overall quality, accessibility and inclusivity of higher education worldwide (Opesemowo & Adekomaya 2024). The use of AI in higher education is still in the early stages and is primarily being tested in developing nations (Kabudi, Pappas & Olsen 2021; Mogoale et al. 2025).
Literature suggests many advantages of AI in higher education, and increasing access to quality education is of central importance to support SDG 4 (How et al. 2023). Artificial intelligence can aid teaching in mixed-ability lecture halls by providing individualised learning systems that offer students timely and detailed feedback on their written compositions and also help lecturers by alleviating heavy workloads (Hrastinski et al. 2019).
Ethical concerns
Artificial intelligence technology poses some risks and corresponding threats with ethical implications, despite offering numerous benefits for both learners and educators. These ethical implications need to be measured by conceptual and empirical research in order to fairly determine the threats (Klimova, Pikhart & Kacetl 2023). There are risks and ethics involved in applying AI in higher education. Some authors note the lack of a pedagogical vision in the research work, as studies are focused on building AI systems and applications based on technical feasibility (Ouyang, Zheng & Jiao 2022).
Researchers have addressed the ethical concerns associated with the terminology ‘K-12 education’ in the United States, which refers to students from kindergarten (ages 5–6) through high school (ages 17–18) (Akgun & Greenhow 2022). Equivalent terms include primary and secondary education or pre-college education. The European Union has addressed the ethical considerations of AI, specifically in relation to privacy, surveillance and non-discrimination. The European Union has also established rules to ensure the trustworthiness of AI systems (Almeida, Shmarko & Lomas 2022).
Artificial intelligence ethics documents mostly discuss the implications of AI for human rights (Muller 2020). The development of autonomous AI systems brings about questions related to accountability and responsibility (Osasona et al. 2024). Osasona et al. (2024) emphasise reflecting on the future of decision-making frameworks with AI, as it is important to anticipate and think about emerging issues, to devise strategies for ethical consideration and to adapt to a changing AI-driven environment. Hence, it is of paramount importance to anticipate and address ethical considerations in an AI-driven environment, explore strategies to discuss such ethical concerns and adapt to the evolving AI landscape.
There is a substantial body of literature that covers the ethical considerations regarding AI. There are numerous paradoxical problems with respect to ethical considerations for an AI-driven environment in higher education. In particular, many students are excessively dependent on AI-driven task solutions. While the growing dependence on AI-powered learning may appear to reduce the value of human connection and the ethical policies of academia, dependence may also lead to a deterioration in the problem-solving and critical thinking abilities of students. This reliance on AI not only impacts students’ problem-solving abilities but may also lead to students graduating on the strength of work that is not conventionally their own (Bozkurt et al. 2023).
Therefore, it is crucial to address the reliance on AI as an ethical consideration by exploring the ethical concerns associated with an AI-driven environment.
Research aim
This study aims to provide a strategy for higher education to address ethical concerns through a systematic literature review (SLR), also known as a stand-alone literature review. In support of the aim, these research questions were addressed:
- How can higher education institutions ensure students’ skill sets remain relevant in an AI-driven world?
- How can AI integration in higher education foster creative, divergent and convergent thinking among students?
Conducting this study will support SDG 4 by contributing to the building of resilient education systems, which will strengthen South Africa’s educational systems to withstand and adapt to various disruptions, such as technological changes (Chigova 2021; National Planning Commission 2013). Following a structured approach, this study discusses the research methodology, including the inclusion and exclusion criteria of the systematic approach. The study then discusses the results from the examined and reviewed literature, making recommendations and drawing conclusions.
Research method
The study employed an SLR conducted according to strict guidelines and systematic procedures (Okoli 2015). An SLR is a specific type of review that emphasises reproducibility, rigour and clarity (Pickering & Byrne 2014). It enables researchers to systematically synthesise available information, identify gaps in the discipline and recommend a systematic plan for organising research activities (Bandara et al. 2015; Rowe 2014). Such reviews provide a useful resource for informing practice and policy (Okoli 2015).
Although SLRs are doubtlessly demanding to conduct, the academic community gains substantially from the time and effort involved in producing an independent review (Segooa, Motjolopane & Modiba 2023). Systematic literature reviews adhere to a well-defined method that reduces bias and guarantees a transparent review process (Booth, Papaioannou & Sutton 2012). The SLR is done through the meticulous selection and identification of primary research papers, as well as the exploration of the existing literature. This exploration allows thorough assessments of current knowledge and best practices relevant to providing a feasible strategy for higher education to address ethical concerns. This SLR explores AI-driven environments in institutions of higher education.
The majority of scholars emphasise the importance of a highly planned, sequential approach that involves multiple phases when conducting an SLR (Bandara et al. 2015; Boell & Cecez-Kecmanovic 2015; Levy & Ellis 2006; Okoli 2015; Wolfswinkel, Furtmueller & Wilderom 2013). This study followed these SLR processes; covering all stages was challenging and time consuming, given the comprehensive steps involved in writing an SLR.
Search strategy
Systematic searches may include electronic database searches, conference proceedings (Bandara et al. 2015; Okoli 2015; Webster & Watson 2002), in-person searches (Templier & Paré 2015), forwards and backwards searching, keyword searches (Levy & Ellis 2006; Rowe 2014), multidisciplinary exploration within and outside the discipline (Boell & Wang 2019; Webster & Watson 2002) and use of stopping criteria (Boell & Cecez-Kecmanovic 2015; Levy & Ellis 2006; Okoli 2015). The search for this study was conducted using the IEEE Xplore Digital Library database, a reputable electronic database (Wilde 2016).
Search string
To select relevant articles and conference proceedings, the following search strings were used:
‘artificial intelligence OR AI’ AND ‘education’ AND/OR ‘ethics’. Non-relevant articles were carefully excluded so that only those significantly contributing to the subject were retained. The original search using this query yielded 59 publications.
Exclusion criteria
From the 59 initial search results, 46 papers were excluded before full-text review for the following reasons:
- Being published before 2022 – the study chose studies conducted from 2022 onwards because the popularity of AI integration and use has gained momentum since 2022 in education in developing countries (Baidoo-Anu & Ansah 2023; Dogan, Goru Dogan & Bozkurt 2023)
- Not being based on education
- Papers not written in English
- Book chapters.
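The screening step described above can be sketched as a simple filter over bibliographic records. The sketch below is purely illustrative: the record fields and sample entries are hypothetical placeholders, not the study’s actual 59 IEEE Xplore results.

```python
from dataclasses import dataclass

@dataclass
class Record:
    """A minimal, hypothetical bibliographic record."""
    title: str
    year: int
    language: str
    field: str       # e.g. 'education', 'medicine'
    doc_type: str    # 'journal', 'conference' or 'book_chapter'

def passes_screening(r: Record) -> bool:
    """Apply the four exclusion criteria from the review protocol."""
    return (
        r.year >= 2022                    # exclude papers published before 2022
        and r.field == 'education'        # exclude papers not based on education
        and r.language == 'English'       # exclude papers not written in English
        and r.doc_type != 'book_chapter'  # exclude book chapters
    )

# Illustrative records only; not the study's dataset.
records = [
    Record('AI ethics in universities', 2023, 'English', 'education', 'journal'),
    Record('AI in radiology', 2023, 'English', 'medicine', 'journal'),
    Record('Ethik der KI', 2022, 'German', 'education', 'conference'),
    Record('AI tutors', 2019, 'English', 'education', 'conference'),
    Record('AI ethics handbook', 2022, 'English', 'education', 'book_chapter'),
]
retained = [r.title for r in records if passes_screening(r)]
print(retained)  # only the first record passes all four criteria
```

In the actual review, applying these criteria to the 59 initial results removed 46 papers, leaving 13 for full-text review.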
Inclusion criteria
A backward search was conducted by analysing the references of the initially filtered studies to identify relevant research studies that may have been missed during the initial search. When the number of available publications was inadequate, both forwards and backwards search methods were employed to expand the identified body of literature (Webster & Watson 2002).
The selected articles had to meet the following criteria (Figure 1):
The technological revolution of AI in education accelerated dramatically from November 2022, with the public release of AI tools such as Chat Generative Pre-Trained Transformer (ChatGPT 2022). Therefore, the study only considered papers published within the stipulated years. A final dataset of 13 papers was deemed relevant to the research subject.
Data analysis
This phase involved the extraction, categorisation and synthesis of data from the selected studies in a structured way, with the aim of answering the research questions on ethical considerations for an AI-driven environment (Okoli 2015). Other methods of analysis undertaken include coding and conducting concept-centric analysis. Coding is used to create new concepts that emerge from the studies selected for review (Wolfswinkel et al. 2013). It is highly recommended that one organises the findings from the literature review in a way that focuses on concepts or themes rather than an author-centric approach (Bai et al. 2019). After full-text review, 13 papers were found to be relevant to the study.
Ethical considerations
Ethical clearance to conduct this study was obtained from the Tshwane University of Technology, Human Research Ethics Committee (ref. no. HREC2024-09-001).
Results
Given the emergence of AI and the resultant ethical concerns in higher education, the findings of the papers listed in Table 1 are presented below:
| TABLE 1: Inclusive publication selected for this study. |
P1: ‘Artificial intelligence in education: Ethical framework’ explored the ethical implications of integrating AI into education (Krašna & Gartner 2023). This paper emphasises the necessity of establishing ethical guidelines and promoting AI literacy among educators and learners to mitigate potential misuse and biases associated with AI technologies. The authors conducted a literature review and empirical research involving students from social sciences and engineering disciplines. They assessed the students’ awareness of AI through surveys regarding their familiarity with AI-themed films, which served as a proxy for their understanding of AI concepts and potential associated risks.
The findings reveal a significant lack of awareness among students about AI systems and their implications. Social sciences students exhibited minimal knowledge of AI representations in popular culture, while engineering students showed slightly better awareness. These findings suggest a gap in AI literacy, particularly concerning ethical considerations and potential misuse in educational contexts. The paper identifies four foundational ethical principles for AI in education: ‘autonomy, privacy, trust, and responsibility’. It argues that educators and learners may misuse AI technologies without proper education on these principles. The authors highlight the risks of biases inherent in AI systems, which can perpetuate discrimination based on gender, race or socio-economic status. They advocate for critical thinking education to help individuals discern the reliability of AI outputs and promote ethical awareness across all stakeholders involved in education. The paper emphasises that proactive measures must be taken to educate all parties about the ethical use of AI in education, harnessing its benefits while minimising risks.
P2: ‘Fairness in Design (FID): A Framework for Facilitating Ethical Artificial Intelligence Designs’ addressed the increasing need for fairness in AI systems amid growing concerns about biases in algorithmic decision-making (Zhang et al. 2023). The user study involved 24 AI professionals. It introduced the FID framework, which is designed to support AI design teams in systematically considering fairness problems throughout the design process. The FID framework aims to lower barriers to entry for teams with varying levels of understanding of fairness concepts. The FID framework is built around 10 selected fairness principles derived from existing literature. These principles serve as reference points for teams to brainstorm and discuss fairness-related concerns effectively. The methodology employs a game-like format using prompt cards that facilitate discussions among team members. This interactive approach encourages diverse stakeholders to identify and address potential fairness issues from multiple perspectives.
The findings demonstrated that the FID framework significantly improved participants’ ability to make informed decisions regarding fairness, especially in complex algorithmic contexts. The framework is effective in reducing knowledge barriers, allowing teams to address fairness issues more appropriately. The findings highlight the significance of including ethical issues from the outset of AI design. The FID framework provides a novel toolkit that enhances critical thinking and promotes fairness-aware AI solutions, ultimately contributing to more ethical outcomes in AI system development.
P3: ‘Leveraging the Power of AI in Undergraduate Computer Science Education’ investigated the incorporation of advanced AI technologies, specifically ChatGPT, into undergraduate computer science teaching (Liu 2023). The aim was to assess how these technologies might improve learning while also addressing possible ethical and academic integrity issues. The study used a literature review to describe existing research on AI in education and identified knowledge gaps. It also contained a poll to measure current awareness of and use of ChatGPT by computer science teachers and students.
The findings showed that AI technologies such as ChatGPT may dramatically improve educational experiences and develop critical thinking through interactive illustrations, resulting in higher engagement. Concerns included academic integrity, reliance on technology and the necessity for curricular change.
P4: ‘Decision-Making Framework for the Utilisation of Generative Artificial Intelligence in Education: A Case Study of ChatGPT’ explored the ethical consequences and policy decisions surrounding the use of AI, specifically ChatGPT, in educational environments (Bukar et al. 2024). The study identified and prioritised ethical concerns related to the use of ChatGPT in education and proposes a decision-making framework that assists stakeholders in determining whether to restrict or legislate its use. The authors used an SLR to identify 10 ethical concerns associated with ChatGPT. These concerns were analysed using the analytic hierarchy process (AHP). This multi-criteria decision-making instrument involved gathering responses from a panel of 10 experts. The AHP allowed for the ranking of concerns based on their significance and impact on policy alternatives.
The findings of the analysis revealed that the top ethical concerns included copyright, concerns about academic integrity, privacy, and secrecy, as well as legal compliance. The findings indicated that ‘Restriction’ was favoured over ‘Legislation’ as a policy approach, as ‘Restriction’ received a higher weight of 0.513712 compared to ‘Legislation’ with 0.485887. The study concluded that significant ethical considerations must be addressed when integrating generative AI, such as ChatGPT, into education. For legislators seeking to effectively address ethical issues while integrating AI into education, this approach provides valuable insights by highlighting the necessity of inclusive stakeholder conversations, impact evaluation pilot programmes, explicit rules, adaptable regulatory frameworks, awareness campaigns and tactics for the appropriate use of AI in educational settings.
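The analytic hierarchy process (AHP) used in P4 derives priority weights from pairwise comparisons among criteria. As a minimal sketch of the general method (not Bukar et al.’s actual data), the 3 × 3 comparison matrix below is hypothetical, using three of the ethical concerns named above; the weights are computed with the column-normalisation approximation of the principal eigenvector, plus Saaty’s consistency ratio.

```python
import numpy as np

# Hypothetical pairwise comparison matrix on Saaty's 1-9 scale for three
# concerns: copyright, academic integrity, privacy. Illustrative values only.
A = np.array([
    [1.0, 3.0, 5.0],   # copyright
    [1/3, 1.0, 3.0],   # academic integrity
    [1/5, 1/3, 1.0],   # privacy and secrecy
])

# Approximate priority vector: normalise each column, then average across rows.
weights = (A / A.sum(axis=0)).mean(axis=1)

# Consistency check: estimate the principal eigenvalue (lambda_max), then
# compute the consistency index (CI) and consistency ratio (CR).
n = A.shape[0]
lambda_max = (A @ weights / weights).mean()
CI = (lambda_max - n) / (n - 1)
RI = 0.58                # Saaty's random index for n = 3
CR = CI / RI             # judgements are acceptably consistent when CR < 0.10

print(weights.round(3), round(CR, 3))
```

With these illustrative judgements, copyright receives the largest weight and the consistency ratio falls below 0.10; P4 reports analogous weights (e.g. 0.513712 for ‘Restriction’ vs. 0.485887 for ‘Legislation’) over its ten expert-elicited concerns.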
P5: ‘Critical Reflections on the Ethical Regulation of AI: Challenges with Existing Frameworks and Alternative Regulation Approaches’ addressed significant challenges regarding the ethical regulation of AI and proposed alternative regulatory frameworks (Cooreman & Zhu 2022). The paper aimed to critically analyse existing ethical regulation frameworks for AI, identify their shortcomings and propose alternative approaches to enhance their effectiveness. The authors synthesised insights into recent literature on AI ethics, engaging with perspectives from ethicists, computer scientists and policymakers. They focused on three main challenges: defining AI, ensuring public participation in governance and addressing environmental sustainability issues associated with AI systems.
The findings of the analysis revealed some fundamental challenges. Current definitions of AI are inadequate, often leading to regulatory loopholes and a public participation issue with a significant gap in public understanding of AI technologies that limits effective participation in democratic governance related to AI regulation. Also, there is a sustainability issue. There are only a handful of environmental studies integrated into the assessment of AI systems. The environmental impact of AI systems, particularly concerning energy consumption during training processes, is often overlooked in current regulatory frameworks. A re-evaluation of regulatory approaches is necessary. Existing regulatory frameworks fail to adequately address the challenges. The authors advocated for a multidimensional definition of AI that reflects its diverse applications and impacts.
P6: ‘Pros Cons of Artificial Intelligence-ChatGPT Adoption in Education Settings: A Literature Review and Future Research Agendas’ outlined the popularity of ChatGPT and evaluated the integration of ChatGPT in educational environments by analysing benefits and challenges, focusing on ethical considerations, impacts on educators and students and future research directions (Maita et al. 2024). Using the PRISMA systematic review framework, the authors analysed 45 sources, including articles, reviews and editorials, to find the uses of ChatGPT in education for educators and students. Data were gathered and synthesised to address research questions regarding the implications and applications of ChatGPT.
The findings indicated that ChatGPT aids educators in lesson planning, content creation and administrative tasks. However, the paper highlights questions about relying too much on AI, which might affect educator-student relationships and critical pedagogical roles. For students, while fostering personalised and adaptive learning, ChatGPT risks encouraging academic dishonesty and dependency on AI for problem-solving, which might undermine critical thinking skills. Key ethical concerns include ‘data privacy, academic integrity’ and a balance between technological assistance and authentic learning experiences.
P7: ‘Educational AI and Ethical Growth’ examined ChatGPT’s impact on students’ learning methods and academic ethics from the perspective of a Bangladeshi professor (Khan et al. 2023). The study evaluated the use case of ChatGPT in academia in terms of its influence on students’ academic integrity, learning outcomes and critical thinking, using a mixed-methods approach that polled 201 students aged 18–25 from various institutions. Five faculty members were interviewed to complement the quantitative survey data.
The findings highlighted some of the essential features regarding ChatGPT in education, including potential negative impacts on students who use ChatGPT to conduct work rather than studying, reduced abilities for creative thinking and analytical problem-solving, an increased risk of plagiarism and decreased communication between students and educators. Some major challenges were the lack of true information creation and intellectual comprehension. According to the study’s conclusions, educators should incorporate oversight when employing AI, modify evaluation procedures to ensure a thorough understanding of concepts and implement other strategies to maintain academic integrity when using AI technologies.
P8: ‘Human Thinking in the Age of Gen-AI’ examined philosophical elements of generative AI, including its potential, limitations and implications for human thinking and institutions of higher education (Zinchenko et al. 2023). The study employed a multimethod approach, including historical methods to study the evolution of AI, analytical methods to investigate contemporary AI perspectives and comparative and dialectical methods to examine the relationship between machine and human intelligence. Hermeneutics and synergistics were used to recreate intelligence traits. The results revealed that the expert survey predicted a low likelihood of AI triggering human extinction. The study asserts that human intelligence differs from AI in numerous ways, including the ability to think ‘out of the box’, nonlinearity, moral reasoning, emotional appraisal and the capacity to think beyond formal logic and rational reasoning. As a result, higher education needs to place a greater focus on the development of ethical dimensions, emotional intelligence and volitional components of human intelligence, as AI technologies cannot fully replace human creativity and unique cognitive capacities.
P9: ‘Innovations in Education – A Deep Dive into the Application of Artificial Intelligence in Higher Learning’ studied the innovative applications of AI in institutions of higher education to emphasise transforming traditional learning approaches (Nugraheni et al. 2024) and offers a thorough analysis of the multifaceted effects that AI is having on higher education teaching, learning and administrative tasks, with an emphasis on its potential for personalisation and enhancement of educational experiences. An applied mixed-methods research strategy was used. This study followed a pragmatic epistemology aimed at understanding integrating AI into education from both positivist and constructivist lenses. It comprised 10 major protocols: identifying AI application areas, reviewing current AI technologies, engaging stakeholders, developing integration roadmaps, implementing AI-driven solutions and evaluating effectiveness. The examined AI applications in higher education included personalised learning, intelligent tutoring systems and data analytics.
The findings suggested that fairness, inclusivity and ethical responsibility should be at the heart of any balanced strategy for integrating AI in educational settings.
P10: ‘Technical Support for the Ethical Teaching of AI: A Contextualised Virtual Reality-Based Instructional Design’ investigated AI ethics taught to sixth-grade students using virtual reality (VR) and situational problem debate as a pedagogical strategy to improve students’ ethical awareness and reasoning ability (Technologies 2023). The Konstanz Method of Dilemma Discussion was used in this VR learning environment. The design of a VR learning system consists of three phases: scenario simulation, in-depth investigation and transfer learning. The study employed a case study approach, in which students role-played different perspectives on an ethical issue in group discussions, role-swapping and inter-group exchanges to address ethical considerations.
The findings revealed that VR offers an immersive and participatory setting for teaching complex AI ethical ideas. While the method encourages students to think critically, it also converts abstract ethical information into practical understanding by studying ethical situations from many viewpoints. A proposed theoretical framework and instructional design suggested that this VR-based method will be beneficial in growing AI ethical literacy in students by offering contextualised, interactive learning experiences. The report highlighted that traditional abstract ethics instruction is giving way to more interactive scenario-based learning.
P11: ‘The Effects of AI Services to the Educational Processes–Survey Analysis’ examined the influence of AI tools on educational processes, with an emphasis on educators’ awareness, perceptions and preparedness to include AI in their teaching methods (Krašna & Gartner 2024). The study also investigated generational disparities in teacher attitudes towards AI and information and communications technology (ICT) use, emphasising the importance of addressing ethical concerns. A survey approach was used to collect information from Slovenian primary and secondary school teachers. Data were gathered using an online survey that included 23 questions organised by demographics, ICT use, opinions towards AI use in schools and ethical considerations. For the examination of generational impacts, 75 valid responses were selected and analysed using descriptive and inferential statistical methods.
The findings revealed generational consequences for the adoption of both AI and ICTs. Younger instructors show a preference for the broader use of modern ICT. The participants prioritised ethical aspects such as abuse prevention and the ramifications of AI. Lecturers acknowledged the difficulty they had in distinguishing between AI-generated and human-authored work when evaluating student assignments. The use of technologies like smartphones and tablets revealed differences in technology uptake and familiarity across various generations. The study suggested that while educators see the value of AI in education, its implementation differs among generations because of differences in digital literacy. These differences in digital literacy necessitate the implementation of regular teacher training programmes to overcome the gap. Ethical education and critical thinking are essential mechanisms for guiding the appropriate use of AI in education, ensuring that the promised learning outcomes are not compromised on ethical grounds.
P12: ‘The Generative AI Landscape in Education’ surveyed 200 undergraduate university students as part of a mixed-methods study design that included both qualitative and quantitative research (Ahmed et al. 2024). Despite technological limitations and hurdles, most students considered AI systems effective for learning. The limitations of AI systems include uncertainties about academic integrity and the possibility of plagiarism, possible disruption of thinking abilities, data protection issues and ethical implications for AI-generated work.
The findings warned that while generative AI has enormous promise for altering education, its implementation must be approached with caution and conscientiousness in active governance and monitoring to minimise numerous risk concerns.
P13: ‘The Role of ChatGPT and Artificial Intelligence in Education’ conducted a study on the function and influence of ChatGPT in education, including its benefits and problems, as well as its ramifications for instruction and learning (Quiroz-Martinez et al. 2024). After reviewing 238 initial documents, 30 final papers were selected for in-depth analysis.
The findings highlighted an important research gap: there is a severe absence of student voices in existing AI education research. The study also emphasised the need to create adaptable assessment systems and strengthen teacher training programmes. It suggests that while ChatGPT has significant teaching potential, it must be carefully implemented and evaluated before being integrated into teaching.
Discussion of ethical concerns
The integration of AI into higher education is changing the way teaching and learning are conducted. Alongside the potential benefits of AI-driven educational settings, this integration presents significant ethical concerns that must be carefully considered. The systematic review yielded the ethical considerations for an AI-driven environment presented in Figure 2.
FIGURE 2: Ethical considerations for an artificial intelligence-driven environment in higher education.
As indicated in Figure 2, the study has identified several paradoxical ethical concerns in an AI-driven higher education environment. While AI has emerged with the potential to improve educational processes, it has also raised ethical concerns that must be addressed (Opesemowo & Adekomaya 2024). Educators and students should become more aware of these obstacles and maintain skill sets that encourage student innovation and problem-solving. Significantly, integrating AI into higher education can be beneficial if standards are followed to minimise ethical problems, because dependence on AI hampers the development of the critical-thinking and problem-solving skills required for academic and real-life success (Ayman et al. 2023).
Overreliance on AI technologies can diminish students’ ability to think creatively and independently (Maita et al. 2024) while also reducing their interest in independent learning, potentially leading to a decline in human intelligence (Borji 2023).
Some of the disadvantages of overreliance on these technologies are:
- Overreliance on AI-generated material may reduce critical-thinking abilities (Maita et al. 2024).
- Pre-programmed replies are unable to solve problems and make decisions in the same way as humans do, rendering them ineffective for complicated or ambiguous circumstances (Ayman et al. 2023).
- AI systems lack the ability to comprehend and empathise with human emotions because of their algorithmic and data-driven nature (Borji 2023).
- AI chatbots struggle with customisation as they provide standardised solutions that do not consider user demands (Deng & Lin 2022).
AI in educational settings has the potential to diminish the quality of the relationships between students and educators. Traditional learning environments encourage relationships among students and with educators, which are essential for emotional support and tailored assistance. In contrast, AI-driven education lacks the empathy and understanding that human instructors bring, which can negatively impact students’ emotional well-being and participation in the learning process.
A recent poll revealed that 89% of American college students use ChatGPT for homework and 53% use it for paper writing (Buselic 2023). ChatGPT is used by 48% of students during examinations and by up to 22% for paper summaries (McGee 2023). This use of AI reflects broader ethical concerns about maintaining academic honesty and educational excellence. Moreover, banning AI is not the solution. Artificial intelligence cannot replace the need for human connection, cooperation, critical thinking and problem-solving skills (Lund & Wang 2023). Scholars have emphasised approaches that higher education can adopt to permit AI to be used in the classroom. Buselic (2023) advocates for ethical guidelines on the use of AI in education to prevent misuse while promoting its benefits.
Some established educational institutions, such as New York City public schools, have rejected the use of AI tools like ChatGPT because of concerns about cheating and the potential negative impact on the development of critical-thinking skills (Johnson 2023). In 2023, several schools in the United States and Australia prohibited the use of ChatGPT at school and at home, citing ethical and pedagogical concerns (Buselic 2023; Johnson 2023). Despite this, it remains widely used in higher education.
This paper advocates for a balanced approach that acknowledges both the potential advantages and the limitations of generative AI in educational environments (Buselic 2023). Higher education should blend technology with traditional teaching, employing AI as a complement to conventional techniques. Proactive measures must be taken to educate all parties regarding the ethical use of AI in education to harness its benefits while minimising risks (Krašna & Gartner 2023). Rather than substituting traditional skills, teaching methods should be adapted to incorporate generative AI as a technique for enhancing learning (Cooreman & Zhu 2022). There is a need to encourage the development of robust information literacy and critical-thinking abilities through planned tasks that require students to engage actively with AI outputs.
Recommendations
This article makes recommendations towards a strategy to address ethical concerns in an AI-driven environment in higher education, aiming to maintain students’ skill sets and foster their creativity.
Higher education requires changes to assessment methods to uphold ethics in an AI-driven environment (Curtis 2023). The National Artificial Intelligence Policy Framework (NAIP) for South Africa aims to incorporate AI technology to boost economic expansion and enhance societal well-being, positioning South Africa as a competitive player in AI (Mtuze & Morige 2024). One of the strategic pillars of the South African NAIP is to promote responsible and ethical AI use (Department of Communications and Digital Technologies 2023).
The NAIP framework guides the strategic adoption of AI in education by seeking innovation, inclusion and skills development in alignment with national goals. This paper recommends six strategies to address innovation, inclusion and skills development for AI environments in higher education, with a particular focus on the ethical application of AI technology and academic integrity:

1. Enhance student engagement through AI-driven personalised learning platforms that adjust to individuals’ unique needs, enabling active and inclusive participation.
2. Emphasise professional development to ensure teachers receive continuous training in AI literacy and pedagogies for the effective use of AI technologies.
3. Leverage monitoring and evaluation systems based on AI-driven data analytics to give real-time feedback on learning outcomes, enabling evidence-based improvement.
4. Develop a framework that embraces AI tools for plagiarism detection and the maintenance of academic integrity, ensuring the ethical use of information.
5. Refine assessment methods through AI-enabled, adaptive approaches that yield instant and personalised feedback to diverse learners.
6. Integrate AI concepts and digital competencies throughout the curriculum, equipping students to handle the challenges of the Fourth Industrial Revolution while inspiring innovation and critical thinking.

These strategies are elaborated as follows:
- Curriculum integration: Educators are urged to change the curriculum to include AI technologies while focusing on critical thinking and creativity. This involves creating projects that demand creative thought and cannot simply be completed using AI-generated material.
- Assessment methods: Educators should move away from traditional essay-based assignments and towards methods that encourage deeper participation, such as oral presentations or in-class conversations that are less susceptible to AI exploitation.
- Plagiarism detection tools: Educators are advised to use powerful plagiarism detection software, such as GPTZero or Turnitin’s AI-detection tools, to assist in identifying AI-generated work (Liu 2023). These technologies use textual properties to distinguish between human and machine-generated content.
- Monitoring and evaluation: It is critical to continuously evaluate the influence of AI on learning objectives and ethical issues. Educators should be mindful of biases in AI training data and use the technology as a supplemental rather than a primary resource.
- Professional development: Educators should engage in ongoing training on how to effectively integrate AI tools. Training will allow educators to better comprehend the technology’s potential and limits, resulting in a more educated approach to its application in education.
- Student engagement: Educators can promote a culture of integrity through conversations with students about the ethical application of AI, urging them to consider the consequences of using AI technology in assessments.
This study proposes that by employing these measures, institutions of higher education can address ethical issues while maximising the benefits of AI technologies in AI-driven environments.
Limitations and recommendations
The study’s limitations stem from the database used: only the IEEE Xplore Digital Library was searched. Future studies could include additional databases, which might yield different findings. The review included 13 studies, primarily from Western and South African contexts, which may limit global generalisability. The study was also cross-sectional, with a publication window between 2022 and 2024, during a period of recognised growth of AI in education. The findings therefore differ from those a longitudinal study would provide, which would capture changing data points over time and possibly offer better insight into ethical concerns that become more evident over a longer time horizon.
Conclusion
Whether to integrate AI into higher education and allow students and instructors to use the technology is no longer a topic of dispute. Despite the exclusion of AI tools in some educational environments, a significant number of students continue to use them for academic tasks (Buselic 2023; Liu 2023). The prohibition of AI tools in some institutions of higher education reflects ethical concerns about the impact on academic integrity and students’ problem-solving development, thereby compromising educational quality (Johnson 2023). Buselic (2023) advocates for ethical guidelines for using AI in education to minimise misuse while highlighting its advantages.
This paper proposed a feasible strategy for an AI-driven environment in higher education to address ethical concerns. By addressing the stated ethical concerns, institutions of higher education can leverage AI’s potential to improve learning experiences while protecting all students’ basic rights. Future-ready institutions of higher education must create an adaptable strategy that promotes responsible AI use while preserving human-centric education values.
Acknowledgements
Competing interests
The authors declare that they have no financial or personal relationships that may have inappropriately influenced them in writing this article.
Authors’ contributions
P.D.M. contributed to conceptualisation, methodology, analysis and the writing of this article. A.P. and R.C.M. provided additional support, shared expertise, offered feedback and assisted in the management of the research process. M.A.S. contributed to conceptualisation, method and review of the initial manuscript before publication.
Funding information
This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.
Data availability
Data sharing is not applicable to this article as no new data were created or analysed in this study.
Disclaimer
The views and opinions expressed in this article are those of the authors and are the product of professional research. They do not necessarily reflect the official policy or position of any affiliated institution, funder, agency or that of the publisher. The authors are responsible for this article’s results, findings and content.
References
Ahmad, S.F., Alam, M.M., Rahmat, M.K., Mubarik, M.S. & Hyder, S.I., 2022, ‘Academic and administrative role of artificial intelligence in education’, Sustainability 14(3), 1101. https://doi.org/10.3390/su14031101
Ahmed, Z., Shanto, S.S., Rime, M.H.K., Morol, M.K., Fahad, N., Hossen, M.J. et al., 2024, ‘The generative AI landscape in education: Mapping the Terrain of opportunities, challenges and student perception’, IEEE Access 12, 147023–147050. https://doi.org/10.1109/ACCESS.2024.3461874
Akgun, S. & Greenhow, C., 2022, ‘Artificial intelligence in education: Addressing ethical challenges in K-12 settings’, AI and Ethics 2, 431–440. https://doi.org/10.1007/s43681-021-00096-7
Almeida, D., Shmarko, K. & Lomas, E., 2022, ‘The ethics of facial recognition technologies, surveillance, and accountability in an age of artificial intelligence: A comparative analysis of US, EU, and UK regulatory frameworks’, AI and Ethics 2, 377–387. https://doi.org/10.1007/s43681-021-00077-w
Ayman, S.E., El-Seoud, S.A., Nagaty, K. & Karam, O.H., 2023, ‘The influence of ChatGPT on student learning and academic performance’, in 2023 International Conference on Computer and Applications (ICCA), IEEE, pp. 1–5, 15–17 November 2023, Cairo.
Bai, Z., Jain, N., Kurdyukov, R., Walton, J., Wang, Y., Wasson, T. et al., 2019, ‘Conducting systematic literature reviews in information systems: An analysis of guidelines’, Issues in Information Systems 20(3), 83–93. https://doi.org/10.48009/3_iis_2019_83-93
Baidoo-Anu, D. & Ansah, L.O., 2023, ‘Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning’, Journal of AI 7, 52–62. https://doi.org/10.2139/ssrn.4337484
Bandara, W., Furtmueller, E., Gorbacheva, E., Miskon, S. & Beekhuyzen, J., 2015, ‘Achieving rigor in literature reviews: Insights from qualitative data analysis and tool-support’, Communications of the Association for Information systems 37, 8. https://doi.org/10.17705/1CAIS.03708
Boell, S. & Wang, B., 2019, ‘www.litbaskets.io, an IT artifact supporting exploratory literature searches for information systems research’, in paper presented at the 30th Australasian Conference on Information Systems, December 09–11, 2019, Fremantle, Australia, viewed from https://aisel.aisnet.org/acis2019/71/
Boell, S.K. & Cecez-Kecmanovic, D., 2015, ‘On being “systematic” in literature reviews’, in D.E. Leidner (ed.), Formulating research methods for information systems, vol. 2, pp. 48–78, Springer International Publishing, Cham.
Booth, A., Papaioannou, D. & Sutton, A., 2012, Systematic approaches to a successful literature review, SAGE.
Borji, A., 2023, A categorical archive of ChatGPT failures, arXiv preprint arXiv:2302.03494.
Bozkurt, A., Junhong, X., Lambert, S., Pazurek, A., Crompton, H., Koseoglu, S. et al., 2023, ‘Speculative futures on ChatGPT and generative artificial intelligence (AI): A collective reflection from the educational landscape’, Asian Journal of Distance Education 18, 53–130.
Bozkurt, A., Karadeniz, A., Baneres, D., Guerrero-Roldán, A.E. & Rodríguez, M.E., 2021, ‘Artificial intelligence and reflections from educational landscape: A review of AI studies in half a century’, Sustainability 13(2), 800. https://doi.org/10.3390/su13020800
Brooks, D., 2022, ‘Artificial intelligence use in higher education’, in Educause review—Special report. Artificial intelligence where are we now, pp. 18–25, EDUCAUSE, Boulder, CO.
Bukar, U.A., Sayeed, M.S., Razak, S.F.A., Yogarayan, S. & Sneesl, R., 2024, ‘Decision-making framework for the utilization of generative artificial intelligence in education: A case study of ChatGPT’, IEEE Access 12, 95368–95389. https://doi.org/10.1109/ACCESS.2024.3425172
Buselic, V., 2023, ‘Teaching information literacy and critical thinking skills in chat GPT time’, in 2023 International Conference on Computing, Networking, Telecommunications & Engineering Sciences Applications (CoNTESA), pp. 14–20, IEEE, 16–18 October 2023, Manila.
Chakroun, B., Miao, F., Mendes, V., Domiter, A., Fan, H., Kharkova, I. et al., 2019, Artificial intelligence for sustainable development: Synthesis report, mobile learning week 2019, UNESCO International Bureau of Education.
Chigova, L.E., 2021, ‘The National Development Plan 2030: A focus on innovation issues’, Journal of Public Administration 56, 1069–1073.
Cooreman, H. & Zhu, Q., 2022, ‘Critical reflections on the ethical regulation of AI: Challenges with existing frameworks and alternative regulation approaches’, in 2022 IEEE International Symposium on Technology and Society (ISTAS), pp. 1–5, IEEE, Valencia.
Curtis, N., 2023, ‘To ChatGPT or not to ChatGPT? The impact of artificial intelligence on academic publishing’, The Pediatric Infectious Disease Journal 42(4), 275. https://doi.org/10.1097/INF.0000000000003852
Deng, J. & Lin, Y., 2022, ‘The benefits and challenges of ChatGPT: An overview’, Frontiers in Computing and Intelligent Systems 2, 81–83. https://doi.org/10.54097/fcis.v2i2.4465
Department of Communications and Digital Technologies, 2023, South Africa National Artificial Intelligence Policy Framework: Towards the development of South Africa National Artificial Intelligence Policy, Department of Communications and Digital Technologies.
De Villiers, C., Kuruppu, S. & Dissanayake, D., 2021, ‘A (new) role for business–Promoting the United Nations’ Sustainable Development Goals through the internet-of-things and blockchain technology’, Journal of Business Research 131, 598–609. https://doi.org/10.1016/j.jbusres.2020.11.066
Dogan, M.E., Goru Dogan, T. & Bozkurt, A., 2023, ‘The use of artificial intelligence (AI) in online learning and distance education processes: A systematic review of empirical studies’, Applied Sciences 13(5), 3056. https://doi.org/10.3390/app13053056
How, M.-L., Cheah, S.-M., Chan, Y.J., Khor, A.C. & Say, E.M.P., 2023, ‘Artificial intelligence for advancing Sustainable Development Goals (SDGs): An inclusive democratized low-code approach’, in F. Mazzi & L. Floridi (eds.), The ethics of artificial intelligence for the Sustainable Development Goals, vol. 152, pp. 145–165, Springer International Publishing, Cham.
Hrastinski, S., Olofsson, A.D., Arkenback, C., Ekström, S., Ericsson, E., Fransson, G. et al., 2019, ‘Critical imaginaries and reflections on artificial intelligence and robots in postdigital K-12 education’, Postdigital Science and Education 1, 427–445. https://doi.org/10.1007/s42438-019-00046-x
Johnson, A., 2023, ‘ChatGPT in schools: Here’s where it’s banned—And how it could potentially help students’, Forbes 16, 14.
Kabudi, T., Pappas, I. & Olsen, D.H., 2021, ‘AI-enabled adaptive learning systems: A systematic mapping of the literature’, Computers and Education: Artificial Intelligence 2, 100017. https://doi.org/10.1016/j.caeai.2021.100017
Khan, M.M.R.R., Habib, S.B., Tasnim, S.T. & Islam, M.A., 2023, ‘Educational AI and ethical growth: Exploring the effects of ChatGPT on student learning strategies, critical thinking, and academic ethics from a Bangladeshi academic perspective’, in 2023 26th International Conference on Computer and Information Technology (ICCIT), IEEE, pp. 1–6, 13–15 December 2023, Dhaka.
Klimova, B., Pikhart, M. & Kacetl, J., 2023, ‘Ethical issues of the use of AI-driven mobile apps for education’, Frontiers in Public Health 10, 1118116. https://doi.org/10.3389/fpubh.2022.1118116
Krašna, M. & Gartner, S., 2023, ‘Artificial intelligence in education–ethical framework’, in 12th Mediterranean Conference on Embedded Computing (MECO), IEEE, 12–15 June 2023, Bar, Montenegro.
Krašna, M. & Gartner, S., 2024, ‘The effects of AI services to the educational processes–Survey analysis’, in 2024 47th MIPRO ICT and Electronics Convention (MIPRO), IEEE, pp. 496–501, 20–24 May 2024, Opatija.
Levy, Y. & Ellis, T.J., 2006, ‘A systems approach to conduct an effective literature review in support of information systems research’, Informing Science 9, 181–212. https://doi.org/10.28945/479
Liu, Y., 2023, ‘Leveraging the power of AI in undergraduate computer science education: Opportunities and challenges’, in 2023 IEEE Frontiers in Education Conference (FIE), pp. 1–5, IEEE, College Station, TX.
Lund, B.D. & Wang, T., 2023, ‘Chatting about ChatGPT: How may AI and GPT impact academia and libraries?’, Library Hi Tech News 40(3), 26–29. https://doi.org/10.1108/LHTN-01-2023-0009
Maita, I., Saide, S., Putri, A.M. & Muwardi, D., 2024, ‘ProsCons of artificial intelligence-ChatGPT adoption in education settings: A literature review and future research agendas’, IEEE Engineering Management Review 52(1), 67–78. https://doi.org/10.1109/EMR.2024.3394540
McGee, R.W., 2023, ‘Is ChatGPT biased against conservatives? An empirical study’, SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4359405
Mogoale, P.D., Pretorius, A., Mogase, R.C. & Segooa, M.A., 2025, ‘Integrating artificial intelligence within South African higher learning institutions’, South African Journal of Information Management 27(1), a1399. https://doi.org/10.4102/sajim.v27i1.1939
Mtuze, S.S.K. & Morige, M., 2024, ‘Towards drafting artificial intelligence (AI) legislation in South Africa’, Obiter 45(1), 161–179. https://doi.org/10.17159/obiter.v45i1.18399
Muller, C., 2020, ‘The impact of AI on human rights, democracy and the rule of law’, in Towards regulation of AI systems (CAHAI (2020) 06), pp. 23–31, Council of Europe, Strasbourg.
National Planning Commission, 2013, National Development Plan Vision 2030, The Presidency.
Nugraheni, A.S.C., Widono, S., Saddhono, K., Yamtinah, S., Nurhasanah, F. & Murwaningsih, T., 2024, ‘Innovations in education: A deep dive into the application of artificial intelligence in higher learning’, in 2024 4th International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE), IEEE, pp. 785–789, 4–5 April 2024, Greater Noida.
Okewu, E., Adewole, P., Misra, S., Maskeliunas, R. & Damasevicius, R., 2021, ‘Artificial neural networks for educational data mining in higher education: A systematic literature review’, Applied Artificial Intelligence 35(13), 983–1021. https://doi.org/10.1080/08839514.2021.1922847
Okoli, C., 2015, ‘A guide to conducting a standalone systematic literature review’, Communications of the Association for Information Systems 37(43). https://doi.org/10.17705/1CAIS.03743
Opesemowo, O.A.G. & Adekomaya, V., 2024, ‘Harnessing artificial intelligence for advancing Sustainable Development Goals in South Africa’s higher education system: A qualitative study’, International Journal of Learning, Teaching and Educational Research 23(3), 67–86. https://doi.org/10.26803/ijlter.23.3.4
Osasona, F., Amoo, O.O., Atadoga, A., Abrahams, T.O., Farayola, O.A. & Ayinla, B.S., 2024, ‘Reviewing the ethical implications of AI in decision making processes’, International Journal of Management & Entrepreneurship Research 6(2), 322–335. https://doi.org/10.51594/ijmer.v6i2.773
Ouyang, F., Zheng, L. & Jiao, P., 2022, ‘Artificial intelligence in online higher education: A systematic review of empirical research from 2011 to 2020’, Education and Information Technologies 27, 7893–7925. https://doi.org/10.1007/s10639-022-10925-9
Pickering, C. & Byrne, J., 2014, ‘The benefits of publishing systematic quantitative literature reviews for PhD candidates and other early-career researchers’, Higher Education Research & Development 33(3), 534–548. https://doi.org/10.1080/07294360.2013.841651
Quiroz-Martinez, M.-A., Tumaille-Quintana, D.-S., Moran-Burgos, A.-D. & Gomez-Rios, M., 2024, ‘The role of ChatGPT and artificial intelligence in education’, in 2024 IEEE Colombian Conference on Communications and Computing (COLCOM), IEEE, pp. 1–6, 15–17 May 2024, Medellín.
Rowe, F., 2014, ‘What literature review is not: Diversity, boundaries and recommendations’, European Journal of Information Systems 23(3), 241–255. https://doi.org/10.1057/ejis.2014.7
Saini, M., Sengupta, E., Singh, M., Singh, H. & Singh, J., 2023, ‘Sustainable Development Goal for quality education (SDG 4): A study on SDG 4 to extract the pattern of association among the indicators of SDG 4 employing a genetic algorithm’, Education and Information Technologies 28, 2031–2069. https://doi.org/10.1007/s10639-022-11265-4
Segooa, M.A., Motjolopane, I. & Modiba, F.S., 2023, ‘Development of a design science artefact to teach computing students: A systematic literature review’, in ECRM 2023 22nd European Conference on Research Methods in Business and Management, 08–09 June 2023, Academic Conferences and Publishing International Limited, Lisbon.
Templier, M. & Paré, G., 2015, ‘A framework for guiding and evaluating literature reviews’, Communications of the Association for Information Systems 37, 6. https://doi.org/10.17705/1CAIS.03706
Webster, J. & Watson, R.T., 2002, ‘Analyzing the past to prepare for the future: Writing a literature review’, MIS Quarterly 26(2), xiii–xxiii.
Wilde, M., 2016, ‘IEEE xplore digital library’, The Charleston Advisor 17(4), 24–30. https://doi.org/10.5260/chara.17.4.24
Wolfswinkel, J.F., Furtmueller, E. & Wilderom, C.P., 2013, ‘Using grounded theory as a method for rigorously reviewing literature’, European Journal of Information Systems 22(1), 45–55. https://doi.org/10.1057/ejis.2011.51
Zhang, J., Shu, Y. & Yu, H., 2023, ‘Fairness in design: A framework for facilitating ethical artificial intelligence designs’, International Journal of Crowd Science 7(1), 32–39. https://doi.org/10.26599/IJCS.2022.9100033
Zinchenko, V., Mielkov, Y., Nych, T., Abasov, M. & Trynyak, M., 2023, ‘Human thinking in the age of generative AI: Values of openness and higher education for the future’, in 2023 International Conference on Electrical, Computer and Energy Technologies (ICECET), IEEE, pp. 1–6, 15–17 November 2023, Cape Town.