About the Author(s)


Ronald Munatsi
Department of Anthropology and Development Studies, Faculty of Humanities, University of Johannesburg, Johannesburg, South Africa

Citation


Munatsi, R., 2025, ‘Using artificial intelligence to enhance evidence-informed decision-making’, South African Journal of Information Management 27(1), a2004. https://doi.org/10.4102/sajim.v27i1.2004

Original Research

Using artificial intelligence to enhance evidence-informed decision-making

Ronald Munatsi

Received: 18 Feb. 2025; Accepted: 08 July 2025; Published: 28 Oct. 2025

Copyright: © 2025. The Author(s). Licensee: AOSIS.
This work is licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0) license (https://creativecommons.org/licenses/by/4.0/).

Abstract

Background: Sustainable development challenges are placing increasing pressure on governments worldwide to adopt evidence-informed decision-making (EIDM). The complexity of these challenges necessitates a multi-disciplinary approach to EIDM. Despite evidence of the efficacy of artificial intelligence (AI) in processing big data, there is a gap in its use to enhance EIDM.

Objectives: The study aims to validate the claim that ‘AI can enhance EIDM’.

Method: A rapid literature review methodology using abridged systematic review principles was used to collect and synthesise evidence on the use of AI, machine learning (ML) and deep learning (DL) in EIDM. Thematic content analysis was conducted to analyse the review data.

Results: Despite some equity, validation, interoperability, transparency and other challenges, AI can facilitate evidence synthesis and intuitive visualisation capabilities that render complex analysis easy to comprehend and use in decision-making. AI-based ML and DL can improve EIDM by streamlining complex decision-making procedures and enhancing process efficiency and objectivity.

Conclusion: Complex decision-making may now be automated through consistent data trend analysis, forecasting, uncertainty quantification, user demand prediction, choice recommendation and suitable information packaging using AI-driven technologies. Gaining transformational insights to improve decision outcomes in important sectors is now feasible, but more research is required to address fairness and bias issues in AI systems, guarantee openness and explainability, create strong data governance frameworks and encourage citizen engagement.

Contribution: This study provides a solid basis for examining a more comprehensive framework tying theory and practice in a way that is understandable and essential to mainstreaming the use of AI in EIDM.

Keywords: artificial intelligence; machine learning; deep learning; evidence-informed decision-making; policy.

Introduction

There is strong evidence that policy decisions based on sound evidence produce favourable outcomes (Oliver, Lorenc & Innvær 2014). At the same time, the global economic, technological and environmental challenges to sustainable development are increasingly complex (OECD 2017). The diverse range of sustainable development challenges is placing increasing pressure on governments around the world for evidence-informed decision-making (EIDM), with the public calling for greater accountability and transparency in policy and other decision-making (Hawkins & Parkhurst 2016). The complexity of the problems we face in a range of areas, such as health, education, technology and climate change, as well as of our policy ecosystems, necessitates an approach to evidence generation and decision-making that uses multi-disciplinary, interdisciplinary and transdisciplinary lenses (Jensen, Koop & Tatham 2014). This kind of approach integrates the natural, social and biological sciences from a humanities perspective that crosses the socially constructed boundaries of the various disciplines, allowing analysis and harmonisation of the relationships between them in a synchronised whole (Choi & Pak 2006). Sustainable Development Goal (SDG) 9, which focuses on ‘Industry, Innovation, and Infrastructure’, is crucial because it lays the groundwork for accomplishing many of the other SDGs. Innovating in the ethical and responsible use of artificial intelligence (AI) in EIDM is therefore central to overall progress towards a sustainable future in light of global technological trends.

The coronavirus disease 2019 (COVID-19) pandemic is a vivid illustration of how complex these challenges can be. It demonstrates how a single problem can have an enormous number of individual components interacting in sophisticated ways, and how difficult it is to make decisions on interventions to address them (Alanazi et al. 2020). Effective EIDM is a pivotal function in addressing such global challenges. However, decision-makers require access to a wide range of the best available evidence to make informed decisions that positively impact lives (Oxman et al. 2009). To effectively address these intricate issues, we must reorient our present policy and decision-making processes by enhancing the analytical data processing and evidence synthesis tools and frameworks (eds. Love & Stockdale-Otarola 2017).

Evidence for decision-making is drawn from a variety of sources, such as impact evaluations, academic research, administrative and other data types, citizen knowledge from stakeholder consultations and professional advice from experience (Ademokun et al. 2016). Such evidence is fundamental in informing policy decisions and equitable programmes across all government sectors, rather than relying on opinion, emotion, populism, political ideology or other non-scientific sources (Biden 2021). In light of this, the application of AI has received considerable attention as a means to guarantee objectivity, effectiveness and the use of solid evidence in decision-making.

Contribution to literature

There is significant literature on the role and effectiveness of AI, machine learning (ML) and deep learning (DL) in processing ‘big data’ that is too intricate for conventional processing approaches. However, a gap still exists in the use of AI, ML and DL to support EIDM, especially in public policy and governance (Luan et al. 2020; Ojokoh et al. 2020; Zuiderwijk, Chen & Salem 2021). Simply put, AI is ‘the capacity of computers to perform tasks and adapt to emerging situations using similar intelligence as a human being would do’ (Roth & Kersting 2018). Machine learning is a subset of AI that refers to the capability of systems to improve with experience: rather than being explicitly reprogrammed, they acquire the capacity to perform tasks or solve problems automatically through repeated processing of the same or similar tasks or problems (Janiesch, Zschech & Heinrich 2021). In the context of this study, a computer program attains the capability of a decision-maker in making policy and other decisions based on the information provided. Just as ML is a subset of AI, DL is a subset of ML, but it goes beyond ML in imitating the way human beings think and learn by simulating the structure and function of biological neurons in the brain (Sharma, Sharma & Jindal 2021). Neurons are the fundamental units of the brain and nervous system, responsible for receiving sensory input from the external world. Deep learning therefore enables the analysis of complex data patterns and the development of predictive models, leading to more accurate and efficient decisions (Shahid, Rappon & Berta 2019).
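A minimal sketch of the distinction drawn here, assuming Python with scikit-learn and purely synthetic data rather than any real policy dataset: a linear classifier stands in for conventional ML, while a small multi-layer neural network stands in for the DL family of models that loosely mimic networks of neurons.

# Illustrative only: synthetic data, not a real policy dataset.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic 'evidence features' and a binary decision label.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Conventional ML: a linear model learns from labelled examples.
ml_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# DL-style model: a small feed-forward neural network with two hidden layers.
dl_model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                         random_state=0).fit(X_train, y_train)

print("ML accuracy:", ml_model.score(X_test, y_test))
print("DL accuracy:", dl_model.score(X_test, y_test))

Both models learn from the same labelled examples; the neural network simply stacks layers of simple units, which is the sense in which DL extends ML here.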

To verify the claim made in studies by Holford et al. (2019) and Patel et al. (2021) that ‘ML and DL technologies using AI can enhance EIDM’, a rapid review of pertinent literature was conducted. Drawing from the synthesis, generalisations were then made regarding the ways in which AI, ML and DL can improve EIDM. This paper adds to a renewed research agenda by highlighting specific areas where AI can improve EIDM through its application in policy institutional processes and their systemic environments, even though it may not necessarily advance our technical knowledge and understanding of the actual integration of AI in EIDM.

Research methods and design

A rapid literature review methodology using abridged systematic review principles was used to collect and synthesise evidence on the use of AI, ML and DL in EIDM. After removing duplicates and further eligibility screening, 31 articles were included in the review. The use of this approach was primarily driven by resource constraints, because conducting a systematic review requires additional time and personnel costs across a variety of related professional skill sets, including statisticians, methodologists, subject matter experts and information specialists (often librarians). These experts work together to guarantee that the review is thorough, accurate and trustworthy (Shaheen et al. 2023). Consequently, this study’s primary weakness is that its methodology was less rigorous than that of a full-fledged systematic review. A systematic review reduces the possibility of bias by synthesising the literature on a well-defined research question, using a clear, structured and repeatable methodology to find, evaluate and synthesise evidence on a topic to guide decision-making (Gopalakrishnan & Ganeshkumar 2013).

According to Tricco et al. (2015), a rapid literature review is a streamlined or ‘light touch’ version of a systematic review in which the process is expedited, and certain components of the procedure may be left out to produce evidence quickly while retaining a somewhat comparable level of rigour. The PICOT mnemonic (Population OR Patient OR Problem, Intervention, Comparison, Outcome and Time) was used to define the research question and formulate the search strategy. EBSCOHost, EmeraldInsight, SAGE, Springer and Web of Science databases were searched using a systematic advanced search strategy to identify articles to include in the evidence synthesis. The database search was for English full-text articles published in the last 20 years (2004–2024) covering AI, ML, DL and EIDM.

There was no geographical scope, and articles published worldwide were included. Two independent reviewers screened the search results and conducted data abstraction from the included articles. Descriptive analysis was then conducted to synthesise data from the included literature. The following search strategy was used to identify relevant articles:

((‘Artificial intelligence’ OR ‘Deep Learning’ OR ‘Expert Systems’ OR ‘Machine Learning’ OR ‘Neural Networks’ OR ‘Robotics’) AND (‘Data driven decision making’ OR ‘Data driven practice’ OR ‘Evidence based decision making’ OR ‘Evidence based practice’ OR ‘Evidence informed decision making’ OR ‘Evidence informed policy making’ OR ‘Evidence translation’ OR ‘Evidence-to-policy’ OR ‘Research translation’ OR ‘Knowledge translation’ OR ‘Research-to-action’ OR ‘Action Research’)).
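As an aside, the same AND/OR logic can be applied programmatically when screening records exported from a database. The sketch below is purely illustrative (plain Python over hypothetical exported record text, not the syntax of any of the databases actually searched): a record is retained only if it contains at least one AI-related term and at least one EIDM-related term.

# Hedged sketch: mirrors the review's AND/OR logic on locally exported records.
AI_TERMS = ["artificial intelligence", "deep learning", "expert systems",
            "machine learning", "neural networks", "robotics"]
EIDM_TERMS = ["data driven decision making", "data driven practice",
              "evidence based decision making", "evidence based practice",
              "evidence informed decision making", "evidence informed policy making",
              "evidence translation", "evidence-to-policy", "research translation",
              "knowledge translation", "research-to-action", "action research"]

def matches_search_strategy(record_text: str) -> bool:
    """Return True if the text contains an AI term AND an EIDM term."""
    text = record_text.lower()
    has_ai = any(term in text for term in AI_TERMS)
    has_eidm = any(term in text for term in EIDM_TERMS)
    return has_ai and has_eidm

# Example usage with a hypothetical exported record.
record = "Machine learning approaches for evidence informed policy making in health."
print(matches_search_strategy(record))  # True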

Ethical considerations

Ethical clearance to conduct this study was obtained from the University of Johannesburg Faculty of Humanities Research Ethics Committee (No. REC-01-049-2022).

Results

Artificial intelligence in the evidence-informed decision-making ecosystem is not ‘science fiction’ or something futuristic

There is evidence of numerous initiatives to harness the power of AI in EIDM, and there is also evidence of a growing interest in studies covering its use in the public sector. However, empirical insights into how this can be effectively done and scaled up are still lacking (Charles, Rana & Carter 2022). One apparent reason for using AI in public sector processes is that it can save workers vast amounts of time, allowing employees to concentrate on more crucial duties and potentially speeding up the delivery of services. The implementation of AI in EIDM is not ‘science fiction’ or futuristic but something currently being deployed and integrated in different sectors, as demonstrated by the literature explored from study areas linked to AI and its use in supporting EIDM, for example, computer science, data science, informatics, industry and technology, governance and public policy. The world already engages daily with ML and DL technologies that use AI, for instance, various automated devices, virtual audio assistants, Internet cookies, and online search predictions and suggestions based on previous actions on the Internet, among other ‘thinking systems’ (Roth & Kersting 2018).

However, regarding specific sectors, the health sector dominates the use of AI, ML and DL to facilitate EIDM. This dominance of the health sector may be attributed to the fact that the EIDM concept has its origin in evidence-based practice (EBP), which also stemmed from evidence-based medicine (EBM) (Eddy 2011). The use of AI, ML and DL in supporting EIDM has also been predominantly practised in medical diagnosis (Secinaro et al. 2021).

The EIDM ecosystem entails various processes, the first being evidence generation. Multiple types of evidence are used in decision-making, including research studies, stakeholder consultations, expert or practice-informed knowledge, administrative data and statistics (Turner et al. 2017). These forms of evidence are pulled together and synthesised. The synthesis process involves gathering, critically appraising and aggregating the evidence to answer policy questions. Decisions are then made, and policies are developed based on the synthesised evidence. Upon implementation, the policies can be evaluated to establish their impact or efficacy, reviewed where possible, and the loop continues (Kano & Hayashi 2021). However, it should be noted that evidence is just one of many elements in the mix that may shape the rationale for decision-making (Françoise et al. 2022).

Data processing, evidence synthesis and communication define the gains of using artificial intelligence, machine learning and deep learning in evidence-informed decision-making

Evidence for decision-making is only as good as the processes for generating, synthesising and communicating the underlying data and information (Rickles 2011). The proliferation of information and allied communication technologies, coupled with global interconnectivity and the creation of knowledge societies in the fourth industrial revolution (4IR), has immense capacity to improve these processes (Adhikari 2020). Data processing, evidence synthesis and communication are now defining the gains from the transformative capacity of the 4IR as information and knowledge become central to fundamental liberties and human development (Markowitz 2019). However, EIDM is a very complex process (Mueller 2020).

There is increasing use of interdisciplinary and transdisciplinary approaches in generating evidence, which facilitates the formulation of comprehensive research questions and yields varied but interlinked perspectives on development challenges, making it possible for policymakers to make informed decisions based on a broad range of evidence (Choi & Pak 2006).

Using multi-disciplinary frameworks in EIDM adds an extra layer to this complexity; hence the need to upgrade the processes of collecting, synthesising and communicating evidence for decision-making (Gluckman, Bardsley & Kaiser 2021). Bolton, Raven and Mintrom (2021) observe that decision-making processes related to policies are intricate, if not chaotic. To help decision-makers better navigate such a complex environment, AI may make it easier to generate, synthesise and communicate data and evidence. For this to occur, AI-based ML and DL technologies must be incorporated into that intricate decision-making environment in a way that takes decision-making processes into account and aims to produce the most comprehensive and realistic sustainable development outcomes.

Originally, information communication technologies (ICTs) were effective only at the transaction-processing level and less useful for complex decision-making. However, developments in ICT algorithms and in ML and DL technologies using AI have changed this. It is now possible to manage and analyse data trends, develop data consistency, forecast, quantify uncertainty, anticipate user information needs, provide information in the most appropriate form and suggest possible and impactful courses of action (Phillips-Wren & Jain 2006).
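A minimal sketch of the kind of capability described here, assuming Python with NumPy and a purely illustrative indicator series: a linear trend is fitted to historical values, a point forecast is produced for future periods, and the spread of the residuals is used to attach an approximate prediction interval, a simple form of uncertainty quantification.

import numpy as np

# Illustrative annual indicator values (e.g., a service-delivery metric); not real data.
years = np.arange(2015, 2025)
values = np.array([52.0, 54.1, 55.3, 57.8, 59.0, 58.6, 61.2, 63.0, 64.5, 66.1])

# Fit a simple linear trend to the historical series.
slope, intercept = np.polyfit(years, values, deg=1)
fitted = slope * years + intercept
residual_std = np.std(values - fitted, ddof=2)  # spread around the trend

# Forecast the next three years with an approximate 95% prediction interval.
future = np.arange(2025, 2028)
forecast = slope * future + intercept
lower, upper = forecast - 1.96 * residual_std, forecast + 1.96 * residual_std

for year, point, low, high in zip(future, forecast, lower, upper):
    print(f"{year}: forecast {point:.1f} (approx. 95% interval {low:.1f} to {high:.1f})")

In practice, decision support systems use far richer models, but the pattern is the same: a forecast plus an explicit statement of how uncertain it is.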

Complicated decision-making processes using interdisciplinary and transdisciplinary approaches can now be automated, providing decision-makers with numerous evidence-informed policy options, and it also becomes feasible to forecast the effects of future decisions (Coombs et al. 2020). Notable developments in this regard exist in data management, analytics and visualisation capabilities that enable data processing and integration by applying complex analysis before rendering the analytical insights easily usable by decision-makers through intuitive visualisation (Nash, Trott & Allen 2022). A simple example of the use of AI, ML and DL in EIDM that emerged from the evidence synthesis is a model developed by Cornell University.

The system analyses project reports in agriculture by detecting the significant types of interventions, the main activities associated with those interventions and the most dominant anticipated outcomes. It then combines the generated evidence, using AI-supported ML and DL technologies, to develop frameworks that can forecast how the projects will perform and measure their efficacy rates. Predictive analytics, combined with statistical methods drawn from ML and DL data mining and predictive modelling, is then used to assess present and historical evidence and forecast future outcomes and trends (Lefkowitz 2018). In this way, policy and other decision-makers obtain transformational insights to improve policy outcomes in critical sectors.
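The Cornell model is only described in outline here, so the sketch below is merely an illustration of the general pattern, assuming Python with scikit-learn and an invented table of past projects: structured features extracted from project reports (intervention type, budget, duration) are used to train a model that forecasts an outcome indicator for a proposed project.

# Hedged illustration of predictive analytics on project evidence; all data invented.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

past_projects = pd.DataFrame({
    "intervention": ["irrigation", "seed_subsidy", "extension", "irrigation", "extension"],
    "budget_usd_k": [120, 80, 60, 200, 75],
    "duration_months": [24, 12, 18, 36, 12],
    "yield_gain_pct": [14.0, 6.5, 9.0, 21.0, 7.5],  # observed outcome
})

features = past_projects[["intervention", "budget_usd_k", "duration_months"]]
outcome = past_projects["yield_gain_pct"]

model = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), ["intervention"])],
        remainder="passthrough")),
    ("forest", RandomForestRegressor(n_estimators=200, random_state=0)),
]).fit(features, outcome)

# Forecast the expected outcome of a proposed project.
proposal = pd.DataFrame({"intervention": ["irrigation"],
                         "budget_usd_k": [150], "duration_months": [24]})
print("Forecast yield gain (%):", model.predict(proposal)[0])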

Artificial intelligence offers unlimited opportunities to enhance evidence-informed decision-making

The cited developments are evidence that the knowledge- and technology-driven 4IR dispensation, characterised by ML and DL systems using AI, provides unlimited opportunities to enhance decision-making by removing cognitive biases that cause decision-makers to make flawed decisions (Rastogi et al. 2022). Global challenges require interlinked policy interventions.

Generation and integration of robust evidence provide policymakers with immense insight to define policy priorities by modelling intricate systems and situations to assess the feasibility of policy decisions using evidence. Such capacity demonstrates the immense power of ML and DL in generating, searching and synthesising evidence for decision-making in a more systematic, objective and transparent way (Felzmann et al. 2019). Big data is crucial in decision-making (Saunders et al. 2020).

These emerging devices and applications, powered by AI and using ML and DL algorithms, can now provide policymakers with easy, quick and unlimited access to evidence, the opportunity to make informed decisions and to predict the impact of their decisions and policy procedures (Qiao & Beling 2016). The AI subsets of ML and DL technologies have demonstrated effectiveness in EIDM, particularly in health care systems, by improving patient diagnosis and prognosis outcomes.

Machine learning and DL technology has largely been integrated into health care services and is greatly enhancing decision outcomes informed by many variables that are otherwise difficult for human physicians to articulate (Giordano et al. 2021).

Various policy institutions are already innovating with machine learning and deep learning technologies that use artificial intelligence

Chatbots are contributing to more responsive EIDM by enhancing public participation, accelerating information availability and facilitating data-driven decision-making. They are able to evaluate big databases, automate tedious tasks and gather public opinion (Shekgola & Modiba 2025). The use of chatbots in the South African Parliament to help members of parliament with legislative information, such as bill analysis, inferences and comprehension of revision instructions, is one example of current AI applications in parliaments. Conversational chatbots have developed to respond to queries in an increasingly human-like way. Most chat conversations that users start on different websites are first handled by a bot and passed to a real person only if the bot is unable to assist; often, people are not even aware that they have been interacting with a bot from the beginning. Bots with AI support can also be used for training and orientation.

Artificial intelligence services based on ML and DL are also being used by the Italian Parliament. These include automatic categorisation of laws and amendments, similarity checks and the use of markup elements to make it easier for decision-makers to navigate the numerous bills and acts they must consult during the legislative process (Stempeck 2024). One committee of the Finnish Parliament made an intriguing innovation when it used AI-generated answers to questions about a policy issue it was investigating during a hearing. Developments in ML and DL using AI have also changed chatbots significantly: AI-powered large language models (LLMs) have given rise to a more advanced and popular generation of intelligent chatbots, such as ChatGPT and Gemini, which present fresh opportunities to locate, generate and synthesise relevant evidence on complex issues (Hosseini & Tupasela 2025).
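As an illustration of the kind of similarity check mentioned above (not the Italian Parliament’s actual system), the following sketch, assuming Python with scikit-learn and invented bill summaries, ranks existing bills by cosine similarity to a new amendment so that drafters can quickly see related instruments.

# Hedged sketch of a document similarity check; bill texts are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

bills = {
    "Bill A": "Regulation of data protection and privacy for digital public services.",
    "Bill B": "Amendments to agricultural subsidies and rural infrastructure funding.",
    "Bill C": "Framework for artificial intelligence oversight in government decision-making.",
}
new_amendment = "Proposed oversight rules for automated decision systems used by state agencies."

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(list(bills.values()) + [new_amendment])

# Compare the new amendment (last row) against each existing bill.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for name, score in sorted(zip(bills, scores), key=lambda pair: -pair[1]):
    print(f"{name}: similarity {score:.2f}")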

Chatbots are being used by the Norwegian government to improve citizen involvement and efficiency in public administration. These AI systems handle enquiries from citizens, provide information and help with tasks such as sick leave applications.

Additionally, the government is emphasising data-driven decision-making, improving human–AI cooperation and fostering transparency and trust. It is also exploring new ways of working collaboratively and teaching employees how to use chatbots efficiently. By increasing information accessibility, boosting citizen involvement, expediting the policy-making process and encouraging accountability and transparency, chatbots provide a potent tool for strengthening EIDM. Chatbots are expected to become more significant in influencing public policy and enhancing citizens’ lives as AI technology develops (Dube 2024).

To enable more effective updating of constituency data and spatial analysis, as well as the development of AI application tracking systems to facilitate natural language processing, a number of government and parliamentary institutions have begun integrating data science technologies such as geographic information systems (GIS) (Hannum et al. 2025). For example, implementing data analytics to improve the oversight function involves developing dashboards that track policy implementation, monitor budget spending and identify issues that call for legislative intervention. AI applications are also being employed to gather and handle input from stakeholder consultations, utilising data mining to identify trends in stakeholder suggestions and synthesise them.
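A minimal sketch of the data-mining step described here, assuming Python with scikit-learn and a handful of invented stakeholder submissions: the submissions are vectorised and clustered so that recurring themes can be surfaced for synthesis.

# Hedged sketch: grouping invented stakeholder submissions into candidate themes.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

submissions = [
    "Improve access to clean water in rural districts.",
    "Boreholes and water infrastructure need urgent maintenance.",
    "Expand broadband connectivity for schools.",
    "Internet access in schools should be subsidised.",
    "Clinics require more nurses and essential medicines.",
    "Rural health facilities are understaffed.",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(submissions)

# Three clusters chosen purely for illustration; real consultations need tuning.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

for cluster_id, text in sorted(zip(kmeans.labels_, submissions)):
    print(f"theme {cluster_id}: {text}")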

The US government is using chatbots to improve evidence-based decision-making through improved citizen involvement, data availability and accurate information provision. For instance, the National Secure Data Service Demonstration Project utilises chatbots to retrieve and examine data from several sources with an emphasis on evidence. By offering individualised experiences, better communication and feedback gathering, chatbots also increase citizen involvement. By automating repetitive procedures, providing round-the-clock accessibility and cutting expenses, they simplify government operations. The government is also using chatbots to gauge user acceptance, reduce bias in evidence use and increase accountability and transparency in its decision-making procedures (Larsen & Følstad 2024).

There are other specific examples of the impact of ML and DL applications using AI in real-world decision-making. The Japan Meteorological Agency (JMA) uses DL to analyse seismic data in real time as part of an AI-powered earthquake prediction system.

By identifying patterns in subterranean tremors, the AI increases the precision of early warnings. The application has improved emergency planning and resource allocation, decreased false warnings, accelerated evacuations and raised seismic detection accuracy by 70%. The US pioneered AI-powered predictive policing, using ML models to evaluate crime patterns through real-time analysis of data from surveillance cameras, social media and historical records. Crime rates in test cities decreased by 20% as a result of this AI-driven risk assessment, which forecast high-crime regions before incidents occurred, improved reaction times for high-risk locations and deployed police resources more effectively. Similarly, an AI-powered road management system with smart traffic lights that react to traffic in real time was put into place in Dubai. Artificial intelligence cameras identify infractions and reckless driving, offering predictive analytics to enhance traffic safety and prevent collisions. Because of these proactive safety measures, the initiative reduced traffic by 25%, resulting in quicker commutes, lower CO2 emissions from less idling and fewer accidents.1

Another area where AI, ML and DL possess significant capacity to revolutionise EIDM, as far as evidence generation and synthesis are concerned, is systematic reviews, which are an integral component of EIDM. A systematic review is a process of analysing the evidence on an unambiguously defined question using systematic and explicit methods to search for, include and exclude studies before critical appraisal and eventual synthesis of the identified relevant evidence (Krnic Martinic et al. 2019). Systematic reviews identify, assess and condense the results of all relevant research on a policy or other decision-making issue to make the evidence more accessible and user-friendly to decision-makers (Petkovic et al. 2018).

Although systematic reviews cost relatively less than conventional primary research studies, they are both time-consuming and labour-intensive. Evidence for decision-making needs to be produced more quickly while maintaining its rigour and robustness. Therefore, using ML and DL systems powered by AI to develop algorithms for automated literature searching and screening procedures can contribute immensely to the process and quality of EIDM (Hamel et al. 2021; Zhang et al. 2022).
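A minimal sketch of the screening automation described in this literature (Hamel et al. 2021; Zhang et al. 2022), assuming Python with scikit-learn and a toy set of labelled abstracts rather than a real review dataset: a classifier is trained on records already screened by hand and then used to rank the remaining abstracts by predicted relevance so that reviewers can prioritise them.

# Hedged sketch: prioritising unscreened abstracts with a simple text classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled abstracts from an initial manual screening round (1 = include, 0 = exclude).
screened_abstracts = [
    "Machine learning to support evidence informed health policy decisions.",
    "Deep learning for clinical decision support in primary care.",
    "A qualitative history of municipal archives in the nineteenth century.",
    "Survey of consumer preferences for soft drink packaging.",
]
labels = [1, 1, 0, 0]

screener = make_pipeline(TfidfVectorizer(stop_words="english"),
                         LogisticRegression(max_iter=1000)).fit(screened_abstracts, labels)

# Rank new, unscreened abstracts by predicted probability of inclusion.
unscreened = [
    "Artificial intelligence for synthesising evidence in public policy.",
    "Marketing strategies for retail loyalty programmes.",
]
probs = screener.predict_proba(unscreened)[:, 1]
for text, prob in sorted(zip(unscreened, probs), key=lambda pair: -pair[1]):
    print(f"{prob:.2f}  {text}")

In production screening tools the human reviewer remains in the loop; the model only changes the order in which records are examined.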

Therefore, ML and DL technologies using AI offer policymakers a more objective, inclusive, responsive, transparent and rigorous approach to decision-making through their capacity to rapidly generate and synthesise vast amounts of evidence, identify patterns and draw inferences (Felzmann et al. 2019). In explaining the design features of forecasting decision support systems, Fildes, Goodwin and Lawrence (2006) highlighted that these technologies can provide insights in almost real time, enabling policymakers to immediately make informed decisions to address emerging policy problems using robust evidence. Thus, using AI-powered decision support systems, it is now possible to analyse data trends, forecast, quantify uncertainty, anticipate user information needs, provide information in the most appropriate form and suggest numerous possible and impactful courses of action. It is also now feasible to forecast the effects of future decisions and suggest ways to prevent future shocks.

The benefits of using artificial intelligence in evidence-informed decision-making outweigh the challenges

Findings also revealed that most ML and DL systems that use AI are typically disruptive and have the potential to harness and synthesise multi-dimensional evidence from numerous contextual sources using very little time and few resources (Lobel 2025). This potential has been cited as a threat to EIDM value chains, making some roles and functions redundant (Nalubega & Uwizeyimana 2019). As in all ICT-supported processes, security and confidentiality are also significant concerns (Beil et al. 2019). The main ethical problem, however, is associated with data standards, for example, the choice of datasets to use in developing the algorithms without subjectivity and bias (Gerke, Minssen & Cohen 2020). Linked to the development of algorithms is the issue of transparency and the perceived legitimacy of decisions made using AI, ML and DL, where there may be a need to explain the process followed in reaching a decision (De Fine Licht & De Fine Licht 2020).

In integrating these AI-driven tools, decision-makers will be obligated to alter the way they develop and implement policies and will have to assess and reorient their technological systems and capacity development, or procure new expertise, among a host of other related issues. As much as decision-makers realise the latent power of AI and would want to adopt these technologies, their integration presents specific practical and regulatory challenges that institutions may not be equipped to address on their own (Oxford Commission on AI and Good Governance 2021).

According to the Evidence Synthesis Infrastructure Collaborative (2025), there are issues with validation, interoperability, transparency and fair access when AI-based ML and DL are integrated into EIDM. Researchers and decision-makers are unsure about the dependability of many AI-driven technologies because of their inconsistent efficacy and the lack of standardised validation processes. The adoption and usability of these technologies across disciplines are restricted by methodological and trustworthiness inconsistencies in the absence of a centralised governance framework. It is also more difficult to trust and use these technologies when there is little openness, missing metadata and inadequate validation procedures. Furthermore, in the absence of established performance benchmarks, synthesis, evidence and research professionals must invest more time in learning data science and working with specialists to guarantee that AI applications function correctly and produce trustworthy results. Although the cited challenges are valid concerns, the benefits of AI, ML and DL outweigh them. It is also possible to address these concerns when developing AI-driven systems, for instance, by ensuring transparency and inclusivity in governance and technical implementation.

Harnessing artificial intelligence in evidence-informed decision-making requires holistic integration at individual, institutional and systemic levels

To fully benefit from AI, technologies such as ML and DL must be adopted holistically in order to advance EIDM, particularly in policy institutions. This can be accomplished by integrating AI at the systemic, institutional and individual levels. According to Evans (2017), it is critical to invest in the development of individual competencies in the use of DL and ML to access, interpret and comprehend research evidence, as well as in the processes involved in its production and utilisation. At the institutional level of capacity development, using AI to build supportive organisational procedures that prioritise efficiency in obtaining, synthesising and applying evidence in decision-making is critical. According to Aaserud (2016), building institutional capacity includes creating explicit AI policies, procedures and systems that are intended to promote the use of evidence in decision-making.

Establishing a culture that supports the application of AI to enhance the systemic level of evidence-based decision-making is equally important, as it enables communication with the external world, which comprises important players in the evidence–policy mix. As a strategy, this comprehensive approach creates partnerships and networks of support, increases political will and commitment, enhances knowledge, skills and attitudes, integrates systems and cultivates a culture of consistent use of evidence (Lavis et al. 2003). Overall, ICT strengthening is essential, with a focus on the critical roles played by robust ICT systems, first-rate connectivity and technical teams equipped with the necessary skill sets to manage the quickly advancing technological landscape.

Policy-making institutions can investigate ways to guarantee interoperability so that systems seamlessly feed into each other, and use AI technologies to interface highly technical processes, such as GIS capabilities and visualisations, with their overall knowledge management systems for effective data processing, internal sharing and active dissemination. The majority of policy institutions, including parliaments and government ministries, have existing institutional frameworks that can be strengthened by integrating and streamlining current systems and consistently advancing AI innovation to enhance EIDM (Vatamanu & Tofan 2025).

Conclusion

Given high-performance computing systems, big data and ML and DL technologies, AI is ushering in a new dispensation of EIDM supported by applications that emulate human intelligence, for instance, insight, analytical skills, discernment and experiential learning (Zuo et al. 2019). Machine learning and DL technologies using AI are even more desirable in situations where decisions may have potential adverse implications for the people affected by them, for instance, in a context where there is legal or political accountability in the event of negative outcomes. These technologies provide more reassurance against unintended effects than humans do (Coyle & Weller 2020). Despite the cited security, disruption and ethical concerns, innovative AI-supported 4IR technologies using ML and DL to generate, access and synthesise evidence have immense capacity to enhance evidence-informed decisions for global transformation.

Given the move towards multi-disciplinary, interdisciplinary and transdisciplinary approaches in understanding global challenges and making policy decisions to address complex socio-economic, political and environmental problems, the use of ML and DL simplifies this process (Kennedy et al. 2021). It is imperative to develop an evidence-informed advocacy strategy to pro-actively engage governments and key stakeholders to ensure the institutionalisation of EIDM using a holistic interdisciplinary and transdisciplinary approach supported by AI, ML and DL. In this way, solutions can be provided that address our challenges more adequately and efficiently within the sustainable development agenda. Fully functional e-government and e-parliament systems, with responsive research and information services that keep up with technological trends and a high demand for evidence by decision-makers, typified by improved policy and implementation, are all made possible by a variety of ML and DL applications using AI.

Finally, to fully embrace the benefits offered by integrating ML and DL using AI, addressing fairness and bias in the systems, guaranteeing openness and explainability, creating strong data governance frameworks and encouraging citizen engagement should be the main areas of future research goals. Additionally, research is required to improve the interpretability of AI insights, successfully incorporate AI into current workflows and create transparent accountability procedures for decisions made by AI (Glauberman et al. 2023).

Acknowledgements

The author expresses his gratitude to Charity Simbarashe Chisoro for assisting with the screening of search results and data abstraction.

A preprint version of this work was previously published on Authorea, https://doi.org/10.22541/au.168356746.69664781/v1, and the author acknowledges its role in shaping the final manuscript.

Competing interests

The author declares that he has no financial or personal relationships that may have inappropriately influenced him in writing this article.

Author’s contributions

R.M. is the sole author of this research article.

Funding information

This work received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.

Data availability

The data that support the findings of this study are available from the corresponding author, R.M., upon reasonable request.

Disclaimer

The views and opinions expressed in this article are those of the author and are the product of professional research. They do not necessarily reflect the official policy or position of any affiliated institution, funder, agency or the publisher. The author is responsible for this article’s results, findings and content.

References

Aaserud, M., 2016, ‘Identifying determinants of effective complex interventions in primary health care systems: A systematic review’, Implementation Science 11(1), 147.

Ademokun, A., Dennis, A., Hayter, E., Richards, C. & Runceanu, L.-E., 2016, Evidence informed policy making toolkit, INASP, Oxford, viewed 15 July 2024, from https://www.inasp.info/sites/default/files/2018-04/EIPM%20Toolkit-Ed2-FULL.pdf.

Adhikari, R., 2020, ‘Fourth industrial revolution: From least developed countries to knowledge societies’, in S.S. Aneel, U.T. Haroon & I. Niazi (eds.), Corridors of knowledge for peace and development, pp. 41–66, Sustainable Development Policy Institute, viewed 15 January 2025, from http://www.jstor.org/stable/resrep24374.13.

Alanazi, S.A., Kamruzzaman, M.M., Alruwaili, M., Alshammari, N., Alqahtani, S.A. & Karime, A., 2020, ‘Measuring and preventing COVID-19 using the SIR model and machine learning in smart health care’, Journal of Healthcare Engineering 2020, a8857346. https://doi.org/10.1155/2020/8857346

Beil, M., Proft, I., Van Heerden, D., Sviri, S. & Vernon Van Heerden, P., 2019, ‘Ethical considerations about artificial intelligence for prognostication in intensive care’, Intensive Care Medicine Experimental 7(1), 70. https://doi.org/10.1186/s40635-019-0286-6

Biden, J.R.J., 2021, Memorandum on Restoring Trust in Government Through Scientific Integrity and Evidence-Based Policymaking, viewed 4 January 2025, from https://www.epa.gov/system/files/documents/2023-12/biden-harris-administration-scientific-integrity-actions_accessible.pdf.

Bolton, M., Raven, R. & Mintrom, M., 2021, ‘Can AI transform public decision-making for sustainable development? An exploration of critical earth system governance questions’, Earth System Governance 9, 100116. https://doi.org/10.1016/j.esg.2021.100116

Charles, V., Rana, N.P. & Carter, L., 2022, ‘Artificial intelligence for data-driven decision-making and governance in public affairs’, Government Information Quarterly 39(4), 101742. https://doi.org/10.1016/j.giq.2022.101742

Choi, B.C. & Pak, A.W., 2006, ‘Multidisciplinarity, interdisciplinarity and transdisciplinarity in health research, services, education and policy: 1. Definitions, objectives, and evidence of effectiveness’, Clinical Investigation in Medicine 29(6), 351–364.

Coombs, C., Hislop, D., Taneva, S.K. & Barnard, S., 2020, ‘The strategic impacts of intelligent automation for knowledge and service work: An interdisciplinary review’, Journal of Strategic Information Systems 29(4), 101600. https://doi.org/10.1016/j.jsis.2020.101600

Coyle, D. & Weller, A., 2020, ‘Explaining machine learning reveals policy challenges: The need to make objectives explicit may expose policy trade-offs that had previously been implicit and obscured’, Science 368(6498), 1433–1434. https://doi.org/10.1126/science.aba9647

De Fine Licht, K. & De Fine Licht, J., 2020, ‘Artificial intelligence, transparency, and public decision-making: Why explanations are key when trying to produce perceived legitimacy’, AI and Society 35(4), 917–926. https://doi.org/10.1007/s00146-020-00960-w

Dube, I., 2024, ‘Chatbots: A tool to improve public service delivery and create public value’, Journal of Public Administration and Development Alternatives 9(3), 43–64. https://doi.org/10.55190/JPADA.2024.341

Eddy, D.M., 2011, ‘The origins of evidence-based medicine – A personal perspective’, Virtual Mentor 13(1), 55–60. https://doi.org/10.1001/virtualmentor.2011.13.1.mhst1-1101

Evans, D., 2017, ‘Evidence use in health policy making: An international scoping study’, BMJ Global Health 2(3), e000546. https://doi.org/10.1136/bmjgh-2016-000260.3

Evidence Synthesis Infrastructure Collaborative, 2025, Safe and responsible use of artificial intelligence: Final report (Working Group 3), McMaster Health Forum, Ontario.

Felzmann, H., Villaronga, E.F., Lutz, C. & Tamò-Larrieux, A., 2019, ‘Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns’, Big Data and Society 6(1), 205395171986054. https://doi.org/10.1177/2053951719860542

Fildes, R., Goodwin, P. & Lawrence, M., 2006, ‘The design features of forecasting support systems and their effectiveness’, Decision Support Systems 42(1), 351–361. https://doi.org/10.1016/j.dss.2005.01.003

Françoise, M., Frambourt, C., Goodwin, P. & Stüwe, L., 2022, ‘Evidence based policy making during times of uncertainty through the lens of future policy makers: Four recommendations to harmonise and guide health policy making in the future’, Archives of Public Health 80(1), 1–5. https://doi.org/10.1186/s13690-022-00898-z

Gerke, S., Minssen, T. & Cohen, G., 2020, ‘Ethical and Legal Challenges of Artificial Intelligence-Driven Healthcare’, in B. Adam & M. Kaveh (eds.), Artificial Intelligence in Healthcare, 1st edn., pp. 295–336. viewed 27 December 2025, from https://ssrn.com/abstract=3570129.

Giordano, C., Brennan, M., Mohamed, B., Rashidi, P., Modave, F. & Tighe, P., 2021, ‘Accessing artificial intelligence for clinical decision-making’, Frontiers in Digital Health 3, 645232. https://doi.org/10.3389/fdgth.2021.645232

Glauberman, G., Ito-Fujita, A., Katz, S. & Callahan, J., 2023, ‘Artificial intelligence in nursing education: Opportunities and challenges’, Hawaii Journal of Health and Social Welfare 82(12), 302–305.

Gluckman, P.D., Bardsley, A. & Kaiser, M., 2021, ‘Brokerage at the science-policy interface: From conceptual framework to practical guidance’, Humanities and Social Sciences Communication 8(1), 84. https://doi.org/10.1057/s41599-021-00756-3

Gopalakrishnan, S. & Ganeshkumar, P., 2013, ‘Systematic reviews and meta-analysis: Understanding the best evidence in primary healthcare’, Journal of Family Medicine and Primary Care 2(1), 9–14. https://doi.org/10.4103/2249-4863.109934

Hamel, C., Hersi, M., Kelly, S.E., Tricco, A.C., Straus, S., Wells, G. et al., 2021, ‘Guidance for using artificial intelligence for title and abstract screening while conducting knowledge syntheses’, BMC Medical Research Methodology 21(1), 285. https://doi.org/10.1186/s12874-021-01451-2

Hannum, K., Wellstead, A.M., Howlett, M. & Gofen, A., 2025, ‘Leveraging GIS for policy design: Spatial analytics as a strategic tool’, Policy Design and Practice 8(1), 35–49. https://doi.org/10.1080/25741292.2025.2474784

Hawkins, B. & Parkhurst, J., 2016, ‘The “good governance” of evidence in health policy’, Evidence & Policy 12(4), 575–592. https://doi.org/10.1332/174426415X14430058455412

Holford, J., Milana, M., Waller, R., Webb, S. & Hodge, S., 2019, ‘Data, artificial intelligence and policy-making: Hubris, hype and hope’, International Journal of Lifelong Education 38(6), iii–vii. https://doi.org/10.1080/02601370.2020.1715685

Hosseini, Z. & Tupasela, A., 2025, ‘Inclusive AI-based chatbots for public services in Finland: Potential of constructivism’, in Proceedings of the International Academic Forum, The Barcelona Conference on Arts, Media & Culture, 2024 Helsinki, pp. 159–175.

Janiesch, C., Zschech, P. & Heinrich, K., 2021, ‘Machine learning and deep learning’, Electronic Markets 31(3), 685–695. https://doi.org/10.1007/s12525-021-00475-2

Jensen, M.D., Koop, C. & Tatham, M., 2014, ‘Coping with power dispersion? Autonomy, co-ordination and control in multilevel systems’, Journal of European Public Policy 21(9), 1237–1254. https://doi.org/10.1080/13501763.2014.922861

Kano, H. & Hayashi, T.I., 2021, ‘A framework for implementing evidence in policymaking: Perspectives and phases of evidence evaluation in the science-policy interaction’, Environmental Science and Policy 116, 86–95. https://doi.org/10.1016/j.envsci.2020.09.001

Kennedy, L.W., Caplan, J.M., Garnier, S., Lersch, K., Miró-Llinares, F., Gibbs Van Brunschot, E.E. et al., 2021, ‘Editorial on using evidence based analytics to create narratives for police decision making’, Frontiers in Psychology 12, 791605. https://doi.org/10.3389/fpsyg.2021.791605

Krnic Martinic, M., Pieper, D., Glatt, A. & Puljak, L., 2019, ‘Definition of a systematic review used in overviews of systematic reviews, meta-epidemiological studies and textbooks’, BMC Medical Research Methodology 19(1), 1–12. https://doi.org/10.1186/s12874-016-0277-1

Larsen, A.G. & Følstad, A., 2024, ‘The impact of chatbots on public service provision: A qualitative interview study with citizens and public service providers’, Government Information Quarterly 41(2), 101927. https://doi.org/10.1016/j.giq.2024.101927

Lavis, J.N., Robertson, D., Woodside, J.M., McLeod, C.B. & Abelson, J., 2003, ‘How can research organizations more effectively transfer research knowledge to decision makers?’, Knowledge Transfer Study Group 81(2), 221–248. https://doi.org/10.1111/1468-0009.t01-1-00052

Lefkowitz, M., 2018, ‘Harnessing machine learning and big data to fight hunger’, Cornell Chronicle, viewed 15 January 2025, from https://news.cornell.edu/stories/2018/10/harnessing-machine-learning-and-big-data-fight-hunger.

Lobel, O., 2025, ‘Technological readiness versus disruption: A framework for assessing distinct artificial intelligence policy strategies’, Emory Law Journal 74, 1055, viewed 07 April 2025, from https://scholarlycommons.law.emory.edu/elj/vol74/iss5/1.

Love, P.J. & Stockdale-Otarola, J. (eds.), 2017, Debate the issues: Complexity and policy making, OECD, Paris.

Luan, H., Geczy, P., Lai, H., Gobert, J., Yang, S.J., Ogata, H. et al., 2020, ‘Challenges and future directions of big data and artificial intelligence in education’, Frontiers in Psychology 11, 580820. https://doi.org/10.3389/fpsyg.2020.580820

Markowitz, C., 2019, Harnessing the 4IR in SADC: Roles for policymakers, Occasional Paper 303, viewed 14 June 2024, from https://saiia.org.za/wp-content/uploads/2019/11/Occasional-Paper-303-markowitz.pdf.

Mueller, B., 2020, ‘Why public policies fail: Policymaking under complexity’, EconomiA 21(2), 311–323. https://doi.org/10.1016/j.econ.2019.11.002

Nalubega, T. & Uwizeyimana, D.E., 2019, ‘Public sector monitoring and evaluation in the fourth industrial revolution: Implications for Africa’, Africa’s Public Service Delivery and Performance Review 7(1), a318. https://doi.org/10.4102/apsdpr.v7i1.318

Nash, K., Trott, V. & Allen, W., 2022, ‘The politics of data visualisation and policy making’, Convergence 28(1), 3–12. https://doi.org/10.1177/13548565221079156

OECD, 2017, Debate the issues: Complexity and policy making, OECD Insights, OECD Publishing, Paris.

Ojokoh, B.A., Samuel, O.W., Omisore, O.M. Sarumi, O.A., Idowu, P.A., Chimusa, E.R. et al., 2020, ‘Big data, analytics and artificial intelligence for sustainability’, Scientific African 9, e00551. https://doi.org/10.1016/j.sciaf.2020.e00551

Oliver, K., Lorenc, T. & Innvær, S., 2014, ‘New directions in evidence-based policy research: A critical analysis of the literature’, Health Research Policy and Systems 12(1), 34. https://doi.org/10.1186/1478-4505-12-34

Oxford Commission on AI & Good Governance, 2021, AI in the public service: From principles to practice, White Paper, Oxford, viewed 05 January 2004, from https://oxcaigg.oii.ox.ac.uk/wp-content/uploads/sites/11/2021/12/AI-in-the-Public-Service-Final.pdf.

Oxman, A.D., Lavis, J.N., Lewin, S. & Fretheim, A., 2009, ‘SUPPORT tools for evidence-informed health policymaking (STP) 1: What is evidence-informed policymaking?’, Health Research Policy and Systems 7(1), S1. https://doi.org/10.1186/1478-4505-7-S1-S1

Patel, J., Manetti, M., Mendelsohn, M., Mills, S., Felden, F. & Rocha, M., 2021, AI brings science to the art of policymaking, Boston Consulting Group, viewed 05 April 2021, from https://www.bcg.com/publications/2021/how-artificial-intelligence-can-shape-policy-making.

Petkovic, J., Welch, V., Jacob, M.H. & Tugwell, P., 2018, ‘Do evidence summaries increase health policymakers’ use of evidence from systematic reviews? A systematic review’, Campbell Systematic Reviews 14(1), 1–52. https://doi.org/10.4073/csr.2018.8

Phillips-Wren, G. & Jain, L., 2006, ‘Artificial Intelligence for Decision Making’, in B. Gabrys, R.J. Howlett & L.C. Jain (eds.), Knowledge-Based Intelligent Information and Engineering Systems, KES 2006, Lecture Notes in Computer Science, vol. 4252, Springer, Berlin, Heidelberg.

Qiao, Q. & Beling, P.A., 2016, ‘Decision analytics and machine learning in economic and financial systems’, Environment Systems and Decisions 36(2), 109–113. https://doi.org/10.1007/s10669-016-9601-x

Rastogi, C., Zhang, Y., Wei, D., Varshney, K.R., Dhurandhar, A. & Tomsett, R., 2022, ‘Deciding fast and slow: The role of cognitive biases in AI-assisted decision-making’, Proceedings of the ACM on Human-Computer Interaction 6(CSCW1), 83. https://doi.org/10.1145/3512930

Rickles, D., 2011, ‘Public health’, in F. Gifford (ed.), Philosophy of medicine, pp. 523–572, Elsevier, Boston, MA, viewed 11 November 2024, from https://philpapers.org/rec/RICPH.

Roth, D. & Kersting, K., 2018, ‘Machine learning and artificial intelligence: Two fellow travelers on the quest for intelligent behavior in machines’, Frontiers in Big Data 1, 6. https://doi.org/10.3389/fdata.2018.00006

Saunders, G.H., Christensen, J.H., Gutenberg, J., Pontoppidan, N.H., Smith, A., Spanoudakis, G. et al., 2020, ‘Application of big data to support evidence-based public health policy decision-making for hearing’, Ear and Hearing 41(5), 1057–1063. https://doi.org/10.1097/AUD.0000000000000850

Secinaro, S., Calandra, D., Secinaro, A., Muthurangu, V. & Biancone, P., 2021, ‘The role of artificial intelligence in healthcare: A structured literature review’, BMC Medical Informatics and Decision Making 21(1), 1–23. https://doi.org/10.1186/s12911-021-01488-9

Shahid, N., Rappon, T. & Berta, W., 2019, ‘Applications of artificial neural networks in health care organizational decision-making: A scoping review’, PLoS One 14(2), e0212356. https://doi.org/10.1371/journal.pone.0212356

Shaheen, N., Shaheen, A., Ramadan, A., Hefnawy, M.T., Ramadan, A., Ibrahim, I.A. et al., 2023, ‘Appraising systematic reviews: A comprehensive guide to ensuring validity and reliability’, Frontiers in Research Metrics and Analytics 8, 1268045. https://doi.org/10.3389/frma.2023.1268045

Sharma, N., Sharma, R. & Jindal, N., 2021, ‘Machine learning and deep learning applications – A vision’, Global Transitions Proceedings 2(1), 24–28. https://doi.org/10.1016/j.gltp.2021.01.004

Shekgola, M. & Modiba, M., 2025, ‘Utilising an AI chatbot to support smart digital government for Society 5.0 in South Africa’, South African Journal of Information Management 27(1), a1983. https://doi.org/10.4102/sajim.v27i1.1983

Stempeck, M., 2024, How Italy’s government is using AI, viewed 02 March 2025, from https://democracytechnologies.org/ai-data/how-italys-government-uses-ai-data.

Tricco, A.C., Antony, J., Zarin, W., Strifler, L., Ghassemi, M., Ivory, J. et al., 2015, ‘A scoping review of rapid review methods’, BMC Medicine 13(1), 224. https://doi.org/10.1186/s12916-015-0465-6

Turner, S., D’Lima, D., Hudson, E., Morris, S., Sheringham, J., Swart, N. & Fulop, N.J., 2017, ‘Evidence use in decision-making on introducing innovations: A systematic scoping review with stakeholder feedback’, Implementation Science 12(1). https://doi.org/10.1186/s13012-017-0669-6

Vatamanu, A.F. & Tofan, M., 2025, ‘Integrating artificial intelligence into public administration: Challenges and vulnerabilities’, Administrative Sciences 15(4), 149. https://doi.org/10.3390/admsci15040149

Zhang, Y., Liang, S., Feng, Y., Wang, Q., Sun, F., Chen, S. et al., 2022, ‘Automation of literature screening using machine learning in medical evidence synthesis: A diagnostic test accuracy systematic review protocol’, Systematic Reviews 11(1), 11. https://doi.org/10.1186/s13643-021-01881-5

Zuiderwijk, A., Chen, Y.C. & Salem, F., 2021, ‘Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda’, Government Information Quarterly 38(3), 101577. https://doi.org/10.1016/j.giq.2021.101577

Zuo, J., Zhang, C., Chen, J., Wu, Y., Liu, Z. & Li, Z., 2019, ‘Artificial intelligence prediction and decision evaluation model based on deep learning’, in International Conference on Electronic Engineering and Informatics (EEI), Nanjing, China, 2019, pp. 444–448.

Footnote

1. https://blog.govnet.co.uk/technology/ai-in-government-case-studies#japan.


