Abstract
Background: The current world is widely characterised as volatile, uncertain, complex and ambiguous (VUCA), as well as brittle, anxious, non-linear and incomprehensible (BANI). Leaders are inundated with constant change and challenges in the VUCA and BANI contexts, which directly contribute to an increasing state of paralysed dysfunction. Artificial intelligence (AI) accelerates the growth of structured and unstructured data, worsening the VUCA and BANI contexts as organisations continue to struggle to manage and make sense of data. Innovative and sustainable approaches are needed to assist with the effective management of data into Strategic Intelligence (SI).
Objectives: This study aimed to expand on the Nominal Ranking Technique (NRT) methodology, as an innovative and sustainable approach to managing and making sense of big data (BD), leading to SI for informed decision-making.
Method: Content analysis as a qualitative approach was used to analyse 225 data files. The content analysis for this study is referred to as the NRT methodology.
Results: The newly expanded NRT methodology includes six colour-coded primary categories and two colour-coded secondary categories. The primary and secondary categories contribute to the structured and systematic approach of the NRT methodology, which resulted in six SI-Relevant data files.
Conclusion: The expanded NRT methodology provides a sustainable means of converting BD into actionable SI, thereby directly supporting informed decision-making in VUCA and BANI contexts.
Contribution: The structured and systematic approach of the NRT methodology directly contributes to the effective management of BD into SI for informed decision-making.
Keywords: NRT methodology; strategic intelligence; big data; VUCA; BANI; decision-making.
Introduction
In the volatile, uncertain, complex and ambiguous (VUCA) world, hereinafter referred to as the ‘as-is VUCA’ world, within which the brittle, anxious, non-linear and incomprehensible (BANI) context exists, data continue to grow exponentially, leading to the voluminous challenge described by Azzam and Beckmann (2024). Recent global events such as pandemics, wars, technological breakthroughs and economic instability highlight the need for the management of big data (BD) to provide data-driven insights that enable businesses to adapt and succeed amidst volatility, uncertainty, complexity and ambiguity (Kloud9 2023). In the context of this study, strategic intelligence (SI) is the evolution from traditional intelligences, such as business intelligence, competitive intelligence and military intelligence, to holistic foresight for strategic decision-making (De Koker 2025). Innovative and continuously expanding approaches are required for SI to be derived from BD. ‘Big data involves processing vast amounts of data using advanced techniques’ (Jamarani et al. 2024). The Nominal Ranking Technique (NRT) methodology is a structured and systematic approach initially adapted from the nominal group technique (NGT). De Koker (2018) and De Koker and Du Plessis (2019, 2020) adapted the basic principles of the NGT into what is known as the NRT methodology, which was later tested and applied by De Koker and Du Plessis (2024) and De Koker (2025). Building on this previous research, the present article expands the NRT methodology significantly by including more categories in data categorisation. The NRT methodology provides an innovative and sustainable approach to effectively managing BD.
The objective of this article is to expand the NRT methodology as an innovative and sustainable approach to managing and making sense of BD that will lead to SI for informed decision-making. Originating from a larger PhD study, which entailed a much larger data collection and analysis component than can be reported herein, this article reports only on one content analysis portion – the application of the NRT methodology to the University of Johannesburg (UJ) Information and Knowledge Management (IKM) Department Strategic Information Management (SIM) database.
The layout of this article begins with the introduction, in which the research objective is provided. The literature review provides the background and sets the context for expounding the NRT methodology. The research methodology section elaborates on the content analysis employed in the form of the NRT methodology. The data collection and results and the narration of findings follow, in which the application of the NRT methodology is discussed as the new contribution.
NRT methodology applied on big data for strategic intelligence
Strategic intelligence is the actionable force on which firm decisions are based. Traditionally, a five-step process to transform data into intelligence is followed in the intelligence cycle. This article adopts the definition of SI from Fernández-Villacañas Marín (2015), who defines SI as ‘a systematic and continuous process of producing needed intelligence of strategic value in an actionable form to facilitate long-term decision-making’. Waghmare (2019) further explains that SI, in the most basic definition, is about having relevant information in the hands of the right people at the right time to make the right decisions for the future success of an organisation.
This study recognises the intricate context of the as-is VUCA world, cognisant that BD continues to become bigger. Big data presents great opportunities in the as-is VUCA world. Big data, powered by advanced data science techniques such as time-series forecasting, machine learning, deep learning and Natural Language Processing (NLP), enables organisations to analyse vast historical and unstructured datasets to uncover patterns, predict market fluctuations and transform ambiguity into actionable insights. By visualising complexity and optimising scenario planning through tools such as clustering, decision trees and sentiment analysis, businesses can enhance strategic decision-making, reduce risks and navigate the volatility and uncertainty of the VUCA world more effectively (Google Cloud 2025; Kloud9 2023; Sarker 2021). Ma, Jørgensen and Ma (2024) note that the global volume of data is projected to double approximately every 2 years. This extensive growth of data comes with many computational and statistical challenges in BD because of the volume, variety and complexity of data (Ma et al. 2024). The challenges include ‘issues related to scalability and storage bottleneck, noise accumulation, spurious correlation, incidental endogeneity and measurement errors’ (Ma et al. 2024). It is this context that necessitates innovative and sustainable approaches to managing BD.
Research methods and design
As part of a larger PhD study, the research philosophy was pragmatism, in which content analysis was performed. As articulated by Saunders, Lewis and Thornhill (2016, 2023), content analysis is an analytical technique designed to code and categorise qualitative data to derive systematic, quantifiable insights. Dumay and Cai (2014) extend this understanding by observing that content analysis serves as a methodological approach for interpreting the meaning embedded within textual data, while also quantifying the occurrence of such interpretations. Furthermore, Saunders et al. (2016, 2023) emphasise that content analysis enables the explicit reporting of categorised qualitative data and supports the replication and validation of research findings in future studies. For the purposes of this study, content analysis is the NRT methodology.
Ethical considerations
Ethical clearance to conduct this study was obtained from the University of Johannesburg’s College of Business and Economics (CBE) Research Ethics Committee. Ethics clearance for this research was approved on 31 October 2022. Reference number: 2020SCiiS58.
Results
NRT methodology
The NRT methodology, results and discussion are presented together in one section because the NRT methodology is a structured and systematic process that is iterative in nature.
The NRT methodology was applied to the UJ IKM Department SIM database for this study, in which 225 data files reside. The NRT methodology, as mentioned, was expanded from the initial NRT methodology by De Koker and Du Plessis (2024). Part of the expansion of the NRT methodology identified four primary elements that form the foundation of the NRT methodology. These are: (1) standard of inclusion, (2) data cleansing linked to the Pareto principle, (3) relationship of data, information and intelligence linked to the Pareto principle and (4) VARCS (Valid, Authentic, Reliable, Current, Sufficient) principles by De Koker (2025).
The standard of inclusion notes the importance of predetermining inclusion criteria in the management of BD. ‘Inclusion criteria are a set of predefined characteristics used to identify subjects who will be included in a research study’ (Velasco 2012). Predetermining the inclusion criteria ensured that the data files in the NRT methodology complied with the predetermined standard of inclusion.
Data cleansing linked to the Pareto principle notes the importance of cleaning the data, in the form of the data files, to be included in the NRT methodology process. The Pareto principle governs data cleansing, validating its importance in the movement from data to information to intelligence. Intelligence becomes SI in the context of the executive and strategic decision-maker (De Koker 2025).
The relationship of data, information and intelligence linked to the Pareto principle notes the importance of data collected from the business, internally and externally, which, when cleansed, processed and refined, produce information that is then analysed to produce intelligence; this forms the basis of the systematic and structured analysis of data, information and intelligence (De Koker 2025).
The VARCS principles note that in managing BD, the data need to be valid, authentic, reliable, current and sufficient (De Koker 2025). The VARCS principles are directly linked to data integrity, which relates to the ‘manner of assuring data quality and efficiency throughout its lifecycle’ (Shukla et al. 2025). Establishing the VARCS principles in data subsequently ensures that information and intelligence abide by the VARCS principles, later leading to SI that also abides by the VARCS principles.
The four primary elements of the NRT methodology are crucial and set the basis to ensure that managing BD is executed effectively. With the expansion of the NRT methodology, in which the four primary elements were determined as the foundation, a nine-step process was followed in the NRT methodology of the UJ IKM Department SIM database (De Koker 2025). As alluded to in the introduction of this article, the data collection, data analysis and reporting are too extensive to be included in full. Step 1 of the nine-step NRT methodology is reported on extensively in this article. The other steps are briefly mentioned after Step 1, with Step 4 briefly explaining the NRT as illustrated in Box 1. The full nine-step NRT methodology is expounded in the PhD study of De Koker (2025).
Step 1: Defining inclusion criteria
The NRT methodology, even though structured and systematic, is flexible and can be adapted to accommodate the needs of the particular research being conducted. Previously, as per De Koker and Du Plessis (2024), the NRT methodology included four categories, namely Unique, Novice, Duplicate and Duplicate-novice. Expanding on these four categories (De Koker & Du Plessis 2024), four new categories were added, so that the NRT methodology now includes eight categories. The eight categories are split into two groups: (1) six colour-coded primary categories and (2) two colour-coded secondary categories. These categories were used to categorise the data files in the UJ IKM Department SIM database.
The six colour-coded primary categories to categorise data files are made up of: Unique (colour-coded green), Duplicate (colour-coded red), Novice (colour-coded orange), Duplicate-novice (colour-coded yellow), Confidential (colour-coded purple) and Classified (colour-coded blue).
The two colour-coded secondary categories to categorise only the Unique data files are made up of: Strategic Intelligence (colour-coded pink) and SI-Relevant (colour-coded brown).
The six colour-coded primary categories and the two colour-coded secondary categories exist separately, as the secondary categories’ data files were only determined after all the data files in the UJ IKM Department SIM database had been categorised into the six primary categories. Thereafter, all the data files categorised under the Unique category were put through the process of determining the SI and subsequent SI-Relevant data files.
To make sense of the BD in the data files in order for SI to be derived at the end, the six colour-coded primary categories and the two colour-coded secondary categories allowed for all the data files in the UJ IKM Department SIM database to be screened, cleaned, categorised and analysed. This led to determining the inclusion criteria for these categories, as follows:
Six colour-coded primary categories
Unique data files inclusion criteria
- A full-text complete document (Microsoft Word Document or Adobe Acrobat Document) in the English language was categorised as a Unique data file.
- If a data file did not have any spelling and/or grammar errors, the data file was categorised as a Unique data file.
- If a data file showcased a problem that could be solved, the data file was categorised as a Unique data file.
- If a data file proposed a solution to the problem, the data file was categorised as a Unique data file.
- If a data file provided evidence of the solution, the data file was categorised as a Unique data file.
Duplicate data files inclusion criteria
- Where a data file was categorised as a Duplicate, the Last Modified date was considered.
- Data files with similar content were compared and thoroughly examined using the Last Modified date metadata.
- The data files with the same content were opened from the UJ IKM Department SIM database; these were all Microsoft Word documents.
- In each Microsoft Word document, the File button was clicked, followed by the Info button, and the Related Dates section was examined.
- If the Last Modified date of a data file was more recent in date and time than that of the data file with the same content, the data file was categorised as a Unique data file.
- If the Last Modified date of a data file was older in date and time than that of the data file with the same content, the data file was categorised as a Duplicate data file.
- If the name of a data file had a spelling error and another data file existed with the same content, the data file with the spelling error in its name was categorised as a Duplicate data file.
- If two data files existed in the database with the same content and the same Last Modified date, the first data file in the database was categorised as a Unique data file and the second as a Duplicate data file.
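In the study, the Last Modified comparison was performed manually in Microsoft Word. As a minimal illustration only, the rule above can be restated programmatically, assuming file records are available as (name, last_modified) pairs in database order; the file names below are hypothetical.

```python
from datetime import datetime

def categorise_duplicates(files):
    """Apply the Duplicate rule to data files sharing the same content:
    the most recently modified file is Unique, the rest are Duplicate.
    `files` is a list of (name, last_modified) pairs in database order;
    Python's sort is stable, so files with identical Last Modified dates
    keep their database order and the first one wins the tie."""
    ordered = sorted(files, key=lambda f: f[1], reverse=True)
    labels = {ordered[0][0]: "Unique"}
    for name, _ in ordered[1:]:
        labels[name] = "Duplicate"
    return labels

labels = categorise_duplicates([
    ("Data File_A.docx", datetime(2023, 4, 1, 10, 0)),
    ("Data File_B.docx", datetime(2023, 6, 15, 9, 30)),
])
# The more recently modified file is categorised as Unique.
```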
Novice data files inclusion criteria
- An incomplete data file was categorised as a Novice data file.
- A data file that was not a Microsoft Word document was categorised as a Novice data file.
- A data file that was not an Adobe Acrobat document was categorised as a Novice data file.
- A data file that did not identify a business problem that could be solved was categorised as a Novice data file.
- A data file that did not propose a solution to an identified problem was categorised as a Novice data file.
- A data file that did not provide any evidence of a proposed solution was categorised as a Novice data file.
- A full-text complete document (Microsoft Word document or Adobe Acrobat document) that exhibited the same characteristics as a Unique data file, but was indicated within the data file to be a Novice, was categorised as a Novice data file.
- An incomplete document (Microsoft Word document or Adobe Acrobat document) that was indicated within the data file to be a Novice was categorised as a Novice data file.
Duplicate-novice data files inclusion criterion
- A duplicate, incomplete document that exhibits the same characteristics as a Novice data file was categorised as a Duplicate-novice data file.
Confidential data files inclusion criteria
- A full-text complete document (Microsoft Word document or Adobe Acrobat document) in the English language that exhibited the same characteristics as a Unique data file, but was Confidential, was categorised as a Confidential data file.
- A full-text complete document (Microsoft Word document or Adobe Acrobat document) that exhibited the same characteristics as a Unique data file, but in which it was indicated that confidential information had been removed, was categorised as a Confidential data file.
- ‘Confidential information is considered any information that would damage national security if disclosed in an unauthorised matter’ (Biscontini 2025).
Classified data files inclusion criteria
- A full-text complete document (Microsoft Word document or Adobe Acrobat document) in the English language that exhibited the same characteristics as a Unique data file, but was classified, was categorised as a Classified data file.
- A full-text complete document (Microsoft Word document or Adobe Acrobat document) that exhibited the same characteristics as a Unique data file, but in which the content had been hidden and indicated as classified, was categorised as a Classified data file.
- One Classified data file was removed from the UJ IKM Department SIM database by the owner because of the classified nature of the information in the data file. The researcher has not seen the data file but knows that the data file exists through the owner of the UJ IKM Department SIM database.
- ‘Classified information refers to data that is deemed sensitive and requires protection from unauthorised disclosure, particularly to safeguard national security’ (Biscontini 2025).
Two colour-coded secondary categories
Strategic intelligence data files inclusion criteria
- The Unique data files were further scrutinised to determine the SI data files.
- A full-text complete document (Microsoft Word document or Adobe Acrobat document) in the English language that exhibits the same characteristics as that of a Unique data file.
- A Unique data file was categorised as a SI data file, if the data file did not have any spelling and/or grammar errors.
- A Unique data file was categorised as a SI data file, if the keyword ‘Strategic Intelligence’ was present in the data file.
- A Unique data file was categorised as a SI data file if any of the following keywords were present in the data file: artificial intelligence, business intelligence, business model, business model canvas, commercialisation, competitive intelligence, complex, complexity, complexity science, complexity theory, Cynefin, Cynefin framework, data, decision intelligence, decision-making, ecosystem, fourth industrial revolution, human intelligence, information, IKM, information management, innovation, intelligence, knowledge management, strategic, strategic information, strategic information management and systems thinking.
- A Unique data file was categorised as a SI data file if the data in the data file exhibited the characteristics of the VARCS principles, namely the data is Valid, Authentic, Reliable, Current, and Sufficient.
Strategic intelligence-relevant data files inclusion criteria
- The SI data files were further scrutinised to determine the SI-Relevant data files.
- A full-text complete document (Microsoft Word document, or Adobe Acrobat document) in the English language that exhibits the same characteristics as that of a SI data file.
- A SI data file was categorised as SI-Relevant if the data file did not have any spelling and/or grammar errors.
- A SI data file was categorised as SI-Relevant if the keyword ‘VUCA’ (volatility, uncertainty, complexity and ambiguity) was present in the data file.
- A SI data file was categorised as SI-Relevant if the data in the data file exhibited the characteristics of the VARCS principles, namely the data are Valid, Authentic, Reliable, Current, and Sufficient.
Step 2 focussed on the clean-up of the UJ IKM Department SIM database, where the 225 data files were individually screened to make sense of the data in the data files.
Step 3 focussed on the categorisation of the 225 data files into the different categories. With the inclusion criteria for each of the six colour-coded primary categories applied on the 225 data files, 145 Unique data files, 37 Duplicate data files, 21 Novice data files, 4 Duplicate-novice data files, 16 Confidential data files and 2 Classified data files were identified.
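The Step 3 categorisation can be read as a decision procedure over each data file's properties. The sketch below is a simplified illustration only; the boolean flags and their evaluation order are assumptions, and the full Step 1 inclusion criteria are richer than these checks.

```python
def primary_category(is_complete, is_word_or_pdf, is_duplicate,
                     is_confidential, is_classified):
    """Assign one of the six colour-coded primary categories.
    A simplified, assumed evaluation order for illustration only."""
    if is_classified:
        return "Classified"
    if is_confidential:
        return "Confidential"
    if is_duplicate and not is_complete:
        return "Duplicate-novice"
    if is_duplicate:
        return "Duplicate"
    if not is_complete or not is_word_or_pdf:
        return "Novice"
    return "Unique"
```

Applying such a procedure to each of the 225 data files produces the category tallies reported above.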
Step 4 focussed on conducting the NRT on the 145 Unique data files. From this step onwards, only Unique data files were worked with. A condensed version of the NRT is illustrated in Box 1. The complete dataset represented in Box 1 extends to 22 pages, and the full table has 225 rows of data. The rows were collapsed for concise representation in Box 1; the rows kept for this research article are Data File_1, Data File_52, Data File_100, Data File_109, Data File_169 and Data File_224. The details of the complete dataset are available in De Koker (2025).
The NRT of the Unique data files was performed in Microsoft Excel and reports on the keywords found in each Unique data file. Column 1 in the table represents the number of the Unique data file. Column 2 represents the name of the Unique data file. Columns 3 to 31 represent the keywords in alphabetical order from left to right. Column 32 represents the rank that each Unique data file recorded, which is the number of keywords found in the data file. Where a keyword was present in a data file, the number 1 was entered in that data file’s row under the keyword’s column; the number 1 was used so that the AutoSum formula in Microsoft Excel could total each row. As an example from the NRT of the Unique data files, Data File_1 contains 9 keywords, and for each keyword found the number 1 was assigned. The keywords found in Data File_1 were: (1) business model, (2) decision-making, (3) Information Management, (4) information, (5) innovation, (6) intelligence, (7) strategic, (8) strategic information and (9) Strategic Information Management.
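The binary presence table described above can be sketched as follows. This is a minimal illustration with an abbreviated keyword list (the study used 29 keywords across Columns 3 to 31), and the sample text is hypothetical.

```python
KEYWORDS = ["business model", "data", "decision-making",
            "information", "intelligence", "strategic"]

def nrt_row(text):
    """Build the binary presence row for one data file and its rank:
    1 where a keyword occurs in the text, 0 otherwise; the rank is the
    row sum (number of keywords found), mirroring Excel's AutoSum."""
    lowered = text.lower()
    row = [1 if kw in lowered else 0 for kw in KEYWORDS]
    return row, sum(row)

row, rank = nrt_row(
    "Strategic information and intelligence support decision-making."
)
# Four of the six abbreviated keywords are present, so the rank is 4.
```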
Step 5 in the NRT methodology reports on the score each keyword received that relates to the keyword frequency and the top five keywords, which is also illustrated in the last row of Box 1:
- The keyword ‘information’ scored 124 out of 145 as it was present in 124 Unique data files – keyword frequency for ‘information’ = 124/145.
- The keyword ‘strategic’ scored 117 out of 145 as it was present in 117 Unique data files – keyword frequency for ‘strategic’ = 117/145.
- The keyword ‘intelligence’ scored 85 out of 145 as it was present in 85 Unique data files – keyword frequency for ‘intelligence’ = 85/145.
- The keyword ‘strategic information’ scored 78 out of 145 as it was present in 78 Unique data files – keyword frequency for ‘strategic information’ = 78/145.
- The keyword ‘data’ scored 75 out of 145 as it was present in 75 Unique data files – keyword frequency for ‘data’ = 75/145.
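The Step 5 keyword scores are the column sums of the same binary presence table. The sketch below illustrates the computation on a toy matrix; the actual scores (e.g. ‘information’ = 124/145) come from the 145-row study table, not from this example.

```python
def keyword_frequencies(matrix, keywords):
    """Column sums of the binary presence matrix: for each keyword,
    the number of Unique data files in which it is present."""
    return {kw: sum(row[i] for row in matrix)
            for i, kw in enumerate(keywords)}

keywords = ["data", "information", "strategic"]
matrix = [  # one row per Unique data file (toy example)
    [1, 1, 1],
    [0, 1, 1],
    [0, 1, 0],
]
scores = keyword_frequencies(matrix, keywords)
# Ranking the column sums gives the top keywords by frequency.
top = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```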
Step 6 focussed on the post-NRT Unique data files, where the rank of the Unique data files was reported on. Data File_109 and Data File_169 ranked joint highest, each with 17 keywords present. Data File_50 and Data File_100 ranked lowest, with 0 keywords present.
Step 7 focussed on determining a representative sample from the Unique data files that was used to continue further in the process of the NRT methodology. From the 145 Unique data files, it was determined that the top 105 Unique data files were representative and were further included in the process of the NRT methodology.
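Step 7's representative sample can be expressed as a sort-and-slice over the rank column. A minimal sketch, assuming ranks are held as (file name, rank) pairs; apart from the ranks reported above, the values here are hypothetical.

```python
def top_n_by_rank(ranked_files, n):
    """Sort data files by NRT rank (keyword count), descending,
    and keep the top n as the representative sample."""
    return sorted(ranked_files, key=lambda f: f[1], reverse=True)[:n]

sample = top_n_by_rank(
    [("Data File_109", 17), ("Data File_100", 0),
     ("Data File_169", 17), ("Data File_1", 9)],
    n=3,
)
# The lowest-ranked file (0 keywords) falls outside the sample.
```

In the study, the same principle was applied with n = 105 over the 145 Unique data files.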
Step 8 focussed on determining the SI data files described in Step 1. By applying the SI data files inclusion criteria to the top 105 Unique data files, 35 SI data files were identified.
In Step 9, as the last step of the NRT methodology, the SI-Relevant data files were determined. By applying the SI-Relevant data files inclusion criteria to the 35 SI data files, 6 SI-Relevant data files were identified.
The four primary elements determined as the foundation of the NRT methodology are: (1) standard of inclusion, (2) data cleansing linked to the Pareto principle, (3) relationship of data, information and intelligence linked to the Pareto principle and (4) VARCS principles – illustrated in Figure 1 (De Koker 2025).
From the nine-step process followed in the NRT methodology, the BD, in the form of the data files in the UJ IKM Department SIM database, was screened, cleaned, categorised and analysed. From 225 data files, only 6 SI-Relevant data files were identified, because only they adhered to the SI-Relevant data files inclusion criteria.
The six SI-Relevant data files were full-text complete documents (Microsoft Word document or Adobe Acrobat document) in the English language that exhibited the same characteristics as a SI data file, with no spelling and/or grammar errors. Furthermore, the six SI-Relevant data files contained the keyword VUCA, showing that the data in the data files are relevant in the as-is VUCA world context. Lastly, the six SI-Relevant data files exhibited the characteristics of the VARCS principles, namely the data are Valid, Authentic, Reliable, Current and Sufficient.
Data cleansing linked to the Pareto principle, and the relationship of data, information and intelligence linked to the Pareto principle, are clearly illustrated in the NRT methodology. The Pareto principle, also known as the 80/20 rule, states that 20% of the effort or input leads to 80% of the results or output (Craft & Leake 2002; Phelan & Rustichini 2015; Potomkin et al. 2021). Applied to the NRT methodology, the data cleansing process might be considered tiresome or irrelevant; however, the NRT methodology shows that systematic, structured and thorough data cleansing has a substantial effect on the final validity, authenticity, reliability, currency and sufficiency of SI as the data progress from data to information to intelligence.
Conclusion
The structured and systematic approach of the NRT methodology, which is able to adjust to specific as-is VUCA contexts, determined the iterative nature of the expanded NRT methodology as an innovative and sustainable approach to managing and making sense of BD, leading to SI for informed decision-making. The Pareto principle emphasises that 20% of concerted effort in the NRT methodology will lead to 80% of valid, authentic, reliable, current and sufficient SI.
As established in the research, data in the as-is VUCA world continue to grow by the day. To further enhance the NRT methodology, future research might focus on its automation. Even though the NRT methodology was performed in Microsoft Excel, further automation with the use of AI technologies would enhance the value proposition of the NRT methodology as an innovative BD management tool.
Acknowledgements
This article includes content that overlaps with research originally conducted as part of Lucian Theodoric de Koker’s Doctoral thesis entitled, ‘A strategic intelligence framework for moving from complexity to structure in the 4IR’, submitted to the Department of Information and Knowledge Management in the College of Business and Economics, University of Johannesburg in 2025. The thesis was supervised by Professor Tanya du Plessis. Portions of the data, analysis and/or discussion have been revised, updated and adapted for journal publication. The original thesis is in the process of being published on the University of Johannesburg’s Open Access Institutional Repository. The author affirms that this submission complies with ethical standards for secondary publication and appropriate acknowledgement has been made to the original work.
Competing interests
The authors declare that they have no financial or personal relationships that may have inappropriately influenced them in writing this article.
Authors’ contributions
L.T.d.K. and T.d.P. contributed equally to the conceptualisation, writing and editing of the manuscript and share first authorship. All authors contributed to the article, discussed the results and approved the final version for submission and publication. The first author was a Doctoral student, supervised by the other author.
Funding information
This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.
Data availability
The data that support the findings of this study are available from the corresponding author, L.T.d.K. upon reasonable request.
Disclaimer
The views and opinions expressed in this article are those of the authors and are the product of professional research. They do not necessarily reflect the official policy or position of any affiliated institution, funder, agency or that of the publisher. The authors are responsible for this article’s results, findings and content.
References
Azzam, M. & Beckmann, R., 2024, ‘How AI helps to increase organizations’ capacity to manage complexity – A research perspective and solution approach bridging different disciplines’, IEEE Transactions on Engineering Management 71, 2324–2337. https://doi.org/10.1109/TEM.2022.3179107
Craft, R.C. & Leake, C., 2002, ‘The Pareto principle in organizational decision making’, Management Decision 40(8), 729–733. https://doi.org/10.1108/00251740210437699
De Koker, L.T., 2018, ‘The commercialisation lifecycle of a knowledge management consulting firm in the fourth industrial revolution’, Master’s dissertation, University of Johannesburg.
De Koker, L.T., 2025, ‘A strategic intelligence framework for moving from complexity to structure in the 4IR’, Doctoral thesis, University of Johannesburg [not published].
De Koker, L.T. & Du Plessis, T., 2019, ‘Development of a framework for a knowledge management firm in the fourth industrial revolution’, in M. Twum-Darko (ed.), Pragmatic business solutions by Africa for Africa: 5th International Conference on Business and Management Dynamics 2019 proceedings, AMHI, Cape Peninsula University of Technology in collaboration with University of Namibia Business School, Swakopmund, September 02–04, 2019, pp. 109–121.
De Koker, L.T. & Du Plessis, T., 2020, ‘Research as a service offering of knowledge management firms in the fourth industrial revolution’, South African Journal of Information Management 22(1), a1178. https://doi.org/10.4102/sajim.v22i1.1178
De Koker, L.T. & Du Plessis, T., 2024, ‘Nominal ranking technique in information and knowledge management: A methodology to SoTL’, South African Journal of Information Management 26(1), a1806. https://doi.org/10.4102/sajim.v26i1.1806
Dumay, J. & Cai, L., 2014, ‘A review and critique of content analysis as a methodology for inquiring into IC disclosure’, Journal of Intellectual Capital 15(2), 264–290. https://doi.org/10.1108/JIC-01-2014-0010
Fernández-Villacañas Marín, M., 2015, ‘The global strategic intelligence management’, in Learning and thinking skills, University of Alcalá, viewed 25 June 2025, from http://europehomeproject.eu/resources/learning-materials.
Google Cloud, 2025, What is Big Data?, viewed 25 June 2025, from https://cloud.google.com/learn/what-is-big-data#how-are-data-driven-businesses-performing.
Jamarani, A., Haddadi, S., Sarvizadeh, R., Kashani, M.H., Akbari, M. & Moradi, S., 2024, ‘Big data and predictive analytics: A systematic review of applications’, Artificial Intelligence Review 57, 176. https://doi.org/10.1007/s10462-024-10811-5
Kloud9, 2023, The role of data science in navigating the VUCA World, viewed 20 April 2025, from https://www.kloud9.nyc/blogs/the-role-of-data-science-in-navigating-the-vuca-world.
Ma, Z., Jørgensen, B.N. & Ma, Z.G., 2024, ‘A systematic data characteristic understanding framework towards physical-sensor big data challenges’, Journal of Big Data 11, 84. https://doi.org/10.1186/s40537-024-00942-5
Phelan, C. & Rustichini, A., 2015, Pareto efficiency and identity, NBER Working Paper No. 20883, National Bureau of Economic Research, Cambridge.
Potomkin, M.M., Sedliar, A.A., Deineha, O.V. & Zvarych, A.O., 2021, ‘Comprehensive use of the Pareto principle and the analytic hierarchy process to increase the substantiation of alternative ranking results’, Cybernetics and Systems Analysis 57(3), 422–428. https://doi.org/10.1007/s10559-021-00366-6
Sarker, I.H., 2021, ‘Data science and analytics: An overview from data-driven smart computing, decision-making and applications perspective’, SN Computer Science 2(5), 377. https://doi.org/10.1007/s42979-021-00765-8
Saunders, M., Lewis, P. & Thornhill, A., 2016, Research methods for business graduates, 5th edn., Pearson Education, Essex.
Saunders, M.N.K., Lewis, P. & Thornhill, A., 2023, Research methods for business graduates, 9th edn., Pearson Education, Essex.
Shukla, S., Gupta, B.K., Somvanshi, P. & Mishra, M., 2025, ‘Managing big data integrity for IoT healthcare information systems’, in S.A Khan, M. Faisal, N. Alharbe, R. Kumar & R.A. Khan (eds.), Computational intelligence applications in cyber security, pp. 122–140, CRC Press, Taylor & Francis Group, Boca Raton, FL.
Velasco, E., 2012, Encyclopedia of research design, Sage, Thousand Oaks, CA.
Waghmare, S., 2019, ‘Strategic intelligence and its importance in management of organisation’, in Proceedings of the 8th International Conference: The effect of new approach to globalization on business, Chronicle of the Neville Wadia Institute of Management Studies and Research, January 28–29, 2019, pp. 271–279.