1 Introduction

Companies exist as a result of and are shaped by decisions (Melnyk et al. 2014; Pereira and Vilà 2016) that constitute and are constituted by their strategy (Mintzberg 1972). Strategic decision making is a dynamic and challenging process (Mintzberg 1973; Liu et al. 2013; Dev et al. 2016; Moreira and Tjahjono 2016), as organizations operate in complex environments and because decisions can have direct or indirect effects on stakeholders (Koch et al. 2009; Delen et al. 2013; El Sawy et al. 2017; Carbone et al. 2019).

Traditional decision theory distinguishes between decisions made under risk and those made under uncertainty (Knight 1921). In the former category, all possible outcomes, including their probabilities of occurrence, are known and statistically or empirically available (Knight 1921; Marquis and Reitz 1969; Sydow 2017). For strategic organizational decisions, however, which belong to the latter category (Knight 1921; Marquis and Reitz 1969), the degree and type of uncertainty are influenced by various aspects (Rousseau 2018). Such decisions must thus be taken in an adaptive mode to handle complexity (Mintzberg 1973), which organizations support through the introduction of hierarchies and departments that define responsibilities (Simon 1962). While this improves decision speed and efficiency for operational decisions, the quality of strategic decisions has been found to be enhanced by including a multitude of perspectives, experiences, and expertise (Knight 1921; Rousseau 2018). Organizations hence assign the task of handling complexity while ensuring diversity to managers from different departments (Rousseau 2018). Consensus must be achieved among this group to reach a decision, which is why, in this study, strategic organizational decision making is defined as group decision making under uncertainty.

Nevertheless, even with more people involved, the human capacity to process information is limited (Lawrence 1991; Fiori 2011). Human decision makers, therefore, consciously construct simplified models, called heuristics or rules of thumb (Simon 1987; Fiori 2011), which break complex problems down sequentially to make them tractable for limited human computational capacity. This is called bounded rationality, a concept that researchers have interpreted differently since Herbert Simon originally defined it in the 1950s (Simon 1955; see the overview by Fiori 2011). It is often seen as an unconscious activity that cannot be controlled (e.g., Kahneman 2003), sometimes also known as intuition. For Simon, however, even intuition is based on stored information and experience, which the decision maker decides to rely on when determining alternatives and probabilities, albeit largely unconsciously (Simon 1986; Fiori 2011). Rational behavior is thus assumed to lie on a continuum between intended rationality and intuition, depending on the information-processing capabilities of the agent, the complexity of the problem, and various aspects of the environment (Lawrence 1991; Fiori 2011). Because rational behavior is guided by rules, it is always bounded (Fiori 2011). This makes the human brain similar to computers, both being “physical symbol systems” that process information (Simon 1995: 104).

Simon (1995) regards computers as a form of artificial intelligence (AI), that is, as mathematical and physical applications that are able to handle complexity, in contrast to traditional mathematical theorems. However, opinions and studies on the extent to which AI can be used for the same tasks as the human brain, especially in connection with decision making, are scarce and differ in focus, technology, and objective (Bouyssou and Pirlot 2008; Munguìa et al. 2010; Nilsson 2010; Glock and Hochrein 2011; Nguyen et al. 2018; Wright and Schultz 2018).

Including technology in business is not a new development, as machines have supported humans in manufacturing processes for centuries. Yet machines have traditionally been tools, completely governed by humans, and their role in real social collaboration settings is less clearly defined than that of organizations (Lawrence 1991; Nguyen et al. 2018; Boone et al. 2019). With AI, machines are assumed to act and react to humans, implying a possible change in the human–machine relationship (Huang and Rust 2018). The resulting opportunities and hazards, however, have neither been agreed upon nor analyzed in detail, making further research necessary (Lawrence 1991; Silva and Kenney 2018; Vaccaro and Waldo 2019).

The goal of this article is thus to offer guidance for groups to successfully apply existing AI to enhance decision quality in complex and uncertain environments. The topic is suitable for study with a literature review, as research on AI in general is manifold, but clear recommendations are lacking. By synthesizing existing frameworks and studies, the following research question (RQ) will be answered:

RQ How can AI support decision making under uncertainty in organizations?

The assumptions and findings of traditional decision theory, as defined by Knight (1921), Fredrickson (1984) and Resnik (1987), serve as the foundation for the analysis. However, to ensure the success of the whole decision-making process, the “how” of the RQ must also include pre-requisites that are crucial for possible AI integration. Furthermore, AI support can only be evaluated adequately when the potential consequences and challenges of the adapted process are analyzed and, if possible, considered beforehand. To facilitate understanding and derivation of the results, the RQ is thus divided into the following three sub-dimensions, all referring to the general decision-making process under uncertainty (Fredrickson 1984; Rousseau 2018): (1) possibilities of AI integration per step, (2) necessary pre-conditions and crucial preparations, and (3) potential challenges and consequences. The resulting conceptual framework provides an overview of aspects that executives should be aware of, also referring to the potential effects of AI integration on the tasks and responsibilities of human decision makers.

The remainder of the article is organized as follows. First, after a brief overview of the history of AI and its definition, as well as existing categories of applications, the theoretical section provides an introduction to decision theory and group decision making, linking it to AI. The third section briefly describes the method of linking a systematic literature review (SLR) with content analysis (CA) and the executed process. Then, an outline of the findings is presented to answer the RQ, followed by providing a conceptual framework for organizational decision making under uncertainty. The article subsequently offers managerial implications and closes with an overview of limitations, future research possibilities, and a short conclusion.

2 Decision making with the help of AI

2.1 Development and current status of AI research

2.1.1 Definition and history of AI

AI emerged as a concept as early as antiquity, with Homer’s Iliad mentioning self-propelled chairs (McCorduck 2004; Nilsson 2010). The concept of the universal computing machine was introduced in the 1930s by Alan Turing, who later argued that as soon as a machine can act as intelligently as a human being, it can be seen as artificially intelligent (McCorduck 2004; Nilsson 2010). Then, in 1955, McCarthy et al. (1955) first introduced the term “Artificial Intelligence” in a proposal for the Dartmouth summer research project to study how intelligence can be exercised by machines. The goal of their project was to describe any feature of intelligence so precisely that a machine could simulate it. Simon supports this view, defining AI as “systems that exhibited intelligence, either as pure explorations into the nature of intelligence, explorations of the theory of human intelligence, or explorations of the systems that could perform practical tasks requiring intelligence” (Simon 1995: 96). More recent definitions include “technologies that mimic human intelligence” (Huang et al. 2019: 44) and “machines that perform tasks that humans would perform” (Bolander 2019: 850), or they focus on the independence of machines from humans, speaking of “artifacts able to carry out tasks in the real world without human intervention” (Piscopo and Birattari 2008: 275). These definitions can be further expanded by similar approaches, all relating machines to intelligence, although intelligence itself often remains undefined (for an overview of definitions, see Legg and Hutter 2007: 401).

For this reason, in this article, Nilsson’s (2010: 13) definition is adopted, as it encompasses Simon’s view and all other above-mentioned aspects, while being precise enough to guide the further analysis: “For me, AI is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.” The capabilities necessary to “function appropriately and with foresight” range from perception to interpretation and the development of actions to interact with, react to, or even influence the environment to achieve individual goals (Legg and Hutter 2007; Bolander 2019). The specific capability that is needed depends on the environment and the type of problem. Lawrence (1991) established a framework for decisions driven by complexity, leading to several decision types, versus decisions driven by politicality, which describes environmental influences not only from society and politics, but also within organizations. Figure 1 relates these definitions to Simon’s (1986, 1995) continuum of rational behavior, assuming that perception is rather linked to intended rationality, while interpreting and acting require the inclusion of additional experiences and stored information. Following Simon (1995), all steps can be executed by humans and machines alike. This is supported by the definition of an algorithm as “a process or set of rules to be followed in problem-solving operations” (Silva and Kenney 2018: 13). Being the integral part of AI, algorithms thus equate to human heuristics for solving problems in a step-wise manner.

Fig. 1

The continuum of rational behavior (based on Simon 1986, 1995; Lawrence 1991; Nilsson 2010)

Nevertheless, there are caveats regarding this view. One of the earliest stems from Descartes, who in 1637 claimed that it would be “morally impossible (…) to allow it [i.e., the machine] to act in all events of life the same way as our reason causes us to act.” This is supported by Bolander (2019), who claims that humans and machines cannot be compared in intelligence, as they have different strengths and weaknesses. Moreover, some researchers find AI to be useful only for special areas, where no abstraction, knowledge transfer, or analysis of unstructured tasks is needed (Sheil 1987; Surden 2019), and there are differing views on the potential that AI has for creativity, emotions, or empathy (Wamba et al. 2015; Kaplan and Haenlein 2019). To integrate AI beneficially into organizational decision making, current research indicates that one must first understand its capabilities and potential dangers, especially compared to or in interaction with people. This understanding is expected to decrease the human fear of losing power and of change, and it supports building trust. Furthermore, Morozov (2013) highlights the challenge of technological solutionism, that is, the tendency to assume technological decisions to be superior and to no longer accept human imperfection and failure. This also includes the risk of consciously or unconsciously creating problems simply because it is technologically feasible to solve them (Morozov 2013). The following study provides a better understanding of the benefits and limits of AI, starting with an overview of its applications in the next section.

2.1.2 AI applications

A detailed definition of an AI application is not available. For humans, different dimensions of intelligence are said to exist (Legg and Hutter 2007), and following Nilsson’s definition and the continuum of rational behavior (see Fig. 1), types of AI applications range from less to more complex, depending on the environment and the type of decision (McCarthy et al. 1955; Nilsson 2010).

Lawrence (1991) linked these dimensions to possible AI applications, but focused on only two concrete applications: natural language processing and expert systems. Almost 30 years later, the number of AI applications has increased significantly. Therefore, the framework is here linked to the categories of bottom–up and top–down approaches, following the majority of researchers (Nilsson 2010; Bolander 2019; Surden 2019). The former category refers to applications that are created implicitly, meaning that they learn statistically from experience and are thus not completely predictable, error-free, or explainable. The second cluster includes mathematical and statistical approaches, although researchers sometimes do not consider or even mention them as AI (e.g., Simon 1995; Welter et al. 2013; Haruvy et al. 2019). These applications are also called logical rules and knowledge representation, based on rules that human programmers provide to computers, often with the goal of automation (Surden 2019), leading to systems that are predictable and explainable, with strict and known abilities (Bolander 2019). Figure 2 offers a framework relating the categories to the continuum of rational behavior (see Fig. 1), with top–down applications assumed to be used for perception and interpretation, and bottom–up applications for actions, as this step requires the highest level of intelligence. Specifying clear applications for the categories is unfortunately not possible, as researchers do not even agree on how to categorize traditional mathematical applications, while new applications for bottom–up AI are likewise not stipulated or agreed upon. The reason for this may be that most systems today, especially when it comes to decision making, are located in the middle, “having a human in the loop” (Bolander 2019; Surden 2019).
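To make the distinction concrete, the following minimal sketch (hypothetical supplier-risk data and thresholds, purely illustrative and not taken from the reviewed articles) contrasts a top–down, rule-based classifier, whose behavior is fully specified by the programmer, with a bottom–up one that induces its decision rule statistically from past examples:

```python
# Top-down: an explicit, fully predictable and explainable rule provided by the programmer.
def rule_based_risk(delay_days: float, defect_rate: float) -> str:
    return "high risk" if delay_days > 10 or defect_rate > 0.05 else "low risk"

# Bottom-up: a one-dimensional threshold "learned" from labeled past cases, i.e., derived
# statistically rather than specified by hand (a stand-in for any machine learning model).
def learn_threshold(samples):  # samples: list of (delay_days, label) with label 1 = high risk
    best_t, best_errors = None, float("inf")
    for t in sorted({d for d, _ in samples}):
        errors = sum((d > t) != bool(label) for d, label in samples)
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

history = [(2, 0), (4, 0), (7, 0), (12, 1), (15, 1), (20, 1)]
t = learn_threshold(history)

print(rule_based_risk(delay_days=14, defect_rate=0.02))                      # rule fires on the delay
print("learned threshold:", t, "->", "high risk" if 14 > t else "low risk")  # depends on the data seen
```

The rule-based variant is transparent by construction, whereas the learned threshold depends entirely on the examples it is given, which is why bottom–up applications are described above as not completely predictable, error-free, or explainable.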

Fig. 2

Framework for categorizing AI applications related to the continuum of rational behavior (based on Lawrence 1991; Nilsson 2010; Bolander 2019; Surden 2019)

This section established a better understanding of current AI research. In the following, an introduction to decision theory and the characteristics of group decision making, as an equivalent of the organizational approach for strategic decisions, are provided.

2.2 Organizational decision making

2.2.1 Decision theory and resulting challenges

As already explained, strategic decision making belongs to the category of decisions under uncertainty. To make the best decision, each alternative is assigned a probability and a utility level, and the alternative with the highest weighted value is chosen (Knight 1921; Fredrickson 1984; Resnik 1987). Probability levels are estimates, characterized by coherence, conditionalization, and convergence (Resnik 1987). Coherence relates to the influence of frequency: with a high frequency of similar decisions in similar situations, expertise increases, which conditions the estimate in a specific direction. Convergence refers to the number of people included; as this number increases, the processing capacity is assumed to increase as well (Resnik 1987).
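Formally, this weighting corresponds to the standard expected-utility rule (the notation below is ours and serves only as a compact restatement of the cited sources):

$$EU(a_i) = \sum_{j} p_{ij}\, u(o_{ij}), \qquad a^{*} = \arg\max_{i} EU(a_i),$$

where $a_i$ denotes a decision alternative, $o_{ij}$ its possible outcomes, $p_{ij}$ the estimated probabilities of those outcomes, and $u(\cdot)$ the utility assigned to them.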

Utility levels represent an individual’s or group’s subjective preference for each of the alternative outcomes (Thompson 1967). Especially when decisions affect and involve many stakeholders, values cannot be defined in a way that equally includes all utility levels (Liu et al. 2013; Melnyk et al. 2014; Wright and Schultz 2018). Objectivity has been found to be possible only to a limited extent, as decision makers need to rely further on heuristics due to uncertainties inherent in information processing and group discussions in complex environments. In addition, the type and amount of rationality can differ within one decision (Metzger and Spengler 2019), as some aspects of the decision might be influenced more intuitively than others. This entails the risk of bias, which can lead to incorrect problem definitions or the wrong evaluation of alternatives, as some impacts are valued higher than others or are guided by assumptions, such as the sunk cost effect (Roth et al. 2015; Danks and London 2017; Cheng and Foley 2018; Boone et al. 2019; Julmi 2019; Kourouxous and Bauer 2019; Metzger and Spengler 2019). Bias can either be conscious, an active introduction of incorrect information by one decision group member at any stage of the process, or unconscious, due to the individual or group being unaware of subjectivity, which in some cases even increases with experience (Roth et al. 2015; Cheng and Foley 2018). Although decision theory refers to one rational individual, research on decisions under uncertainty has found that groups make decisions more in line with the theory than individuals do, and they also compensate for some of these challenges through discussion (Charness and Sutter 2012; Kugler et al. 2012; Carbone et al. 2019). As groups are also the focus of this study, the next section summarizes current research on group decision making (for an overview, see Kugler et al. 2012).

2.2.2 Decision making in groups

As stated in the introduction, for the purpose of this article, strategic organizational decision making is defined as group decision making under uncertainty, as groups are the established vehicle for such decisions in organizations (Rousseau 2018). Heterogeneous groups have been found to make better decisions than homogeneous ones, as information diversity, discussion, and experience lead to improved interpretation, thereby decreasing bounded rationality (Beckmann and Haunschild 2002; Charness and Sutter 2012; Kouchaki et al. 2015; Rousseau 2018; Herden 2019). However, there is no agreement on whether groups help to reduce bias (Kouchaki et al. 2015; Rousseau 2018) or can also introduce it into a decision (Marquis and Reitz 1969; Charness and Sutter 2012). In addition, when designating alternatives and probabilities, groups have been found to engage in negotiation (Marquis and Reitz 1969; Kugler et al. 2012), but a research gap exists regarding how they define joint utilities (Samson et al. 2018).

According to Rousseau (2018), to enhance decision quality, it is crucial to search for different types and forms of information and not only the most easily available. At the same time, the reliability, validity, consistency, and relevance of information sources must be analyzed. While this can be facilitated when more people are involved in the decision-making process, researchers have also found that using technology that is able to process large amounts of data can have a supportive effect (Long 2017; Herden 2019). Several researchers on group decision making thus call for more exploration of the use of group communication and information systems (Charness and Sutter 2012; Kugler et al. 2012), including the effect that computer programs can have to help with structuring decisions (Schwenk and Valacich 1994).

Combining humans and technology is expected to improve decision making even further than only including more people. The following section provides the framework of the organizational decision-making process as guidance for this study.

2.3 The basic process for organizational decision making under uncertainty

The proposed process in Fig. 3 is based on decision theory (Fredrickson 1984) and several studies on decision making under uncertainty with the involvement of many people (Beckmann and Haunschild 2002; El Sawy et al. 2017; Long 2017; Rousseau 2018). It provides guidance for analyzing the results of the SLR along the sub-dimensions of the RQ and serves as the foundation for the conceptual framework.

Fig. 3

The basic organizational decision-making process as the framework for the analysis (based on studies from Fredrickson 1984, Beckmann and Haunschild 2002; El Sawy et al. 2017; Long 2017; Rousseau 2018)

The process begins with the definition of the decision goal as the guideline for all subsequent steps. The information that must be collected in step two can be categorized as external (i.e., societal, political, legal, or industrial sources) or internal (El Sawy et al. 2017). Scholars deem internal information to be either explicit (e.g., facts and figures on the organization, as well as its products, traffic flows, inventories, and prices) or implicit (Beckmann and Haunschild 2002; Rousseau 2018). Implicit internal information is more difficult to glean, as it often entails highly individual aspects, such as emotions or experience, and is influenced by the degree of trust and by any hidden agendas that individual group members may have (Fu et al. 2017; Boone et al. 2018, 2019). Since decision makers can only interpret information that is available, the quality and completeness resulting from step two influence the rest of the process (Meissner 2014; Julmi 2019). In addition, the amount of information has an impact on the process, as especially in large organizations, most collected information is not needed, while the processing capacity remains limited (Feldman and March 1981; Fiori 2011; Roetzel 2018). Steps two and three, defined in this framework as knowledge management, continuously influence all further steps, as the flow of information never stops, implying that there can be an impact during a later step as well (Long 2017).

Based on the interpretation of the available information, shaped by the decision goal and the heuristics of the group, alternatives are determined in step four, for which probability and utility values are then assigned in step five. Finally, in step six, the group weighs the alternatives and makes the decision. In an ideal world, the resulting outcome matches the desired goal.
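As a minimal computational sketch of steps five and six (the alternatives, probabilities, and utilities below are hypothetical and serve only to illustrate the mechanics), the weighting reduces to an expected-utility ranking:

```python
# Step 5 in miniature: assign (probability, utility) pairs to the outcomes of each alternative.
alternatives = {
    "enter new market":    [(0.3, 100), (0.7, -20)],
    "expand product line": [(0.6, 40), (0.4, 10)],
    "do nothing":          [(1.0, 5)],
}

# Step 6 in miniature: weigh the alternatives and choose the one with the highest expected utility.
expected_utility = {
    name: sum(p * u for p, u in outcomes) for name, outcomes in alternatives.items()
}
decision = max(expected_utility, key=expected_utility.get)

print(expected_utility)               # {'enter new market': 16.0, 'expand product line': 28.0, 'do nothing': 5.0}
print("chosen alternative:", decision)
```

In practice, of course, the difficulty lies not in this calculation but in obtaining defensible probability and utility estimates in the first place, which is where the group discussion described above comes in.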

For the purpose of this article, the decision-making process consists of three stages, namely, input–process–output, which are linked to perception, interpretation, and actions, respectively. The framework hence connects to the continuum presented in Sect. 2.1.1.

Research to date has neither stipulated the steps for which the use of AI is suitable, and in what way, nor is there agreement on its benefits. The possibilities for bias, for example, have been found to even increase when using AI for decisions (for an overview, see Silva and Kenney 2018), as an AI application executes a small decision-making process of its own each time it is used, based on the goal it is used for and the data it has available. As there is no possibility of dialogue with the technology, scholars argue that it is often not clear how the system arrives at a certain output (Bolander 2019). On the one hand, each algorithm is only as good as the data input and the programmed process mining, which are usually both done by humans and thus might be biased (Barocas and Selbst 2016). This is dangerous, as humans are not able to compensate for failed algorithms (Vaccaro and Waldo 2019). On the other hand, some AI applications have been found to help address the challenge of including ambiguous utility values (Metzger and Spengler 2019). The following literature review provides an analysis of the antecedents and consequences of applying AI in strategic organizational decision making and of how to best combine it with human capabilities.

3 Research methodology

Following Meredith (1992), conceptual models that build on descriptions and explanations provide the best foundation for subsequent theory testing. For the purpose of this article, an SLR was used as the descriptive basis, as it is defined as a systematic approach that “informs regarding the status of present knowledge on a given question” (Rousseau et al. 2008: 500). It follows specific criteria and is re-executable (Tranfield et al. 2003), implying that it is reliable and combines all literature of a delineated research area. The structured summary also provides an in-depth understanding of results (Briner and Denyer 2012). This is expected to offer the necessary explanations to understand the phenomenon, resulting in a conceptual model for empirical testing. For a qualitative analysis of the selected articles, this approach is further complemented by CA (Mayring 2008), a more iterative approach that integrates as much material on a topic as possible while inductively building categories afterwards. CA is a useful methodology for analyzing various influences on the correct design of processes, especially when linked to new technologies such as AI. It has been employed by Glock and Hochrein (2011) to analyze purchasing organization design, Rebs et al. (2018) to study stakeholder influences and risk in sustainable supply chains, Nguyen et al. (2018) to analyze big data analytics in supply chain management (SCM), and Roetzel (2018) to study information overload. Combining SLR and CA methods ensures that all relevant literature, analyzed in a structured process, is included (Denyer and Tranfield 2009; see Table 1), thereby offering a detailed description and explanation for theory building (Meredith 1992).

Table 1 Systematic review process

3.1 Search strategy

After a preliminary scoping search (Booth et al. 2016), the databases Business Source Complete (via EBSCO Host), ScienceDirect, ABI/INFORM (via ProQuest), and Web of Science were selected. These electronic databases are acknowledged in the current literature and were chosen to provide fast and reliable access to appropriate articles. Furthermore, as AI is a rather technological topic, relevant information was assumed to be found primarily in electronic databases.

The databases were searched using three search strings, each consisting of a combination of three groups of keywords resulting from the preliminary scoping search. In the first group, to avoid inadvertently excluding results, fairly general terms relating to AI were used, namely artificial intelligence and machine learning. A decision was made not to search for abbreviations because, in this research field, not only is the acronym AI common, but it is also used in different fields, resulting in potentially irrelevant results. For the same reason, the second group also included broad search terms, namely decision making and decision support, and the third group included only human machine. The preliminary search revealed that this search term captures all existing combinations that researchers use to define the relationship, although it led to markedly fewer results. The search terms were combined into search strings using the Boolean operators AND and OR (Booth et al. 2016) and applied to peer-reviewed journal articles published from 2016 onwards. The start was set to 2016, as the search frequencies for AI on Google Trends (AI 2020) show a first large increase after 2016, which is supported by Nguyen et al.’s (2018) literature review findings. The search strategy was adapted for each database (see Table 4 in Appendix 1), as there were differences in user interfaces and functionalities (Booth et al. 2016).
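For illustration, a generic form of such a string (the database-specific versions and syntax are documented in Table 4 in Appendix 1; the exact phrasing here is only indicative) is: (“artificial intelligence” OR “machine learning”) AND (“decision making” OR “decision support”) AND “human machine”.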

3.2 Selection process

The database search resulted in 3458 articles, reduced to 2524 after duplicates were removed, making a selection process necessary (see Fig. 5 in Appendix 1). Each study was evaluated according to established inclusion and exclusion criteria of quality and relevance regarding the RQ and its three sub-dimensions mentioned in the introduction (Briner and Denyer 2012; see Table 5 in Appendix 1), resulting in a final total of 55 articles (see Appendix 3). The majority of articles were eliminated because they were considered too specifically tied to certain industries, not generalizable for answering the RQ, or focused on operational decisions only. The only exception was the broad literature on SCM, which often focuses on multi-stakeholder decisions that resemble the definition of organizational decisions due to the heterogeneous groups of people involved; that literature was thus considered to offer relevant input to the analysis.

3.3 Classification of content

A deeper insight into the content of the articles is provided by classification, which aids in categorizing data to enable a more structured description (Mayring 2015) and to create new knowledge that would not be possible by reading the articles in isolation (Denyer and Tranfield 2009). The following classification categories relate to the topics addressed in the sample and the three previously stated sub-dimensions of the RQ:

  • Knowledge management with the help of AI.

  • Categorization of AI applications.

  • Impact of AI on organizational structures.

  • Challenges of using AI in strategic organizational decision making.

  • Ethical perspectives on using AI in strategic organizational decision making.

  • Impact of AI usage in strategic organizational decision making on the division of tasks between humans and machines.

The first two categories address the first sub-dimension (i.e., the possibilities of AI integration into the previously introduced decision-making process). The subsequent two categories, organizational structures and challenges, provide insight into the second and third sub-dimensions (i.e., possible pre-conditions and preparations necessary for a successful AI integration, and the challenges and consequences thereof), while ethical perspectives contribute to answering all three sub-dimensions of the RQ. The last category closes the analysis by addressing the first sub-dimension of the RQ directly with a proposed division of tasks between humans and AI, and it also includes findings on the possible consequences for the designation and development of the human role. The difficulty in defining categories that can be assigned perfectly to the sub-dimensions demonstrates the variety of aspects that researchers and practitioners relate to this topic. Moreover, it is important to mention that clustering for CA had to be done in several rounds, with much discussion among the research team, as many articles offer input and content for more than one category. However, as Table 2 indicates, each article was attributed to one category only. Frameworks or models are marked in bold and are also described in the following analysis.

Table 2 Overview of articles and assigned categories

The final step of the methodological process, namely, the results and discussion based on the interpretation of data, is provided in the next chapter, followed by the conceptual integration of theory.

4 Results and discussion

4.1 Distribution of articles per year, journal, and research methodology

Assessing the distribution of articles per year, an increase can be seen over the entire period of observation (see Fig. 6 in Appendix 2). The largest rise occurred between 2018 (10 articles) and 2019 (28 articles). This might be due to a stronger worldwide focus on the topic in the business domain, starting in the last quarter of 2019 (Artificial Intelligence in Business and Industrial Worldwide 2020) and leading to increased scientific interest in analyzing the topic from a business perspective. This is also evident in the increasing number of articles published in high-quality journals in 2019.

The 55 selected articles were published in 42 different journals, and only 9 journals contributed more than one article to the review (see Fig. 7 in Appendix 2). This illustrates the relevance of the topic for different disciplines, the high interest in the research field, and the variety of focus topics. It is also reflected in the respective foci of the journals, ranging from technology and computer systems to society and ethics, as well as journals focusing on business and management. The type and methodology of analysis (see Fig. 8 in Appendix 2), however, is rather theoretically oriented. Empirical approaches increased in 2019, when even conceptual articles tried to relate their findings to practical observations and data.

Regarding the distribution of articles among the categories defined for the content analysis (see Fig. 9 in Appendix 2), a major focus is on the human–machine relationship (12) and ethical perspectives (12). In both categories, theoretical approaches are still dominant. Therefore, real-life examples of an implementation of AI into organizational decision making are assumed to be rare, making empirical analysis difficult. This seems to be different for the smallest category of knowledge management (6), which is the most practically analyzed one.

The following sections provide an overview of the articles per category of the CA, each of which deals with one or more sub-dimensions of the RQ, as explained in Sect. 3.3. Thereby, more detailed insights into the content of the articles are offered (for an overview see Table 2) to provide managerial implications that are only possible by synthesizing the findings (Denyer and Tranfield 2009). The basis for these implications will be the conceptual framework presented in Sect. 4.3. It offers an answer to the RQ based on a combination of the findings resulting from addressing the sub-dimensions.

4.2 Using AI as support for strategic organizational decision making

4.2.1 Knowledge management with the help of AI

Studies of the sample highlight that through the interaction between individuals and technological systems, new meanings and influences are expected to be created (Shollo and Galliers 2016). Researchers agree that AI can be used for the collection, interpretation, evaluation, and sharing of information, thereby providing support in speed, amount, diversity, and availability of data (Acharya and Choudhury 2016; Shollo and Galliers 2016; Bohanec et al. 2017a). In addition, Acharya and Choudhury (2016) highlight the opportunity to increase data quality, as too much, too little, or incorrect information can negatively affect decision outcomes, which is often the case in large organizations with complex structures.

However, Metcalf et al. (2019) raise the concern that the training of AI will be difficult, as data are constantly changing and complex in nature. They thus deem humans to be necessary to ensure the quality of information and interpretation, which is also supported by other researchers’ findings (Shollo and Galliers 2016; Bohanec et al. 2017a, b; Terziyan et al. 2018). In addition, especially for highly strategic decisions, implicit information has been found to be more important than pure analysis of facts (Acharya and Choudhury 2016; Bohanec et al. 2017a). Therefore, “while humans have access to both explicit and tacit knowledge, lack of access to tacit knowledge and the reliance on historical data from which patterns can be identified are major limiting factors of AI (…)” (Metcalf et al. 2019: 2). Some researchers even provide evidence that groups are capable of including some of these aspects through discussion (Shollo and Galliers 2016; Bohanec et al. 2017a; Metcalf et al. 2019).

Nevertheless, several researchers offer potential tools to make implicit knowledge available, as marked in bold in Table 2. The most holistic method for integrating all types of information into decision making is proposed by Terziyan et al. (2018): by cloning human decision makers, the patented intelligence (Pi-Mind) methodology attempts to capture soft facts and potential utility levels, although the quality of the clone always depends on the input data provided by humans. Acharya and Choudhury (2016: 54) call for an inter-organizational knowledge-sharing model to address the challenge that “an overemphasis on technology might force an organization to concentrate on knowledge storage, rather than knowledge flow.” As information quantity influences all steps of the decision-making process, these authors also state that resources within an organization should be allocated to enable efficient knowledge management (Acharya and Choudhury 2016).

The six articles in this category do not propose clear strategies on how to organize knowledge management, neither in general, nor with the help of AI. However, an agreement can be observed on AI supporting the amount and speed of information collection and interpretation. Nevertheless, the authors in this category argue that the resulting quality depends on human capabilities and willingness to disclose implicit information.

4.2.2 Categorization of AI applications

Almost all articles in the sample propose a set of AI applications to a certain extent. Table 3 clusters all the applications mentioned according to their use case and possible integration into the decision-making process defined in Sect. 2.3.

Table 3 Overview of mentioned AI applications related to process step and use case

Researchers in this category agree on the stages of input–process–output, with related definitions of data being at rest, in collection, in transition, in motion, or in use. In parallel, the respective applications increase in capability from purely statistical AI, which some researchers do not even classify as AI (Baryannis et al. 2019a), to human–machine AI (Blasch et al. 2019), supporting the framework in Sect. 2.1.2.

Scholars argue that determining which application to use depends on the type, quantity, and quality of the available data, which impose different requirements for handling the data, such as classification, clustering, or the detection of connections (see Table 3). Moreover, as several applications can be used for both top–down and bottom–up approaches (Flath and Stein 2018; Mühlroth and Grottke 2018; Baryannis et al. 2019a, b; Blasch et al. 2019), the purpose for which the specific application is to be used is identified as an additional influence (Blasch et al. 2019).

Articles use and recommend a hybrid approach: mathematical models are found to be less capable of handling the large amounts of data that are needed to train machine learning applications, which in turn are often based on mathematical ones (Baryannis et al. 2019b; Blasch et al. 2019). This supports Simon’s (1995) definition of AI applications as being more than mathematical theorems.

The articles discuss potential and hypothetical use cases for AI, mainly with the goal of data interpretation, alternative creation, or probability and preference definition, possibly even related to an evaluation of consequences (Pigozzi et al. 2016; Baryannis et al. 2019a, b). Information collection is seen as a task that can be fulfilled completely by AI. It relates to the generation of information from numerous and varied sources with differing techniques, such as natural language processing, text mining, or other data mining approaches (Baryannis et al. 2019b; Blasch et al. 2019). Nevertheless, the subsequent feature engineering needed to reduce input bias as far as possible is said to necessarily remain a human task, assisted by top–down applications (Flath and Stein 2018).

There is disagreement on how useful AI applications are in general for organizational decision making. As Baryannis et al. (2019b) found in their literature review, the majority of studies analyzed do not see any decision-making capability, although some articles provide bottom–up applications as decision support systems. Practical experiments in the sample instead refer to information gathering and status tracking within production or logistics [e.g., the data science toolbox of Flath and Stein (2018), supply chain risk management tools by Baryannis et al. (2019a, b), and the self-thinking supply chain of Calatayud et al. (2019)] with the exception of Colombo (2019), who introduced holistic risk analysis and modeling (HoRAM) as an already tested application to be used for almost the whole decision-making process in dynamic environments.

In summary, although scholars do not agree on what to classify as an AI application and whether it is useful for decision making, the consensus is that the choice of application is influenced by various dimensions of data and the basic reason for which the technology is intended to be used. Relating to Fig. 2, most applications that the articles refer to can be clustered as top–down, as they are not able to act self-consciously (i.e., human-like; Blasch et al. 2019). With increasing research and development efforts, however, the literature expects the capabilities of AI applications to increase and to shift to the right (Mühlroth and Grottke 2018; Colombo 2019).

4.2.3 Impact of AI on organizational structures

Von Krogh (2018: 405) supports Herbert Simon’s findings, stating that organizational structures are closely linked to decision making, as they result from the limited human processing capacity: “To mitigate this problem, information-processing and decision-making authority can be delegated across roles and units that display various degrees of interdependence.” The organizational strategy and resulting goals have been found to be an important influence, not only for this definition of roles and relationships to make information manageable, but also on all steps of the strategic decision-making process relating to Fig. 3 (von Krogh 2018).

Organizational strategy and goals are further said to determine the reasons for which AI is used (Bienhaus and Abubaker 2018; von Krogh 2018; Butner and Ho 2019; Paschen et al. 2019). They are also discussed as the basis for an adaptation or creation of structures, which is expected to be necessary to make AI integration possible (von Krogh 2018; Paschen et al. 2019; Udell et al. 2019). However, von Krogh (2018) also argues that structures change as soon as AI applications are actively used, thus influencing processes and responsibilities. In their surveys, Bienhaus and Abubaker (2018) and Butner and Ho (2019) recommend completely re-building and re-thinking processes rather than placing new ones on top of old structures. To support organizations with establishing these new processes, Paschen et al. (2019) developed a framework with four dimensions to assess whether the introduction of AI leads to an innovation in products or processes, as well as whether it is competency-enhancing or -destroying, thereby referring to humans as well. Depending on the combination of these four dimensions, firms can “generate different value-creating innovations” (Paschen et al. 2019: 151).

Lismont et al. (2017) offer another perspective, categorizing companies according to their readiness for technology implementation. They conclude that the more mature a company is in using AI, the higher the variety of applications, the number of affected processes, and related goals are. Due to interdependencies, Tabesh et al. (2019), therefore, argue that the complex construct of organizations should only be changed in steps and always while carefully referring to the defined strategy.

In summary, organizational structures are the foundation for successful AI integration, and vice versa, the use of AI in decision making also influences those structures. The strategic reasons for implementing AI inform the type and location of AI used. However, the available applications are also expected to influence existing decision-making processes that are to be adapted to make usage possible.

4.2.4 Challenges of using AI in strategic organizational decision making

To determine whether, how, and why to integrate AI into existing business processes, AI literacy has been found to be crucial (Kolbjørnsrud et al. 2017; Lepri et al. 2018; Canhoto and Clear 2019), as “not every decision problem needs to be solved by technology” (Migliore and Chinta 2017: 51). Researchers define AI literacy as a profound understanding of the technology and its possibilities and limitations, which according to Whittle et al. (2019) is often missing. To increase AI literacy, scholars have argued that involving the employees who will be affected by AI integration, rather than only top management, is crucial, as acceptance differs across levels (Kolbjørnsrud et al. 2017; Bader et al. 2019). Stakeholders need to gain a sense of ownership; by familiarizing themselves with the technology and actively taking part in the integration, they are able to define their role. Therefore, according to the literature, education and training constitute a highly important task (Kolbjørnsrud et al. 2017; Watson 2017). Authors such as Migliore and Chinta (2017), Bader et al. (2019), and Whittle et al. (2019) recommend analyzing which capabilities each employee needs to leverage the technology’s potential, and thereafter enabling each individual to successfully work with AI for the assigned tasks. This also implies that executives need to guide employees through this process, based on their own literacy and understanding of the technology (Kolbjørnsrud et al. 2017; Watson 2017; Whittle et al. 2019).

Soft skills in general have been argued to become increasingly important with the introduction of AI into organizational decision making (Kolbjørnsrud et al. 2017), including a focus on training employees in capabilities for collaboration, creativity, and sound judgement. AI is recommended to be introduced step-wise (Kolbjørnsrud et al. 2017; Watson 2017), as trust in the technology increases with experience and understanding. Employees become accustomed to using it for tasks for which machines have not been used before (Kolbjørnsrud et al. 2017; Lepri et al. 2018). Transparency, referring to “information about the nature and flow of data and the contexts in which it is processed” (Singh et al. 2019: 6563) to reach a certain decision (Canhoto and Clear 2019), is crucial for a successful introduction and usage as well. The articles in this category suggest a heterogeneous introduction team consisting of new and established organizational executives (Kolbjørnsrud et al. 2017; Lepri et al. 2018) and people with sufficient training (Watson 2017). Scholars again claim that finding the right introduction team and providing support throughout the process are the responsibility of leadership. Kolbjørnsrud et al. (2017) found that top executives possess a higher awareness and understanding of their responsibility to invest time and guide employees through this process than middle managers do.

Further challenges that the majority of authors in this category have addressed are data security and data privacy issues, as well as the danger of data manipulation, which must be evaluated before implementing new technologies (Kolbjørnsrud et al. 2017; L’Heureux et al. 2017; Lepri et al. 2018; Canhoto and Clear 2019; Singh et al. 2019; Whittle et al. 2019). The articles assume that the resulting transparency and literacy help to decrease bias. Migliore and Chinta (2017) also found that having more available data is helpful. These authors define bias as bounded rationality, which contrasts with the definition in this study (see Introduction). This assumption is therefore questioned, especially as the right quantity and quality of data have themselves been found to be a challenge (Lepri et al. 2018; Canhoto and Clear 2019). Bellamy et al. (2019: 78) suggest that “machine learning is always full of statistical discrimination,” meaning that even machines are biased. Some frameworks have thus been proposed to offer solutions for fair pre-processing, in-processing, and post-processing, for example AI Fairness 360 (Bellamy et al. 2019) and Open Algorithms (Lepri et al. 2018), but they are also said to only reduce bias. Furthermore, Canhoto and Clear (2019) demonstrate that decision quality always depends on the application used, the resources available, the input provided, and the interpretation ability of the humans using it.
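As a concrete illustration of what such “fair pre-processing” can mean, the following minimal, library-agnostic sketch (hypothetical data; it does not reproduce the specific APIs of AI Fairness 360 or Open Algorithms) reweights training examples so that membership in a protected group and the favorable outcome become statistically independent, one of the pre-processing strategies implemented in such toolkits:

```python
# Reweighing-style fair pre-processing: give each (group, outcome) cell a weight equal to the
# joint probability expected under independence divided by the observed joint probability.
from collections import Counter

# Each record: (protected_group, outcome), with outcome 1 = favorable. Hypothetical data.
records = [("A", 1), ("A", 1), ("A", 0), ("A", 1), ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

n = len(records)
group_counts = Counter(g for g, _ in records)
label_counts = Counter(y for _, y in records)
joint_counts = Counter(records)

weights = {
    (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
    for (g, y) in joint_counts
}

for g, y in sorted(weights):
    print(f"group={g} outcome={y} weight={weights[(g, y)]:.2f}")
# On these data, the reweighted favorable-outcome rate becomes identical for both groups.
```

After reweighting, both groups carry the same effective share of favorable outcomes; this mitigates, but does not eliminate, the statistical discrimination that Bellamy et al. (2019) describe, in line with the observation above that such frameworks only reduce bias.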

The literature thus suggests that education and training, combined with an awareness of data security issues, lead to literacy and transparency, thereby decreasing caveats. Furthermore, focusing on the active involvement of affected employees and a step-wise introduction has been found to result in a successful implementation. Through these factors, even though the danger of active or implicit bias might not decrease, at least awareness is supported. However, the majority of authors have also claimed that in processual and structural implementation, the important aspects of ethics and morality should not be forgotten.

4.2.5 Ethical perspectives on using AI in strategic organizational decision making

Although all researchers of this category state that an ethical framework is needed to use AI in organizational decision making, there is no agreement on the design. Some recommend an implementation of decision rules into AI systems (Webb et al. 2019; Wong 2019), while others concentrate on making the machine learn moral guidelines by itself (Bogosian 2017), relating to top–down and bottom–up approaches of AI.

Morally or socially correct behavior and the resulting implicitly learned societal rules are claimed to be rather subjective (Cervantes et al. 2016; Etzioni and Etzioni 2016; Bogosian 2017; Giubilini and Savulescu 2018). Some researchers, therefore, propose a combination of legal frameworks (Vamplew et al. 2018), although these alone cannot include the complex and often conflicting factors that humans incorporate into decisions: “What may be right for one person may be completely inaccurate for the other.” (Cervantes et al. 2016: 281).

Parisi (2019: 26) states that “the question of automated cognition today concerns not only the capture of the social (and collective) qualities of thinking, but points to a general re-structuring of reasoning as a new sociality of thinking,” requiring a new understanding and definition of aspects such as fairness, responsibility, moral fault, or guilt. In an attempt to offer a new definition, several researchers have analyzed the behavior humans exhibit when working with artificial agents, especially in terms of attributing human values and shortcomings to the machines. The UnBias project by Webb et al. (2019) demonstrates that fairness is the guiding principle in decisions, though the understanding of fairness differs among participants. Wong (2019) lists conditions to ensure fairness; among them, the transparency of the decision process and the inclusion of all affected stakeholders’ perspectives are as important as a regulatory framework. Other researchers have analyzed the differences in the definitions of ethical aspects for human-only, AI-only, or combined decision making and found that moral fault was always attributed to humans (Shank et al. 2019). Kirchkamp and Strobel (2019) discovered that the feeling of guilt also does not change, while responsibility in human–AI teams is perceived as being higher than in human-only teams, and selfish behavior decreases. According to their findings, any higher form of moral responsibility is so far not attributed to machines. In addition, Hertz and Wiese (2019) found that people choose machines for analytical questions, while human advisors are preferred for social and personal topics.

In summary, articles on ethics are as divided as the topic of AI itself. “Legal and safety-based frameworks (…) are perhaps best suited to the more narrow AI which is likely to be developed in the near to mid-term” (Vamplew et al. 2018: 31), and they, therefore, seem to be the only frameworks agreed on as a guiding principle (Etzioni and Etzioni 2016; Wong 2019). Researchers thus assume that including ethical guidelines into algorithms is only possible to a limited extent and is always influenced by the people designing them, although several researchers have proposed tools to support this inclusion (Cervantes et al. 2016; Etzioni and Etzioni 2016; Giubilini and Savulescu 2018; Vamplew et al. 2018). A new definition of social and moral norms and aspects in relation to AI is argued to be necessary. As no clear recommendation can be derived on how to solve this challenge, Vamplew et al. (2018) assume that a step-wise procedure is required to agree on ethical guidelines and the extent to which case-based judgement remains.

4.2.6 Impact of AI usage in strategic organizational decision making on the division of tasks between humans and machines

Most articles in this category claim that the “unique strengths of humans and AI can act synergistically” (Jarrahi 2018: 579), implying that the combination of human and AI capabilities is expected to increase efficiency and profitability in decision making (Smith 2016; Anderson 2019; Shrestha et al. 2019). Furthermore, it is widely agreed that humans and machines can augment each other, implying that AI systems learn from human inputs and vice versa (Jarrahi 2018; Schneider and Leyer 2019). This assumption is also supported by authors from other categories, such as Kolbjørnsrud et al. (2017), Terziyan et al. (2018), von Krogh (2018), and Blasch et al. (2019), demonstrating the relevance of this topic to other aspects as well.

Researchers offer several frameworks for dividing tasks between AI and humans, usually ranging from full delegation to AI, through hybrid forms, to human-only decision making (Shrestha et al. 2019; Yablonsky 2019). Parry et al. (2016) and Agrawal et al. (2019), however, are the only authors who consider it realistic to allow AI to make decisions completely independently. Nevertheless, they also claim that this is not suitable for all types of decisions, making “the retention of a veto power when the decisions can have far-reaching consequences for human beings (…)” necessary (Parry et al. 2016: 17). Bolton et al. (2018: 55) identify AI as being able to “automate tasks,” which “allows humans to focus on work that will add value”, while Klumpp and Zijm (2019) speak of the artificial divide, meaning that humans become supervisors more than executors. Thus, using AI to automate some tasks of the decision-making process gives people time to invest in those skills that AI cannot adequately perform, but which are critical to strategic decisions. The other authors further argue that humans are better at judgment, the analysis of political situations, psychological influences, flexibility, creativity, visionary thinking, and handling equivocality (Parry et al. 2016; Smith 2016; Rezaei et al. 2017; Jarrahi 2018; Agrawal et al. 2019; Shrestha et al. 2019). In addition, “even if machines can determine the optimal decision, they are less likely to be able to sell it to a diverse set of stakeholders.” (Jarrahi 2018: 582).

To summarize this category, the authors claim that AI offers the potential for machines to augment human capabilities, and vice versa, while it also changes the human role toward that of a supervisor. The authors hence expect only limited possibilities for integrating this technology into a process such as strategic organizational decision making, which requires capabilities that only humans are argued to possess.

Lyons et al. (2017), therefore, claim that for the relationship between humans and machines to work, all involved parties must understand the tasks, responsibilities, and duties, and a high level of transparency is required, which is similar to the organization of human-only relationships. A possible concept of how this can be defined for the purpose of strategic organizational decision making is provided in the following.

4.3 Conceptual framework for AI integration into the organizational process for decision making under uncertainty

The majority of researchers in the sample support the decision-making process in Sect. 2.3 (Bohanec et al. 2017a; von Krogh 2018; Shrestha et al. 2019) and its usage as guidance for addressing the sub-dimensions of the RQ. Derived from the analysis of research in the previous sections, Fig. 4 thus presents the conceptual framework as an elaboration on Fig. 3. As the arrows indicate, the majority of categories are expected to not only influence the process itself, but also be influenced by it. In addition, some categories even impact one another. Therefore, not all categories can be attributed exclusively to one sub-dimension of the RQ. Next, the parts of the conceptual framework are explained in more detail.

The majority of researchers claim that strategic organizational decision making is a people-driven and -dependent task in which technology can only be used as support, although most of the researchers in Sect. 4.2.6 expect humans and AI to augment each other. Regarding the first sub-dimension of the RQ, the conceptual framework presents the possible division of tasks between human decision makers and technology, with the task of knowledge management as its own category. This task combines several aspects and must be considered thoroughly by itself, as the results of Sect. 4.2.1 have demonstrated. Researchers agree that AI has the potential to collect large amounts of information from numerous sources, leverage sharing, and facilitate interpretation, implying that using it for the knowledge management task can increase speed and efficiency (Acharya and Choudhury 2016; Shollo and Galliers 2016; Blasch et al. 2019; Butner and Ho 2019). However, AI is said to be unable to solve the inherent challenge of making implicit data available that stakeholders and decision makers are not willing or able to provide, although initial possibilities for overcoming this challenge have been proposed (Terziyan et al. 2018; Colombo 2019; Metcalf et al. 2019). The quality of implicit information is thus further believed to depend on humans and can only be evaluated and framed through human discussion (Rousseau 2018).

With the overview of the task division, the framework summarizes the current academic discussion on tasks for which AI is expected to be useful. The technology’s successful integration and usage, however, have been found to depend on the respective AI application, and vice versa, as Fig. 4 also highlights. Furthermore, while utility calculations are said to depend on humans (Pigozzi et al. 2016), researchers argue that AI can provide a forecast of how each decision alternative might affect the organization or its partners (Agrawal et al. 2019; Baryannis et al. 2019a, b; Colombo 2019). This might influence the weighing of alternatives, for which the purely mathematical calculation can also be carried out by AI. The final decision, however, must be taken by the human decision group alone. With the current state of technology, AI can thus leverage the stages of input and process (Bohanec et al. 2017a; von Krogh 2018), with the most significant impact on knowledge management (Mühlroth and Grottke 2018; Blasch et al. 2019). This indicates that most existing applications cannot be defined as AI at all based on Nilsson’s (2010) definition of intelligence.
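
To make the described task division more concrete, the following minimal sketch, again purely illustrative and not taken from the reviewed literature, shows the kind of purely mathematical weighing of alternatives that an AI application could perform: forecasted impact scores are combined into a weighted ranking, which is then handed back to the human decision group for the actual choice. Criteria, weights, and scores are hypothetical.

```python
# Hypothetical sketch of the weighing of alternatives mentioned above: the code
# only ranks alternatives as a decision aid; the final decision remains with
# the human decision group, as argued in the text.
from dataclasses import dataclass


@dataclass
class Alternative:
    name: str
    scores: dict  # criterion -> forecasted impact score (e.g., on a 0-10 scale)


def rank_alternatives(alternatives, weights):
    """Return (name, weighted score) pairs, sorted from highest to lowest."""
    def weighted_score(alt):
        return sum(weights[criterion] * alt.scores[criterion] for criterion in weights)

    ranked = sorted(alternatives, key=weighted_score, reverse=True)
    return [(alt.name, round(weighted_score(alt), 2)) for alt in ranked]


if __name__ == "__main__":
    # Hypothetical criteria and forecasted scores per decision alternative.
    weights = {"financial_impact": 0.5, "stakeholder_acceptance": 0.3, "strategic_fit": 0.2}
    options = [
        Alternative("enter new market",
                    {"financial_impact": 8, "stakeholder_acceptance": 5, "strategic_fit": 3}),
        Alternative("expand existing line",
                    {"financial_impact": 6, "stakeholder_acceptance": 7, "strategic_fit": 6}),
    ]
    # The ranked list is handed to the human decision group for deliberation.
    print(rank_alternatives(options, weights))
```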

The choice of application is also influenced by organizational structures and the related allocation of resources. However, evidence from research also suggests that this impact is reciprocal, implying an influence of AI applications on the definition of organizational structures (Lismont et al. 2017; Tabesh et al. 2019). According to the literature, this category is both a pre-requisite for and a consequence of introducing AI into the process of decision making under uncertainty (von Krogh 2018; Paschen et al. 2019; Tabesh et al. 2019). For this introduction to be successful, an important foundation is the organizational strategy and the resulting reasons for which AI is used and integrated into the decision-making process, such as knowledge management (Bienhaus and Abubaker 2018; von Krogh 2018; Butner and Ho 2019).

AI literacy and data transparency are further pre-requisites of AI integration, as highlighted in the middle of the framework. Scholars agree on the importance of enabling employees to use the technology beneficially (Lepri et al. 2018; Canhoto and Clear 2019). They must learn which application to choose for which task, which data need to be provided for the application to work correctly, and how the results should be interpreted. In addition, training and continuous experience of working with the technology have been found to increase trust and thus effectiveness (Kolbjørnsrud et al. 2017; Lepri et al. 2018).

As the analysis in Sect. 4.2.5 highlighted, ethical perspectives influence all other categories. The question of who is morally responsible, or of what framework the machines should act in accordance with, has not yet been solved, nor is it currently possible to encode moral guidelines in algorithms (Cervantes et al. 2016; Vamplew et al. 2018). Evidence suggests that, to date, machines carry no moral responsibility, implying the necessity to adapt the definition of moral constructs such as guilt or fairness (Parisi 2019). A proven approach for realizing this, however, is still missing.

Based on the analysis of the previous sections, and as Fig. 4 highlights, the answer to the RQ is influenced by a variety of aspects. This makes it difficult to provide a clear definition of, or guideline for, how to best integrate AI into the organizational process of decision making under uncertainty. Researchers have found that AI always depends on a clear goal, as it cannot handle uncertainty or input complexity (Smith 2016; Jarrahi 2018; von Krogh 2018). Relating the findings to Nilsson (2010), a current AI application can thus only “interact with foresight in its environment” when used by humans. This contrasts with Simon’s (1986, 1995) theory of computers and humans being alike. Nevertheless, some researchers have also proposed developing AI further as a net-based, learning algorithm, which would give it more of the capabilities of intelligence than it currently has (Parry et al. 2016; Watson 2017; Agrawal et al. 2019; Bolander 2019), although there is no agreement on whether AI will ever be able to exercise implicit human capabilities (Parisi 2019; Shrestha et al. 2019). In addition, research suggests that AI cannot substitute for all the benefits of human group decision making (von Krogh 2018), and using it can also amplify the dangers and challenges that human decision making entails (L’Heureux et al. 2017; Flath and Stein 2018). Moreover, AI is assumed to be less beneficial for individual decision making in particular. The diversity of experience and other soft skills can only be provided through human negotiation and discussion, as “it is easier to recognize biases in other people than in ourselves” (Rousseau 2018: 137).

Therefore, utilizing AI as support in this important organizational process implies a role change for human decision makers. As the literature states, they become supervisors (Bolton et al. 2018; Klumpp and Zijm 2019), a role that has to be interpreted differently than it is defined in traditional production processes. However, supervising AI has manifold dimensions, and a deep understanding of AI’s functioning and the ability to translate and interpret its results correctly are crucial for a successful and responsible use of this technology (Lyons et al. 2017; Canhoto and Clear 2019; Whittle et al. 2019). This leads to several managerial implications and research possibilities, which are presented in the following chapter.

Fig. 4 Conceptual framework for AI integration into the organizational process for decision making under uncertainty

5 Concluding remarks

5.1 Managerial implications

The analysis of the AI applications revealed that researchers disagree on whether current applications are useful for strategic decisions (Shollo and Galliers 2016; Baryannis et al. 2019b). Therefore, proposals for implementation strategies are rare.

Starting from the organizational strategy, managers are recommended to first specify the reasons for integrating AI and the resulting decision tasks to be supported. This is followed by the adjustment of organizational structures to make AI integration possible. Third, the applications to be used must be stipulated. However, as the results have shown, each of these steps can also influence the others, so the implementation process is not strictly sequential but highly individual. Scholars argue that the AI literacy of managers is crucial for becoming aware of the possibilities and challenges of AI, which in turn enables managers to make the most efficient use of the technology (Kolbjørnsrud et al. 2017; Whittle et al. 2019).

Scholars, however, emphasize the importance of being aware that with the integration of AI into the strategic decision-making process, the human role is expected to change. This means a shift in responsibility, which simultaneously requires a focus on other skills (Kolbjørnsrud et al. 2017; Bolton et al. 2018; Bader et al. 2019; Klumpp and Zijm 2019). Therefore, researchers suggest that employees and managers alike should train those capabilities that AI does not possess, such as empathy, creativity, and emotional skills (Parry et al. 2016; Jarrahi 2018; Terziyan et al. 2018; von Krogh 2018; Schneider and Leyer 2019).

It can consequently be stated that human groups remain important, although AI offers some benefits, such as the amount and diversity of information, which could otherwise usually only be gained by including more people in the decision-making process. Smaller teams are thus expected to increase efficiency and speed, as less negotiation is needed. Here, it is important to ensure that diverse group members are chosen who have the necessary skills for strategic decision making and AI usage. This, however, also increases the risk of a few people holding too much power, and managers must always be aware that the use of AI can bring additional dangers and challenges, such as bias in several dimensions (L’Heureux et al. 2017; Flath and Stein 2018).

Some studies have provided frameworks for analyzing the readiness of an organization or the necessary steps to become more AI-based (see Table 2; Watson 2017; Canhoto and Clear 2019; Yablonsky 2019). Nonetheless, ethical frameworks must still be developed, although this perspective is discussed with increasing awareness (Bellamy et al. 2019; Parisi 2019; Shank et al. 2019; Webb et al. 2019). Managers are hence required to actively engage in this when developing and extending the use of AI.

5.2 Limitations and further research possibilities

This study has some limitations. The first relates to the methodology employed. Although the steps of Tranfield et al. (2003) and Mayring (2008) were followed, bias might have been introduced through the definition of keywords, which would have influenced the search and the interpretation. As the categories were defined rather broadly, articles on some specific topics might be missing. Nevertheless, the decision to search broadly and use broad interpretation categories was made to include as much data as possible and to obtain a general understanding of a rather undefined topic with many research objectives. Moreover, including further keywords relating to statistical or mathematical applications might have expanded the findings, as articles that use such applications without mentioning AI would then also have been included. However, since there is so far no clear definition of which applications to include when speaking of AI, a decision was made not to enlarge the set of keywords. This allowed for an understanding of the current state of AI in decision making rather than producing biased results.

Searching in only four databases is another bias-related limitation, but searches in other databases would not have been possible in the same manner due to technical constraints in the search fields. As AI is a fairly practice-oriented topic, it might also have been interesting to include more practical views; however, this was not possible due to the peer-reviewed criterion. As the literature review of Calatayud et al. (2019: 26) demonstrates, non-scientific articles currently dominate the topic. For this reason, the suggestion would be to enhance this literature review by adding practitioner literature.

The fact that even the small number of articles in this literature review covers a myriad of different topics highlights an uncertainty that might only be resolved by testing several designs. The analysis revealed the potential for leveraging the best of both worlds, and further, especially empirical, research is thus needed to analyze the possibilities of AI and the potential results of its integration into human-centered processes such as decision making (von Krogh 2018). As most companies are still in the piloting and planning phase (Butner and Ho 2019), the opportunities to uncover further interesting results are increasing. In this regard, the following would be helpful: a clear definition of AI and related applications, as well as initial process concepts demonstrating how to integrate the technology into decision-making structures and how to establish a partnership with the humans involved. The framework provided in Fig. 4 could be a useful starting point, while in terms of theory, actor-network theory might serve as a basis, as it might help to explain when and how responsibility shifts from human to non-human actors.

5.3 Conclusion

This article is the first to focus on the current status of research on AI’s potential to support strategic organizational decision making, that is, group decision making under uncertainty. It set out to answer the following question: How can AI support decision making under uncertainty in organizations?

The conceptual framework of Fig. 4 (see Sect. 4.3) provides the synthesis of findings from the analysis of current literature on this question. It takes into account the necessary pre-conditions for and potential consequences of combining human decision makers and AI, as well as a potential task division.

This study revealed that the established understanding of machines as tools is not suitable for AI. Successfully using this technology requires human decision makers to change their role and become translators and interpreters of the results rather than merely supervising the machine as it executes a predefined process. This also implies an increase in responsibility and a change in the skills needed. Therefore, the way in which AI is viewed will heavily depend on how humans view themselves (Mueller 2012), while its benefits also greatly depend on the context and goal. While Lawrence’s (1991) framework of complexity and politicality is expected to remain relevant, the resulting applications might change further as the technology develops into a learning algorithm. That computing machines and humans are equal, however, is, based on current research, neither to be expected nor ethically supported (von Krogh 2018).