
The GRAI framework – extending the SECI model to reflect generative AI
This article is part of two ongoing series: tools and methods, and artificial intelligence (AI) in relation to knowledge management (KM) and KM in relation to AI.
The SECI model1, first proposed in 1994-1995 by Ikujiro Nonaka and Hirotaka Takeuchi, is well-known in knowledge management (KM). Despite being criticized as dated, too simple, and not as widely applicable as often assumed, the SECI model remains a useful perspective because it highlights the human element in knowledge transfer. It is also one of the few models that considers the shared environmental context for knowledge creation and transfer. Nearly 30 articles in RealKM Magazine explore criticisms of the SECI model (including potential alternatives) on the one hand, and case studies in which it has been usefully applied on the other.
Recent years have seen the rapid emergence and uptake of generative AI, for example the now widely used ChatGPT. Given that, despite the criticisms, the SECI model has proven suitable for describing and understanding knowledge creation in organizations, KM researchers Karsten Böhm and Susanne Durst contend that it is worth examining the SECI model in the context of generative AI, to see whether SECI should continue to be used as an analytical explanatory framework in this new era. Böhm and Durst report the findings of their investigation in a newly published open access paper2 in VINE Journal of Information and Knowledge Management Systems.
Böhm and Durst’s research first involved a contextual review of previous research on the original SECI model, drawing on a wide range of peer-reviewed articles published in leading KM journals and other journals that have also published relevant papers. In the next step of their research process, the topic of AI was added to the analyses. More specifically, examples were identified that show how the individual SECI components could function with generative AI support. These were taken from the available literature on KM and AI, with additional examples developed by Böhm and Durst based on their many years of experience in teaching and researching KM.
The outcome of this research is the GRAI (generative, receptive artificial intelligence) framework, a revision of the SECI model, as shown in the diagram above and explained below. Böhm and Durst advise that the GRAI framework not only demonstrates the continuing power of the SECI model to explain knowledge creation and sharing, but also represents a conceptual development of the model through the representation of new interactions that result from the integration of generative AI into these knowledge processes.
Böhm and Durst are aware that, being a conceptual paper, their study has limitations. They recommend that GRAI should be tested in different organizational and national cultural contexts, and that there is a need for studies that investigate how people react to the new generative AI actor.
The SECI model
Nonaka and Takeuchi developed the SECI model as a result of their studies on innovation in Japanese companies in the 1980s and 1990s, so the model reflects the values and culture of Japanese business and work practices.
The model describes the processes of knowledge creation in companies and emphasizes the interplay between tacit and explicit knowledge. SECI provides insights into how knowledge can be created, converted, and transferred through an iterative approach based on four phases: socialization, externalization, combination, and internalization, hence the acronym SECI. Knowledge creation in the model progresses in a spiral form rather than a circular movement (Figure 1), because knowledge moves up from the individual level to the group level and finally to the organizational level.
The SECI process takes place in “ba” which has been defined as a shared context in which knowledge is shared, created, and utilized. Consequently, context needs to be considered when trying to create meanings.

In the SECI model:
- Socialization (the S in SECI) refers to the exchange of knowledge between human beings.
- Externalization (the E in SECI) represents the explication of (internalized) knowledge in some form of externalized information (codified knowledge).
- Combination (the C in SECI) traditionally combines different bodies of externalized knowledge (e.g. information stored in an IT system).
- Internalization (the I in SECI) is the process of absorbing information (externalized knowledge) into an internal representation that the human user can act upon.
SECI in the context of generative AI – the GRAI framework
Böhm and Durst advise that a revised SECI model should take the machine into account as a participant that can play either an active or a passive role. The active role would generate an output or a response, while the passive role could be compared to listening and adapting/rebuilding the internal (knowledge) representation. Consequently, the four areas would each be split into a human perspective and a machine perspective, leading to eight fields of action in the new GRAI framework, which stands for generative, receptive artificial intelligence, as shown in Figure 2.

Sticking to the original knowledge conversion cycle from socialization to externalization to combination and finally to internalization, Böhm and Durst report that GRAI opens up a number of new relationships besides the classical ones that were always assumed to be human-to-human. In the current development stage of generative AI, the most interesting fields are those in which humans and machines interact with each other. This leads to a combination of two actors (human and machine) within two role positions in the four fields of the original SECI model. The resulting eight different interaction fields are summarized below.
Böhm and Durst advise that the situation in which those knowledge exchanges are completely left to the machine(s) can be considered a topic for future development and investigation, although the first experiments in this direction are already appearing.
The addition of the machine as an actor in knowledge creation processes does not mean that Böhm and Durst understand the human and machine roles as equal. Rather, they see the human user as dominant in these processes, which are still regarded as “human-centered” (the human actor gives the decisive steering impulse) and/or “machine augmented” (the machine actor complements or augments the actions of the human actor in a complex, consistent, and context-related way). Depending on the intensity of the support and the actor that takes the assistance role, a distinction can be made between human-in-the-loop (a human actor assisted by a machine) and machine-in-the-loop (a machine assisted by a human). In the interaction fields of GRAI, both situations could arise.
The socialization interaction field in GRAI
Socialization is a dialogue-oriented setting with the primary intention of knowledge sharing between two actors, including the comprehension of other viewpoints and opinions. It is a highly contextualized process in which knowledge is transmitted using natural language.
From the perspective of the machine agent toward the human agent, socialization can be seen as a setting oriented toward knowledge or information acquisition by the human user, e.g. the machine explaining a topic to the human user.
The socialization interaction from the human agent to the machine agent is another dialogue-oriented situation, with the focus on specifying a complex information demand or situation, e.g. in an extensive and possibly iterative prompting interaction that informs the machine about a situation and thereby provides a richer context for the dialogue.
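To make this kind of iterative, context-building dialogue concrete, here is a minimal sketch in which the human agent supplies situational context across several turns before posing the actual question. It assumes the OpenAI Python SDK (v1.x) and an illustrative model name; the dialogue content is hypothetical, and any chat-style LLM interface would work similarly.

```python
# Minimal sketch of an iterative "socialization" dialogue in which the human agent
# progressively supplies context before requesting the machine agent's response.
# Assumes the OpenAI Python SDK (v1.x); the model name and dialogue are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The accumulated conversation history acts as the shared context of the dialogue.
messages = [
    {"role": "system", "content": "You are a knowledge-sharing partner in an organizational setting."},
]

def human_turn(text: str) -> str:
    """Add a human utterance, ask the model to respond, and keep both in the history."""
    messages.append({"role": "user", "content": text})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

# Iteratively enrich the context before stating the actual information demand.
human_turn("We are a mid-sized manufacturer introducing a new quality process.")
human_turn("Our shop-floor staff have little experience with formal documentation.")
print(human_turn("Given this situation, how should we introduce knowledge-sharing routines?"))
```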
The externalization interaction field in GRAI
Within the externalization interaction field, the main focus is the connection between internal knowledge representation models and the physical and digital reality from which data and information come. This transition process usually requires substantial effort to integrate new information into existing models or to make information accessible to those models. This applies to both human users and IT systems (machine agents), which have to deal with new information that is often contextualized and vague or inconsistent. Generative AI offers new capabilities to bridge this gap in a more efficient way (without the need to build specific IT solutions) and in a more effective way: the contextualized information processing capabilities of large language models (LLMs) make it possible to work with vague or inconsistent information.
More recent versions of LLM-based systems such as ChatGPT or Google Gemini allow the human user to add additional relevant materials to the conversation, e.g. using the “memory function” of those systems. This enlargement of the context relates to the interaction field human agent to machine agent, as it helps the machine to identify a more precise context with a domain-specific focus in the dialogue. In this way, a general conversation can be steered in a specific direction by the human user through providing externalized information to the machine.
Another use case that is emerging rather frequently here is the use of generative AI in retrieval-oriented tasks using retrieval-augmented generation (RAG), which combines an initial search query with specific document collections (externalized information) to derive more specific search results.
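Below is a minimal sketch of the RAG pattern described above: a query is first matched against a small document collection (the externalized information), and the retrieved passages are then passed to the generative model as grounding context. The retrieval step is a naive keyword overlap purely for illustration, and the OpenAI Python SDK with an illustrative model name is assumed; production systems would use vector embeddings and a proper index.

```python
# Minimal retrieval-augmented generation (RAG) sketch: retrieve the most relevant
# documents for a query, then let the LLM answer using only that retrieved context.
# Assumes the OpenAI Python SDK (v1.x); retrieval is a naive keyword overlap for brevity.
from openai import OpenAI

client = OpenAI()

documents = [
    "Policy QM-12: all quality incidents must be logged within 24 hours.",
    "Onboarding guide: new staff complete the KM portal training in week one.",
    "IT notice: the legacy document server will be retired at the end of Q3.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by simple word overlap with the query (illustrative only)."""
    q_words = set(query.lower().split())
    scored = sorted(documents, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def answer(query: str) -> str:
    """Ground the model's answer in the retrieved documents."""
    context = "\n".join(retrieve(query))
    prompt = f"Answer the question using only this context:\n{context}\n\nQuestion: {query}"
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(answer("How quickly do quality incidents have to be logged?"))
```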
Turning to the use case of information retrieval in enterprise-specific information sources (enterprise search) in the field of KM, the use of generative AI could introduce a bias in the results originating from the foundation models used. However, these effects might be similar to the contextual integration of a human user and could be counteracted by carefully selecting and adopting the right foundation model.
Another interesting interaction field is machine agent toward human agent, where structured content is generated from existing unstructured information that contains it only in an implicit form. Generative AI with multimodal capabilities can use recognition functions for objects and their attributes to generate product information in a table-like structure. The results could then be used for product catalogs, e.g. in the e-commerce domain.
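The sketch below illustrates this direction: a multimodal model is asked to extract product attributes from an unstructured source (here, a product photo referenced by URL) and return them as a table-like JSON structure that could feed a catalog. The image URL, attribute fields, and model name are placeholders, and the OpenAI Python SDK is again assumed.

```python
# Sketch: extract structured product information from unstructured (visual) content.
# A multimodal model is asked to return catalog-ready attributes as JSON.
# Assumes the OpenAI Python SDK (v1.x); the image URL, fields, and model name are placeholders.
import json
from openai import OpenAI

client = OpenAI()

image_url = "https://example.com/products/office-chair.jpg"  # placeholder

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any multimodal model would do
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": (
                    "Extract the product attributes visible in this image as JSON with the "
                    "keys: name, colour, material, estimated_dimensions. Return JSON only."
                )},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ],
)

# Parsing assumes the model returns bare JSON as instructed.
attributes = json.loads(response.choices[0].message.content)
print(attributes)  # e.g. one row for a product catalog table
```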
The internalization interaction field in GRAI
Internalization relates strongly to the creation of internal models (or representations) of the agent's knowledge of the outer world, reflecting the views and beliefs of the agent. The assistance of generative AI might both help the internalization processes of the human agent and be beneficial for proactively building or adapting (digital) representations for the machine agent.
The direction from the machine agent toward the human agent is the process of supporting the internalization of a human user with the help of generative AI, e.g. creating a better understanding of a certain concept or topic.
The opposite direction, human agent to machine agent, is a situation in which the generative AI takes an observing role over a longer period of time in order to suggest appropriate support actions for the human user, based on the internal model built from those observations. Examples include a moderating role during online meetings (including summarization and analytics of the discourse) or general user support across different applications, e.g. the Copilot functionality in Microsoft Office 365 products.
The combination interaction field in GRAI
The interaction field of combination might have received the most attention with the advent of generative AI systems, because the way that LLMs could generate content combined from a large body of source data was unprecedented when ChatGPT appeared. No specific expertise was required to access this functionality: requests for the combination of externalized information could be stated in natural language, with all its ambiguity.
With respect to the interaction field machine agent toward the human agent, the generation of complex summaries of a subject or of provided information sources has been one of the most prominent examples. The combination task could be configured for a specific information demand, a given style, or a tone of text (e.g. content generation for a specific target group) to be used by the human agent for further processing. Another example in this interaction field could be the generation of meeting protocols from the transcripts of online meetings.
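As a minimal sketch of this machine-to-human combination step, the example below condenses a meeting transcript into a structured protocol, with the style and target group specified in the prompt. The transcript and model name are placeholders, and the OpenAI Python SDK is assumed.

```python
# Sketch: combine externalized information (a meeting transcript) into a protocol
# tailored to a specific style and target group.
# Assumes the OpenAI Python SDK (v1.x); transcript and model name are placeholders.
from openai import OpenAI

client = OpenAI()

transcript = """
Anna: We agreed to migrate the knowledge base by October.
Ben: I will draft the migration checklist by Friday.
Anna: Open risk: the tagging scheme is still undecided.
"""

prompt = (
    "Create a meeting protocol from the transcript below. "
    "Use a formal tone suitable for senior management, and structure it under the "
    "headings Decisions, Action items, Open risks.\n\n" + transcript
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```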
Likewise, the flexibility of combining content elements in very different ways could also be used as a creativity tool for the human user (human agent to machine agent). Here, the human user combines different subjects in a single prompting request or dialogue sequence, thereby requiring the generative AI system to combine patterns that are otherwise unlikely to appear in existing reality, e.g. the generation of images in the style of a certain artist, or imagery that combines aspects that do not exist in physical reality.
Article and header image source: Knowledge management in the age of generative artificial intelligence – from SECI to GRAI, CC BY 4.0.
References: