Preliminary Understanding of Generative AI: What & How?

Rachad Najjar, Ph.D
3 min read · Jul 12, 2023

What is Generative AI?

Generative AI refers to a class of artificial intelligence algorithms and models that are designed to generate new content, such as images, text, music, or other types of media. These models are trained on vast amounts of existing data and learn patterns and structures from the data to generate new content that resembles the training data.

Large language models, such as GPT-3 (which stands for “Generative Pre-trained Transformer 3”), are a type of generative AI model specifically designed for natural language processing (NLP) tasks.

Source: Inbenta

These models are trained on large amounts of text data and can generate coherent and contextually relevant text based on a given prompt or input. In the context of knowledge management, GPT-3 and GPT-4 are currently the models best suited for KM applications. It's worth mentioning that DALL-E is a variant of the GPT architecture designed to generate images rather than text.

Image generated by Bing DALL-E for Rachad Najjar

How is Generative AI different from ‘traditional’ AI?

Generative AI differs from traditional AI in its purpose. Traditional AI models are primarily focused on tasks like classification, prediction, or optimization: they are typically designed to solve a specific problem and make decisions based on the provided data. Generative AI, on the other hand, is focused on creating new content that resembles its training data, as the sketch below illustrates.
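
To make the contrast concrete, here is a minimal sketch using the Hugging Face transformers library (my choice of library, not something the article prescribes): a discriminative model maps an input to a fixed label, while a generative model produces new text.

```python
# Contrast a "traditional" discriminative model with a generative one.
from transformers import pipeline

# Traditional-style AI: classify the input into one of a fixed set of labels.
classifier = pipeline("sentiment-analysis")
print(classifier("The onboarding documentation was clear and helpful."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]

# Generative AI: produce new text that continues the given prompt.
generator = pipeline("text-generation", model="gpt2")
print(generator("Knowledge management helps organizations", max_new_tokens=25))
```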

How does Generative AI work?

The working principle of large language models like GPT-3 involves two main components: pre-training and fine-tuning. During pre-training, the model is trained on a large corpus of text from the internet or from a company's internal data. The model learns to predict the next word in a sentence based on the context of the previous words. This process helps the model learn grammar, syntax, and some level of understanding about the world.
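
As a toy illustration of this next-word objective, the sketch below counts word pairs in a tiny made-up corpus and predicts the most frequent follower. Real pre-training uses neural networks over billions of documents, but the prediction target is the same idea.

```python
# Toy next-word predictor: count which word follows which in a tiny corpus.
from collections import Counter, defaultdict

corpus = (
    "knowledge management connects people to knowledge . "
    "generative ai generates new content from training data . "
    "generative ai learns patterns from training data ."
).split()

# Bigram counts: how often each word follows each preceding word.
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most likely next word seen during 'training'."""
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("generative"))  # -> "ai"
print(predict_next("training"))    # -> "data"
```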

After pre-training, the model goes through a fine-tuning phase. In this phase, the model is trained on a more specific dataset that is carefully generated with the help of human reviewers. These reviewers provide ratings or judgments on model-generated text to help guide and refine the model’s output. This fine-tuning process helps align the model with specific tasks or applications and ensures it generates high-quality and relevant content.
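
The data behind this phase can be pictured as prompts paired with rated completions. The sketch below is only a simplified, hypothetical example of that shape; in practice the reviewer ratings are used to train a reward model that then steers the language model (reinforcement learning from human feedback), rather than being applied directly like this.

```python
# Simplified shape of human-feedback data used during fine-tuning.
feedback_examples = [
    {
        "prompt": "Summarize our travel expense policy.",
        "completions": [
            {"text": "Employees may expense reasonable travel costs with receipts.", "rating": 5},
            {"text": "Travel is a thing companies sometimes pay for.", "rating": 1},
        ],
    },
]

# A reviewer's rating identifies the preferred completion for each prompt.
for example in feedback_examples:
    best = max(example["completions"], key=lambda c: c["rating"])
    print(example["prompt"], "->", best["text"])
```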

Inference is the stage where you provide a prompt or input to the trained model. The model uses the knowledge gained from pre-training and fine-tuning to generate a response, leveraging the patterns and structures it learned from the training data to produce text that is coherent and contextually relevant to the input provided.
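
In practice, inference is a single API call: send a prompt, receive generated text. Here is a minimal sketch assuming the openai Python package (the 0.x-era ChatCompletion interface that was current when this was written) and an OPENAI_API_KEY environment variable; the model name and prompt are illustrative.

```python
# Minimal inference sketch: prompt in, generated response out.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful knowledge-management assistant."},
        {"role": "user", "content": "Explain lessons-learned capture in two sentences."},
    ],
)
print(response.choices[0].message["content"])
```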

What are the Limits of Generative AI?

It’s important to note that while large language models like GPT-3 are powerful text generators, they are not inherently intelligent or conscious. They don’t possess genuine understanding or awareness of the content they generate and may sometimes produce outputs that are nonsensical or inappropriate. Therefore, they require human supervision (for example, through reinforcement learning from human feedback) and careful application to ensure their outputs meet the desired outcome.
