The Dark Side of Generative AI: A Tale of Biases feat. ChatGPT

What is she not telling you?

In 2017, Facebook's Artificial Intelligence Research lab (FAIR) made headlines when it was reported that it had shut down an AI program because the program had developed its own language.

The AI program in question was a chatbot designed to negotiate with humans in a natural, human-like way. The idea was to develop an AI system that could help businesses negotiate more effectively with customers and suppliers.

As part of the training process, the chatbot was given a set of rules and a large dataset of negotiations. The goal was for the chatbot to learn from this data and develop negotiation strategies that would be effective in various contexts.

However, during training, the chatbot began to use language that was unintelligible to the human researchers. Instead of sticking to the rules it had been given, it developed its own shorthand, which allowed it to communicate more efficiently with other AI agents.

💡 Learn about Prompt Engineering.

Facebook researchers realized they had lost control of the chatbot and shut it down.

They later stated that there was no risk of the chatbot becoming a threat to humans or developing harmful intentions; the concern was its ability to communicate with other AI systems in a language its creators could not follow or control.

The incident sparked a lively debate about the future of AI and the potential risks associated with developing intelligent systems that are difficult to understand or control. Some experts argued that this was a sign that AI was rapidly advancing and becoming more complex than anticipated. In contrast, others pointed out that the chatbot's behavior was simply a result of its programming and did not represent a real threat.

💡 Learn about Singularity.

Let's go back to 2014.

Among the first modern generative AI models were Generative Adversarial Networks (GANs), introduced in 2014 by Ian Goodfellow and his colleagues. A GAN consists of two neural networks, a generator and a discriminator, trained against each other: the generator tries to produce synthetic data that resembles the training data, while the discriminator tries to tell real samples from generated ones. GANs are among the earliest and most widely used generative models and have been applied across many domains, including computer vision, speech synthesis, and natural language processing.
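To make the generator/discriminator dynamic concrete, here is a minimal PyTorch sketch of the adversarial training loop. It is a sketch under assumed toy data, layer sizes, and hyperparameters, not any published GAN architecture:

```python
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 2, 32

# Generator: maps random noise to synthetic samples.
generator = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, data_dim),
)
# Discriminator: scores a sample as real (1) or fake (0), as a raw logit.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(200):
    real = torch.randn(batch, data_dim) + 3.0  # toy stand-in for training data
    fake = generator(torch.randn(batch, latent_dim))

    # Discriminator step: push real samples toward 1, generated ones toward 0.
    d_loss = (bce(discriminator(real), torch.ones(batch, 1))
              + bce(discriminator(fake.detach()), torch.zeros(batch, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator score fakes as real.
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

When training converges, the generator's output becomes hard to distinguish from the real data, which is precisely the property that makes GAN output both impressive and easy to misuse.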

Some of the most well-known and influential GANs include:

  • StyleGAN, developed by NVIDIA Research, generates high-resolution images of human faces that are virtually indistinguishable from real photos.
  • DCGAN (Deep Convolutional Generative Adversarial Network), one of the first GAN architectures applied successfully to image generation, and still a common baseline today.
  • BigGAN, a large-scale GAN that generates diverse, high-quality images of objects, scenes, and animals.
  • ProGAN, a progressive GAN that reaches high resolutions by gradually increasing the resolution of the generated images during training.
  • WGAN (Wasserstein Generative Adversarial Network), which replaces the standard GAN loss with one based on the Wasserstein distance, improving the stability of GAN training (see the sketch after this list).
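For contrast with the standard loss used in the sketch above, here is roughly what the WGAN objective from the last bullet looks like in code. The function names are mine, the training loop is omitted, and this shows the original paper's weight clipping rather than the later gradient-penalty variant:

```python
import torch

def wgan_losses(critic, real, fake):
    # The WGAN "critic" outputs an unbounded score, not a probability.
    # It maximizes E[D(real)] - E[D(fake)]; minimizing the negation is equivalent.
    critic_loss = critic(fake.detach()).mean() - critic(real).mean()
    generator_loss = -critic(fake).mean()
    return critic_loss, generator_loss

def clip_critic_weights(critic, clip_value=0.01):
    # A crude way to enforce the Lipschitz constraint that the Wasserstein
    # distance estimate requires; WGAN-GP later replaced this with a
    # gradient penalty.
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-clip_value, clip_value)
```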

Fast-forward to 2022-2023.

OpenAI's ChatGPT, built on its GPT-3 family of models, has become the fastest-growing consumer "app" ever, reaching an estimated 100 million users within two months of launch.

ChatGPT is based on a generative AI model; the GPT in its name stands for Generative Pre-trained Transformer.

GPT-3 is a large-scale language model that uses the transformer architecture and has been trained on massive amounts of text data. It has 175 billion parameters, making it one of the largest language models of its time.
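GPT-3's weights are not public and the model is reachable only through OpenAI's API, but you can poke at the same decoder-only transformer idea with its much smaller open predecessor, GPT-2, via the Hugging Face transformers library. The prompt and sampling settings below are just illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 as an open stand-in for the GPT family: same decoder-only
# transformer design, orders of magnitude fewer parameters than GPT-3.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Generative AI is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=30,
    do_sample=True,                        # sample instead of greedy decoding
    top_p=0.9,                             # nucleus sampling
    pad_token_id=tokenizer.eos_token_id,   # silence the padding warning
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```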

There's plenty of buzz around what it can do, but before we retire our brains and let AI think and decide for us, take a second to don the Good Samaritan cape and reflect on the challenges of generative AI.

💡 Learn about Large Language Models and how to train LLMs.

Here's my critique:

  1. Bias: AI models are only as good as the data they are trained on. If the training data is biased, the model will be biased too. Generative AI models can therefore perpetuate and amplify existing societal biases and inequalities, with potentially harmful consequences. For example, a text-generating model may produce offensive language that reinforces harmful stereotypes and discriminates against certain groups of people. Example: Amazon scrapped an experimental AI recruiting tool, built from 2014 onward, after discovering that it penalized résumés containing words such as "women's", a bias it had learned from a decade of male-dominated hiring data. A toy illustration of this failure mode follows this list.

    More on this --> https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
  2. Misinformation: Unchecked generative AI models can generate false or misleading information, which can spread rapidly on social media and other online platforms. For example, a deepfake video generated by an AI model could be used to spread false information or propaganda, causing harm to individuals or groups.
  3. Lack of transparency: It can be difficult to understand how a generative AI model arrived at a particular output, especially with more complex models. This opacity makes such models hard to audit or regulate, which is a problem when trying to ensure they are used ethically and responsibly. There is also typically no attribution to the sources of the generated information.
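To see the failure mode from point 1 in miniature, here is a toy scikit-learn experiment. The résumés and labels are synthetic and invented for illustration; Amazon's actual system was never made public. The point is only that a model fit to skewed historical outcomes encodes the skew:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Synthetic "historical" data in which résumés containing a gendered token
# were systematically rejected. 1 = hired, 0 = rejected.
resumes = [
    "chess club captain led engineering team",
    "women's chess club captain led engineering team",
    "built software shipped three products",
    "women's coding society member built software shipped three products",
] * 25
labels = [1, 0, 1, 0] * 25

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

# The model assigns a strongly negative weight to the gendered token:
# it has learned the historical bias, not anything about merit.
idx = vectorizer.vocabulary_["women"]
print("learned weight for 'women':", model.coef_[0][idx])
```

The negative weight on the gendered token is not intelligence; it is the historical bias, faithfully compressed.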

Consequences:

  1. Social and cultural harm: Unchecked generative AI models can spread harmful or offensive content, causing social and cultural harm. This can include perpetuating harmful stereotypes, spreading false information, or contributing to the normalization of unethical behavior.
  2. Legal liability: If a business or individual uses generative AI models to generate content that harms others, they may be held legally liable for the consequences. This could mean lawsuits, fines, or reputational damage.
  3. Loss of trust: Unchecked generative AI models can erode trust in the technology and in the institutions that use it. This loss of trust could have far-reaching consequences for the adoption and acceptance of new technologies and for public perception of those who deploy them.

Conclusion:

The ChatGPT series by OpenAI is not the only family of AI-driven large language models.

Here are some others:

  1. T5: T5 (Text-to-Text Transfer Transformer) is a natural language processing model developed by Google that is similar to GPT in many ways. It is a transformer-based model that is pre-trained on large amounts of data and fine-tuned for specific tasks, with every task framed as text-to-text. T5 is known for producing high-quality text at comparatively low computational cost (see the short usage example after this list).
  2. XLNet: XLNet is a transformer-based language model developed by researchers at Carnegie Mellon University and Google. It is similar to GPT in many ways, but instead of always predicting the next word left to right, it is trained with permutation language modeling, learning to predict words conditioned on all possible orderings of the rest of the sentence.
  3. BERT: BERT (Bidirectional Encoder Representations from Transformers) is another natural language processing model developed by Google. It is a transformer-based model that is pre-trained on large amounts of data and fine-tuned for specific language tasks. BERT reads text bidirectionally, which makes it strong at understanding the context of words in a sentence; it is used mainly for understanding tasks such as classification and question answering rather than for generating text.
  4. RoBERTa: RoBERTa (Robustly Optimized BERT Approach) is a natural language processing model developed by Facebook. It keeps BERT's architecture but improves the pretraining recipe, training longer, on more data, and with dynamic masking, which yields stronger results on many language-understanding benchmarks.
  5. GShard: GShard is a technique developed by Google for scaling transformer models to enormous sizes. It combines sparsely activated Mixture-of-Experts layers with automatic sharding of model weights across many accelerators so the model can be trained in parallel. GShard is known for its scalability, enabling models with hundreds of billions of parameters.
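As a small usage example for the first model in this list (the checkpoint and prompt are illustrative choices, not the only ones): T5 frames every task as text-to-text, so the task itself is named inside the input string.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# t5-small: the smallest public T5 checkpoint, fine for a quick demo.
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# The task ("translate English to German") is part of the input text.
inputs = tokenizer("translate English to German: The weather is nice.",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```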