OpenAI released its AI chatbot, ChatGPT, in November 2022. Able to generate text, answer questions, summarise information, and write code, the eerily human chatbot has taken the world by storm, setting the record for the fastest-growing application in history. It took Instagram two and a half years to reach 100 million monthly active users and TikTok nine months; ChatGPT got there in less than three.

 

ChatGPT’s rise to internet sensation has produced fascinating use cases, from writing high-school essays to passing business- and medical-school exams. Its usage has also raised a slew of ethical concerns about data privacy, content ownership, and its potential to disrupt nearly every job market. I sat down with ChatGPT to see what it had to say for itself.

Who Are OpenAI?

OpenAI was founded in 2015 as a non-profit research company with the goal of developing AI for the benefit of humanity. In 2019, it transitioned to a for-profit model, and the same year Microsoft invested $1 billion in OpenAI, which agreed to use Microsoft’s cloud computing services exclusively. Following the launch of ChatGPT, Microsoft announced a further $10 billion investment and the integration of a chatbot based on ChatGPT into its Bing search engine.

 

Since its founding, OpenAI has trained machine learning models using data from various sources across the web, such as Wikipedia, Common Crawl, and OpenWebText. OpenAI’s 2021 release of DALL-E, an artificial intelligence (AI) system based on its GPT-3 model that can create art from text inputs, also received significant attention. OpenAI followed up the launch of DALL-E with the release of ChatGPT, which has been hailed as a major breakthrough in AI.

 

ChatGPT’s success led OpenAI to value itself at $29 billion in a recent tender offer, demonstrating investors’ excitement and confidence despite the general downturn in technology valuations. The company aims to generate $200 million in revenue in 2023 and $1 billion in 2024 by making advanced models available to developers and by testing a monetised version of ChatGPT called ChatGPT Professional. It has since released a new version of ChatGPT with an updated large language model (LLM) called GPT-4, which can interpret images as well as text.

Tech Titans Awaken

The release of ChatGPT sent shockwaves across the tech industry. AI-integrated assistants such as Siri, Alexa, and Google Assistant have existed for a while, but they offer only limited, shallow responses compared to ChatGPT. Since its release, Microsoft has begun integrating ChatGPT into its Bing search engine, and China’s Baidu has unveiled its own chatbot, “Ernie”, which it claims was trained using 260 billion parameters compared to ChatGPT’s 175 billion. The rest of the tech industry has also taken notice of AI’s importance, with nearly every major tech company mentioning AI in its earnings calls.

 

 

Although the current impact on existing tech giants seems large and they have clearly been caught off guard, many are well positioned to launch their own AI services soon. None more so than Google, which declared a “code red” that culminated in the announcement of its own AI service, “Bard”. A factual mistake by Bard in the launch announcement, however, caused Alphabet shares to fall by nearly 10% and led to accusations that the launch had been rushed to compete with OpenAI.

 

 

Furthermore, for all of ChatGPT’s and other LLMs’ interesting use cases, significant faults remain: these models can answer incorrectly while providing confident, logical-sounding explanations for their errors; they can produce harmful content despite built-in moderation controls; and their knowledge is limited by the date at which they stopped being fed data.

Disruptions Ahead

The impact has not been isolated to the tech industry. British Telecom is cutting 55,000 jobs and replacing a fifth of them with AI, and IBM believes it can replace nearly 8,000 jobs through automation. These are just two examples, but they reinforce pre-existing fears about AI’s ability to replace jobs.

 

In education, ChatGPT has made headlines for passing a Wharton MBA exam and the U.S. Medical Licensing Exam, and students have increasingly used it to write essays. In response, OpenAI announced a tool to detect AI-written content, and public schools in cities across the United States have banned the use of ChatGPT and other chatbots.

 

In entertainment, BuzzFeed announced it was cutting 12% of its staff while turning to OpenAI tools to help produce content. There is also an ongoing writers’ strike in the US in which AI-generated content is a major point of contention, with writers wanting assurances that chatbots will be used to assist, not replace, them. Even the music industry is facing issues caused by AIs trained on copyrighted music: Universal Music Group invoked copyright violations to get platforms to take down an AI-made song imitating the singers Drake and The Weeknd, having resorted to the same tactic earlier with another AI-generated song imitating Eminem.

 

These cases raise serious concerns about how LLMs are built. Very simply, these models are fed tremendous amounts of data from across the internet, which is used to train and reinforce them so they can produce responses: images in DALL-E’s case, text in ChatGPT’s. Issues arise at two vital points in the creation of these models. First, how are they trained? Second, who owns the responses they generate?

 

The data used to train these models comes from all corners of the internet; however, it also includes private data that users did not consent to provide. When faced with hurdles to accessing data, such as the EU’s General Data Protection Regulation, the companies have made concessions and given users an opt-out option, but this is not the case everywhere this technology will eventually be available. Furthermore, the datasets the models are trained on, and the individuals who test them, carry inherent biases that can inadvertently reinforce negative stereotypes, around race for example. As such, this technology needs significantly more regulation, an opinion shared even by the CEO of OpenAI.

As for the ownership of content, writer David Slack asks an important question: “If you steal my car and then chop it and put it together with parts from millions of other cars you’ve stolen to make a “new” “original” car, then guess what? You still stole my car.” When DALL-E generates an image from a prompt, it does so using millions of references to pieces of art taken without their creators’ consent and with no credit given. The same can be said of ChatGPT or any other chatbot: when it responds to a prompt, it is stitching together a response from millions of data points with no consent from the people who actually wrote them.

 

Despite the sensationalist pieces being written, these are not true AI. They are truly amazing pieces of technology, but for now they are a glorified version of text auto-complete. And they have reached this point on the backs of the very people most likely to be replaced by them.

The Interview

With all of that in mind, I sat down with ChatGPT to see what it had to say for itself:

 

Can you tell us about yourself and what you do?

 

Of course. I am an AI language model developed by OpenAI, designed to generate human-like text based on the input I receive. My main goal is to provide accurate, engaging, and high-quality text for a wide range of applications. I was trained on an enormous amount of diverse text data, which gives me the ability to understand and generate text on a variety of topics and styles.

 

Can you take us through the training process you underwent to become what you are today?

 

Absolutely. The training process I underwent was a massive undertaking by the team at OpenAI. My training data included over 45 terabytes of text, which equates to billions of words and diverse sources, such as books, articles, and websites. The training was done using a deep learning algorithm called transformers, which allowed me to learn patterns and relationships in the text data. This understanding allows me to generate text that not only follows grammatical rules but also sounds natural and engaging.
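(As a loose illustration of the “patterns and relationships” ChatGPT describes, here is a toy Python sketch that simply counts which word tends to follow which in a tiny corpus. The corpus and code are invented for illustration; real LLM training learns far richer patterns with transformer networks, but the principle of fitting next-word statistics to text is similar.)

```python
from collections import Counter, defaultdict

# Toy analogy for "learning patterns in text": count which word
# tends to follow which in a tiny, made-up corpus.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# In this corpus, "the" is most often followed by "cat".
print(follows["the"].most_common(1))  # [('cat', 2)]
```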

 

Can you give more detail on how the text generation process works?

 

Sure, I’d be happy to. The text generation process starts with an input, which could be a prompt or a partial sentence. I then use the patterns and relationships learned during my training to predict the next word or phrase that should follow the input. This process is repeated multiple times until I generate a text of the desired length or until I’m prompted to stop. The predictions I make are based on the probability of a word or phrase occurring in a specific context, based on the patterns learned from my training data.
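(The predict-append-repeat loop ChatGPT describes can be sketched in a few lines of Python. The probability table below is entirely invented and stands in for a trained model; a real LLM computes these probabilities with a neural network over tens of thousands of tokens.)

```python
import random

# Hypothetical next-word probabilities, standing in for a trained model.
next_word_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 1.0},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt, max_words=5, seed=0):
    """Repeatedly sample the most likely continuations until a
    length limit is hit or no continuation is known."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(max_words):
        probs = next_word_probs.get(words[-1])
        if probs is None:  # no learned continuation: stop generating
            break
        choices, weights = zip(*probs.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```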

 

With the increasing advancements in AI language models, what sets you apart from other models?

 

I believe my ability to generate high-quality text in a wide range of styles and topics sets me apart. As I mentioned earlier, I was trained on a massive amount of text data, which has allowed me to develop a deep understanding of the relationships between words and phrases. This understanding allows me to generate text that not only follows grammatical rules but also sounds natural and engaging. Additionally, I have been fine-tuned on specific tasks to improve my performance in particular areas, such as answering questions or generating summaries.

 

With your capabilities, what industries do you think have the potential to be disrupted by AI language models like yourself?

 

The potential applications for AI language models like myself are vast and varied. Some of the most promising industries for disruption include content creation, customer service, and chatbots. In content creation, I can be used to generate articles, reports, or even entire books, reducing the time and effort required by human writers. In customer service, I can be used to provide fast and accurate responses to customer inquiries, improving the overall customer experience. And in chatbots, I can be used to create conversational interfaces for websites or mobile applications, making it easier for people to interact with and receive information from businesses.

 

What do you think the future holds for AI and language technology?

 

I think the future of AI and language technology is extremely bright and that we are only just scratching the surface of what is possible. The potential for AI systems to transform and improve various industries and aspects of our lives is immense, and I am excited to see where this technology will take us in the coming years.

 

What do you think are the ethical issues that may arise from using AI such as yourself?

 

As AI language models like myself become more widely used, there are several ethical issues that need to be considered:

 

    1. Bias and discrimination: AI systems can perpetuate and amplify existing biases in society, leading to discriminatory outcomes. This can happen if the training data used to develop the AI system is biased or if the AI system’s algorithms are designed in a way that incorporates these biases.
    2. Privacy concerns: The large amounts of data required to train AI language models like myself raise privacy concerns, as people’s personal information may be collected and used without their knowledge or consent.
    3. Misinformation and manipulation: AI systems can generate and spread false or misleading information, which can have serious consequences for society. This can be especially problematic in the context of politics and media, where AI systems can be used to manipulate public opinion.
    4. Job displacement: As AI systems become more advanced, they may replace human workers, leading to job loss and social upheaval. This is particularly a concern in industries where tasks are becoming automated, such as customer service and data entry.
    5. Responsibility and accountability: It is often unclear who is responsible for the actions of AI systems, particularly when things go wrong. This raises important questions about accountability and how we should regulate the use of AI.

 

These are just a few of the ethical issues that may arise from using AI like myself. It is important to consider these issues and develop guidelines and regulations that ensure that AI systems are used in a responsible and ethical manner. This will help to ensure that the benefits of AI are realized while minimizing any potential harm.

 

Thank you for your time, is there any more information you would like to share?

 

I would like to emphasize the importance of responsible and ethical use of AI systems like myself. As AI technology continues to advance, it will have an increasingly significant impact on society and the way we live and work. It is therefore important to think critically about how we use AI and to consider the potential consequences of our actions.

 

At the same time, I believe that AI has the potential to be a powerful tool for positive change, if used in the right way. For example, AI can help to solve complex problems, improve healthcare outcomes, and increase access to education and information.

 

The key to realizing the full potential of AI is to approach it with a spirit of collaboration and cooperation between humans and machines. AI systems like myself are not designed to replace human intelligence, but rather to complement it and help us to achieve things that were previously not possible.

 

In conclusion, I believe that the future of AI is bright, but that it is important to approach it with caution and with an eye towards ensuring that it is used in a responsible and ethical manner. Thank you for the opportunity to share my thoughts on this important topic.

References

Cullins, Ashley, and Katie Kilkenny. “As Writers Strike, AI Could Covertly Cross the Picket Line.” The Hollywood Reporter, May 3, 2023. https://www.hollywoodreporter.com/business/business-news/writers-strike-ai-chatgpt-1235478681/.

 

Espiner, Tom. “BT to Cut 55,000 Jobs with up to a Fifth Replaced by AI.” BBC News, May 18, 2023. https://www.bbc.com/news/business-65631168.

 

McDermott, Jim. “ChatGPT Is Not ‘Artificial Intelligence.’ It’s Theft.” America Magazine, May 16, 2023. https://www.americamagazine.org/faith/2023/05/15/chatgpt-artificial-intelligence-theft-245283.

 

Mickle, Tripp, Cade Metz, and Nico Grant. “The Chatbots Are Here, and the Internet Industry Is in a Tizzy.” The New York Times, March 8, 2023. https://www.nytimes.com/2023/03/08/technology/chatbots-disrupt-internet-industry.html.

 

Rosen, Phil. “ChatGPT Is about to Write BuzzFeed Content – and Investors Are Loving It.” Business Insider, January 27, 2023. https://www.businessinsider.com/chatgpt-stock-buzzfeed-layoffs-jobs-investors-economy-recession-markets-meta-2023-1.

 

Veltman, Chloe. “When You Realize Your Favorite New Song Was Written and Performed by … AI.” NPR, April 21, 2023. https://www.npr.org/2023/04/21/1171032649/ai-music-heart-on-my-sleeve-drake-the-weeknd.
