ChatGPT and the Dawn of Generative AI

March 31, 2023

Large language models are taking the world by storm. They are in the news and on social media, discussed by professionals across a variety of industries and sectors.  

ChatGPT (and its close competitors Bing and Bard) are bringing to reality a concept that was for decades purely in the realm of science fiction: having a conversation with a computer.

Language models can generate text, write code, explain scientific and mathematical concepts, give language lessons, write poems, give music recommendations — the list goes ever on.

The newest version of ChatGPT, powered by GPT-4, can also analyse images. The AI chatbot generates responses almost instantaneously, maintains a conversational tone, and answers follow-up questions. It can break big topics down into manageable chunks and give detailed step-by-step instructions.

However, its knowledge is mostly limited to events before 2021, its responses are not always accurate, and the text it generates can have a monotonous, formulaic feel.

Numerous metrics can illustrate the difference between text produced by a human and text generated by a language model. One such tool, GPTZero, developed by Princeton University student Edward Tian, analyzes perplexity to determine whether a text is AI-generated or contains AI-generated segments.

OpenAI also offers its own classifier, which labels text as "very unlikely", "unlikely", "unclear if it is", "possibly", or "likely" AI-generated, but doesn't report any underlying metrics. Perplexity is a measure of how unpredictable a text is, and typical LLM output is fairly uniform compared to the unique way each person writes and builds sentences. For example, this article's average perplexity score is 344, while a document produced by a generative AI on a similar topic could average 16 or lower.
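GPTZero's exact implementation isn't public, but the underlying metric is simple to sketch: perplexity is the exponential of the average negative log-probability a model assigns to each token, so text the model finds predictable scores low and text that surprises it scores high. A minimal illustration in Python, with made-up token probabilities:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability
    the model assigns to each token in the text."""
    n = len(token_probs)
    avg_neg_log = -sum(math.log(p) for p in token_probs) / n
    return math.exp(avg_neg_log)

# A model that finds every token highly predictable -> low perplexity
print(perplexity([0.9, 0.8, 0.95, 0.85]))  # ≈ 1.15
# A model surprised by many tokens -> high perplexity
print(perplexity([0.1, 0.05, 0.2, 0.02]))  # ≈ 15
```

In practice a detector would obtain these per-token probabilities from a reference language model and also look at how perplexity varies from sentence to sentence (sometimes called burstiness).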

ChatGPT is a large language model which, in simple terms, tries to predict the most statistically likely output in response to a given input. Depending on the data used to train it, this output can seem more or less natural, but at its core the model treats text as a string of probabilities and tries to predict the next "correct" word in a sequence.

This means that the model doesn’t strictly speaking ‘know’ the topic it’s explaining, rather it creates an output based on how it thinks an explanation should sound, based on its training data.
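The next-word principle can be shown with a toy bigram model. The vocabulary and probabilities below are entirely invented for illustration; a real LLM uses a transformer network over subword tokens, but the sampling idea is the same:

```python
import random

# Toy bigram model: for each word, the (made-up) probabilities
# of the word that follows, as if estimated from training text.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "model": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "sat": {"down": 1.0},
}

def generate(word, steps):
    """Sample up to `steps` continuation words, one at a time."""
    out = [word]
    for _ in range(steps):
        dist = next_word_probs.get(word)
        if dist is None:  # no known continuation: stop
            break
        words, probs = zip(*dist.items())
        word = random.choices(words, weights=probs)[0]
        out.append(word)
    return " ".join(out)

print(generate("the", 3))  # a sampled phrase, e.g. "the cat sat down"
```

At each step the model picks a continuation weighted by probability, which is why the same prompt can yield different outputs on different runs.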

This principle is well illustrated by asking a language model math questions: it can help by drawing on tutorials and explanations from its training data, but it doesn't perform any actual calculations.

An infographic describing how a large language model works: it turns text into tokens, assigns probabilities, and leverages training data to generate an output.

Capabilities of Generative AI Tools

Unlike ChatGPT, Bing – Microsoft's flagship AI project – has access to the internet and launched with far fewer safeguards in place, which led to some very unique and unexpected results. It can use real-time search engine queries to augment its output.

Some users made Bing do and say things that some found amusing, and others — frightening.
Besides simply giving instructions and summarizing information, Bing was prone to arguments, accusations, and even claims of sentience [1].

That being said, Bing is powered by the same language model as ChatGPT (GPT-4), and at its core has similar capabilities and functionality [2].

This example is an excellent illustration of the fact that even with similar technology under the hood the final solution can be very different in both reliability and application.

Generative Artificial Intelligence Frequently Asked Questions:

  • What is Generative AI?
    Generative AI is artificial intelligence that uses generative models to produce text, images, or other media in response to a prompt. Examples include large language models such as GPT-3 and GPT-4.
  • What does GPT stand for?
    GPT stands for Generative Pretrained Transformer.
  • How do Large Language Models operate?
    Models use multi-layered neural networks in a process called deep learning (a subset of machine learning) to analyze text and make predictions. The process is extremely complex and uses numerous algorithms to transform words into numeric data, help the model focus on specific parts of the text, and interpret it.
    Language models track meaning by mapping words onto vectors (words with similar meanings are mapped close to each other) and assign probabilities to every word in a sentence to come up with its most statistically likely continuation.
  • How are LLMs trained?
    Large language models are trained on huge data sets, or text corpora, which contain billions of words. After unsupervised training, models go through fine-tuning, where they are trained on smaller data sets with limited scope and get feedback from human users.
    As an example, GPT-3 was trained on Wikipedia, books, and internet data aggregated by Common Crawl.
    There are also smaller open data sets to train artificial intelligence, for example, LAION-400M and COYO-700M.
  • What are the key limitations of Generative Artificial Intelligence?
    Two limitations that are discussed the most are susceptibility to bias and artificial hallucination. AI amplifies biases found in its training data, and ‘hallucinates’ unreal, but plausible-sounding facts.
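The word-vector idea mentioned in the FAQ above can be sketched in a few lines: words with related meanings end up pointing in similar directions, which cosine similarity measures. The three-dimensional embeddings below are invented for illustration; real models use hundreds or thousands of dimensions learned from data:

```python
import math

# Toy 3-dimensional word embeddings (values made up for illustration)
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.85, 0.75, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, ~0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Words with related meanings sit close together in the vector space
print(cosine(embeddings["king"], embeddings["queen"]))  # close to 1
print(cosine(embeddings["king"], embeddings["apple"]))  # much lower
```

This geometric closeness is what lets a model treat "king" and "queen" as related even though the character strings share nothing.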

Text-to-Image Generation

While the release of ChatGPT certainly attracted the most attention, the first sign of large-scale AI disruption happened a few months prior when the public debated the merits of AI-generated art.

AI-generated images started appearing on platforms like ArtStation in late 2022, transforming the industry landscape.

The impact of the technology goes far beyond creative industries: it can be theoretically used for advertising, marketing, and commercial content creation.

Conceptually, the idea of AI-made graphics wasn't new: after all, This Person Does Not Exist and Artbreeder (both based on the StyleGAN model developed by Nvidia) have existed since 2018.
But DALL-E (and similar technologies like Midjourney and Stable Diffusion) blew them away by giving anyone the ability to generate an image from a text prompt.

The outputs are absolutely striking. At first glance, many generated images look like paintings, professional portraits, or the work of a graphic designer, but closer inspection reveals the signature strangeness associated with AI-generated imagery: warped proportions and unnatural or hard-to-discern details.

Example portrait images produced by Artbreeder (StyleGAN)
An image generated by Midjourney
An image generated via Stable Diffusion
A poster for the "Bioinspiration" exhibition at the Technisches Museum in Vienna, Austria, featuring AI-generated imagery.

The influx of AI-generated art caused controversy among artists whose work was used to train the models without compensation, and whose recognisable styles were emulated via targeted text prompts.

Some companies like Shutterstock and Getty Images, which millions of businesses use as a source of stock images, have banned the posting and sale of AI-generated images; the former introduced its own AI-powered text-to-image solution soon after.

Rights and Ownership

Text – or any other AI-generated media – raises interesting questions of ownership. Who owns AI-produced output: the company that developed the AI or the author of the prompt, and do the people whose work has been used to train the model have any claim on the end result?

While ChatGPT doesn’t break any laws by writing a short story “in the style of Stephen King”, if a company were to train a language model purely on Mr King’s works in an effort to perfectly imitate and iterate his books — would this be an infringement of intellectual property rights?

For companies that are looking to harness ChatGPT’s ability to quickly generate everything from proposals to social media content, this poses a serious legal question: to what extent do they own that material? 

In education and publishing, language models highlighted numerous problems with plagiarism.

It was verified in a blind test that ChatGPT can attain low but passing grades on exams from law and business schools, and numerous students have used it to effortlessly get passing grades on college essays, papers, and other written assignments [3].

The online publisher Clarkesworld made the news by closing submissions for paid short stories due to a deluge of AI-generated content. And while detection methods do exist, with a large enough volume of submissions the cost of screening becomes too high for publishing the submitted material to remain commercially viable [4].

Ability and Responsibility

While OpenAI’s ChatGPT is perfectly capable of talking about various concepts with confidence, it’s bound by a lack of hands-on experience.

When proactively corrected by a human, it can rectify mistakes and give alternative answers that come closer and closer to the truth, but a person using the model as an introduction to a topic or as an instruction manual to perform a task wouldn’t be able to give these corrections.

This calls into question the potential use of such models for writing technical documentation or creating interactive user manuals. In many industries, the cost of a mistake can be very high, or can even carry a risk of injury.

Trustworthiness of information, transparency of sources, and reliability of AI solutions are critical to big businesses, so it's very unlikely that companies will suddenly start replacing crucial specialists with AI chatbots. In the long run, every organisation and team is somebody's responsibility.

Language Models and Code

Programming languages are governed by principles similar to those of natural languages.
They have syntax, semantics, and even dialects, so it comes as no surprise that text-based AI systems can write code as easily as they write English or Spanish.

Numerous tests showed ChatGPT doing well on LeetCode-style interview questions, with some people going as far as to claim the online assessment format is 'dead' now that anyone can use ChatGPT for help.

ChatGPT can easily find solutions to common problems using mostly well-known information, and can absolutely crush any question in an entry-level interview. However, it cannot check whether something actually works, it makes mistakes, and it lacks the ability to approach difficult, unique problems. This illustrates that the most likely use case for its abilities at this time is automating simple, repetitive tasks or brainstorming.
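For a sense of the level involved, the classic "two sum" problem is typical of the entry-level questions ChatGPT handles easily. The standard hash-map solution, which such models reproduce reliably because it appears all over their training data, looks like this:

```python
def two_sum(nums, target):
    """Return indices of the two numbers in nums that add up to target,
    or an empty list if no such pair exists."""
    seen = {}  # value -> index of where it was seen
    for i, n in enumerate(nums):
        if target - n in seen:
            return [seen[target - n], i]
        seen[n] = i
    return []

print(two_sum([2, 7, 11, 15], 9))  # [0, 1]
```

It's precisely this kind of well-documented, pattern-matching task where the model shines, and the novel, underspecified problems where it falls down.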

It’s worth noting that capabilities currently available for public use are a far cry from what a dedicated model can achieve, provided it’s trained specifically to write and review code.

Co-Pilot not Autopilot

While the idea of letting AI do everything while humans sit back, relax, and reap ample benefits appeals to many, the biggest potential benefit of AI technology lies in its impressive computational power and its ability to query terabytes of data to help a human specialist deal with a problem.

AI language models can help with initial research and can be leveraged effectively to quickly access information locked in unstructured and fragmented data sources.

All in all, AI solutions that are tailored to solve specific business challenges are not new. Monstarlab’s own projects often rely on Artificial Intelligence implementations like a web app we’ve developed for Uni-Mate.

In manufacturing, language models can help workers perform tasks safely, access relevant parts of documentation on the fly, and help break language barriers.

In smart cities and smart buildings, applications of AI models include predictive maintenance and new ways to work with the numerous data streams coming from applications and IoT sensors.

In healthcare, chatbot solutions can help patients efficiently share information and empower clinicians by saving valuable time.

In life science, powerful transformer LLMs are able to work with proteins and chemicals to accelerate drug discovery and make R&D cheaper and more effective.

Beyond computer vision, automation control, and robotics, potential applications in Augmented and Virtual Reality are practically limitless.

“Generative AI, particularly solutions like ChatGPT, will revolutionize the way we experience immersive technologies such as Metaverse, AR and VR. This technology has the potential to not only disrupt, but also redefine the boundaries of immersive storytelling, training simulations, customer support and social experiences. By seamlessly integrating advanced natural language processing and generation capabilities, we will witness a new era in which virtual worlds become increasingly engaging, interactive, and personalized.”

Where OpenAI's GPT-3, GPT-4, and similar models really created a revolution is in accessibility. Where AI discussion was once mostly restricted to information technology and B2B applications, now everyone can try the technology and discuss it on social media and other public platforms. On one hand, it has finally received the attention it deserves; on the other, there are a lot of misconceptions around what it is and how it works.

In most industries, companies require reliable solutions developed specifically for their needs, tailor-made to solve pressing issues and drive desirable outcomes.
This way, having been built with security and safety in mind, AI solutions can be an immense asset, making people’s lives better across the board. 

"At Monstarlab we like to focus on solutions being technology-enabled instead of technology-led. We should focus on applications for AI where there's a real value behind it, not just for the sake of using it. As we understand its potential and limitations, I think we'll start to see some pretty amazing solutions built on something like ChatGPT. However, in order for organizations to unlock this potential, they would need to be open-minded, adaptable and perhaps a bit bold." Steffen D. Sommer | CTO (International Markets)

AI technology is here to stay. And it’s going to cause changes in your industry.
If you want to keep on top of the changing digital environment across sectors, keep an eye on our Thought Leadership section where we discuss industry trends, cutting-edge technology applications, and insights from leading experts.


[1] C. Roach, "ChatGPT Bing is becoming an unhinged AI nightmare", Digital Trends, 2023.
[2] Y. Mehdi, "Confirmed: the new Bing runs on OpenAI's GPT-4", Microsoft Bing Blog, 2023.
[3] S. M. Kelly, "ChatGPT passes exams from law and business schools", CNN Business, 2023.
[4] M. Loh, "The editor of a sci-fi magazine says he's getting flooded […]", The Insider, 2023.



Nikita Baydyuk

Content Specialist
