ChatGPT, Bard and the ongoing AI wars

Why exactly did Google need to respond to OpenAI and what does this mean for the industry?

Rhys Kentish
4 min read · Feb 9, 2023
Photo by DeepMind on Unsplash

We are in the midst of the industrial revolution for Artificial Intelligence. The proliferation of AI services for consumers is something to behold. Google, in their recent announcement for Bard, makes reference to a study that postulates "the scale of the largest AI computations is doubling every six months, far outpacing Moore's Law". This level of progress is staggering and looks to only increase. In this blog we're going to take a look at how some of these tools work, why Google needs to respond to OpenAI and Microsoft, and what this means for the sector.

What are Large Language Models, and how do they work?

Large Language Models (LLMs) are models trained on billions of pieces of text in a particular language that attempt to predict the next word or sentence. ChatGPT is an example of an LLM. ChatGPT was fine-tuned using human feedback: you have the model generate multiple responses to a prompt, show them to humans, and the humans select the response they like most. Instead of using this feedback directly, it's used to train a second model that tries to predict which response a human would prefer; this reward model is then used as the reward function for training the LLM. The reward model allows training to scale because it imitates human judgement, so you don't need thousands of humans rating every single output. The process is repeated — humans rank the new responses, that information is used to retrain the reward model, and so on — until the LLM is sufficiently trained at the task you require.
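To make that loop concrete, here is a toy sketch of the reward-model half of the process. This is not OpenAI's actual pipeline — the "reward model" here is just a linear score over word counts, and the preference data is made up — but it shows the key mechanic: fit a model so that human-preferred responses score higher, then use that model to rank new responses without a human in the loop.

```python
import math

# Toy "reward model": a linear score over bag-of-words features.
# All function names, data, and hyperparameters here are hypothetical.

def features(response):
    """Bag-of-words feature vector as a word -> count dict."""
    counts = {}
    for word in response.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def score(weights, response):
    """Reward model's scalar score for a candidate response."""
    return sum(weights.get(w, 0.0) * c for w, c in features(response).items())

def train_reward_model(preferences, lr=0.1, epochs=200):
    """Fit weights so preferred responses score higher (Bradley-Terry style).

    `preferences` is a list of (chosen, rejected) response pairs,
    standing in for humans picking the response they like most."""
    weights = {}
    for _ in range(epochs):
        for chosen, rejected in preferences:
            margin = score(weights, chosen) - score(weights, rejected)
            # Gradient step on -log(sigmoid(margin)): push the margin up.
            grad = 1.0 / (1.0 + math.exp(margin))
            for w, c in features(chosen).items():
                weights[w] = weights.get(w, 0.0) + lr * grad * c
            for w, c in features(rejected).items():
                weights[w] = weights.get(w, 0.0) - lr * grad * c
    return weights

# Hypothetical dataset: humans preferred direct, helpful answers.
human_prefs = [
    ("here is the answer you asked for", "no idea sorry"),
    ("the answer is 42 hope that helps", "figure it out yourself"),
]
reward = train_reward_model(human_prefs)

# The trained reward model now ranks *new* candidate responses with no
# human involved — this imitation of human judgement is what lets
# RLHF training scale.
candidates = ["here is the answer", "figure it out"]
best = max(candidates, key=lambda r: score(reward, r))
print(best)  # → "here is the answer"
```

In the real system the reward model is itself a large neural network and the LLM is then optimised against it with reinforcement learning, but the scaling argument is the same: one round of human rankings buys you millions of automated reward judgements.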

There's a flaw in this training system: the LLM is being trained on which responses humans prefer, not on factual accuracy. ChatGPT is perhaps more likely to have a "stab" at predicting something it doesn't know than to say it doesn't know, because a confident answer scores more highly with the reward model (and with humans). I'm sure you've seen examples where ChatGPT made up studies or got an obvious fact wrong, sometimes quite humorously — this is the flaw in action. Could a solution be to give the model real-time access to information, Wikipedia for example (I can hear teachers saying "never quote Wikipedia")? Cleverer people than me are addressing issues like this.
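The "real-time access to information" idea is often called retrieval augmentation. A minimal sketch of the shape of it, assuming a made-up two-entry corpus and a deliberately crude keyword-overlap retriever: look up a relevant passage first, then put it in the prompt so the model can answer from the passage instead of guessing.

```python
# Hypothetical mini-corpus standing in for Wikipedia or a search index.
CORPUS = {
    "moore's law": "Moore's Law: transistor counts double roughly every two years.",
    "bard": "Bard is Google's conversational AI service, announced in 2023.",
}

def retrieve(question, corpus):
    """Return the passage whose key shares the most words with the question.

    Real systems use vector search; word overlap keeps the sketch simple."""
    q_words = set(question.lower().split())
    def overlap(item):
        key, _passage = item
        return len(q_words & set(key.split()))
    _best_key, passage = max(corpus.items(), key=overlap)
    return passage

def build_prompt(question, corpus):
    """Prepend retrieved context so the model answers from facts, not vibes."""
    context = retrieve(question, corpus)
    return f"Context: {context}\nQuestion: {question}\nAnswer using only the context."

print(build_prompt("what is bard", CORPUS))
```

The prompt that reaches the model now carries the retrieved fact alongside the question, which is exactly the kind of grounding that could stop a people-pleasing model from inventing an answer.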

Why did Google need to respond to OpenAI?

Before Microsoft's multibillion-dollar investment in OpenAI (a partnership, not an acquisition), Bing (Microsoft's search engine) represented just 9% of all global web searches. Now OpenAI (and Microsoft by proxy) has a clear first-mover advantage in this market, and the investment set off alarms at Google. This is maybe the first time there has truly been a threat to Google's main revenue source: search. The ability to drill down into a response and clarify exactly what you mean, instead of trawling through page upon page of Google results, could see the end of 'search' as we know it. Microsoft has already announced that it will incorporate ChatGPT into Bing. It's legitimate for Google to be concerned about OpenAI and other industry disruptors (I'd be surprised if the other industry megalodons haven't been investing heavily in AI tools), as they risk losing the near-monopoly they have on search. Google went on to announce Bard, a direct competitor to ChatGPT, and has also recently invested nearly half a billion dollars in Anthropic, another competitor to ChatGPT (incidentally founded by former employees of OpenAI).

Will Bard be better than ChatGPT?

Other than the questionable choice of name, we don't yet know what Bard will be like. In the announcement blog post, Google said it will initially launch a smaller model, but gave no indication of that model's size. It's important to note here that a larger model isn't necessarily a "better" one; as mentioned earlier, it could just be better at people-pleasing. Google has been publishing immense progress in the artificial intelligence sector, however — MusicLM is a prime example. It seems Google has made huge strides in research but has not mastered commercialising it as OpenAI has. The competition will certainly drive innovation.

What does all this mean for the computer programming industry?

Google DeepMind's mission is "Solving intelligence to advance science and benefit humanity" — in other words, solve intelligence, then solve everything else. I think in the immediate future (the next two years) these AI tools will be just that: tools to help engineers build. Beyond that, it's difficult to say how advanced they'll become and whether they will actually replace software engineers. Tools like ChatGPT and GitHub Copilot can already generate pretty robust code (although this again depends on the training data — fed buggy code, the models will generate their own buggy code). It's not hard to imagine an AI generating a whole website or mobile app, complete with the Xcode build files. I wonder how far away from that we actually are.

If you want to go deeper into this subject, here are a few videos I recommend:

Originally published at https://www.brightec.co.uk.

