AI writers and sites have been talking about Google’s Gemini AI model for quite a while, at least since ChatGPT started to take the world by storm right around a year ago. Google’s chatbot, Bard, has mostly been described as a rush job to get something out there after ChatGPT started grabbing attention, and then users, at a record-breaking pace. I don’t know if that’s true about Bard - but I have seen Bard improve over recent months, and I’ve definitely noticed the anticipation and expectations around Gemini ramping up as well.
So it was pretty exciting to see Google’s blog post announcing it today. They came out swinging, positioning Gemini as a rival - or superior - to OpenAI’s latest, with both words and benchmarks:
Gemini Ultra’s performance exceeds current state-of-the-art results on 30 of the 32 widely-used academic benchmarks used in large language model (LLM) research and development.
You’ll notice that refers to Gemini Ultra. That’s because Gemini 1.0 is optimized for three different sizes:
Gemini Ultra — our largest and most capable model for highly complex tasks.
Gemini Pro — our best model for scaling across a wide range of tasks.
Gemini Nano — our most efficient model for on-device tasks.
It was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image and video. (emphasis mine)
I think this may be another dig at OpenAI’s GPT models: building multimodality in from the ground up feels different to what feels like the bolted-on addition of those capabilities. I may be wrong about that, but it’s my perception given the way multimodal elements have been added to OpenAI’s tools over time.
It was good to see that Google mentioned that Gemini and their other tools and foundation models are “guided by our AI Principles.” I scanned through their AI Principles page and I think there’s a lot to like about it, including:
A line very near the top of the page that includes the words: “… we recognize that advanced technologies can raise important challenges that must be addressed clearly, thoughtfully, and affirmatively. These AI Principles describe our commitment to developing technology responsibly …”
Their 7 objectives for AI applications - which include being socially beneficial, accountable to people, and incorporating privacy design principles. Of course Google does not exactly have an unblemished record in those last two areas - so I’m placing my hopes on what they do with this unprecedented, world-changing technology, rather than on their past track record.
Most of all, I like this “AI applications we will not pursue” section:
AI applications we will not pursue
In addition to the above objectives, we will not design or deploy AI in the following application areas:
1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
3. Technologies that gather or use information for surveillance violating internationally accepted norms.
4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.
As our experience in this space deepens, this list may evolve.
AI policies and regulations are being discussed and debated by governments and big tech companies around the world. If Google can manage to stick with these principles, and prove it in some way, that might just be a good influence on other big players, or at least a good talking point for any of those seeking to establish guidelines for AI models and applications.
Google says a Gemini update should be coming to Bard today (I’ve looked a few times and have not seen it yet), and to the Pixel 8 Pro today as well. I’m going to double up tonight and post a few thoughts on Gemini and the Pixel 8 Pro in another post in just a little while.
Nice summary, Patrick! First, thanks for including Google's AI Principles - at least a summary. I've found their Principles to be to the point, and they give me a sense that they actually care about the positive and negative impacts of AI.
Regarding Gemini, I look forward to trying it out; however, I hope they aren't asking me (or others) for $20+ a month. I'm running out of money!
Again, nice work, my friend! Ernie