This topic has been bugging me for months now. I keep hearing (reading) that generative AI (GenAI) is overhyped, that its bubble is about to burst, that it hasn’t lived up to its hype, hasn’t delivered AGI (Artificial General Intelligence). I’ve also seen a lot written about how it is not even very useful and how most users hate it.
I don’t work for Gartner, thinking about peaks of inflated expectations and troughs of disillusionment. I am light years away from being any sort of financial or market analyst. So I have no worthwhile thoughts to share on how many GenAI startups will go bust in the near future. Or on whether the dominant players in GenAI might not see enough financial reward to keep pumping crazy levels of human and financial resources into GenAI efforts.
I do use GenAI tools every day. For work and in life outside of work. And they are consistently, incredibly useful every day.
Here are my current thoughts on the GenAI bubble bursting, overhyped theme:
AGI
I read more than half a dozen daily newsletters (many of them from great Substack writers) that cover GenAI, and the AI coverage in The Guardian newspaper, Wired, TechCrunch, and several other similar publications. None of that reading has led me to expect that AGI should have arrived months ago, or that it would arrive next week, or this year. In fact, many things I’ve read say that even the leading thinkers and doers in the AI space cannot say with high confidence when they believe AGI will be achieved.
So … I don’t see any reason to be disappointed that ChatGPT is not yet considered AGI.
I think we can have more realistic near-term hopes of seeing ACI - artificial capable intelligence - or “agentic” AI: AI that can be given a complex set of tasks and is able to carry them out.
Just for quick reference, for any of you who are not familiar with the term: there are a number of good definitions of AGI around, and here is a good, concise one via Mustafa Suleyman’s excellent book ‘The Coming Wave’:
Artificial general intelligence (AGI) is the point at which an AI can perform all human cognitive skills better than the smartest humans.
GenAI is not Perfect
This is another common theme I see in articles predicting the demise of GenAI: it still makes a lot of mistakes. It still spits out inaccurate responses and hallucinations. My thought on this is the same as for AGI. I have never expected GenAI tools to be perfect. I’ve always been well aware that they’re still not always accurate, and that not all of their responses are useful.
The thing is, on the work front for example, I can count on the fingers of zero hands the number of perfect human colleagues I have worked with over the years. I have had zero co-workers who never had a thought on a work subject that turned out to be wrong, or who never produced a piece of work that needed review or modification before it was usable as delivered. Can any of us say that we’ve never been in a work meeting where someone in the room (or the virtual room) said something that struck us as not just wrong, but comically wrong, ridiculous?
Build It Up to Tear It Down
Thinking about this this morning, I feel some of the “GenAI is overhyped/not useful” takes echo the age-old tradition of building up superstar athletes and celebrities of all flavors and then delighting in tearing them down. I think it’s at least a possibility that some of the same writers who helped build up unrealistic expectations for GenAI are now among the loudest voices tearing it down.
My Bottom Line
As the old-school blogging term goes, your mileage may vary of course, but my daily experience with GenAI tools is more like this:
It’s like a Swiss Army Knife. Here are some of the ways it has helped me be a better cybersecurity professional:
Don’t just take my word for it though. Red Canary - a blue chip provider of cybersecurity managed detection and response services - published a great post on how using GenAI makes their security operations center (SOC) better. It highlights that this also results in happier SOC engineers, and it has a great diagram showing where and why we need humans in the loop.
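Red Canary’s post speaks for itself, but to make the humans-in-the-loop pattern concrete, here is a minimal, hypothetical sketch (not their actual pipeline): an LLM drafts a plain-English summary of a raw alert plus a suggested next step, and an analyst reviews that draft before anything is escalated or closed. It assumes the OpenAI Python client (openai>=1.0), an OPENAI_API_KEY in the environment, and a made-up alert string.

```python
# Hypothetical sketch of LLM-assisted alert triage with a human in the loop.
# Assumptions (not from Red Canary's post): OpenAI Python client, gpt-4o-mini
# model, OPENAI_API_KEY set in the environment, invented alert text.
from openai import OpenAI

client = OpenAI()

def summarize_alert(raw_alert: str) -> str:
    """Ask the model for a short, analyst-friendly summary and one next step."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; use whatever your SOC has approved
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a SOC assistant. Summarize the alert in two or "
                    "three sentences and suggest one next investigative step."
                ),
            },
            {"role": "user", "content": raw_alert},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    alert = (
        "2024-05-01T03:12Z powershell.exe spawned by winword.exe with an "
        "encoded command on host FIN-LT-042"
    )
    print(summarize_alert(alert))
    # Human in the loop: an analyst reviews this draft before the alert is
    # escalated or closed - the model's output is a starting point, not a verdict.
```

The design point is the same one Red Canary’s diagram makes: the model speeds up the drafting and summarizing, while the judgment call stays with a person.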
These examples are all focused on cybersecurity, but I have even more that have nothing to do with cyber - that cover a wealth of different areas. I’m a big believer in embracing this way of thinking about how to use GenAI, via Professor Ethan Mollick (emphasis mine):
I know this may not seem particularly profound, but “always invite AI to the table” is the principle in my book that people tell me had the biggest impact on them. You won’t know what AI can (and can’t) do for you until you try to use it for everything you do.
I have to give a healthy tip of my hat to Daniel Nest, who wrote a post just a couple of days ago sharing some similar thoughts on how “The reports of AI's death are greatly exaggerated.”
Reading Daniel’s great post pushed me over the line from griping about this topic in my head to spitting out some words about it here :)
I so agree with your point that an imperfect AI can be at least as useful as an imperfect colleague. You can't expect it to write a perfect essay, say, but it can make suggestions that you can incorporate. It requires some work. People who reject it because it is not perfect have unreasonable expectations about what it is good for.
That said, there probably will be a bursting of the bubble, very much like the dot-com crash, but that will not be the end of AI. The small startups with crazy niche applications will go down like so many absurd dot-com startups.
As to generative art ripping off artists, I like to point out that artists have been ripping off other artists since the cave painters. Artists learn to be artists from other artists. Sometimes the influences are obvious, but no one calls it theft. I cannot imagine that a writer who had never read a single book could ever write one. By that logic, all writers would be plagiarists who scrape the works of others.
If it is a bubble, it will burst someday. If not, it will normalize itself.