I’m an AI Optimist. Or at least, getting a little creative with words here, an AI Hopefulist. As 2024 gets rolling, I’ve read and heard enough from enough sources I consider well beyond solid to feel hopeful and optimistic about how AI will impact our lives in the short to medium term. Given the incredible speed at which AI technology advances, I’ll say that medium term means the next five years.
My optimism isn’t based on ignoring the frightening predictions about AI leading to job losses, or the dangers and risks that arise as AI surpasses human intelligence and capabilities in certain areas. I try to follow the latest AI research across the board, including the coverage of worst-case risks and dangers.
Here are a few things that make me an AI optimist at the moment:
AI in Copilot mode
In my work in the field of cybersecurity I’ve seen how Microsoft’s Security Copilot will assist security operations center (SOC) analysts. For instance, in threat hunting and triaging security alerts, analysts will be able to use natural language to have Security Copilot create the Kusto Query Language (KQL) queries used in hunting efforts. This is just one example; many others already exist, and we’ll see more and more of these copilot-style tools.
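To make that concrete, here is a hypothetical sketch (not an actual Security Copilot output) of the kind of KQL query an analyst might get back from a natural language prompt like “show me the high severity alerts from the last 24 hours, grouped by alert name”:

```kql
// Hypothetical example of a Copilot-generated hunting query
// against the SecurityAlert table in Microsoft Sentinel
SecurityAlert
| where TimeGenerated > ago(24h)
| where AlertSeverity == "High"
| summarize AlertCount = count() by AlertName
| order by AlertCount desc
```

The point isn’t this particular query; it’s that the analyst can describe the hunt in plain English instead of writing the KQL by hand.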
This exchange in an interview with Andrew Ng - the founder of DeepLearning.ai, founder & CEO of Landing AI, GP at AI Fund, chairman and co-founder of Coursera, and an Adjunct Professor at Stanford University’s Computer Science Department - at one of my favorite Substack publications, Azeem Azhar’s Exponential View, is an encouraging view on the idea of AI augmenting our skills:
Azeem Azhar:
I think 6 years ago at an event, one of these sort of disruption events, Geoff Hinton said we won't need radiologists and don't become radiologists. And Geoff Hinton, of course, is an absolute esteemed computer scientist. But what I think he missed out when he made that comment was your point, which is that the bit that a computer currently automates within radiology, which might be the markup of an MRI scan or something, is only a small task in the full task life cycle of the radiologist …
which is why 6 years later, with vastly improved AI systems, we have more radiologists than ever, and the shortage of radiologists in the US and in Europe is bigger than it has been as well.
Andrew Ng:
Yeah, I think that's exactly right. And in fact, one of my friends at Stanford made a counterpoint to what Geoff Hinton said that I agree with, which is radiologists that use AI will replace radiologists that don't, but it's not that AI will replace radiologists.
A shorter, but similar, take comes via Mike Elgan at ComputerWorld:
Companies that try to replace employees outright with AI will realize that humans empowered with well-designed AI tools are far more effective than AI working on its own.
Outside of work, a handful of “AI Assistant” type apps are my go-to research tools. These are consistently faster and more useful to me than any search engine. I fully expect the number and quality of these to increase dramatically this year and in coming years. And there will be more dedicated AI Assistant devices like Humane’s AI Pin.
Big Tech and Governments Paying Attention to Ethics, Safety and AI Risks
The best example I’ve seen of this, and the deepest dive, is Mustafa Suleyman’s book ‘The Coming Wave’, described on its Amazon page as:
An urgent warning of the unprecedented risks that AI and other fast-developing technologies pose to global order, and how we might contain them while we have the chance—from a co-founder of the pioneering artificial intelligence company DeepMind.
‘Containment’ is at the center of Suleyman’s book, and it offers great insight into how incredibly difficult this will be to achieve, while also putting forward detailed recommendations on how it could potentially be achieved.
Anthropic - the makers of the Claude large language model which is a serious competitor to OpenAI’s ChatGPT - have a constitution and a set of foundational principles for their model. Here’s a little slice of that:
At a high level, the constitution guides the model to take on the normative behavior described in the constitution – here, helping to avoid toxic or discriminatory outputs, avoiding helping a human engage in illegal or unethical activities, and broadly creating an AI system that is helpful, honest, and harmless.
Governments are looking at ways to regulate AI - the US saw an Executive Order on this in 2023, and the EU AI Act has had some good reviews. The National Institute of Standards and Technology (NIST) released an AI Risk Management Framework in 2023 as well. Attention is being paid to AI risks at least at those levels.
“Little” Underdog Players coming on strong
Although the headlines in the generative AI space have largely been dominated so far by Big Tech giants like OpenAI (with billions of dollars of backing from Microsoft) and Google, there’s also lots of good news on smaller companies with open source and smaller models gaining popularity and producing impressive technical results.
IBM and Meta recently launched the AI Alliance, a coalition to promote open source AI development.
You can now try out some of these models on a laptop or home-level PC without needing terribly high hardware specs. I’ll write a little more on that soon; I started doing this on my iMac a few days ago and I’m already seeing very interesting results.