Since November of 2022, when ChatGPT was released, it has felt like GenAI tools and the large language models that power them have done nothing but grow in their power, capabilities, and usage.
Or at least that was the case until the last few months, when many voices have argued that these models have hit their peak: that they've run out of training data, that the costs - in money, energy, and environmental impact - are too great and things need to slow down, among other reasons.
In this AI Alongside chat, Daniel Nest and I talk about whether we agree with the "peaked" theory, and how much we care if it proves to be true.
Daniel is the creator and publisher of Why Try AI here on Substack. It's one of my favorite reads on AI or any topic, and Daniel serves up rundowns on the hottest new tools and features, deep dives, and easily the best coverage of GenAI tools for image creation.
Leave a comment and let us know what you think about where AI tools and models are at, or what other tools or topics you’d like to see covered here.