
Since November of 2022, when ChatGPT was released, it has felt like GenAI tools and the large language models that power them have done nothing but grow in their power, capabilities, and usage.

Or at least that was the case until the last few months, when many voices have argued that these models have hit their peak: that they've run out of training data, that the costs - in money, energy, and environmental impact - are too great and things need to slow down, among other reasons.

In this AI Alongside chat, Daniel Nest and I talk about whether we agree with the peaked theory, and how much we'd care if it proves to be true.

Daniel is the creator and publisher of Why Try AI here on Substack. It's one of my favorite reads on AI or any topic, and Daniel serves up rundowns on the hottest new tools and features, deep dives, and easily the best coverage of GenAI tools for image creation.

Leave a comment and let us know what you think about where AI tools and models are at, or what other tools or topics you’d like to see covered here.



AI Alongside
AI Alongside offers field notes on AI collaboration efforts. Each episode blends real-world experiences, practical insights, and thoughts on some of the latest GenAI tools, features, and abilities that grab my attention, and my guests' and colleagues' attention, in the ever-changing AI landscape.