My take from being at the keynote and the content I've seen so far at the conference is that Nvidia is moving up the stack (like all good hardware vendors are prone to do).
Obviously they are going to keep building bigger chips. But the takeaway for me is that they are building "Docker for LLMs": NIM (NVIDIA Inference Microservices). It's a container system where you can download/buy(?) NIMs and easily deploy them on their hardware. Going to be fun to watch what this does to all the AI startups...
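For context, deploying a NIM is essentially a `docker run` against NVIDIA's registry. A rough sketch of what that looks like; the registry, image path, port, and env var name here are assumptions from memory, not verified against NVIDIA's docs:

```shell
# Authenticate against NVIDIA's container registry (nvcr.io) with an
# NGC API key -- exact auth flow is an assumption, check NVIDIA's docs
docker login nvcr.io

# Pull and run a NIM container on local GPUs; the image path and
# port below are illustrative, not authoritative
docker run --rm --gpus all \
  -e NGC_API_KEY="$NGC_API_KEY" \
  -p 8000:8000 \
  nvcr.io/nim/meta/llama3-8b-instruct:latest
```

If it works the way they presented it, the running container then serves an OpenAI-compatible HTTP endpoint on that port, which is exactly the "it's just Docker" pitch.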
Won't do much to most consumer-facing AI; the UI and convenience are already a major selling point there. The bigger threat is that the feature the business is built around makes it into mainline software... there is no demand for (paid) background removal anymore as every iPhone can do it nowadays.
Generally if whatever AI product you have can easily just be a feature in whatever application businesses already use, then you are running a business on borrowed time.
> there is no demand for (paid) background removal anymore as every iPhone can do it nowadays.
Proper background removal of even remotely complex content is still in demand, especially at scale. I doubt you'd use an iPhone to process >100 images per second.
The startups at risk are the ones that just wrap a (standard) API or model in a thin UI layer. The backend will be a commodity, and the UI layer offers no value proposition of its own.