Nvidia's GPU Technology Conference concluded last week, bringing word of the company's Blackwell chips and the much-ballyhooed wonders of AI, with all the dearly purchased GPU hardware that implies.
Over the years we have been growing our inference stack, learning more about every different kind of workload, starting with computer vision, deep recommender systems, and speech – automatic speech recognition and speech synthesis – and now large language models. It's been a really developer-focused stack. And now that enterprises have seen OpenAI and ChatGPT, they understand the need to have these large language models running next to their enterprise data or in their enterprise applications.
Also, a lot of our customers are hybrid cloud. They have preferred compute. So instead of sending the data away to a managed service, they can run the microservice close to their data, and they can run it wherever they want.

What does Nvidia's software stack for AI look like in terms of programming languages? Is it still largely CUDA, Python, C, and C++? Are you looking elsewhere for greater speed and efficiency?

We're always exploring whatever languages developers are using.
And that has resonated with every customer. If you talk to SAP, they have ABAP, which is like a proprietary SQL for their database. And I talked to three other customers that had different proprietary languages – even SQL has hundreds of dialects. So code generation is not a use case that's immediately solvable by RAG.
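The dialect problem is easy to illustrate. The sketch below is purely hypothetical (the table name and dialect strings are hand-written examples, not output from any Nvidia tool): the same logical query, "first 10 rows of a table," requires different syntax across dialects, so a RAG pipeline that retrieves Postgres examples can emit SQL that is syntactically invalid on SQL Server.

```python
# Illustrative only: one logical query ("first 10 rows of orders"),
# expressed in several SQL dialects. The "orders" table is hypothetical.
QUERY_BY_DIALECT = {
    "postgres": "SELECT * FROM orders LIMIT 10",
    "tsql":     "SELECT TOP 10 * FROM orders",                 # SQL Server
    "oracle":   "SELECT * FROM orders FETCH FIRST 10 ROWS ONLY",
    "db2":      "SELECT * FROM orders FETCH FIRST 10 ROWS ONLY",
}

def dialects_agree(a: str, b: str) -> bool:
    """Return True if two dialects happen to share syntax for this query."""
    return QUERY_BY_DIALECT[a] == QUERY_BY_DIALECT[b]

# Retrieval alone can't bridge this: a snippet retrieved from Postgres
# documentation is valid SQL, just the wrong dialect for a T-SQL database.
print(dialects_agree("postgres", "tsql"))  # syntax diverges
print(dialects_agree("oracle", "db2"))     # these two happen to match
```

This is why the interview frames code generation as needing dialect-aware models rather than plain retrieval: the retrieved text may be correct in isolation but wrong for the target system.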