Unveiled at GTC, the parts are as speedy as they are hot, both in terms of demand and temperature. The GB200 superchip is capable of churning out 40 petaFLOPS of peak 4-bit precision performance while drawing 2,700W of power. Small wonder the chip requires liquid cooling. Amazon Web Services has already said it will swap Nvidia's first-gen superchips for the Blackwell variant to power its upcoming Ceiba AI supercomputer.
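Those two figures imply a striking energy efficiency at 4-bit precision. A quick back-of-envelope calculation (a sketch based only on the 40 petaFLOPS and 2,700W numbers quoted above, not an official Nvidia efficiency claim):

```python
# Rough FP4 efficiency implied by the quoted GB200 figures.
peak_flops = 40e15      # 40 petaFLOPS at 4-bit precision
power_watts = 2700.0    # quoted power draw

tflops_per_watt = peak_flops / power_watts / 1e12
print(f"{tflops_per_watt:.1f} TFLOPS/W")  # ~14.8 TFLOPS/W at FP4
```

Note this is peak throughput divided by peak power; sustained efficiency on real workloads will be lower.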
This suggests that Blackwell-based systems will need to be much larger than an equivalent MI300-based system to compete on the Top500's flagship High-Performance Linpack (HPL) benchmark, which is measured at FP64. "FP64 is important, and it's useful, but we think it's just one of the tools that you're going to need to go and tackle a lot of these grand-scale challenges," Harris said.
For example, you could simulate a complex or fleeting phenomenon at high precision, then use the data generated to train a model on the expected behavior. That model could then sift mountains of data quickly at low precision and flag the most promising points for closer study.

Nvidia's rise in the datacenter is thanks, in no small part, to the hard-fought lessons learned from taking those cards and getting applications running on them at scale.
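The mixed-precision workflow described above can be sketched in a few lines. This is a hypothetical toy example, not Nvidia's pipeline: the `simulate` function stands in for an expensive FP64 simulation, and a simple polynomial fit stands in for the trained model.

```python
# Toy sketch of the workflow: expensive FP64 simulation on a few points,
# fit a cheap surrogate model, then screen many candidates at low precision.
import numpy as np

def simulate(x):
    """Stand-in for an expensive high-precision simulation (assumed scalar field)."""
    return np.sin(3.0 * x) + 0.5 * x**2

# 1. Run the high-precision (float64) simulation on a small sample.
xs = np.linspace(-2.0, 2.0, 32, dtype=np.float64)
ys = simulate(xs)

# 2. Train a surrogate model on the generated data (here: a polynomial fit).
coeffs = np.polyfit(xs, ys, deg=6)

# 3. Screen a large candidate pool cheaply in low precision (float32).
candidates = np.random.default_rng(0).uniform(-2.0, 2.0, 100_000).astype(np.float32)
scores = np.polyval(coeffs.astype(np.float32), candidates)

# 4. Keep only the most promising points for follow-up FP64 simulation.
top = candidates[np.argsort(scores)[:10]]
```

In practice the surrogate would be a neural network running at FP8 or FP4 on the GPU, but the shape of the loop is the same: high precision generates trusted training data, low precision does the bulk screening.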