Google has introduced its version of the NVIDIA Blackwell GB200 NVL super accelerator for its cloud AI platform, Datacenter Dynamics reports. The design differs from the solutions showcased by Meta and Microsoft, underscoring Google's distinct approach, and the move signals growing interest among hyperscalers in NVIDIA's new AI platform.
The company emphasized a strategic partnership with NVIDIA to create what it describes as the “sustainable computing infrastructure of the future.” Specific details about the new platform will be discussed at one of Google’s upcoming conferences—stay tuned, as we’ll keep you updated.
Unclear Configuration but Advanced Infrastructure
The configuration of Google's version remains partly unknown. The available photo shows two racks: one filled with an unspecified number of GB200 accelerators and the other housing Google's proprietary hardware, including power supplies, switches, and cooling modules.
Although NVIDIA generally recommends InfiniBand for interconnection within its AI platforms, industry experts suggest that Google might instead rely on its own Ethernet-based developments. The company already employs custom optical circuit switches (OCS) in AI clusters built around its TPU accelerators, further indicating a preference for in-house infrastructure solutions.
Comparisons with Microsoft and Meta Versions
Microsoft’s GB200-based solution also uses two racks, but with a significant difference: one of the racks features a large heat exchanger, likely intended to cool multiple racks simultaneously. Reports suggest that Microsoft and NVIDIA previously had disagreements regarding the optimal platform layout for the GB200.
Meta’s version, meanwhile, remains closest to NVIDIA’s original GB200 NVL72 configuration, notes NIX Solutions. NVIDIA has recently shared its platform specifications with the Open Compute Project (OCP), indicating more openness in its designs. Notably, the company opted not to release a “compromise” version of the super accelerator—the GB200 NVL36×2—which would have required two racks for deployment.
This development underscores the evolving landscape of cloud-based AI infrastructure. Google's collaboration with NVIDIA, along with the variations introduced by Meta and Microsoft, reflects a shift toward diverse AI solutions tailored to each company's needs.