According to TrendForce, Nvidia is working on the B100 and B200, its next-generation GPUs based on the Blackwell architecture. The new GPUs are expected to hit the market in the second half of this year and will be aimed at cloud service provider (CSP) customers, i.e., companies that buy compute capacity from cloud platforms. Nvidia is also said to be adding the B200A, a cut-down variant for enterprise OEM customers who need advanced AI capabilities.
TSMC's CoWoS-L packaging capacity, which the B200 series uses, reportedly remains constrained. The B200A is said to use the simpler CoWoS-S packaging technology instead, which is why Nvidia is positioning it to meet demand from cloud service providers.
B200A Specifications:
Unfortunately, the B200A's specifications are still not fully known. For now, we can only confirm that HBM3E memory capacity has been reduced from 192 GB to 144 GB. The number of HBM3E stacks is reportedly halved from eight to four, while per-stack capacity increases from 24 GB to 36 GB.
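The reported figures are internally consistent: halving the stack count while raising per-stack capacity yields exactly the quoted totals. A quick sanity check of the arithmetic (numbers taken from the report, no other hardware assumptions):

```python
# Reported HBM3E configurations: stacks x capacity per stack (GB)
B200_STACKS, B200_PER_STACK_GB = 8, 24    # B200:  eight 24 GB stacks
B200A_STACKS, B200A_PER_STACK_GB = 4, 36  # B200A: four 36 GB stacks

b200_total_gb = B200_STACKS * B200_PER_STACK_GB      # 8 * 24 = 192 GB
b200a_total_gb = B200A_STACKS * B200A_PER_STACK_GB   # 4 * 36 = 144 GB

print(b200_total_gb, b200a_total_gb)  # 192 144
```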
The B200A will draw less power than the B200 and will not require liquid cooling; its air-cooled design should also simplify deployment. The B200A is expected to ship to OEMs around the second quarter of next year.
Supply chain surveys show that Nvidia's main high-end GPU shipments in 2024 will be based on the Hopper platform, with the H100 and H200 serving the North American market and the H20 serving the Chinese market. Since the B200A will arrive around the second quarter of 2025, it is not expected to overlap with the H200, which is due in the third quarter or later.