<div dir="ltr"><div dir="ltr"><div><br></div>Some interesting trivia on the NVIDIA H100 GPU:<br><div><font face="arial, sans-serif" color="#000000"><br></font></div><div><font face="arial, sans-serif" color="#000000">(1) "Nvidia is said to have opted to outsource the production of its next-generation GPUs to Taiwan's TSMC. Nvidia intends to manufacture its H100 GPUs on TSMC's 4-nanometer manufacturing technology. The new GPUs will be available beginning in the third quarter of 2022." <a href="https://www.guru3d.com/story/nvidia-will-manufacture-h100-gpus-using-tsmc-4-nm-process/" target="_blank">[1]</a></font></div><div><font face="arial, sans-serif" color="#000000"><br></font></div><div><font face="arial, sans-serif" color="#000000">(2) "<span style="text-align:center">The NVIDIA H100 Tensor Core GPU delivers exceptional performance, scalability, and security for every workload. H100 uses breakthrough innovations based on the </span>NVIDIA Hopper™ architecture<span style="text-align:center"> to deliver industry-leading conversational AI, speeding up large language models (LLMs) by 30X. H100 also includes a dedicated Transformer Engine to solve trillion-parameter language models." <a href="https://www.nvidia.com/en-us/data-center/h100/?srsltid=AfmBOoqTOjJl2kGWWiLV5Ldbj4o9F-8PA1lh7MGTHu79Ljh3xlVlPtzA" target="_blank">[2]</a></span></font></div><div><span style="font-family:arial,sans-serif;color:rgb(51,51,51)"><br>(3) "Nvidia Makes 1000% Profit on H100 GPUs" </span><a href="https://semiwiki.com/forum/index.php?threads/nvidia-makes-1000-profit-on-h100-gpus.18591/" target="_blank" style="font-family:arial,sans-serif">[3]</a><br></div><div><font face="arial, sans-serif"><br></font></div><div><p style="box-sizing:border-box;margin:0px auto 26px"><font face="arial, sans-serif" style="color:rgb(0,0,0)">(4) "Nvidia is seemingly considering Intel’s foundries to manufacture its H100 AI GPUs. 
Team Green may start slow with a small batch to test the waters, potentially leading to larger orders if everything goes as planned. <br></font><span style="color:rgb(0,0,0);font-family:arial,sans-serif">Following the great demand for its H100 GPUs and TSMC’s overloaded calendar, Nvidia is apparently looking for alternative factories to build its chips. The GPU giant may soon add Intel Foundry Services (IFS) to its suppliers since TSMC alone can’t satisfy its needs.<br></span><span style="color:rgb(0,0,0);font-family:arial,sans-serif">Currently, all major Nvidia chips – A100, A800, A30, H100, H800, H200, GH200, etc. – are manufactured by TSMC. This makes availability very susceptible to unexpected events, be they natural or political in nature. It’s a delicate position for one of the most valued companies on the planet. <br></span><span style="color:rgb(0,0,0);font-family:arial,sans-serif">According to MyDrivers’ sources, Intel has the capacity to produce 5,000 wafers per month for Nvidia using its advanced process and packaging technologies. While the exact chips remain unknown, Nvidia is likely to go with its high-margin H100 AI GPUs, which are in short supply. Depending on the wafer size and yield, Intel could make between 300,000 and 800,000 H100 GPUs per month – roughly speaking.<br></span><span style="color:rgb(0,0,0);font-family:arial,sans-serif">Although TSMC expects to double its capacity by the end of 2024 to 20,000 wafers per month, up from 11,000 back in 2023, Nvidia’s appetite will not be satiated. Since Intel already has an alternative to TSMC’s CoWoS-S packaging in the form of Foveros 3D stacking technology, Nvidia has more reasons to diversify." 
</span><a href="https://www.club386.com/nvidia-may-select-intel-for-some-of-its-h100-gpu-production/" target="_blank" style="color:rgb(0,0,0);font-family:arial,sans-serif">[4]</a><font color="#000000"><br></font><span style="font-family:arial,sans-serif"><font color="#000000"><br></font><font color="#333333">(5) "Nvidia reportedly selects Intel Foundry Services for GPU packaging production — could produce over 300,000 H100 GPUs per month." </font></span><a href="https://www.tomshardware.com/pc-components/gpus/nvidia-reportedly-selects-intel-foundry-services-for-chip-packaging-production-could-produce-over-300000-h100-gpus-per-month" style="font-family:arial,sans-serif">[5]</a><br><br><font face="arial, sans-serif">(6) "</font><span style="font-family:arial,sans-serif;color:rgb(0,0,0);letter-spacing:-0.02em">Which Companies Own The Most Nvidia H100 GPUs?" <a href="https://www.visualcapitalist.com/which-companies-own-the-most-nvidia-h100-gpus/">[6]</a><br>    - Meta ...    350K <br>    - xAI/X ...   100K<br>    - Tesla ...     35K<br>    - Lambda ...  30K<br>    - Google ...   26K<br>    - Oracle ...    16K<br>The first 3 companies use them for a "Private Cloud". The next 3 use them for a "Public Cloud".<br></span><span style="font-family:arial,sans-serif"><br>-------<br><br>All of this is - as Spock might say - "Interesting".<br></span><span style="font-family:arial,sans-serif"><br>What makes me most nervous about it all is NOT a Colossus takeover of the world. 
<br>Rather, it's China's temptation to invade Taiwan to take over the fabrication tech that makes all of this possible.<br></span><span style="font-family:arial,sans-serif"><br></span><span style="font-family:arial,sans-serif">Or as Jersey guys put it: "Eh, I'm just sayin."</span></p><p style="box-sizing:border-box;margin:0px auto 26px"><span style="font-family:arial,sans-serif">-- Uncle Ersatz</span></p></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, 2 Nov 2024 at 18:05, Jeff Hayas <<a href="mailto:jeff.hayas@gmail.com" target="_blank">jeff.hayas@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><br><div>Wow. At <a href="https://www.amazon.com/NVIDIA-Hopper-Graphics-5120-Bit-Learning/dp/B0CXBNNNSD" target="_blank">~$28K (retail) per GPU</a>, that's $2.8 Billion just for the GPUs.  </div><div>Then there is the cost of Racks, Power and Cooling systems, and of course the data interconnects (I wonder what architecture they use); </div><div>for all that we can probably say the total runs well past the GPUs alone. So yeah, several Billion (US). </div><div><br></div><div>I find it ironic that they chose to call the Super-AI system "Colossus", as in the 1970 film <a href="https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project" target="_blank">"Colossus: The Forbin Project"</a>. </div><div>We'll know we're in trouble if the new Colossus system has Musk assassinated.</div><div><br></div><div>-- Uncle Ersatz </div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, 1 Nov 2024 at 17:50, pt <<a href="mailto:mnemotronic@gmail.com" target="_blank">mnemotronic@gmail.com</a>> wrote:</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
<div dir="ltr"><div style="font-family:tahoma,sans-serif">For those who have heard stories of Elon
Musk’s xAI building a giant AI supercomputer in Memphis, this is that
cluster. With 100,000 NVIDIA H100 GPUs, this multi-billion-dollar AI
cluster is notable not just for its size but also for the speed at which
it was built. In only 122 days, the teams built this giant cluster.<br></div><div style="font-family:tahoma,sans-serif"><br></div><div style="font-family:tahoma,sans-serif"><a href="https://www.servethehome.com/inside-100000-nvidia-gpu-xai-colossus-cluster-supermicro-helped-build-for-elon-musk/" target="_blank">https://www.servethehome.com/inside-100000-nvidia-gpu-xai-colossus-cluster-supermicro-helped-build-for-elon-musk/</a><br clear="all"></div><div style="font-family:tahoma,sans-serif"><br></div></div>
</blockquote></div>
</blockquote></div>
</div>
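P.S. The back-of-envelope numbers in the thread check out roughly like this (a loose sketch: the ~$28K price is the retail figure quoted above, and the dies-per-wafer count is my own rough assumption from the H100's ~814 mm² die on a 300 mm wafer, not an official yield figure):

```python
# Sanity check of the figures quoted in the thread.
GPUS = 100_000          # xAI Colossus H100 count (from the ServeTheHome piece)
PRICE_PER_GPU = 28_000  # approximate retail price quoted in the thread (USD)

gpu_cost = GPUS * PRICE_PER_GPU
print(f"GPUs alone: ${gpu_cost / 1e9:.1f}B")  # → GPUs alone: $2.8B

# Wafer math behind "300,000 to 800,000 H100 GPUs per month":
# assume on the order of 60 candidate dies per 300 mm wafer (rough guess,
# ignoring edge loss and defects), times Intel's reported 5,000 wafers/month.
WAFERS_PER_MONTH = 5_000
DIES_PER_WAFER = 60     # rough assumption
print(f"~{WAFERS_PER_MONTH * DIES_PER_WAFER:,} dies/month")  # → ~300,000 dies/month
```

So the quoted 300K/month is the conservative end of that range, and the GPUs alone for Colossus come to about $2.8B before racks, power, cooling, and interconnects.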