In This Article:
- Broadcom is seeing huge growth in its AI semiconductor division.
- Broadcom's products will continue to grow with the rise of AI inference.
Broadcom (NASDAQ: AVGO) isn't the first company that comes to investors' minds when discussing mission-critical suppliers to the AI arms race. Companies like Nvidia and Taiwan Semiconductor Manufacturing are often mentioned, while Broadcom hovers in the background. However, Broadcom is a formidable company primed to benefit from a massive boom in demand for two of its AI products.
Although Broadcom is a $1.2 trillion company right now, I could easily see it reaching a $2 trillion valuation within three years. That move would represent roughly 67% upside, or about 19% annualized, comfortably ahead of the market's long-term average of around 10% per year, which makes the stock a fantastic option to consider now.
Broadcom isn't top of mind for many investors because it isn't laser-focused on AI. Its product lineup sprawls from mainframe software to cybersecurity to virtual desktops (thanks to its VMware acquisition). However, many of Broadcom's AI products are critical pieces of data center infrastructure.
Broadcom's AI offerings fall into two categories: connectivity switches and custom AI accelerators, which it calls XPUs. Starting with the connectivity switches: data centers run computing clusters with thousands of GPUs that constantly process AI workloads. A single AI prompt often requires multiple GPUs working in concert, and data centers need networking hardware to stitch those partial results back into one answer. That's where Broadcom's connectivity switches come in, and they will only become more important as we transition into the next phase of AI.
Although all the AI hyperscalers are still working on training better models, AI models are starting to see widespread use in business and personal life. That means the hyperscalers must start thinking about inference, the step in which a trained model is prompted for an answer. More inference capacity means more demand for these connectivity switches, which lifts Broadcom's sales.
Another product that has Broadcom excited is its XPU line. These custom AI accelerators are an alternative to GPUs, as they can also process many calculations in parallel. The difference is that an XPU is designed for a specific workload and lacks the flexibility of a GPU. Because these chips are designed in collaboration with the end customer, the AI hyperscalers can tailor the processing unit to their particular workloads. That allows XPUs to outperform GPUs in specific use cases, but it also does something even more important for Broadcom's clients.