<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Chipstrat]]></title><description><![CDATA[Semiconductors, AI, and business strategy. Read by tech leaders and investors. Sits between SemiAnalysis and Stratechery.]]></description><link>https://www.chipstrat.com</link><image><url>https://substackcdn.com/image/fetch/$s_!rCMl!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27769444-42f3-4b43-9683-4fe7826c06b8_608x608.png</url><title>Chipstrat</title><link>https://www.chipstrat.com</link></image><generator>Substack</generator><lastBuildDate>Wed, 22 Apr 2026 15:10:59 GMT</lastBuildDate><atom:link href="https://www.chipstrat.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Austin Lyons]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[chipstrat@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[chipstrat@substack.com]]></itunes:email><itunes:name><![CDATA[Austin Lyons]]></itunes:name></itunes:owner><itunes:author><![CDATA[Austin Lyons]]></itunes:author><googleplay:owner><![CDATA[chipstrat@substack.com]]></googleplay:owner><googleplay:email><![CDATA[chipstrat@substack.com]]></googleplay:email><googleplay:author><![CDATA[Austin Lyons]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[An Interview with Meta VP Matt Steiner About Ads Infrastructure]]></title><description><![CDATA[MTIA, co-designed NVIDIA SKUs, LLM-written kernels, a 1T-parameter recommender at sub-second, and more]]></description><link>https://www.chipstrat.com/p/an-interview-with-meta-vp-matt-steiner</link><guid 
isPermaLink="false">https://www.chipstrat.com/p/an-interview-with-meta-vp-matt-steiner</guid><dc:creator><![CDATA[Austin Lyons]]></dc:creator><pubDate>Mon, 20 Apr 2026 21:32:55 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/5dWovJ4YHTY" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most people don&#8217;t fully appreciate Meta&#8217;s ads business, the recommender systems that power it, or how that shapes Meta&#8217;s hardware and CapEx decisions across both recommender systems and generative AI. So I reached out to <a href="https://www.linkedin.com/in/mattsteiner/">Matt Steiner</a>, VP of Monetization Infrastructure, Ranking &amp; AI Foundations at Meta, to learn more.</p><p><strong>In this interview, we walk through Meta&#8217;s ads infrastructure from first principles. A few things that surprised me:</strong></p><ul><li><p><strong>Recommender workloads have a different compute-to-memory ratio</strong> than standard LLM workloads, and this difference gave rise to MTIA custom silicon</p></li><li><p><strong>Retrieval isn&#8217;t a generic workload either</strong>. Meta&#8217;s scale makes it <strong>memory-bound,</strong> which is why Andromeda got its own custom NVIDIA Grace Hopper SKU that Meta co-designed</p></li><li><p><strong>Meta&#8217;s adaptive ranking model is an LLM-scale recommender</strong> (~1 trillion parameters) served at sub-second latency. 
It&#8217;s distilled from GEM, Meta&#8217;s Generative Ads Recommendation foundation model, <strong>and</strong> <strong>scales compute per user</strong> based on interaction history length</p></li><li><p><strong>Consolidating N ad ranking models</strong> into one (Lattice) <strong>improved performance, not just cost.</strong> A single model trained across varied objectives outperformed the specialized ones</p></li><li><p><strong>LLM-written kernels (Meta&#8217;s KernelEvolve) flip the economics of heterogeneous fleets.</strong> Demand for software engineering is going up as the price comes down, and Meta now wants ~100x more optimized kernels per chip</p></li></ul><p>We also cover how Meta&#8217;s GenAI and recommender systems teams cross-pollinate inside Meta, and what Meta&#8217;s infrastructure looks like two years out.</p><p><em>This interview is lightly edited for clarity.</em></p><div id="youtube2-5dWovJ4YHTY" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;5dWovJ4YHTY&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/5dWovJ4YHTY?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><h2>How Meta&#8217;s Ad System Works</h2><p><strong>Hello everyone. Today we have a special guest, Matt Steiner, VP of Monetization Infrastructure, Ranking, and AI Foundations at Meta. Welcome, Matt.</strong></p><p><strong>MS:</strong> Thanks, great to be here with you, Austin. Thanks for having me.</p><p><strong>What I wanted to get out of this conversation is to better understand Meta&#8217;s core advertising business and then how that drives infrastructure decisions. I&#8217;m going to assume listeners know nothing and we&#8217;ll walk through from first principles. At the highest level, how do ads work? What are the backend models that power Meta&#8217;s ad stack?</strong></p><p><strong>MS:</strong> Maybe let&#8217;s start with a quick overview of how the ad system works. On a very high level, an advertiser shows up and they say, &#8220;I have some creatives with some copy and I want to show them to some people.&#8221; Sometimes they pick explicitly who they want to show them to. Sometimes they say to our ad system, &#8220;show them to whoever is most likely to convert for the objective that I specify&#8221; &#8212; whether the objective is the person visits my website, the person adds something to a shopping cart on my website, or the person actually clicks buy on my website. Those are all different objectives. Advertisers can optimize for different things.</p><p>Once the ads are created, it is our job to record who these ads should be shown to. 
So we produce a big database and it says, &#8220;here are all the people that the advertiser would have wanted their ad to be shown to,&#8221; and we record in each person&#8217;s little mini database, &#8220;this is an ad that could be shown to Matt the next time Matt logs in.&#8221; Of course, that list of ads that could be shown to Matt the next time Matt logs in is very, very long.</p><p>So when Matt logs in and our front end asks for an ad, whether that&#8217;s on your mobile device on Instagram or Facebook, on the web &#8212; each front end queries our backend system and says, &#8220;give me the best ads to show Matt next.&#8221; The request goes through our systems and arrives at our indexing system, and our indexing system fetches all the ads that could be shown to Matt. That is where a piece of technology that we&#8217;ve talked about recently called Meta Andromeda comes into play.</p><p>A long time ago, we had a much shorter list of ads that could be shown to Matt. Today that list is extremely long, and to be able to process all of the ads that exist in that list we need to use a fairly powerful system. We worked with our hardware partners at NVIDIA and designed a custom hardware SKU with some GPUs in it, and we co-designed a machine learning model that runs specifically on that hardware SKU for the purposes of best assessing which ads are the top N ads to rank for Matt.</p><p>In the ads serving process, the two large steps are basically: find ads that could be shown to Matt, and then rank them to produce the top ads to be shown to Matt. </p><p>Andromeda operates in the first stage, which we call retrieval, and it uses a powerful machine learning model that has embedded some of my interests and past interactions to personalize which ads should be retrieved for me. Because not every product that is advertised to me is going to be a product that is interesting to me. 
So we&#8217;re basically sub-selecting the products and creatives that might be interesting to me in order to return to the ranking system to rank those.</p><p>The next step is ranking, where we apply these large and powerful machine learning models to figure out what is the right order of these ads in terms of highest conversion probability times expected value for advertisers. The ad system has a number of ranking models and they rank different ads based on the objective functions for the advertiser, and we have been on a long journey to consolidate those into a single ranking model using a technology we call Lattice.</p><p>The advantage of combining ads ranking models into a single larger model is of course cost savings. You don&#8217;t have to keep N copies of user interests in each machine learning model. You can keep one copy of a person&#8217;s interests in that machine learning model, which saves memory. You can compute the subnets for a machine learning model once instead of repeatedly computing the same subnets across a bunch of different models. You just do one computation. It&#8217;s more computationally efficient to have a single model. And then the other advantage is performance &#8212; a machine learning model trained on more data with more varied objectives performs better than a smaller machine learning model trained on all the data for that objective, partly because of the compute advantages, partly because of the memory pressure advantages, partly because each piece of data has some additional signal associated with it that the machine learning model can use to improve its own performance.</p><p>So: Lattice, consolidation. 
And then further along in the consolidation journey, we have built GEM, our Generative Ads Recommendation Model, which is our foundation model that we&#8217;ve tried to train on all of the data that&#8217;s available to Meta&#8217;s ad system, to improve the probability of accurately predicting what somebody&#8217;s going to be interested in and whether they&#8217;re going to convert when we show them an ad, in service of an advertiser&#8217;s objective. This large foundation model was then distilled into smaller models that we could serve for specific purposes, encoding as much information as we can from the larger foundation model.</p><p>Now, like with any system, some people use it less and some people use it more. There are people that are very interactive with brands and content and ads. They&#8217;re commenting on the ads, they&#8217;re liking the ads, they&#8217;re interacting with the brand, they&#8217;re buying things from the brand. Those power users actually have much longer interaction histories with a brand or with all the brands together. It turns out that in our original architecture design, we did not have enough compute available to process all of those interactions given our extremely limited latency budget. For example, when a person shows up in a Meta property, we want to make sure that their feed loads and their ad loads in that feed in a certain fixed latency budget &#8212; let&#8217;s call it roughly one second. We want to have sub-second latency for all of our average retrieval requests. That means we can only process so many interactions when evaluating that machine learning model at inference time.</p><p>Recently, we&#8217;ve built a new ranking model called the adaptive ranking model that substantially varies the amount of compute used to evaluate the model based on the length of a user&#8217;s interaction-history sequence with a brand, or with all the brands that are advertising on Meta systems. 
That way we can use dramatically more compute for users with longer interaction histories and meaningfully increase the accuracy of our predictions about what they&#8217;re going to interact with next. That drives better results for advertising partners and much better experiences for the people that are seeing those ads. It&#8217;s all through the magic of right-sizing the compute and memory associated with each one of those requests, and right-sizing the model based on the amount of data that&#8217;s available to evaluate for a particular person.</p><p><strong>Okay, this is so fascinating, there&#8217;s so much here. At the highest level, you broke it down to retrieval and ranking &#8212; retrieval was Andromeda, ranking was Lattice. With Lattice, you talked about having lots of models but trying to simplify that down into one model for many reasons. And meanwhile, the whole backdrop here is &#8212; what kind of scale are we talking about again? Three-plus billion daily active users?</strong></p><p><strong>MS:</strong> That&#8217;s exactly right. More than three billion daily active users across Meta&#8217;s properties worldwide. A lot of people seeing a lot of organic content in their feed, a lot of paid content in their feed, and interacting with both.</p><p><strong>Take me back to GEM and remind me &#8212; we have retrieval and ranking, and where does GEM fit in?</strong></p><p><strong>MS:</strong> GEM is our foundation model. It&#8217;s the model that we train with all of the data that we can use for training to produce the largest, most sophisticated, most accurate model possible. At the same time, the model is so large it&#8217;s not servable effectively. 
So the model has to go through a distillation stage where a lot of the core learnings of the model are distilled into smaller models that are servable.</p><p>The next step after that was to try and make the largest possible servable model on the most powerful inference hardware we have available, to produce the most accurate predictions specifically for those users who are power users. They have long interaction histories with brands and content and interests that we can really do a lot better for &#8212; deliver them much better experiences and deliver advertisers much better predictions and consequently return on advertiser spend.</p><h2>Long User Histories and Adaptive Ranking</h2><p><strong>Nice, and that&#8217;s where adaptive ranking fits in. This is really interesting, because I think people are starting to get used to the idea of a foundation model that&#8217;s so big you can&#8217;t serve it, and then the consequences and trade-offs of having smaller models that are servable. For listeners thinking of generative AI, they might be thinking of smaller models that respond faster but aren&#8217;t as &#8220;intelligent.&#8221; Broadly when people are thinking about generative AI, they&#8217;re thinking about optimizing for intelligence or for interactivity &#8212; how quickly does it respond. You talked about latency, but you also talked about being willing to spend more compute at inference time to get a better outcome for the advertiser and a better experience for the user. Can you talk more about the outcomes? Why does adaptive ranking and spending more compute because you have that longer history yield a better outcome?</strong></p><p><strong>MS:</strong> Maybe one way to think about this is: imagine that you&#8217;re married and you have an anniversary, and every year you buy something for your spouse &#8212; something that they like that&#8217;s in their interest set that&#8217;s not necessarily in your interest set. 
If you can look at a long interaction history for a particular person, and you see, &#8220;every September they buy this particular class of item,&#8221; you don&#8217;t even have to know that it&#8217;s their anniversary &#8212; the pattern is right there in the interaction history. Then you can use that information to make a better prediction for what they&#8217;re likely to purchase in September.</p><p>That&#8217;s one example, but maybe you have a history of purchasing specific things in specific months corresponding to your children&#8217;s birthdays or a holiday or an anniversary. You can see how looking at longer sequences of interactions can deliver much-improved predictions about what a person is likely to want and then what a person is likely to purchase based on those longer interaction sequences.</p><p>But you can only process those longer interaction sequences if first, you&#8217;ve stored longer interaction sequences, and second, you have the computational power available at serve time to be able to process that whole interaction sequence when a person logs in. Not everybody has long interaction sequences. Not everybody interacts every month with an advertiser, but some people do, and where the data is available to deliver dramatically improved experiences for those people, you of course want to give them the best possible experience you can. That is a function of whether you have the compute available to process all that information within that latency budget &#8212; through parallelization, etc. &#8212; which GPUs and large-scale GPU inference stacks now allow us to provide for people. Better predicting which products and services people are interested in delivers better results for our advertising partners as well, because we&#8217;re just matchmaking. We are matching the person who wants to purchase a thing with an advertiser who has the thing to purchase.</p><p><strong>Yes, that makes a ton of sense. 
For me, you&#8217;re saying: if I only look temporally at the last month of what you&#8217;ve been doing, I could give you some ads. But you&#8217;ve been on Facebook since back when you had to get invited &#8212; so if I could look all the way back, maybe there&#8217;s interesting trends. But of course the trade-off &#8212; I&#8217;m thinking about an analogy to generative AI, which everyone can relate to. It&#8217;s kind of like context. I want a big model, I want to give it a ton of context, but that&#8217;s expensive and takes time. And with user-centric social apps, you&#8217;re thinking a lot about latency. So you&#8217;ve got that constraint of what is the most context I can give it, the biggest model I can give it, but still do it in sub-one-second. That&#8217;s a perfect segue to ask you more. You talked about co-designing with NVIDIA, you talked about GPUs. Take me back &#8212; did this stuff run on CPUs at one point? How has that evolved?</strong></p><h2>From CPUs to Custom ASICs</h2><p><strong>MS:</strong> Back in the day, retrieval of course ran on CPUs. And back in the day, even ranking ran on CPUs. There was always a push to deliver more compute for both retrieval and ranking, because the more compute available, the larger, more complex machine learning model we can evaluate, the larger the user history long-sequence context windows can be passed into those models, delivering better predictions.</p><p>We&#8217;ve been on a long march through smaller CPUs, medium-sized CPUs, larger CPUs, custom ASICs, GPUs, more sophisticated and powerful GPUs, more sophisticated and powerful custom ASICs. This is all in service of delivering better results for our customers at a reasonable cost to our business so that the ROI works out on both ends for both our advertising partners and Meta.</p><p><strong>Okay, that&#8217;s amazing. 
What I heard you saying was: it&#8217;s been a long history for Meta of asking &#8220;how can we get more compute to serve better ads?&#8221;, which is a win-win &#8212; you&#8217;re in a marketplace with users and businesses and you&#8217;re sitting in the middle. This idea of using compute to make better predictions has been the story of Meta&#8217;s business for quite some time.</strong></p><p><strong>MS:</strong> For at least the last ten years, we&#8217;ve been investing really deeply in performance-optimizing the hardware, the networks, the data center designs, the silicon chips themselves, the machine learning models, the software infrastructure, the tooling associated with them. It&#8217;s a very large, complex constraint-satisfaction and optimization problem that we have to solve to deliver the best results for our customers and for the people that use our products and services. It&#8217;s a really fascinating technology problem in addition to a business problem.</p><h2>Co-Designing with NVIDIA</h2><p><strong>Yes, indeed. It&#8217;s an intersection of both. What did the practical process of hardware-software co-design look like when you were developing the retrieval engine, like with the NVIDIA Grace Hopper?</strong></p><p><strong>MS:</strong> We sit down with our partners and we say, &#8220;this is the amount of compute that we want to target for this particular use case. This is the latency budget. What are the configurable blocks you have in your portfolio that you could conceivably make into a SKU, whether at the chip level or at the system level, that would work for this particular use case?&#8221;</p><p>Our hardware partners have various configurations of machines and chips and boards available that they are willing to build in certain configurations. We looked at that and we said, &#8220;given the retrieval problem itself, it&#8217;s going to require a huge amount of memory. It&#8217;s maybe a little bit more memory bound than it is compute bound. 
So we need a lot of memory. We need a lot of specifically high-bandwidth memory, so there&#8217;s enough memory channels to keep those GPUs saturated when they&#8217;re doing that computation.&#8221; We wind up with a SKU design that is optimized for the retrieval space where it has the right amount of memory, the right amount of high-bandwidth channels between the memory and the compute, and the right amount of compute that is effectively balancing that for that particular use case.</p><p>That design is maybe different than the hardware SKUs that you would use in ranking broadly or in serving a web page. But we had some great partners to work with on the hardware side. And of course, we have truly brilliant AI researchers on the modeling side, and software engineers for distributed systems that are optimizing the software infrastructure layer, and networking engineers who are optimizing how these machines talk to each other so that we can minimize end-to-end latency while maximizing the parallelism and compute we have available to deliver the best results for people and businesses.</p><p><strong>So you sit down with your partner and say, &#8220;hey, we are a large customer. We have particular workloads that we run at scale and we know the shape of those workloads really well. This one with retrieval has these characteristics &#8212; memory bound, needs high memory capacity and bandwidth.&#8221; Does that lead you to look at those certain workloads that you have and ask, &#8220;what is the right shape of compute? What is the right SKU for retrieval versus ranking versus GEM training versus adaptive ranking?&#8221;</strong></p><p><strong>MS:</strong> That&#8217;s exactly right. We are always trying to work both sides of this problem. One problem is: how do we influence evolution of the hardware to better meet the needs of the software stack, and where we anticipate the software and AI stack is evolving over the next couple of years? 
Because &#8212; you&#8217;re probably familiar &#8212; hardware has relatively long lead times compared to software. On the other side of the problem, we are trying to influence the software stack evolution in a direction that is going to meet the hardware and maximize the potential of the hardware that&#8217;s going to be delivered to us this half, this quarter, this year, next year, and the following year.</p><p>We&#8217;re always trying to evolve them in similar directions. Sometimes there are hardware breakthroughs and we evolve our software stack to take advantage of those hardware breakthroughs. Sometimes there&#8217;s new software breakthroughs and we try to influence the hardware design in that direction to support those software breakthroughs. There&#8217;s a big discussion about this constantly across the industry. It&#8217;s particularly important given the rapid pace of innovation in the AI space &#8212; how quickly machine learning models are evolving, how quickly they are improving their performance and cost characteristics. It&#8217;s a wild time to work in the hardware-software intersection space.</p><h2>MTIA: Recommender Systems vs LLMs</h2><p><strong>Totally. And obviously with transformers coming into existence, you&#8217;ve probably gone from more traditional ML into evolving toward transformer-based ones, and we&#8217;ll get there. But first, take me to MTIA. You talked about CPUs, you talked about GPUs, and how with the Grace Hopper that fit nicely into particular workloads. What leads Meta toward MTIA? There&#8217;s been a lot of announcements on that front lately &#8212; showing a roadmap, partnering with Broadcom. Can you tell us the business and economic rationale for moving in that direction?</strong></p><p><strong>MS:</strong> We tend to think about this in terms of the evolution of our heterogeneous hardware fleet over time. 
We can see the offerings that are available from our hardware partners that have various configurations of memory and compute and memory channels and different ratios. Some of them work really well for a particular use case. Some of them work really well for a different use case. There are different trade-offs with running different machine learning models on each of those hardware configurations. Sometimes the trade-off is latency. Sometimes the trade-off is cost. Sometimes the trade-off is power. In this very complex constraint satisfaction and optimization space, you&#8217;re trying to figure out what is the best offering that maximizes your returns for your advertising partners and for your business as well.</p><p>That&#8217;s where sometimes we have a use case that is different from your standard use case in the space. That was the initial impetus for the Meta Training and Inference Accelerators. Ads is a recommender systems class of problem, which is a little bit different domain than your large language model class of problems. The large language model problem is what&#8217;s known in the industry as an embarrassingly parallel problem. You can process a bunch of stuff in parallel. It doesn&#8217;t have to have super-effective high-bandwidth communication to be able to sync up the weights at periodic intervals.</p><p>At the same time, in the recommender systems space, all of the data is personalized. In the large language model space, if I was to say to somebody, &#8220;complete the sentence &#8216;to be or not to&#8230;&#8217;&#8221; there&#8217;s an objective correct answer &#8212; the highest probability answer that almost everybody who speaks English and has taken high school English classes could guess what the next word is going to be. 
A machine learning model similarly can learn there&#8217;s an objective highest probability answer to that blank in that sentence.</p><p>Now in recommender systems, the world is not objective and highest probability like that. The question is: what is the next best ad to show Matt? And it is not &#8220;what is the next best ad to show,&#8221; because who&#8217;s looking at the ad slot dramatically determines whether the ad is going to matter to them. There&#8217;s no objectively correct answer to what is the next ad to show, but there is a highest probability answer to what is the next ad to show Matt. Every example that is fed into our training systems for recommender systems has to have that personalization attached to the example.</p><p>What does that personalization look like? Well, Matt likes gardening and cycling and seems to buy a lot of stuff for toddlers, a lot of cleaning products. As a result, things that fit in those domains may be much more appealing to Matt than things that are outside of those domains. I used to have hobbies, now I have young children. That&#8217;s changed what I purchase quite a bit. The machine learning model can encode that, and it changes what the correct answer is to that question of what ad should be shown to Matt next.</p><p>That changes the size of the data packet associated with each of those examples. You have to pass in this personalization blob for the example of &#8220;we showed this ad to Matt and Matt clicked on it,&#8221; or &#8220;we showed this ad to Matt and Matt didn&#8217;t click on it. Here&#8217;s Matt&#8217;s big personalization blob of things he&#8217;s interested in.&#8221; The machine learning model can learn, &#8220;with this kind of personalization blob associated with Matt, he likes cycling and toddler toys and gardening equipment. 
These kinds of ads are good ads to show Matt and these kinds of ads are not good ads to show Matt.&#8221;</p><p>But that literally changes the hardware characteristics that you want when you have a very different I/O ratio associated with each example. If your examples carry a lot more data with each example, then you have to have a much fatter network pipe to keep the chip fed. You have to have more memory on the hardware SKU to keep the chip fed. You have to have a lower ratio of compute to memory &#8212; and high-bandwidth memory at that &#8212; to be able to effectively utilize the compute. So the optimal hardware SKU for training recommender systems may not be the same as a GPU that is optimized for training large language models. There&#8217;s obviously pros and cons there, but you may want to build a SKU that fits that particular workload really well.</p><p>Now that&#8217;s not all of our workloads. We obviously use GPUs in a lot of places. We use them for a lot of different parts of the recommender systems problem. But for some types of models, we have a use case for a hardware SKU that has a different configuration than what&#8217;s commonly offered as a GPU-packaged SKU. For some circumstances, a custom SKU with a different compute-to-memory ratio makes a lot of sense. For other applications, the GPU SKU is much more performant or much more cost-effective for that workload. We&#8217;re really trying to optimize the available compute and memory to the available models that need to be trained and the data size with each of those models. It&#8217;s a fascinating, challenging technology optimization problem.</p><h2>Heterogeneous Hardware and LLM-Written Kernels</h2><p><strong>Yeah, that was really helpful. 
I like how you illustrated the problem to show that there&#8217;s specific I/O requirements and memory requirements, and how that could lead you to think about, of all the possibilities out there, what SKU would fit best for this particular type of workload &#8212; and that might involve making your own. Now that&#8217;s talking about recommendation systems, which is really useful, and it&#8217;s a good reminder that the business involves training and inferencing recommender systems. Now, you did talk about GEM as a foundation model and needing to train that, and it being so big that it&#8217;s not cost-effective to serve. Can you tell us more about the compute challenges and the infrastructure demands of creating GEM and serving GEM?</strong></p><p><strong>MS:</strong> GEM as our foundation model is the largest model that we train in the ads recommender space. We try to feed as much of our data as we can into the model to produce the largest, most complex, and best-predicting model that we have available. Some parts of the model are not super efficient, and that makes it not very effective to serve, particularly if you&#8217;re latency constrained. That&#8217;s why we had previously done this distillation process.</p><p>Now we&#8217;re using this distilled GEM variant that we&#8217;re calling the adaptive ranking model, where it&#8217;s distilled to be efficient enough to be served, but it&#8217;s not nearly as distilled as prior models, which were much smaller. The adaptive ranking model is a recommender model of LLM-like scale and complexity, with roughly one trillion parameters at inference time. And it gets evaluated at sub-second latencies, which is a pretty fun and interesting software and hardware challenge.</p><p><strong>Sub-second latencies &#8212; that&#8217;s amazing. 
You&#8217;re talking about different SKUs and different workloads, and I&#8217;m tracking all that, and you mentioned at the end of the day you have a heterogeneous silicon environment &#8212; different vendors, some home-brews, some off-the-shelf, some custom. You talked about software, and obviously having to work internally to make sure your software is going to work with the hardware and vice versa. Can you tell me more about how you manage software across all that hardware? Because to the layman, that sounds like a lot of added complexity &#8212; but I don&#8217;t know how many different levels of abstraction you can have that make it easier.</strong></p><p><strong>MS:</strong> In general, heterogeneous hardware is a challenging problem to solve because you have to make sure that each of your binaries not only is capable of running on that hardware, but is performant and cost-effective on that hardware. This is where folks have historically been forced to choose between custom optimization of a binary for a particular hardware type, or translation layers, which abstract away a lot of the custom features of the hardware but also abstract away a lot of the performance improvements of the hardware as well. There was a very clear spectrum of performance trade-offs between abstraction layers, which make it simpler to deploy hardware but less cost-effective, and customization of binaries for hardware, which is slow and costly to implement but much more performant and cost-effective once implementation is done.</p><p>Recently, machine learning models have enabled really cool abilities to customize specific binaries for hardware, such that you can now deploy at scale binaries that are custom-modified and performance-optimized for specific types of hardware, rapidly and easily, without having an expert software engineer do those performance optimizations for you. 
We recently put out a paper, I believe we called it Alpha Evolve or Alpha Kernel, where a machine learning model &#8212; a large language model &#8212; will write a custom performance-optimized kernel for a particular binary or machine learning model and a particular hardware pair.</p><p>If we have a large number of machine learning models and a large number of heterogeneous hardware types, writing the custom hardware kernel that would optimize the performance of this binary on the hardware was very time consuming before. It&#8217;s effectively a matrix of custom software that had to be written and hand-tuned by an expert software engineer. Now we&#8217;ve entered an era where large language models with coding capabilities can produce these optimized kernels at extremely low cost, way, way, way cheaper than having someone sit there and meticulously pick through the various optimizations necessary to make this binary or model run on this particular type of hardware.</p><p>It&#8217;s a real breakthrough in the technology industry and it&#8217;s going to enable a lot more of that cost-effective optimization that allows you to take much more advantage of all of the hardware available to you. Now we&#8217;re thinking through all of our deployments of all of our binaries to all of our hardware. Whereas before we wouldn&#8217;t necessarily move a binary that was adapted to a particular type of hardware to another type of hardware because that would be high cost and maybe it wouldn&#8217;t be worth it &#8212; now we can ask the machine learning model to produce an optimized kernel for this binary or machine learning model on this hardware, and we can do a lot more active management of software running on hardware. 
That&#8217;s going to lead to better performance for our advertising partners, better experiences for people, and of course lower costs for Meta, as we get to take more advantage of the hardware we have available to us and really right-size the hardware and software use cases together. It&#8217;s a long journey; we&#8217;re not done by any stretch, but some of the new breakthroughs here in AI are having really beneficial effects on our ability to optimize our hardware and our software for our business.</p><h2>GenAI Cross-Pollination and the Road Ahead</h2><p><strong>Amazing. What a world we live in. Reflecting back a bit &#8212; where my head is at is, back in the day it used to be that software engineers were very expensive, and obviously Meta has probably always bought a lot of compute. But I could see the rationale for not having heterogeneous silicon, because then you have to hire a bunch of software engineers if you want to optimize it for every different piece of silicon. Or on the other hand you just say, &#8220;software engineering is expensive, so we&#8217;re not going to perfectly optimize.&#8221; But at your scale you want to perfectly optimize everything so that you can eke out lower latency or better results. And interestingly, now we&#8217;re in a world where you need to buy lots and lots of hardware for your business, but the cost of software engineering has gone down to some extent with the help of generative AI and LLMs, letting you still have a fleet &#8212; a matrix of different tasks and different hardware &#8212; and yet you can use LLMs to help optimize and fill out that spreadsheet in a cost-effective way, which is very awesome. That leads me to the question about generative AI. How is Meta thinking about the relationship between its core recommendation systems and infrastructure and the investments in generative AI? 
Not only using generative AI in your core business, which alone is really cool and interesting, but also I know that you are training generative AI and offering that to customers.</strong></p><p><strong>MS:</strong> There is a lot of crosstalk between our various AI experts in the generative AI / large language model world and in our recommender systems world. There is crosstalk and collaboration on hardware and data center design and on performance optimization for the distributed systems, including things like the model trainer: both sides are really focused on optimizing the machine learning model trainer and the various aspects of system performance needed to train and serve much larger models. There&#8217;s a huge amount of joint investment that effectively benefits both sides of the house, the large language model side of the house and the recommender system side of the house.</p><p>We have experts in both types of ranking on both sides of the house so that we can improve the performance using both domains&#8217; techniques and capabilities. We are &#8212; maybe as evidenced by the pace of breakthroughs that we&#8217;re able to deploy in our services here &#8212; really seeing the benefits of the innovation in the AI space across both parts of the business today. That&#8217;s obviously very exciting. This is the weirdest, wackiest, most fun time to be a software engineer ever.</p><p><strong>Yes, seriously. It&#8217;s fascinating to think about those different sides of the house and how they cross-pollinate and impact each other, and just how fast both are moving. What an awesome time to be at Meta, and what a crazy time. 
Last question &#8212; looking forward, maybe two years because the rate of change makes it hard to look further than that &#8212; what do you see as the primary infrastructure needs for the next generation of AI-driven advertising?</strong></p><p><strong>MS:</strong> You can see we are all investing very heavily in building out data centers and purchasing large quantities of compute and memory and storage so that we can build and find better machine learning models. The process of identifying performance improvements is really training a lot of machine learning models, tweaking various optimization parameters, coming up with new architectures and testing those to really drive maximum performance benefits. So: large investments in machine learning model training and in research that yields performance improvements both for training and at inference time, plus substantial investments to make sure that we can serve these large language models, other generative models, and ranking models more cost-effectively, while also making more compute and more memory available at serve time so we can feed things like longer sequence histories and larger context windows into these models.</p><p>The overarching theme here is end-to-end optimization. We&#8217;re trying to optimize the data center designs with the networking designs and the SKU designs and the software infrastructure designs for the distributed systems and the machine learning model infrastructure, the machine learning models themselves, the data that goes into them &#8212; all jointly, so we can drive maximum performance together.</p><p>Maybe to your point earlier, the demand for software engineering has effectively gone through the roof as the price has gone down. 
Whereas before we would invest in a limited number of hardware optimization kernels to run software on, now we want 100 times as many software optimization kernels for each piece of hardware because it&#8217;s available now. We can have machine learning models produce that, and now we have our expert hardware performance tuners supervising these models instead of writing the optimizations themselves. The same thing is true at every layer of the stack where we&#8217;re doing this optimization now. The demand for custom software that is more performant than a generic abstraction layer has gone through the roof. Every team at every layer is trying to do much better optimization to produce better results per dollar, better results per watt of power used in these data centers. That&#8217;s really leading to these meaningful breakthroughs that you&#8217;re seeing in terms of performance all across the industry, but particularly for the business as well.</p><p><strong>Yeah, what a wild cross-optimization problem, being vertically integrated in some respects from hardware through data center design all the way to the software, to the training and the inference. And then being able to use LLMs to help with all this super fast. What I like about what you&#8217;re talking about here is: you have to make all of these trade-off decisions, but there&#8217;s a clear optimization function that you&#8217;re solving for when you&#8217;re thinking of an ad-space business &#8212; an ROI, how much are you willing to spend, how much are they willing to pay, and how can better results lead to potentially paying more or the pie growing bigger. I&#8217;m just thinking out loud, contrasting that to maybe other players in the generative AI space where the economics aren&#8217;t quite as straightforward in making these decisions. Anyway &#8212; you guys have a lot to think through. 
My very final question for you personally: how do you stay on top of it all as it&#8217;s changing so fast up and down the stack?</strong></p><p><strong>MS:</strong> That&#8217;s a great question. I don&#8217;t think I have a fantastic answer. The rate of change is amazing. I try to use all of the AI tools available, including large language models, to summarize papers, produce a list of all the latest papers that have come out with breakthroughs that are relevant to the domain that I work in. I rely on a brilliant team of expert AI researchers to summarize the progress that&#8217;s happening in the space, how that should influence the roadmap that we&#8217;re building for the future. But the amount of information and the progress in the space is just wild. It&#8217;s really amazing and something to behold.</p><p><strong>Yes, totally. Well, you don&#8217;t sound bored, that&#8217;s for sure. Awesome. That&#8217;s it for today. Thanks so much, Matt, for taking the time to educate us. I&#8217;ve learned a lot and I know everyone will really get something out of this, so thank you.</strong></p><p><strong>MS:</strong> Definitely not. Thank you for having me, Austin. Great to chat with you.</p>]]></content:encoded></item><item><title><![CDATA[Substrate ]]></title><description><![CDATA[X-ray lithography worked. The industry chose a different path. Substrate wants to go back. Here's why.]]></description><link>https://www.chipstrat.com/p/substrate</link><guid isPermaLink="false">https://www.chipstrat.com/p/substrate</guid><dc:creator><![CDATA[Austin Lyons]]></dc:creator><pubDate>Wed, 15 Apr 2026 17:27:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/_G4XP-YRW0c" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Today, we&#8217;re talking <a href="https://substrate.com/">Substrate</a>.</p><p>Substrate is controversial. The debate tends to focus on individual objections, such as a lack of industry experience or the impracticality of particle accelerators. 
But I want to zoom out and look at the elephant as a whole: </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RvCn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f117dcb-7b8a-4c4f-94e6-b7dbc9447e23_1920x1628.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RvCn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f117dcb-7b8a-4c4f-94e6-b7dbc9447e23_1920x1628.png 424w, https://substackcdn.com/image/fetch/$s_!RvCn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f117dcb-7b8a-4c4f-94e6-b7dbc9447e23_1920x1628.png 848w, https://substackcdn.com/image/fetch/$s_!RvCn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f117dcb-7b8a-4c4f-94e6-b7dbc9447e23_1920x1628.png 1272w, https://substackcdn.com/image/fetch/$s_!RvCn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f117dcb-7b8a-4c4f-94e6-b7dbc9447e23_1920x1628.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!RvCn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f117dcb-7b8a-4c4f-94e6-b7dbc9447e23_1920x1628.png" width="548" height="464.82142857142856" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6f117dcb-7b8a-4c4f-94e6-b7dbc9447e23_1920x1628.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1235,&quot;width&quot;:1456,&quot;resizeWidth&quot;:548,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!RvCn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f117dcb-7b8a-4c4f-94e6-b7dbc9447e23_1920x1628.png 424w, https://substackcdn.com/image/fetch/$s_!RvCn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f117dcb-7b8a-4c4f-94e6-b7dbc9447e23_1920x1628.png 848w, https://substackcdn.com/image/fetch/$s_!RvCn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f117dcb-7b8a-4c4f-94e6-b7dbc9447e23_1920x1628.png 1272w, https://substackcdn.com/image/fetch/$s_!RvCn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f117dcb-7b8a-4c4f-94e6-b7dbc9447e23_1920x1628.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><a href="https://sketchplanations.com/the-blind-and-the-elephant">Source</a></figcaption></figure></div><p>To do that, let&#8217;s use a simple framework to test whether Substrate&#8217;s strategy is actually sound. </p><h2>How to Solve Problems</h2><p>A while back, I was listening to an old <a href="https://www.youtube.com/watch?v=M95m2EFb7IQ">Lex Fridman podcast with Ray Dalio</a>. And Ray was talking about his &#8220;5-Step Process&#8221;:</p><ol><li><p>Set your goal</p></li><li><p>Identify the problems blocking it</p></li><li><p>Diagnose the root cause</p></li><li><p>Design around it</p></li><li><p>Follow through</p></li></ol><p><em>Not rocket science. But there&#8217;s power in simple frameworks as a lens to view the world.</em></p><p>These five steps resonated with me, as they pattern-match well with my lived experiences (entrepreneurship, engineering, academic research, home projects, and so on). Once I heard it, I started seeing it everywhere. 
Listening through the How I Built This backlog, I found it in the <a href="https://www.npr.org/2017/02/20/515790641/crate-barrel-gordon-segal">Crate &amp; Barrel founding story</a> from the 1960s. Let me show you how it applies there, and then we&#8217;ll try it on Substrate.</p><h3>Crate &amp; Barrel</h3><p><strong>Quick context:</strong> Summer 1961. Gordon and Carol Segal are getting married. Both 23. They registered for stylish European housewares, but nobody in their lives had the money or taste to buy it for them. The Segals couldn&#8217;t afford it either. </p><p>But on their honeymoon in the Caribbean, they found the same products at a fraction of US prices:</p><blockquote><p><em>Gordon Segal: There was one Scandinavian store in the Virgin Islands. My wife picked up some of the items and said, &#8220;How can you have Danish 18/8 stainless at the $2.95 a place setting? It&#8217;s so much more expensive in America!&#8221; </em></p><p><em>And the Danish merchant there said, &#8220;We have salesmen from Europe come here, and we buy direct from factories.&#8221;</em></p></blockquote><p>Ah! A goal, a problem, a root cause. Gordon&#8217;s wheels started spinning:</p><blockquote><p><em>We got back to Chicago, and I was in the real estate business. She was teaching school. We were both sort of bored. And then, one night in February of &#8216;62, I was washing these dishes we had bought. I said, &#8220;You know, Carol, there had to be other young people like ourselves with good taste and no money. 
We should open a store.&#8221;</em></p></blockquote><p>Through the Dalio lens (and listening to the rest of the podcast):</p><ul><li><p><strong>Goal:</strong> Sell beautiful European housewares to young people with good taste and no money.</p></li><li><p><strong>Problem:</strong> Those goods were priced out of reach in the US, even though they were affordable in Europe and the Caribbean.</p></li><li><p><strong>Root cause:</strong> Middlemen in the US import distribution chain.</p></li><li><p><strong>Design:</strong> Skip the importers. Buy direct from European factories.</p></li><li><p><strong>Follow through:</strong> Hard work and grit.</p></li></ul><p>The Segals didn&#8217;t pencil out this strategy cleanly from Day 1. But the Dalio process was at work.       </p><p>Now let&#8217;s apply it to Substrate.  </p><h1>Substrate</h1><p>Substrate CEO James Proud made the goal very clear in this <a href="https://stratechery.com/2025/an-interview-with-substrate-ceo-james-proud-about-building-a-disruptive-foundry-in-america/">Stratechery interview</a>:  <strong>revive American chip manufacturing leadership.</strong></p><p><em>WTF? REVIVE AMERICAN CHIP MANUFACTURING? WHO DOES HE THINK HE IS?</em></p><p>Before you grab pitchforks, let&#8217;s work through the reasoning. We&#8217;ll address &#8220;yeah but no industry experience&#8221; and the rest later. First, the 5-step process.  </p><p><strong>Goal: American leading-edge semiconductor manufacturing</strong></p><p>What obstacles stand in the way?</p><p>Money and talent come to mind first. But those aren&#8217;t fundamental bottlenecks. Think about Elon and Terafab. Money and talent are tractable.</p><p>Dig deeper. Assume you&#8217;re well-capitalized and talent-rich. <em>OK, this sounds like Rapidus. We can cover them in another article.</em></p><p>Now what? </p><p>You know what&#8217;s actually hard? Creating a customer.</p><p><strong>Problem: No one will work with you</strong></p><p>Customer acquisition is the problem. 
</p><p>How could you possibly incentivize chip designers to use a brand-new, leading-edge foundry? You need a reason so compelling that it overcomes the risk premium.</p><p>On what plane can you even outperform TSMC?! They have 30+ years of process knowledge and relationships with every major chip company on Earth. And don&#8217;t say supply chain security. TSMC has n-1 capacity in Arizona and Intel Foundry is getting its swagger back&#8230;</p><p>Hmm&#8230;</p><p>Well, what pain points do chip designers have with TSMC today?</p><p><strong>Cost is a big one.</strong></p><h3><strong>The Cost Problem</strong></h3><p>Many companies can&#8217;t afford leading-edge nodes, and even those that can use them only for a select few SKUs. Design costs run in the hundreds of millions. Mask sets cost tens of millions. Only products with massive volume (smartphones) or high ASPs (Nvidia GPUs) can amortize that cost. And variable costs compound it. Leading-edge wafers are north of $20K, and all signs point toward $100K by the end of the decade. So even for the highest-volume products, where fixed costs amortize to near zero, wafer price still matters.</p><p><strong>Why is the leading edge so expensive?</strong> Lithography.</p><p>EUV tools cost hundreds of millions each, and the cost is only going up. You need dozens per fab, a significant reason a leading-edge fab requires tens of billions of dollars to build. <em>Deeper background reading here:</em></p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;2f0f0810-a3a2-4164-94d5-12f52f32bc30&quot;,&quot;caption&quot;:&quot;ASML is the world&#8217;s sole supplier of EUV lithography systems, the machines required to manufacture leading-edge semiconductors. The Mag 7 depends very heavily on leading-edge semis. Nvidia. Apple. Google. 
Even Tesla, whose market cap depends heavily on the promise of autonomy, needs leading-edge semis for model training.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Lithography Economics&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:8066776,&quot;name&quot;:&quot;Austin Lyons&quot;,&quot;bio&quot;:&quot;Chipstrat, Creative Strategies, Semi Doped. MSEE + MBA.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c180a750-7572-4aff-88e4-317aa435d533_1203x902.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2026-01-03T19:04:07.068Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!T77P!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35c4dc10-1679-4734-b3f5-991344ffe0aa_2048x960.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.chipstrat.com/p/lithography-economics&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:183370591,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:31,&quot;comment_count&quot;:2,&quot;publication_id&quot;:2003179,&quot;publication_name&quot;:&quot;Chipstrat&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!rCMl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27769444-42f3-4b43-9683-4fe7826c06b8_608x608.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>But&#8230; what if you could somehow reduce the cost of lithography drastically?</p><p><strong>What if you could offer a value proposition of &#8220;2nm wafers at 28nm prices&#8221;?</strong></p><p>You could create customers. 
Think of existing medium- and low-volume SKUs that would love to use smaller, more power-efficient transistors, or to build their own custom ASICs instead of using broader, less efficient off-the-shelf options.</p><p><em>Yeah, but EUV LITHOGRAPHY IS MAGIC! TIN DROPLETS! 30+ YEARS! ASML! NO WAY!</em></p><p>Yes, I know. Suspend disbelief for a bit and just follow Dalio&#8217;s process. Let&#8217;s pull on the thread more.</p><p><strong>What is the root cause of the lithography economics problem?</strong></p><p><a href="https://www.xlight.com/">XLight</a> has rightly pointed out that Laser-Produced Plasma (LPP) is very expensive. <em>The &#8220;shoot tin droplets and hit them twice with a laser to generate EUV light&#8221; part.</em> And every ASML EUV machine needs one.</p><p>XLight says, &#8220;Why not use a much higher-power free-electron laser (FEL) and share that light source amongst many EUV scanners?&#8221; This unlocks much better economics, not only by decoupling the light source from the scanner but also by increasing the dose, which improves productivity and wafer economics.</p><p><em>See more here:</em></p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;ebfb5001-02b2-4bc5-ae8d-d1d9af65b9f0&quot;,&quot;caption&quot;:&quot;In January, I wrote about the worsening cost curve of EUV lithography and two startups trying to bend it:&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Photons as a Service&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:8066776,&quot;name&quot;:&quot;Austin Lyons&quot;,&quot;bio&quot;:&quot;Chipstrat, Creative Strategies, Semi Doped. 
MSEE + MBA.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c180a750-7572-4aff-88e4-317aa435d533_1203x902.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2026-02-25T14:26:34.683Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!ISCA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8051981e-557c-4e9a-b8dd-3f84bf10eb14_1268x708.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.chipstrat.com/p/photons-as-a-service&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:189140755,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:15,&quot;comment_count&quot;:0,&quot;publication_id&quot;:2003179,&quot;publication_name&quot;:&quot;Chipstrat&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!rCMl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27769444-42f3-4b43-9683-4fe7826c06b8_608x608.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>But what if you pull on the root cause even further? 
Could it lead to a different solution?</p><p>The entire EUV system (light source, optics, scanner) is incredibly complex with known inefficiencies (many mirrors, lots of lost light&#8230;) all driving extreme cost.</p><p>Is that cost due to physics, meaning this is the globally optimal solution, and there is a physical limit preventing lithography from ever working differently?</p><p>Or did path dependence lead us to a local minimum, not a global one?</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RB3f!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb7c1c6bf-e05f-4e1f-b13f-54ca44a62c2a_905x640.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RB3f!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb7c1c6bf-e05f-4e1f-b13f-54ca44a62c2a_905x640.png 424w, https://substackcdn.com/image/fetch/$s_!RB3f!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb7c1c6bf-e05f-4e1f-b13f-54ca44a62c2a_905x640.png 848w, https://substackcdn.com/image/fetch/$s_!RB3f!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb7c1c6bf-e05f-4e1f-b13f-54ca44a62c2a_905x640.png 1272w, https://substackcdn.com/image/fetch/$s_!RB3f!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb7c1c6bf-e05f-4e1f-b13f-54ca44a62c2a_905x640.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!RB3f!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb7c1c6bf-e05f-4e1f-b13f-54ca44a62c2a_905x640.png" width="542" height="383.292817679558" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b7c1c6bf-e05f-4e1f-b13f-54ca44a62c2a_905x640.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:640,&quot;width&quot;:905,&quot;resizeWidth&quot;:542,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!RB3f!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb7c1c6bf-e05f-4e1f-b13f-54ca44a62c2a_905x640.png 424w, https://substackcdn.com/image/fetch/$s_!RB3f!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb7c1c6bf-e05f-4e1f-b13f-54ca44a62c2a_905x640.png 848w, https://substackcdn.com/image/fetch/$s_!RB3f!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb7c1c6bf-e05f-4e1f-b13f-54ca44a62c2a_905x640.png 1272w, https://substackcdn.com/image/fetch/$s_!RB3f!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb7c1c6bf-e05f-4e1f-b13f-54ca44a62c2a_905x640.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft 
"></button></div></div></div></a><figcaption class="image-caption">Your path can lead you to a local minimum, and from there it&#8217;s hard to see if a global minimum exists. <a href="https://medium.com/aimonks/navigating-the-peaks-and-valleys-of-optimization-global-minimum-vs-25c05de6f69a">Source</a>.</figcaption></figure></div><p><strong>Is the root cause physics, or path dependence?</strong></p><h3>Retracing The Path</h3><p>Substrate believes it&#8217;s path dependence. And you can actually retrace the history to test that claim. In the &#8216;80s and &#8216;90s, the industry was actively researching X-ray lithography, which has a natural resolution advantage from its shorter wavelength. 
</p><p><em>Hmm&#8230; maybe we could learn what the shortcomings were and if they are still true today!</em></p><p>There are great old papers on this, especially from IBM Research. Check out this paper &#8220;<a href="https://ieeexplore.ieee.org/document/5389640">X-ray lithography in IBM, 1980-1992, the development years</a>&#8221;. It&#8217;s super enlightening. Let&#8217;s keep pulling on the thread, from Alan D. Wilson&#8217;s paper:</p><blockquote><p><em>Optical lithography, in 1980, was very poorly understood by the experts. On the basis of historical trends and current difficulties with existing tooling and technology, the limits of lithography were thought to be about 1&#8211;1.25 &#181;m for optics. X-ray lithography, consequently, was targeted for entry around 1 &#181;m, the perceived limit of optical lithography.</em></p><p><em>At the onset of the program the strategic advantages of X-ray lithography were stated to be high resolution (better than optical lithography), throughput superior to that of e-beam technology, better resist-processing characteristics, and potentially lower defects (no multilayer resists).</em></p></blockquote><p>X-ray seemed promising at the time. What did they learn? How did they learn? Were they thinking about manufacturability or just science?</p><blockquote><p><em>We spent the remainder of 1980 developing a financial and technical program plan for X-ray lithography based on synchrotrons, considering a number of basic questions: Where were exposures going to be done? What should the mask be made from, and how would it look? How would we develop a stepper/aligner system, and what would be the role of vendor assistance in this regard? What would be the staffing needs (the initial group included only six people) as the program progressed? 
And finally, what should the test vehicle be?</em></p><p><em>It was recognized early in the drafting of our program that we were targeting manufacturing, not device prototyping, but full manufacturing. Our manufacturing divisions would eventually be our customer. </em></p></blockquote><p>That&#8217;s really sound and reasonable thinking. They were thinking about full production scale and manufacturability, not just the science. And if you read the rest of the paper, the X-ray lithography (XRL) technology actually worked:</p><blockquote><p><em>We had made complex, fully scaled CMOS devices with 0.5-&#956;m ground rules before our optical counterparts had reached the same level using a long-established technology. Our yield was also acceptable: not 100%, but acceptable. The principal goal of the X-ray program had been achieved.</em></p></blockquote><p>There are even tips for Substrate and us to consider regarding a <em>practical </em>particle accelerator:</p><blockquote><p><em>For X-ray lithography to be viable in IBM, we needed to explore acquiring our own X-ray source&#8230; Early in 1982, Grobman and I were starting to learn about the physics of synchrotrons. Our interest in a synchrotron ring for IBM was kindled by reports from Munich of the design of a tabletop machine called Kleine-Erna&#8230;. We began this serious inquiry by visits to established rings in this country and in Europe. <strong>Perhaps not being part of the synchrotron establishment was beneficial: We could ask questions and find out what really made rings good and what did not, as well as who the real experts were.</strong></em></p></blockquote><p>A quick aside&#8230; Agree. Perhaps not being part of the establishment is a benefit at times. 
&#8230; <em>And figure out who the *real* experts are </em>&#128514;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!71TO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2bc8aa55-c0cb-4799-80ea-0d087b78c9ca_1133x500.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!71TO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2bc8aa55-c0cb-4799-80ea-0d087b78c9ca_1133x500.jpeg 424w, https://substackcdn.com/image/fetch/$s_!71TO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2bc8aa55-c0cb-4799-80ea-0d087b78c9ca_1133x500.jpeg 848w, https://substackcdn.com/image/fetch/$s_!71TO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2bc8aa55-c0cb-4799-80ea-0d087b78c9ca_1133x500.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!71TO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2bc8aa55-c0cb-4799-80ea-0d087b78c9ca_1133x500.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!71TO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2bc8aa55-c0cb-4799-80ea-0d087b78c9ca_1133x500.jpeg" width="1133" height="500" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2bc8aa55-c0cb-4799-80ea-0d087b78c9ca_1133x500.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:500,&quot;width&quot;:1133,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!71TO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2bc8aa55-c0cb-4799-80ea-0d087b78c9ca_1133x500.jpeg 424w, https://substackcdn.com/image/fetch/$s_!71TO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2bc8aa55-c0cb-4799-80ea-0d087b78c9ca_1133x500.jpeg 848w, https://substackcdn.com/image/fetch/$s_!71TO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2bc8aa55-c0cb-4799-80ea-0d087b78c9ca_1133x500.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!71TO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2bc8aa55-c0cb-4799-80ea-0d087b78c9ca_1133x500.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
></svg></button></div></div></div></a><figcaption class="image-caption"><em>Love this <a href="https://www.youtube.com/watch?v=oUO624DDYv8">video</a></em></figcaption></figure></div><p>This IBM researcher Alan D. Wilson is my dude. 
</p><p>Anyway, he continues:</p><blockquote><p><em>Armed with this information, we asked ourselves what the specifications for the ring should be&#8230; <strong>I was invariably asked &#8220;What should a ring for industry be?&#8221; My answer was this: It should fit on a truck; plug into a wall socket; be reliable</strong> and available to operate 20 out of 21 shifts per week; have sufficient average output capable of sustaining a stepper throughput of more than 30 wafers per hour using an insensitive (having a sensitivity of ~100 mJ/cm&#178;) X-ray resist; be capable of being debugged/commissioned and assembled at the vendor, shipped intact to an IBM site, and rendered operational in a reasonable time at full specifications.</em></p></blockquote><p><em>Make sure it fits on a truck and plugs into a wall socket. Quite practical!</em> </p><p>And that kind of thinking is a lot different from the picture of particle accelerators I had in my head (i.e., CERN).</p><p>One last note from the paper: it seems that folks exploring XRL have always been doubted.</p><blockquote><p><em>The establishment of a program was a formidable task, since half of this distinguished group seriously questioned the need for and the viability of X-ray lithography.</em></p></blockquote><p>But IBM actually built a synchrotron (&#8220;Helios&#8221;) that fit on a truck and worked:</p><blockquote><p><em>Oxford proposed a superconducting dipole system with a cold bore. The magnet turned out to be difficult to construct but had excellent performance. The Helios 1 ring was completed and commissioned at Oxford, England, in October 1990. During the design and building of the synchrotron we visited Oxford and the Daresbury team every four to eight weeks over a period of three and a half years. The ring was shipped to IBM in March 1991 and arrived at East Fishkill on March 29, 1991. <strong>It fit on a truck,</strong> as shown in Figure 12(a), and we slid it into ALF that same day [Figure 12(b)]. 
</em></p><p><em>The first beam was stored on or about May 20, 1991, and final specifications were met in January 1992. Our goal had been met and, in fact, exceeded, because the ring performs beyond specification [36]. <strong>Critics who thought design alone would not work were wrong.</strong> IBM and Oxford as a development team commissioned the Oxford ring in record time and with a very high level of performance.</em></p></blockquote><p>If it worked back then&#8230; why don&#8217;t we have X-ray lithography today? XRL wasn&#8217;t without practical shortcomings at the time. <a href="https://research.ibm.com/publications/challenges-and-progress-in-x-ray-lithography">This IBM paper</a> from 1998 said that XRL was mature enough to possibly be introduced at 130nm node, but admitted <em>manufacturing </em>issues, for example with masks.</p><blockquote><p><em>Nonetheless, there are challenges still to be met. Among the most important are the development and commercial availability of an improved e-beam mask writer; the ability to fabricate defect-free masks satisfying the image placement and critical dimension control requirements with good yields; the stability of the masks in usage (including the issue of possible radiation damage); the ability to correct for magnification errors; and the ability to satisfy the industry&#8217;s desire for a technology extendible to 70 nm ground rules. <strong>These issues are primarily manufacturing issues, as opposed to issues related to demonstrating proof-of-concept or feasibility,</strong> although demonstrating extendibility is still needed before the industry can commit to using XRL at 70 nm ground rules</em></p></blockquote><p>XRL was physically feasible but had engineering problems to solve at production scale. </p><p>Meanwhile, optical lithography kept working far beyond the ~1&#956;m wall Wilson predicted. So the industry kept pushing optical. First DUV, then EUV. </p><p>But a lot has changed in 35 years. 
US national labs have spent years advancing particle accelerators, and sources are now brighter, more reliable, and more compact. Computational lithography has improved significantly, too, and can help overcome the mask and proximity challenges that plagued IBM.</p><h3>Substrate&#8217;s Design</h3><p>So back to the Dalio process. The bottleneck is EUV-based lithography economics, and Substrate&#8217;s approach is to go back to the fork in the road and choose a different path. <em>Could there be a global optimum, and could it be XRL?</em></p><p>It would require co-designing the light source, optics, and scanner as an integrated whole from scratch. But to me, there seem to be many cost-saving possibilities on the table:</p><ul><li><p>One particle accelerator source feeds many scanners, like IBM&#8217;s Helios ring, which had 16 beamline ports. <em>The light source cost amortizes across many tools.</em></p></li><li><p>No multilayer mirrors means no compounding reflectivity losses. Almost all generated photons are available at the wafer, vs. single-digit percent for EUV.</p></li><li><p>No tin-droplet plasma source means no tin contamination, no collector degradation, no droplet generator maintenance.</p></li><li><p>Single-patterning at leading-edge nodes. Fewer exposures = fewer masks, fewer etch steps, fewer defect opportunities, faster cycle time.</p></li><li><p>In theory, the particle accelerator shouldn&#8217;t need cleanroom space. Only the wafer-handling end of the beamline sits in the cleanroom. 
EUV scanners, by contrast, are massive and require substantial fab and subfab infrastructure:</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!tUjt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1502807e-518a-4de9-95ef-dd2ecd8e504a_953x689.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!tUjt!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1502807e-518a-4de9-95ef-dd2ecd8e504a_953x689.png 424w, https://substackcdn.com/image/fetch/$s_!tUjt!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1502807e-518a-4de9-95ef-dd2ecd8e504a_953x689.png 848w, https://substackcdn.com/image/fetch/$s_!tUjt!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1502807e-518a-4de9-95ef-dd2ecd8e504a_953x689.png 1272w, https://substackcdn.com/image/fetch/$s_!tUjt!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1502807e-518a-4de9-95ef-dd2ecd8e504a_953x689.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!tUjt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1502807e-518a-4de9-95ef-dd2ecd8e504a_953x689.png" width="516" height="373.05771248688353" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1502807e-518a-4de9-95ef-dd2ecd8e504a_953x689.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:689,&quot;width&quot;:953,&quot;resizeWidth&quot;:516,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!tUjt!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1502807e-518a-4de9-95ef-dd2ecd8e504a_953x689.png 424w, https://substackcdn.com/image/fetch/$s_!tUjt!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1502807e-518a-4de9-95ef-dd2ecd8e504a_953x689.png 848w, https://substackcdn.com/image/fetch/$s_!tUjt!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1502807e-518a-4de9-95ef-dd2ecd8e504a_953x689.png 1272w, https://substackcdn.com/image/fetch/$s_!tUjt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1502807e-518a-4de9-95ef-dd2ecd8e504a_953x689.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
/></g></svg></button></div></div></div></a><figcaption class="image-caption">EUV takes a lot of cleanroom and subfloor space. <a href="https://semiengineering.com/why-euv-is-so-difficult/">Source</a></figcaption></figure></div><p>So here&#8217;s the chain of reasoning so far:</p><ul><li><p><strong>Goal</strong>: American leading-edge semiconductor manufacturing</p></li><li><p><strong>Problem</strong>: No one will work with a new foundry</p></li><li><p><strong>Root cause:</strong> The only lever that overcomes the risk premium is dramatically lower cost, and cost is dominated by lithography</p></li><li><p><strong>Root cause (a layer deeper)</strong>: EUV&#8217;s cost comes from path dependence, not physics</p></li><li><p><strong>Design:</strong> Go back to the 1990 fork. X-ray lithography with modern sources.</p></li></ul><p>This is sound. It sure seems XRL could genuinely untangle the lithography cost problem. </p><p>It&#8217;s believable. And it&#8217;s fundable. Substrate raised $100M from Founders Fund, General Catalyst, In-Q-Tel, and more. 
Founders Fund&#8217;s <a href="https://foundersfund.com/2017/01/manifesto/">thesis</a> is to invest in smart people solving difficult scientific problems where, if they succeed, the technology would be extraordinarily valuable. Substrate is a perfect fit. A 1% chance of reshaping the $1T+ semiconductor industry seems like just the type of asymmetric bet FF was built to make.</p><p>Of course, sound strategy doesn&#8217;t guarantee success. Execution is everything.</p><p>So can they actually pull this off? Behind the paywall I address the biggest objections head-on, work through whether XLight and Substrate can both win, and discuss the impact on TSMC and ASML.</p><p><em>It&#8217;s really, really interesting.</em></p>
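<p>One closing aside: the compounding-reflectivity point in the cost-savings list above is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, assuming roughly eleven reflective surfaces in an EUV optical train and ~70% reflectivity per Mo/Si multilayer mirror (both are round illustrative numbers, not measured figures):</p>

```python
# Back-of-envelope: fraction of source photons surviving an EUV mirror
# train. Mirror count (~11) and per-mirror reflectivity (~0.70 for
# Mo/Si multilayers) are rough assumptions, not measured values.

def transmission(reflectivity: float, n_mirrors: int) -> float:
    """Photon fraction remaining after n reflective bounces."""
    return reflectivity ** n_mirrors

for n in (1, 6, 11):
    print(f"{n:>2} mirrors -> {transmission(0.70, n):5.1%} of photons survive")
# At ~11 mirrors only about 2% of photons remain, consistent with the
# "single-digit percent" figure above; a direct beamline with few or
# no mirrors would keep nearly all of them.
```

<p>That exponential decay is why EUV sources have to be so bright, and why removing mirrors from the optical path changes the economics so dramatically.</p>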
      <p>
          <a href="https://www.chipstrat.com/p/substrate">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[An Interview with MatX CEO Reiner Pope About LLM Chips]]></title><description><![CDATA[Hybrid SRAM + HBM, MoE interconnect, why frontier labs consider AI ASIC startups, and more]]></description><link>https://www.chipstrat.com/p/an-interview-with-matx-ceo-reiner</link><guid isPermaLink="false">https://www.chipstrat.com/p/an-interview-with-matx-ceo-reiner</guid><dc:creator><![CDATA[Austin Lyons]]></dc:creator><pubDate>Thu, 09 Apr 2026 21:30:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/7Ph9i1KYHxY" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This interview is with Reiner Pope, co-founder and CEO of MatX. Pope and his co-founder Mike Gunter left Google &#8212; Pope from the Brain team, Gunter from the TPU team &#8212; one week before ChatGPT launched to build what they believe will be the best chips for LLMs that physics allows. The company has raised ~$600 million to date.</p><p>In this interview we discuss why Pope left Google to start a chip company, how to overcome the CUDA lock-in, and why frontier labs are the natural first customers. We get into the chip itself: a hybrid SRAM-HBM memory architecture that combines the low latency of Cerebras and Groq with the throughput of traditional HBM designs, and why that unlocks advantages across training, prefill, and decode. 
We also cover how agentic AI changes hardware requirements, how MatX uses AI internally in chip design, and the biggest skepticism Pope hears: can a 100-person startup manufacture at datacenter scale?</p><div id="youtube2-7Ph9i1KYHxY" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;7Ph9i1KYHxY&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/7Ph9i1KYHxY?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p><em>This interview is lightly edited for clarity.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.chipstrat.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.chipstrat.com/subscribe?"><span>Subscribe now</span></a></p><h2>Origin Story</h2><p><strong>Hello listeners, we have a special guest today, co-founder and CEO of MatX, Reiner Pope. Welcome Reiner, for listeners who haven&#8217;t heard of you and MatX, who are you, what is MatX, what are you guys trying to do?</strong></p><p><strong>RP:</strong> Thanks, very happy to be here. As you mentioned, I&#8217;m CEO at MatX. What we&#8217;re doing is making the best chips for LLMs that is allowable by physics. My co-founder Mike Gunter and I, prior to MatX, were working at Google for a long time. Most recently, I was on the Google Brain Team training one of the LLMs at the time, and Mike was on the TPU team. There were a lot of things we wanted to do to make the TPUs much better for running LLMs. 
Things like running at much lower precision, having much more compute performance based on large matrix support, and generally optimizing for LLMs, reducing a lot of the other circuitry that was needed for non-LLM workloads. At the time, this was in 2022, and it turned out the best way to do this would be by starting a separate company, which is MatX.</p><p><strong>So take me back, you mentioned 2022, you came out of Google, which I will say, it seems like everyone came out of Google that&#8217;s at the forefront of AI and hardware.</strong></p><p><strong>RP:</strong> It&#8217;s like the Bell Labs of the time.</p><p><strong>Yes! There&#8217;ll be a book written 10, 15 years from now that we&#8217;ll get to go back and read and it&#8217;ll be fun for us to remember the good old days.</strong></p><p><strong>But okay, back to 2022. I think it was November 30th when ChatGPT officially launched. How much ahead of that were you guys thinking about this direction? Did you launch before ChatGPT? And how did that inflection point&#8212;the general public becoming aware of transformers&#8212;how much did that change your life in terms of fundraising, vision casting, hiring?</strong></p><p><strong>RP:</strong> As it happened, we left Google one week before ChatGPT was released. We did not know it was coming. But the historical context was that GPT-3 had been released more than a year earlier in this developer demo.</p><p>It was really hard to use. You had to get in the mindset of &#8220;I am writing a document and I want the rest of this document to be the response I&#8217;m looking for.&#8221; It&#8217;s not a chat interface at all, totally different. But if you were paying a lot of attention, you could see the potential. A lot of insiders in the industry were appreciating something big is happening here.</p><p>And the question, really, pre-ChatGPT was: these models are incredible, but they&#8217;re 100 times more expensive than the models we&#8217;re used to running. 
There are 100 billion parameters instead of under a billion. Can we even afford to run them? The simple economics doesn&#8217;t work out if you&#8217;re used to running software as a service where every query is free and now you have to spend cents per query. When you&#8217;ve got millions of queries per second, it doesn&#8217;t pencil out. The big question prior to ChatGPT was: okay, cool demo, but it&#8217;s too expensive. Can you actually productize it? And there was a lot of skepticism that you actually could.</p><p>ChatGPT demonstrated that you can, and not only that, but the product is incredibly valuable. What that meant for us was we had already seen that prices are going to be high. If prices are high, how do you make them cheaper?</p><p>It turned out to be quite difficult for us to fundraise even after ChatGPT. It took about two quarters for that to really land, when the impact on Nvidia&#8217;s stock price showed up. Then there was the realization: okay, this is using a ton of GPUs, everyone is buying a ton of GPUs. Eventually Nvidia reported these gangbusters quarters, and at that point investors started seeing the potential.</p><p><strong>Okay, interesting. So you started by saying this is really transformational, but on the current hardware, it&#8217;s going to be too expensive. So there&#8217;s got to be a better hardware solution. Then ChatGPT launches a week after you guys leave, and I would expect investors to say, &#8220;I can see this is going to be productized!&#8221; But at the same time, Nvidia is the one capturing all the value and selling GPUs. So was the early skepticism just around: why will anyone buy hardware that&#8217;s not a GPU? Or did they quickly connect the dots that GPUs aren&#8217;t necessarily the most efficient?</strong></p><p><strong>RP:</strong> Some of the skepticism is definitely about why you would buy hardware that&#8217;s not a GPU. 
And then the other one is just: how do you compete with the world&#8217;s biggest company?</p><p>On the &#8220;why would you buy something that&#8217;s not a GPU&#8221; question, the big consideration is the software moat that Nvidia has. Everyone writes CUDA. Historically, we&#8217;ve seen how much software lock-in there is in so many businesses. Why is this one different? Isn&#8217;t there going to be software lock-in here? Would everyone really rewrite their software onto a different hardware platform?</p><h2>CUDA Lock-In</h2><p><strong>Is there lock-in? How are you thinking about it from a software perspective?</strong></p><p><strong>RP:</strong> At this point, I think it&#8217;s proven that the lock-in is pretty weak. Barring Google, who has been on TPUs forever, all of the other frontier labs are multi-platform. OpenAI, Anthropic, Meta, X &#8212; they are all on Nvidia, many of them are on TPUs. There are Cerebras announcements, AMD, some Broadcom-developed chips as well. All of these players are multi-platform. That is the proof already that the software lock-in is not that great.</p><p>If you want to think about the first-principles reasons why, it&#8217;s because software versus hardware lock-in is really a question of how much spend you&#8217;re putting on the hardware versus how much you&#8217;re putting on software engineering to support the hardware. This is really the first time that balance has changed, and it has violated a lot of people&#8217;s intuitions.</p><p>Historically, the whole history of software as a service is that you&#8217;re paying really large salaries to a large software engineering team, and the compute spend is a small fraction of that. &#8220;Engineering time is precious&#8221; is the mantra. Of course there you have to prioritize the ease of software.</p><p>But this is totally turned around now. All of the frontier labs are spending tens of billions of dollars on compute. 
The salaries of the people writing software for that compute are very high, but still small in comparison to the compute spend. So the rational choice is to do anything you can to get hardware costs down, be multi-platform, get the negotiating power that comes from that.</p><p><strong>I see, interesting. From first principles, it makes a lot of sense. Now that you&#8217;re going to spend so much money on hardware, how can you spend it correctly on software to unlock that? Even if it means you have a team writing kernels specifically for this architecture.</strong></p><p><strong>Fast forward from 2022 to now and we&#8217;re seeing everyone has multi-vendor silicon and it&#8217;s made your point. It&#8217;s very easy for you now. But back then, when you&#8217;re just starting and trying to raise that Series A, you clearly were trying to articulate that and hope that it came to fruition. Of your early investors, some of them must have believed. What got them to believe you in a world where it looked like Nvidia had all the GPUs and had the lock-in?</strong></p><p><strong>RP:</strong> Ultimately, all of early investing is primarily a bet on people rather than on technology. There&#8217;s a bit of both &#8212; you can have the best people in the world and have a business plan which doesn&#8217;t make any sense at all. But the premise that there is a physical product that we make that we will sell for dollars is a very easy business plan. It&#8217;s clear how you can make margins off of that.</p><p>In some sense, that&#8217;s even an easier business plan than starting a frontier lab. 
A frontier lab is like, &#8220;we&#8217;re going to make a model, we hope we can sell it in a product that hasn&#8217;t been defined yet.&#8221; With selling hardware, at least the business case is clear.</p><p>And for early seed stage investors, it&#8217;s primarily going off who we are, our backgrounds, and folks we&#8217;ve worked with who have vouched for us.</p><h2>The Chip</h2><p><strong>And of course you have the credibility of having been TPU people at Google. Tell me, actually really quick question. I don&#8217;t know if I&#8217;ve heard you say this anywhere. Explain the name MatX.</strong></p><p><strong>RP:</strong> Matrix multiply. One angle is you remove &#8220;ri&#8221; from matrix. Another one is the X is a &#8220;times.&#8221;</p><p><strong>Nice. So now take us into the first chip, the MatX One. I know you raised $100 million to get started, and then just a couple of months ago raised $500 million. We talk about [using that money to build] a chip, but I know you&#8217;re actually building a system. The goal is data center deployments. So with all of that context, tell me about the chip, but I want to get into the bigger system.</strong></p><p><strong>RP:</strong> A few of the core bets of the chip: primarily very high matrix multiply performance, higher than anyone else has announced in the market. There&#8217;s a whole story there, but in summary, the marginal returns on having more matrix multiply performance seem to be much higher than marginal returns on more HBM performance or other considerations. So you&#8217;ve got to invest in that first.</p><p>And then there&#8217;s this thing that had been like free money sitting on the table: get your memory system right. That is a combination of seeing two good ideas in the market. Nvidia, Google, Amazon have been all tensors in HBM &#8212; HBM first. Cerebras and Groq have been weights in SRAM. That gives you very low latencies, but it has some capacity problems. You can put those two together. 
It takes careful engineering and you need to balance the system right. It&#8217;s hard to balance the system right. But it is totally doable. That is the other thing we&#8217;ve done, and it gives some really big advantages in both latency and throughput.</p><p><strong>I think a lot of people are now starting to connect with that as they see the Groq LPUs and Cerebras; they see the benefit of weights in SRAM for low latency. But of course you need HBM for high throughput and KV caches. Everyone&#8217;s starting to realize that context is awesome &#8212; the more context you can give a model, the more interesting insights you get. You made the right bet. Was that an architectural bet made from day one, based on first principles?</strong></p><p><strong>RP:</strong> Yes. One of the things we&#8217;re very good at is workload mapping to hardware, and creative new ways to do that that are more optimal, especially when you consider the space of what potential hardware could be. This combination of different memory systems was a core idea going in.</p><p>One of the things it really enables &#8212; if you look through the list of parallelism and partitioning techniques: tensor parallelism, expert parallelism, pipeline parallelism. The last one is the ugly stepchild in some sense. It misses a lot of the advantages of optimizing latency and memory footprint that the other ones do. It turns out that&#8217;s actually a memory system choice. This combination of SRAM and HBM actually makes pipelining work as well as the other techniques for the first time ever. We understood that, and that was what we were going after.</p><p><strong>So back in 2022 when you&#8217;re making these early architectural decisions&#8212;the big systolic array, the right memory choice&#8212;you&#8217;re also thinking about mixture of experts and how different parallelism strategies require tuning those memory choices correctly. 
That&#8217;s IP and a differentiator for you versus someone who just says, &#8220;oh, weights in SRAM and HBM, let me go do the same thing.&#8221;</strong></p><p><strong>But reflecting back to 2022, I&#8217;m not sure mixture of experts was even out yet. So how much are you reading papers as stuff was happening in &#8216;22, &#8216;23, &#8216;24 and saying, do we need to tweak the architecture?</strong></p><p><strong>RP:</strong> We&#8217;ve been reading papers since 2017. I think the big and disappointing inflection point in 2022 was when Google stopped publishing. We were talking about how Google is where all the researchers came from. They had an incredible team in Google Brain and they were publishing everything, all of the good work they did. Very vibrant place to be. They stopped doing that in 2022 because of seeing the competitive market playing out. You could get all of the trend lines of where the best models are going until then, and then that stopped. DeepSeek publishing has been a pretty good reboot of that, but it&#8217;s sad that the volume has not been so large.</p><h2>Research and Publishing</h2><p><strong>Totally. I will admit I haven&#8217;t read all of your papers on your website, but I see that you guys do some publishing still. How are you thinking about that fine line of what to publish and what not to? Because for talent, it is exciting to get to publish to the world and share what you&#8217;re thinking about.</strong></p><p><strong>RP:</strong> The ability to publish neural net papers is a differentiator for us in terms of hiring. We have two different areas of neural net research. We&#8217;re a small company, especially our ML team is very small because that is part of what we do, but it is not the main thing we do. We&#8217;re not selling ML, we&#8217;re selling GEMMs.</p><p>But the agenda of our ML team is twofold. First is attention research, specifically focusing on memory bandwidth efficient attention. 
That is quite aligned to where we see the future of hardware being. The second is numerics. Numerics has been the single best improvement in chip performance over the last decade. I think we have some of the best numerics talent and IP here.</p><p>In terms of what we publish: we don&#8217;t currently publish the numerics that goes into our chip. We will probably publish it on a one or two year delay after releasing the chip. But we do publish all of the attention research, because what we&#8217;re doing there is advocacy. We&#8217;re saying: hey, model designers, you should probably have these considerations in mind, especially when you think of future hardware that&#8217;s going to have a ton of flops but is going to be somewhat more memory bandwidth constrained.</p><h2>Product Positioning</h2><p><strong>So you&#8217;re making hardware to sell at the end of the day, but you have ML researchers working on attention, memory bandwidth-efficient attention, and numerics. That informs your own architecture &#8212; extreme co-design. But you&#8217;re also trying to show model labs&#8212;the end customer&#8212;what&#8217;s possible. If they adopt your chips, how much will that change how they think about training or inference?</strong></p><p><strong>RP:</strong> We&#8217;re trying to not go too far outside of the comfort zone. If you want product-market fit, you have to mostly meet the customer where they are.</p><p>The way to quantify that: you can look at the chip specs and there are maybe five most important ones &#8212; HBM bandwidth and capacity, matrix multiply throughput, SRAM bandwidth and capacity, interconnect performance. Our attitude is we want to be at least on par with the best competition like Nvidia on all of these, and then substantially ahead on at least a few. 
The &#8220;substantially ahead&#8221; for us is obviously matrix multiply performance, along with interconnect performance and SRAM.</p><p>There is no place where we are substantially behind in these big considerations. Maybe in some less LLM-relevant considerations we&#8217;re behind, but in these big five, we&#8217;re at least on par everywhere. That means the opportunity cost of switching to MatX is never too large.</p><p>But then there&#8217;s the headroom you can get &#8212; if you want to maximize the benefit, you can tune your model. That means things like changing the balance between the MLP layer and the attention, more MLP and less attention, or using some of our lower-precision arithmetic. We have a range of precisions to get the biggest advantages out.</p><p><strong>Gotcha. So you make sure that in these five most important areas, none of them is so weak that it would prevent a customer from switching. You&#8217;ll be there on every front. But then if customers take a step further and optimize for your chips, they&#8217;ll have more headroom, they can do more.</strong></p><p><strong>RP:</strong> Yeah, that&#8217;s it.</p><h2>Customers and Workloads</h2><p><strong>Let me segue into who those customers are in broad strokes, the target customers for this chip system.</strong></p><p><strong>RP:</strong> The most interest has been from frontier labs, which is as expected. That is who we are designing for, and why they&#8217;re most interested is that their spend is biggest. That also means the economic case for tolerating a new software stack is biggest there too.</p><p>They also have this longer-term vision of three to five years out, which is where you need to be when you&#8217;re buying custom hardware. 
If you want to do really good co-design with your hardware provider, you need to be thinking on that time scale rather than just &#8220;I&#8217;ll buy what&#8217;s on the shelf today.&#8221; That&#8217;s where we&#8217;ve seen strong interest.</p><p>And this has shown up across all of the workloads &#8212; training, reinforcement learning, and inference both prefill and decode.</p><p><strong>Nice, okay, let&#8217;s talk about those workloads.</strong></p><p><strong>Let me reflect it back. Your customers are going to be the frontier labs. They have the most compute spend, they are the most incentivized to squeeze as much intelligence as they can out of that. They&#8217;re thinking three to five years ahead. They are incentivized to not only work with all their current partners, but to always be listening and see what else is out there.</strong></p><p><strong>The market is telling us the defining workload of our time is LLM inference. You can optimize around the transformer, around splitting it into prefill and decode. We see that with Nvidia and with Dynamo. Everyone&#8217;s getting used to that concept.</strong></p><p><strong>The market narrative has gone from GPUs for everything to actually at the rack scale, maybe it makes sense to have some SKUs that run prefill and some that run decode. This is their way of saying those sub-workloads have different constraints &#8212; if it&#8217;s memory bound, have the right hardware versus compute bound. But I know you had a great podcast that everyone should go listen to with John Collison and Cheeky Pint. You talked there about being competitive on all those workloads &#8212; training, prefill, decode, RL. And it kind of felt like going back to the days of a GPU can do everything. 
So how are you talking with these partners about their different workloads, and how do you not feel like a salesman just saying &#8220;yeah, we can do that, we can do that, we can do that&#8221;?</strong></p><p><strong>RP:</strong> We just have to be honest about what the strengths and weaknesses are. Let&#8217;s give that a shot here. Our product has a really large amount of compute. Traditionally, training and inference prefill are the compute-intensive workloads, and decode is memory bandwidth-intensive. So you might think: MatX has a lot of compute, why would we use that on a memory bandwidth-intensive workload like decode?</p><p>That&#8217;s where the joint hybrid SRAM-HBM design really shines. You spend none of your HBM bandwidth on loading weights. All of that bandwidth is spent entirely on KV cache. So you can get better use out of your HBM bandwidth than you can with Nvidia. But you also get the very low latency because the weights are stored in SRAM, like Cerebras and Groq.</p><p>Digging into that further: low latency means small batch sizes &#8212; that&#8217;s just Little&#8217;s law. The number of things in flight is smaller. The memory occupancy in HBM is proportional to batch size. So you can actually fit longer contexts in HBM than you could if the latency were larger. Low latency is not just a usability win; it actually improves your throughput as well.</p><p>This is similar to what Nvidia is now doing with the Groq and Nvidia racks side by side, but there are some taxes you pay by them being in different packages. Putting the whole thing in one package is the first-principles way to do that and gives you the most advantages.</p><p><strong>Sure, that makes sense. You have a lot of compute. You make the right memory choices. Therefore you can do low latency and high throughput. And there are even benefits in the small batch size, low latency with respect to how the HBM is used. 
You talked about how Nvidia has essentially separate racks, the Groq rack in there, say Vera Rubin. You&#8217;re making one chip with benefits to both types of workload. How are you thinking about rack scale, interconnect, scale up, scale out?</strong></p><p><strong>RP:</strong> We have a lot of interconnect in the product. I think it is the most of any announced product, in fact. The reason: so you can support mixture of expert models with fairly small experts without becoming communication limited. Very sparse mixture of expert models are what primarily drive the interconnect requirements.</p><p>We deploy very large scale-up domains as well as supporting scale-out. The sizing of your scale-up domain is really driven by the sparsity and the kind of mixture of expert layers you want to support. You want to do the mixture of expert routing within your scale-up domain as much as possible &#8212; that is how everyone does it. Bigger scale-up domains allow bigger mixture of expert layers.</p><p>On topology, we do some interesting things with network topology. I won&#8217;t go into huge specifics, but contrasting what is in the market: Nvidia has done things like running everything through the NV switches. Google has these torus topologies. If you think about what you really want for mixture of expert layers, you can design something very custom for that.</p><p><strong>I see, nice. That again aligns with the idea of designing not just the chip but the whole system for the specific workload, even down to network topology. That makes a lot of sense.</strong></p><h2>The Team</h2><p><strong>So how many people, even hand-wavy, do you have at MatX? We&#8217;re talking about networking, ML, hardware. Probably you have to think about cooling and operations and all sorts of stuff because it&#8217;s really data center design. Tell me more about the company. 
It must be very cross-functional &#8212; what&#8217;s it like there?</strong></p><p><strong>RP:</strong> For a product like this, it&#8217;s a relatively small team. It&#8217;s over 100 people. But some of these projects &#8212; Nvidia has 10,000 or 20,000 people.</p><p>Most of the team is hardware, which includes the core chip itself, the logic design, design verification, physical design, and so on. We designed the rack in conjunction with a partner as well. So we have folks looking at what is the insertion force of a board into a rack, cable density, power delivery, thermals. That&#8217;s going down the stack.</p><p>Going up the stack, we have a really strong software team writing the software stack that runs LLMs on our chip. And then we have the ML team doing exactly the research agenda I described. Very cross-disciplinary. I think it&#8217;s a super fun place to work because in one day you&#8217;ll have a conversation about physical insertion forces and, at the same time, functional programming or SAT solvers for compilers.</p><h2>Agentic AI</h2><p><strong>Nice, sounds fun. So I&#8217;m thinking about your interdisciplinary team, everything you&#8217;re trying to build in your first system. And at the same time, the world is constantly changing. We&#8217;ve got agentic AI, Claude Code, OpenAI Codex, maybe an explosion of inference tokens needed. Opus is awesome but expensive. I can&#8217;t use my Max subscription for Claude Code. All of a sudden Mythos has come out. And I&#8217;m wondering, as a chip designer with ML researchers, how are you staying on top of all this? Are things changing that make you think in the next version of our chip we should do things differently? Or are you seeing it play out and feeling pretty confident, like, we can help this problem of awesome but expensive inference?</strong></p><p><strong>RP:</strong> Halfway through your question, I was like, is this going to be about how do we use agentic AI versus how do we serve it? 
Both are interesting.</p><p>How do we serve it: there is this ongoing trend where you see the incredibly fast pace of change in models, how people are using them, how they&#8217;re training them. But when you filter that through the lens of what does that mean for the hardware, it&#8217;s almost all noise &#8212; 95% is noise. The rate of change for what you need in hardware is much, much slower.</p><p>As that applies to agentic AI: what is it doing? It&#8217;s still doing decode. It&#8217;s still doing prefill and decode. Some things that are different: it has increased the demand. When the agent goes off and thinks for a long time and the user is sitting there waiting, you would like them to wait for 30 seconds instead of five minutes. So the demand for performance has gotten higher, but that&#8217;s within expectations. Demand is always going to get higher. That&#8217;s a great place to be.</p><p>One place where it&#8217;s actually a difference is sizing. Sizing exercises are what we do every day. One example: how long does the model sit idle while it&#8217;s waiting for a response from an outside system?</p><p>In a chatbot context, the model has responded to you, and then you as a human are thinking, maybe you&#8217;re going to type another message, maybe you never do, maybe you leave. That&#8217;s on the order of 30 seconds or a minute. The context for the model has to be kept in memory somewhere during that time, and you have to size that memory.</p><p>That has changed meaningfully in an agentic context where now the model is mostly waiting for tool calls &#8212; run a compiler, do a web search, check your email. The times for those are very different. Checking your email can run in seconds rather than waiting for a human to think. So the memories in service of that end up being smaller.</p><p>But then there are things like long-running jobs &#8212; running a compiler or running a place and route tool, which can take hours. 
I think that&#8217;s actually the biggest place it&#8217;s turned up: there is now increasing demand for storage systems for when the KV cache isn&#8217;t actively being used but is waiting for a response from an outside agent.</p><p><strong>Yeah, interesting. So tell me, how are you guys using agentic AI?</strong></p><p><strong>RP:</strong> Most of chip design is actually software development in practice. The way you express a chip is you write Verilog, which is a programming language. It&#8217;s an unusual programming language because it&#8217;s massively parallel, but it is a programming language. Can you write that better with AI?</p><p>One of the things we look at: the places where AIs are most effective are those where there is a well-defined objective function. Does this compile? Is the area good? Is the power good? How many tests does it pass? We look at our processes and say, can we do development in a way that puts it in that regime, which is really the sweet spot for AI development.</p><p>The other thing we do: in addition to Verilog, we use other languages. There are popular ones like Rust and Python, but also some less popular ones &#8212; in our case we really like using BlueSpec. It&#8217;s a hardware description language that comes from functional programming. We are looking into how we can make sure AI is really good at BlueSpec even though it&#8217;s a niche language.</p><p><strong>Cool, interesting. I&#8217;ve never heard of it. Is that something you think about as a competitive advantage, or just generally wanting to make AI models better at BlueSpec and share this with the world?</strong></p><p><strong>RP:</strong> There are so few BlueSpec programmers in the world that we just want a bigger pool of them, and then it becomes a competitive advantage.</p><h2>Go to Market</h2><p><strong>I love that. Okay, since you&#8217;re the CEO, I&#8217;m going to go back to talking about customers, route to market. 
On the one hand, it&#8217;s kind of nice because maybe there&#8217;s only five or six customers that would be great, so any one of these would be a great anchor customer. On the other hand, probably everyone in this space is wanting to talk with them and work with them. What does it look like to say, we&#8217;re a startup, trust us, we&#8217;re building this thing, it&#8217;s going to be awesome? How do you have those conversations to address their concerns, and ultimately, how will they end up buying your first chip or your roadmap of chips?</strong></p><p><strong>RP:</strong> &#8220;Trust us&#8221; goes as far as your word goes, right? Not very far. So you need to prove it.</p><p>For us, proof means a lot of detail on the artifacts we have. What is the core architecture? What are the very specific details inside the chip? How do we organize the chip &#8212; we talked about this splittable systolic array, these are the different compute units inside the chip, how do they connect to each other? What is the instruction set? What is the software SDK?</p><p>We give all of this information to customers under NDA. It is a lot, and it is uncomfortable for us to give that information, but it goes a long way towards proving credibility.</p><p><strong>Yeah, that makes a lot of sense. As far as the software, what is the level of effort they&#8217;ll have to commit to when they say, here&#8217;s yet another vendor, we&#8217;re excited about everything they told us, we believe them, but there&#8217;s probably still some effort to port?</strong></p><p><strong>RP:</strong> For sure. If you look at the sizes of teams supporting each of these multiple platforms, it&#8217;s on the order of 50 to 100 people per platform. Really good people doing kernel development, maybe building compilers, building debugging tools. 
I think that&#8217;s the ballpark of what folks should expect on our platform as well.</p><p>We want to help, and we&#8217;ll do as much as we can to do that work for you rather than you needing to start it all yourself. But ultimately a frontier lab wants to protect its own IP, especially the model architecture. The last mile of kernel development is always going to remain in the frontier lab so they know specifically what they&#8217;re doing rather than giving it to us.</p><p>The first miles &#8212; giving a strong compiler and debugging infrastructure &#8212; are something we can actually do for you, though.</p><p><strong>One or two last questions. What is the biggest skepticism that you hear from people?</strong></p><p><strong>RP:</strong> One of the things we&#8217;re focusing on over the next few years is: how can we, as a relatively new startup, manufacture in massive volume?</p><p>It&#8217;s a really exciting opportunity. The projections for data centers over the next few years are in the many gigawatts, tens of gigawatts. I don&#8217;t know when we&#8217;re going to hit a hundred gigawatts. Nvidia chips sell for about $15 or $20 billion a gigawatt. You might multiply that by 10 or 100. It&#8217;s a really large commitment.</p><p>The opportunity is really large, but being able to get very quickly to selling such a large volume is also a substantial challenge, and some big parts of that are ahead of us. I think that&#8217;s a really exciting thing for us to do over the next year and a half.</p><p><strong>Yeah, that&#8217;s a good point. It&#8217;s not just about building the system, it&#8217;s about whether you can scale it, production ramp it, and get to huge deployments that people are comfortable with, that work, that are reliable. Okay, last question. Give me a hiring plug. You&#8217;re 100-some people, it&#8217;s very interdisciplinary. 
Why should people come work with you?</strong></p><p><strong>RP:</strong> Ultimately you have to believe in the product vision, and I think we just have the best product in the market. It&#8217;s designed from first principles for what LLMs really need, keeping in mind years of know-how and techniques of what is the right way to map applications to hardware. That&#8217;s the company vision. But the way we operate, it&#8217;s a very friendly and high-trust team with a ton of incredibly smart people. I think that&#8217;s the day to day of why it&#8217;s a really exciting place to be.</p><p><strong>Sure, A-plus people enjoy working with A-plus people. Awesome, Reiner, this was great. I learned a lot. Thank you for the time. I&#8217;ll be fascinated to check in over time and see how things are going with you.</strong></p><p><strong>RP:</strong> Yeah, thanks Austin, it was really fun talking.</p>]]></content:encoded></item><item><title><![CDATA[The Agentic Computer: New S-Curve or Another iPad? ]]></title><description><![CDATA[The next device category is here. But will the economics work?]]></description><link>https://www.chipstrat.com/p/the-agentic-computer-new-s-curve</link><guid isPermaLink="false">https://www.chipstrat.com/p/the-agentic-computer-new-s-curve</guid><dc:creator><![CDATA[Austin Lyons]]></dc:creator><pubDate>Tue, 07 Apr 2026 17:26:27 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!kW6d!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10dfac48-d13a-42bf-b56d-97d43f2183ac_1698x1154.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The client computing industry has been chasing the next big form factor for a long time. The PC and the smartphone were massive markets, but both have scaled their S-curves. The tablet was supposed to be next but never achieved escape velocity. 
<em>Smartwatches, same story.</em></p><p>One might be arriving, but it&#8217;s not what anyone expected. In 2017, Benedict Evans predicted <a href="https://www.ben-evans.com/benedictevans/2017/3/22/the-end-of-smartphone-innovation">augmented reality would be next</a>. Quite reasonable. But three months later, the transformer paper dropped. It took a while for that to change the world, but nine years on, the new device <em>isn&#8217;t</em> something even more mobile than a phone. It&#8217;s a box on your desk for your AI agents to live on: </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!kW6d!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10dfac48-d13a-42bf-b56d-97d43f2183ac_1698x1154.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!kW6d!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10dfac48-d13a-42bf-b56d-97d43f2183ac_1698x1154.png 424w, https://substackcdn.com/image/fetch/$s_!kW6d!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10dfac48-d13a-42bf-b56d-97d43f2183ac_1698x1154.png 848w, https://substackcdn.com/image/fetch/$s_!kW6d!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10dfac48-d13a-42bf-b56d-97d43f2183ac_1698x1154.png 1272w, https://substackcdn.com/image/fetch/$s_!kW6d!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10dfac48-d13a-42bf-b56d-97d43f2183ac_1698x1154.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!kW6d!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10dfac48-d13a-42bf-b56d-97d43f2183ac_1698x1154.png" width="1456" height="990" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/10dfac48-d13a-42bf-b56d-97d43f2183ac_1698x1154.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:990,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1780592,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.chipstrat.com/i/193485602?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10dfac48-d13a-42bf-b56d-97d43f2183ac_1698x1154.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!kW6d!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10dfac48-d13a-42bf-b56d-97d43f2183ac_1698x1154.png 424w, https://substackcdn.com/image/fetch/$s_!kW6d!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10dfac48-d13a-42bf-b56d-97d43f2183ac_1698x1154.png 848w, https://substackcdn.com/image/fetch/$s_!kW6d!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10dfac48-d13a-42bf-b56d-97d43f2183ac_1698x1154.png 1272w, https://substackcdn.com/image/fetch/$s_!kW6d!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10dfac48-d13a-42bf-b56d-97d43f2183ac_1698x1154.png 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption"><a href="https://www.youtube.com/live/0NBILspM4c4?si=GKKixfZ_r9GGhb9N&amp;t=1310">Source</a></figcaption></figure></div><p>Nvidia sells the <a href="https://www.nvidia.com/en-us/products/workstations/dgx-spark/">DGX Spark</a> as a &#8220;personal AI supercomputer.&#8221; AMD calls it an <a href="https://www.amd.com/en/products/processors/consumer/agent-computers.html#agent-computers">Agent Computer</a>. 
Perplexity is shipping Mac Minis as a <a href="https://www.perplexity.ai/hub/blog/everything-is-computer">Personal Computer</a> service.</p><p><strong>Is this the beginning of a new S-curve? Or is it another iPad?</strong> If the agentic computer takes hold, it&#8217;s additive TAM. It carries meaningful ASP because it needs serious memory and compute. <em>Does every knowledge worker&#8217;s desk eventually have two computers on it?</em>            </p><p>But there are headwinds. Recently, Anthropic banned always-on AI agents from using its Claude subscription plans:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!kOfe!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7356f446-aa88-4ae0-aece-6b3f5bb19199_1194x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!kOfe!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7356f446-aa88-4ae0-aece-6b3f5bb19199_1194x768.png 424w, https://substackcdn.com/image/fetch/$s_!kOfe!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7356f446-aa88-4ae0-aece-6b3f5bb19199_1194x768.png 848w, https://substackcdn.com/image/fetch/$s_!kOfe!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7356f446-aa88-4ae0-aece-6b3f5bb19199_1194x768.png 1272w, https://substackcdn.com/image/fetch/$s_!kOfe!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7356f446-aa88-4ae0-aece-6b3f5bb19199_1194x768.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!kOfe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7356f446-aa88-4ae0-aece-6b3f5bb19199_1194x768.png" width="568" height="365.3467336683417" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7356f446-aa88-4ae0-aece-6b3f5bb19199_1194x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1194,&quot;resizeWidth&quot;:568,&quot;bytes&quot;:212724,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.chipstrat.com/i/193485602?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7356f446-aa88-4ae0-aece-6b3f5bb19199_1194x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!kOfe!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7356f446-aa88-4ae0-aece-6b3f5bb19199_1194x768.png 424w, https://substackcdn.com/image/fetch/$s_!kOfe!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7356f446-aa88-4ae0-aece-6b3f5bb19199_1194x768.png 848w, https://substackcdn.com/image/fetch/$s_!kOfe!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7356f446-aa88-4ae0-aece-6b3f5bb19199_1194x768.png 1272w, https://substackcdn.com/image/fetch/$s_!kOfe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7356f446-aa88-4ae0-aece-6b3f5bb19199_1194x768.png 1456w" 
sizes="100vw"></picture><div></div></div></a><figcaption class="image-caption"><a href="https://x.com/bcherny/status/2040206441756471399">Source</a></figcaption></figure></div><p>That means running OpenClaw just got a lot more expensive. And that&#8217;s not the only headwind.</p>
      <p>
          <a href="https://www.chipstrat.com/p/the-agentic-computer-new-s-curve">
              Read more
          </a>
      </p>
]]></content:encoded></item><item><title><![CDATA[Coherent's Vertical Integration Strategy]]></title><description><![CDATA[Coherent makes more of the optical stack in-house than any competitor. We walk through the business, the growth vectors, and whether breadth beats depth.]]></description><link>https://www.chipstrat.com/p/coherents-vertical-integration-strategy</link><guid isPermaLink="false">https://www.chipstrat.com/p/coherents-vertical-integration-strategy</guid><dc:creator><![CDATA[Austin Lyons]]></dc:creator><pubDate>Wed, 01 Apr 2026 20:29:55 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!TytX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c1d7e53-ff6b-4edf-ae05-7eeeb24d7469_4001x2250.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Quick hits:</p><ul><li><p>Coherent makes its own EMLs, VCSELs, silicon photonics, and finished transceivers. <em>It&#8217;s hard to find another public company that touches this many layers of the optical stack.</em></p></li><li><p>Six-inch InP is ramping across four fabs with yields that management says exceed those of its 3-inch process. <em>The performance comparison against Lumentum is still playing out.</em></p></li><li><p>Five growth vectors stacking: transceivers, OCS, DCI, CPO, thermal. <em>Management says CY2026 is mostly booked and CY2027 is filling fast.</em></p></li><li><p>Breadth vs. depth is the real question. <em>Does a hyperscaler want one partner for everything, or best-in-class at every layer?</em></p></li></ul><div><hr></div><p>We&#8217;ve covered <a href="https://www.chipstrat.com/p/lumentum-and-the-laser-bottleneck">Lumentum&#8217;s</a> and <a href="https://www.chipstrat.com/p/broadcom-makes-lasers">Broadcom&#8217;s</a> AI optical infra businesses so far. Lumentum is a shooting star thanks to its laser performance plus tailwinds of industrywide supply scarcity.
Broadcom dominates the datacenter networking silicon (Tomahawk switches, 1.6T DSPs) and is pushing direct-attach copper in scale-up for as long as physics allows, even as it builds CPO technology (lasers included) for optical scale-up.</p><p>Time for Coherent.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pb_W!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf84b937-a46c-4eb1-bd8d-9712f7da8618_1514x318.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pb_W!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf84b937-a46c-4eb1-bd8d-9712f7da8618_1514x318.png 424w, https://substackcdn.com/image/fetch/$s_!pb_W!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf84b937-a46c-4eb1-bd8d-9712f7da8618_1514x318.png 848w, https://substackcdn.com/image/fetch/$s_!pb_W!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf84b937-a46c-4eb1-bd8d-9712f7da8618_1514x318.png 1272w, https://substackcdn.com/image/fetch/$s_!pb_W!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf84b937-a46c-4eb1-bd8d-9712f7da8618_1514x318.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pb_W!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf84b937-a46c-4eb1-bd8d-9712f7da8618_1514x318.png" width="428" height="89.95054945054945" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/af84b937-a46c-4eb1-bd8d-9712f7da8618_1514x318.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:306,&quot;width&quot;:1456,&quot;resizeWidth&quot;:428,&quot;bytes&quot;:30124,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.chipstrat.com/i/192882627?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf84b937-a46c-4eb1-bd8d-9712f7da8618_1514x318.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!pb_W!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf84b937-a46c-4eb1-bd8d-9712f7da8618_1514x318.png 424w, https://substackcdn.com/image/fetch/$s_!pb_W!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf84b937-a46c-4eb1-bd8d-9712f7da8618_1514x318.png 848w, https://substackcdn.com/image/fetch/$s_!pb_W!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf84b937-a46c-4eb1-bd8d-9712f7da8618_1514x318.png 1272w, https://substackcdn.com/image/fetch/$s_!pb_W!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faf84b937-a46c-4eb1-bd8d-9712f7da8618_1514x318.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p>Coherent&#8217;s angle is <strong>vertical integration</strong> across the photonics value chain. 
The company designs and manufactures InP-based EMLs and CW lasers, VCSELs, silicon photonics, detectors, and finished transceiver modules in-house. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!gZeg!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fe9ab12-fb20-4f17-9187-122d9ffc0dd5_4001x2250.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!gZeg!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fe9ab12-fb20-4f17-9187-122d9ffc0dd5_4001x2250.png 424w, https://substackcdn.com/image/fetch/$s_!gZeg!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fe9ab12-fb20-4f17-9187-122d9ffc0dd5_4001x2250.png 848w, https://substackcdn.com/image/fetch/$s_!gZeg!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fe9ab12-fb20-4f17-9187-122d9ffc0dd5_4001x2250.png 1272w, https://substackcdn.com/image/fetch/$s_!gZeg!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fe9ab12-fb20-4f17-9187-122d9ffc0dd5_4001x2250.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!gZeg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fe9ab12-fb20-4f17-9187-122d9ffc0dd5_4001x2250.png" width="1456" height="819" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4fe9ab12-fb20-4f17-9187-122d9ffc0dd5_4001x2250.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1264407,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.chipstrat.com/i/192882627?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fe9ab12-fb20-4f17-9187-122d9ffc0dd5_4001x2250.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!gZeg!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fe9ab12-fb20-4f17-9187-122d9ffc0dd5_4001x2250.png 424w, https://substackcdn.com/image/fetch/$s_!gZeg!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fe9ab12-fb20-4f17-9187-122d9ffc0dd5_4001x2250.png 848w, https://substackcdn.com/image/fetch/$s_!gZeg!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fe9ab12-fb20-4f17-9187-122d9ffc0dd5_4001x2250.png 1272w, https://substackcdn.com/image/fetch/$s_!gZeg!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4fe9ab12-fb20-4f17-9187-122d9ffc0dd5_4001x2250.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Coherent&#8217;s internal capability matrix for pluggable transceivers and CPO. Every checkmark is a component the company designs and manufactures  in-house. Notable absence: DSPs, which Coherent outsources. <em>Coherent OFC 2026 Technology Innovation Briefing.</em>         </figcaption></figure></div><p>That puts it in competition with Lumentum, Broadcom, and Sumitomo at the component layer, and with module vendors like InnoLight and Eoptolink at the transceiver level. 
The stock has run from $45 to $250 in fifteen months and still trades at a lower forward multiple than Lumentum.</p><p>In August 2025, Coherent began production on what management calls the world&#8217;s first 6-inch indium phosphide production platform in Sherman, Texas and Jarfalla, Sweden:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Wb4Q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4143780-c564-4a91-a414-73e191ebeff8_4001x2250.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Wb4Q!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4143780-c564-4a91-a414-73e191ebeff8_4001x2250.png 424w, https://substackcdn.com/image/fetch/$s_!Wb4Q!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4143780-c564-4a91-a414-73e191ebeff8_4001x2250.png 848w, https://substackcdn.com/image/fetch/$s_!Wb4Q!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4143780-c564-4a91-a414-73e191ebeff8_4001x2250.png 1272w, https://substackcdn.com/image/fetch/$s_!Wb4Q!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4143780-c564-4a91-a414-73e191ebeff8_4001x2250.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Wb4Q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4143780-c564-4a91-a414-73e191ebeff8_4001x2250.png" width="1456" height="819" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c4143780-c564-4a91-a414-73e191ebeff8_4001x2250.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2238086,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.chipstrat.com/i/192882627?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4143780-c564-4a91-a414-73e191ebeff8_4001x2250.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Wb4Q!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4143780-c564-4a91-a414-73e191ebeff8_4001x2250.png 424w, https://substackcdn.com/image/fetch/$s_!Wb4Q!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4143780-c564-4a91-a414-73e191ebeff8_4001x2250.png 848w, https://substackcdn.com/image/fetch/$s_!Wb4Q!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4143780-c564-4a91-a414-73e191ebeff8_4001x2250.png 1272w, https://substackcdn.com/image/fetch/$s_!Wb4Q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4143780-c564-4a91-a414-73e191ebeff8_4001x2250.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Now also ramping up Zurich with a 6&#8221; InP line</figcaption></figure></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xxTI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3b59650-cda5-435c-8ddf-30ef3dd5e341_4001x2250.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!xxTI!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3b59650-cda5-435c-8ddf-30ef3dd5e341_4001x2250.png 424w, 
https://substackcdn.com/image/fetch/$s_!xxTI!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3b59650-cda5-435c-8ddf-30ef3dd5e341_4001x2250.png 848w, https://substackcdn.com/image/fetch/$s_!xxTI!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3b59650-cda5-435c-8ddf-30ef3dd5e341_4001x2250.png 1272w, https://substackcdn.com/image/fetch/$s_!xxTI!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3b59650-cda5-435c-8ddf-30ef3dd5e341_4001x2250.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!xxTI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3b59650-cda5-435c-8ddf-30ef3dd5e341_4001x2250.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c3b59650-cda5-435c-8ddf-30ef3dd5e341_4001x2250.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:278695,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.chipstrat.com/i/192882627?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3b59650-cda5-435c-8ddf-30ef3dd5e341_4001x2250.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!xxTI!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3b59650-cda5-435c-8ddf-30ef3dd5e341_4001x2250.png 424w, https://substackcdn.com/image/fetch/$s_!xxTI!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3b59650-cda5-435c-8ddf-30ef3dd5e341_4001x2250.png 848w, https://substackcdn.com/image/fetch/$s_!xxTI!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3b59650-cda5-435c-8ddf-30ef3dd5e341_4001x2250.png 1272w, https://substackcdn.com/image/fetch/$s_!xxTI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3b59650-cda5-435c-8ddf-30ef3dd5e341_4001x2250.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a><figcaption class="image-caption">Increasing supply significantly over 24 months</figcaption></figure></div><p>If yields hold, that should materially improve Coherent&#8217;s cost structure and add meaningful InP supply to an industry that is currently constrained. How that affects pricing dynamics across the laser supply chain is one of the key tensions we&#8217;ll explore below.</p><p>Coherent&#8217;s vertical integration means it can supply components, modules, or systems across virtually every optical architecture a hyperscaler might adopt, from pluggable transceivers today to co-packaged optics and optical circuit switches tomorrow:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TytX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c1d7e53-ff6b-4edf-ae05-7eeeb24d7469_4001x2250.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TytX!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c1d7e53-ff6b-4edf-ae05-7eeeb24d7469_4001x2250.png 424w, https://substackcdn.com/image/fetch/$s_!TytX!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c1d7e53-ff6b-4edf-ae05-7eeeb24d7469_4001x2250.png 848w, 
https://substackcdn.com/image/fetch/$s_!TytX!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c1d7e53-ff6b-4edf-ae05-7eeeb24d7469_4001x2250.png 1272w, https://substackcdn.com/image/fetch/$s_!TytX!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c1d7e53-ff6b-4edf-ae05-7eeeb24d7469_4001x2250.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TytX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c1d7e53-ff6b-4edf-ae05-7eeeb24d7469_4001x2250.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1c1d7e53-ff6b-4edf-ae05-7eeeb24d7469_4001x2250.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1283819,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.chipstrat.com/i/192882627?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c1d7e53-ff6b-4edf-ae05-7eeeb24d7469_4001x2250.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!TytX!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c1d7e53-ff6b-4edf-ae05-7eeeb24d7469_4001x2250.png 424w, 
https://substackcdn.com/image/fetch/$s_!TytX!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c1d7e53-ff6b-4edf-ae05-7eeeb24d7469_4001x2250.png 848w, https://substackcdn.com/image/fetch/$s_!TytX!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c1d7e53-ff6b-4edf-ae05-7eeeb24d7469_4001x2250.png 1272w, https://substackcdn.com/image/fetch/$s_!TytX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c1d7e53-ff6b-4edf-ae05-7eeeb24d7469_4001x2250.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>The bull case is that this flexibility becomes increasingly valuable as datacenter optical needs diversify. The bear case is that hyperscalers prefer to unbundle and multi-source each layer, buying best-in-class lasers from Lumentum, DSPs from Broadcom, and modules from whoever is cheapest.</p><p>Let&#8217;s walk through the business, the growth vectors, and the tensions that matter.</p><p><em>NFA, DYDD.</em></p><h2><strong>Coherent Corp</strong></h2><p>Coherent&#8217;s backstory begins with <strong>II-VI Incorporated</strong>, founded in 1971 and named after the II-VI compound semiconductor groups on the periodic table. <em>Naming is hard.</em></p><p>II-VI&#8217;s original business focused on supplying engineered semiconductor materials and optical substrates that form the foundation of lasers and other photonic devices. Thus, the company operated at the lowest layer of the value chain, upstream of components and with limited exposure to finished products.</p><p>There have been many acquisitions along the way:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bcRB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F521d008a-5168-440a-8fc6-2f0bb7b20429_2288x1242.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bcRB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F521d008a-5168-440a-8fc6-2f0bb7b20429_2288x1242.png 424w, 
https://substackcdn.com/image/fetch/$s_!bcRB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F521d008a-5168-440a-8fc6-2f0bb7b20429_2288x1242.png 848w, https://substackcdn.com/image/fetch/$s_!bcRB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F521d008a-5168-440a-8fc6-2f0bb7b20429_2288x1242.png 1272w, https://substackcdn.com/image/fetch/$s_!bcRB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F521d008a-5168-440a-8fc6-2f0bb7b20429_2288x1242.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bcRB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F521d008a-5168-440a-8fc6-2f0bb7b20429_2288x1242.png" width="1456" height="790" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/521d008a-5168-440a-8fc6-2f0bb7b20429_2288x1242.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:790,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:891864,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.chipstrat.com/i/192882627?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F521d008a-5168-440a-8fc6-2f0bb7b20429_2288x1242.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!bcRB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F521d008a-5168-440a-8fc6-2f0bb7b20429_2288x1242.png 424w, https://substackcdn.com/image/fetch/$s_!bcRB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F521d008a-5168-440a-8fc6-2f0bb7b20429_2288x1242.png 848w, https://substackcdn.com/image/fetch/$s_!bcRB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F521d008a-5168-440a-8fc6-2f0bb7b20429_2288x1242.png 1272w, https://substackcdn.com/image/fetch/$s_!bcRB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F521d008a-5168-440a-8fc6-2f0bb7b20429_2288x1242.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>A few notable ones.</p><p><strong>Finisar</strong> was acquired in 2019 for ~$3.2B. It was a leading manufacturer of optical transceiver modules at the time and brought significant VCSEL capacity for 3D sensing (for Face ID). The deal extended II-VI from raw materials into finished modules and added scale in hyperscale networking.</p><p><strong>Coherent Inc.</strong> was acquired in July 2022 for ~$6.6B. Coherent was a storied laser systems company, founded in 1966, that built the first commercial CO2 laser and grew into a global leader in industrial laser systems, especially after acquiring Rofin-Sinar in 2016. It brought complete laser systems for cutting, welding, semiconductor lithography annealing, and display manufacturing, moving II-VI from components into full systems.</p><p>So II-VI started as a materials maker and eventually expanded into photonic products and laser systems. The parent company was rebranded from II-VI to Coherent Corp. in September 2022.</p><p>The result is a company with two distinct lines of business. ~72% of revenue comes from <strong>Datacenter &amp; Communications (DC&amp;C)</strong>, the high-growth segment riding the AI optical buildout. The remaining 28% is <strong>Industrial</strong>: lasers for semiconductor equipment makers like ASML and Applied Materials, excimer lasers for OLED display fabs, silicon carbide substrates, and specialty materials. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Wms4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff54526e6-ccd1-4346-90aa-2a242ec03e6c_2282x1180.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Wms4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff54526e6-ccd1-4346-90aa-2a242ec03e6c_2282x1180.png 424w, https://substackcdn.com/image/fetch/$s_!Wms4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff54526e6-ccd1-4346-90aa-2a242ec03e6c_2282x1180.png 848w, https://substackcdn.com/image/fetch/$s_!Wms4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff54526e6-ccd1-4346-90aa-2a242ec03e6c_2282x1180.png 1272w, https://substackcdn.com/image/fetch/$s_!Wms4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff54526e6-ccd1-4346-90aa-2a242ec03e6c_2282x1180.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Wms4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff54526e6-ccd1-4346-90aa-2a242ec03e6c_2282x1180.png" width="1456" height="753" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f54526e6-ccd1-4346-90aa-2a242ec03e6c_2282x1180.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:753,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:211888,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.chipstrat.com/i/192882627?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff54526e6-ccd1-4346-90aa-2a242ec03e6c_2282x1180.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Wms4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff54526e6-ccd1-4346-90aa-2a242ec03e6c_2282x1180.png 424w, https://substackcdn.com/image/fetch/$s_!Wms4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff54526e6-ccd1-4346-90aa-2a242ec03e6c_2282x1180.png 848w, https://substackcdn.com/image/fetch/$s_!Wms4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff54526e6-ccd1-4346-90aa-2a242ec03e6c_2282x1180.png 1272w, https://substackcdn.com/image/fetch/$s_!Wms4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff54526e6-ccd1-4346-90aa-2a242ec03e6c_2282x1180.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: Q2 2026 Earnings Slides</figcaption></figure></div><p>The industrial business is slow growth, but it&#8217;s profitable, sticky, and generates nice margins. 
The slide above suggests Industrial is shrinking QoQ, but at its 2025 Analyst Day Coherent indicated it still expects the segment to grow at a 5-10% CAGR over the next 3-4 years.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Va8u!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520a13f8-e693-47d9-bc3e-a1beef4c4497_2284x1226.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Va8u!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520a13f8-e693-47d9-bc3e-a1beef4c4497_2284x1226.png 424w, https://substackcdn.com/image/fetch/$s_!Va8u!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520a13f8-e693-47d9-bc3e-a1beef4c4497_2284x1226.png 848w, https://substackcdn.com/image/fetch/$s_!Va8u!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520a13f8-e693-47d9-bc3e-a1beef4c4497_2284x1226.png 1272w, https://substackcdn.com/image/fetch/$s_!Va8u!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520a13f8-e693-47d9-bc3e-a1beef4c4497_2284x1226.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Va8u!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520a13f8-e693-47d9-bc3e-a1beef4c4497_2284x1226.png" width="1456" height="782" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/520a13f8-e693-47d9-bc3e-a1beef4c4497_2284x1226.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:782,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:262065,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.chipstrat.com/i/192882627?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520a13f8-e693-47d9-bc3e-a1beef4c4497_2284x1226.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Va8u!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520a13f8-e693-47d9-bc3e-a1beef4c4497_2284x1226.png 424w, https://substackcdn.com/image/fetch/$s_!Va8u!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520a13f8-e693-47d9-bc3e-a1beef4c4497_2284x1226.png 848w, https://substackcdn.com/image/fetch/$s_!Va8u!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520a13f8-e693-47d9-bc3e-a1beef4c4497_2284x1226.png 1272w, https://substackcdn.com/image/fetch/$s_!Va8u!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F520a13f8-e693-47d9-bc3e-a1beef4c4497_2284x1226.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>Drag on the AI story? Or a diversified base? I tend to be comfortable with the latter &#8212; good source of margin dollars.</em></p><p>Coherent CEO Jim Anderson&#8217;s mandate is to reshape Coherent from a leveraged, margin-constrained conglomerate into a focused AI photonics platform, and the company is making solid progress. Revenue has grown from $4.73B in FY2024 to a ~$6.7B annualized run rate, while gross margins have expanded by roughly 500 bps to ~39% and EPS has scaled from $1.21 to over $5 on a run-rate basis. 
</p><p>At the same time, leverage has been reduced from over 3x to 1.7x, alongside a series of divestitures and footprint reductions to simplify the business:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!UK54!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ea4d49c-8550-4d1a-b426-e2d6375a17ee_2270x1230.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!UK54!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ea4d49c-8550-4d1a-b426-e2d6375a17ee_2270x1230.png 424w, https://substackcdn.com/image/fetch/$s_!UK54!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ea4d49c-8550-4d1a-b426-e2d6375a17ee_2270x1230.png 848w, https://substackcdn.com/image/fetch/$s_!UK54!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ea4d49c-8550-4d1a-b426-e2d6375a17ee_2270x1230.png 1272w, https://substackcdn.com/image/fetch/$s_!UK54!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ea4d49c-8550-4d1a-b426-e2d6375a17ee_2270x1230.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!UK54!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ea4d49c-8550-4d1a-b426-e2d6375a17ee_2270x1230.png" width="1456" height="789" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2ea4d49c-8550-4d1a-b426-e2d6375a17ee_2270x1230.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:789,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:227353,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.chipstrat.com/i/192882627?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ea4d49c-8550-4d1a-b426-e2d6375a17ee_2270x1230.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!UK54!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ea4d49c-8550-4d1a-b426-e2d6375a17ee_2270x1230.png 424w, https://substackcdn.com/image/fetch/$s_!UK54!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ea4d49c-8550-4d1a-b426-e2d6375a17ee_2270x1230.png 848w, https://substackcdn.com/image/fetch/$s_!UK54!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ea4d49c-8550-4d1a-b426-e2d6375a17ee_2270x1230.png 1272w, https://substackcdn.com/image/fetch/$s_!UK54!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2ea4d49c-8550-4d1a-b426-e2d6375a17ee_2270x1230.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: 2025 Analyst Day</figcaption></figure></div><p><em>Coherent bonus points: recently added to the S&amp;P 500, plus a $2B equity investment and a multibillion-dollar supply agreement with Nvidia.</em></p><h2><strong>Vertical Integration</strong></h2><p>Coherent claims to have the broadest photonics portfolio in the industry. </p><p>Let&#8217;s walk through it.</p><p>First, indium phosphide (InP). Coherent has over 20 years of in-house InP capability spanning epitaxial growth, laser fabrication, modulators, photodiodes, and integrated subsystems.</p><p>One important clarification: Coherent does not grow its own raw InP crystals. It purchases InP substrate wafers from external vendors under 3-to-5-year supply agreements, then performs epitaxy and device fabrication in-house. 
This is the same fundamental supply chain dependency that Lumentum and Broadcom face, though Coherent has locked in multi-year contracts with multiple 6-inch substrate vendors to mitigate it. Worth noting that Coherent <em>does</em> grow its own SiC and GaAs crystals internally; it is specifically InP substrates that are sourced externally:</p><blockquote><p><strong>Gianmarco Conti, Analyst:</strong> Gianmarco from Deutsche Bank. You&#8217;re expanding indium phosphide capacity, <strong>but the raw indium feedstock is roughly 70% sourced from Chinese zinc smelters, which</strong> are now subject to export permit requirements with multi-month processing times. I guess my question is, how much visibility do you have on indium supply for the next 12 to 24 months? And are you actively diversifying sourcing away from China? Or do you hold strategic inventory buffer?</p><p><strong>James Anderson, CEO:</strong> We actually have a <strong>very diversified supply chain for indium phosphide substrates</strong>. We have -- and I think I&#8217;ve shared this in the past, we have over <strong>five different substrate suppliers</strong> today, and we work with those suppliers, not just on the next -- you mentioned next 12 or 24 months. We don&#8217;t work on just next 12 to 24 months. <strong>We work on the next like three to five years of capacity that we&#8217;re going to need</strong>. So we have, in some cases, very long-term agreements in place. And that includes not just the substrates, but all the key inputs that go into that. So we believe that we have very good visibility into substrate supply. And so that capacity expansion that Beck showed is we have commitments from our suppliers to supply the necessary indium phosphide substrates to support that.</p></blockquote><p>The <strong>6-inch InP production platform</strong> is an important topic. 
Management calls it the world&#8217;s first, and it is now ramping across four sites: Fremont, California (3-inch legacy); Sherman, Texas; J&#228;rf&#228;lla, Sweden; and Z&#252;rich, Switzerland (newest addition). The economics, per management, are roughly 4x the devices per wafer at about half the cost compared to legacy 3-inch lines, with capacity doubling this year. If yields hold at scale, this could result in a competitive cost structure relative to Lumentum (which is migrating from 3-inch to 4-inch) and Broadcom (believed to be on 3-to-4-inch). <em>The timing of those yields is one of the main questions we&#8217;ll look at below.</em></p><p>On <strong>EML laser chips</strong>, Coherent manufactures 100G EMLs for 400G and 800G transceivers, 200G EMLs for 800G and 1.6T, and demonstrated a 400G Differential EML for 3.2T and 6.4T at OFC 2026. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7MKp!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6c7511e-2ed3-4f58-b267-0dafc5db00c1_4001x2250.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7MKp!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6c7511e-2ed3-4f58-b267-0dafc5db00c1_4001x2250.png 424w, https://substackcdn.com/image/fetch/$s_!7MKp!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6c7511e-2ed3-4f58-b267-0dafc5db00c1_4001x2250.png 848w, https://substackcdn.com/image/fetch/$s_!7MKp!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6c7511e-2ed3-4f58-b267-0dafc5db00c1_4001x2250.png 1272w, 
https://substackcdn.com/image/fetch/$s_!7MKp!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6c7511e-2ed3-4f58-b267-0dafc5db00c1_4001x2250.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7MKp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6c7511e-2ed3-4f58-b267-0dafc5db00c1_4001x2250.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c6c7511e-2ed3-4f58-b267-0dafc5db00c1_4001x2250.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2377737,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.chipstrat.com/i/192882627?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6c7511e-2ed3-4f58-b267-0dafc5db00c1_4001x2250.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!7MKp!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6c7511e-2ed3-4f58-b267-0dafc5db00c1_4001x2250.png 424w, https://substackcdn.com/image/fetch/$s_!7MKp!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6c7511e-2ed3-4f58-b267-0dafc5db00c1_4001x2250.png 848w, 
https://substackcdn.com/image/fetch/$s_!7MKp!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6c7511e-2ed3-4f58-b267-0dafc5db00c1_4001x2250.png 1272w, https://substackcdn.com/image/fetch/$s_!7MKp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc6c7511e-2ed3-4f58-b267-0dafc5db00c1_4001x2250.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!JK3k!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10f85059-719f-4c00-9070-1cd7f9055fe8_4001x2250.png" data-component-name="Image2ToDOM"><div class="image2-inset"><img src="https://substackcdn.com/image/fetch/$s_!JK3k!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10f85059-719f-4c00-9070-1cd7f9055fe8_4001x2250.png" width="1456" height="819" class="sizing-normal" alt="" loading="lazy"></div></a></figure></div><p>Coherent currently uses a mix of internal and external laser sourcing. Anderson confirmed at the Morgan Stanley conference that Lumentum is both a customer and a supplier, suggesting that Coherent&#8217;s internal EML capability has not fully displaced external supply across all performance tiers. The 6-inch cost advantage may be closing the gap, but the performance comparison against Lumentum&#8217;s epitaxy is still playing out. </p><p>For <strong>CW lasers</strong>, which power silicon photonics transceivers and are critical for co-packaged optics, Coherent is in full production and ramping on 6-inch InP in Sherman. At OFC 2026, it showed a 400mW high-power version for CPO applications. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Jcdl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F049dcd1e-2efe-4a0d-af74-ff1ac95e24c9_4001x2250.png" data-component-name="Image2ToDOM"><div class="image2-inset"><img src="https://substackcdn.com/image/fetch/$s_!Jcdl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F049dcd1e-2efe-4a0d-af74-ff1ac95e24c9_4001x2250.png" width="1456" height="819" class="sizing-normal" alt="" loading="lazy"></div></a></figure></div><p>CW lasers are one of the key product families covered by the Nvidia supply agreement. On the OFC status update call, Beck Mason (EVP Semiconductor Devices) set out to reassure everyone that CW laser yields on 6-inch wafers are looking promising:</p><blockquote><p><strong>BM:</strong> We&#8217;re currently running three main categories of devices on 6-inch indium phosphide: EMLs, high-power CW lasers and high-speed photodetectors. And all three of those categories, we&#8217;re seeing higher yield and better throughput efficiency on our 6-inch lines than we&#8217;ve been able to achieve even on our very mature 3-inch production lines.</p></blockquote><p>On <strong>VCSELs</strong>, Coherent manufactures GaAs-based vertical-cavity surface-emitting lasers. 
These serve Apple under a new multiyear 3D sensing agreement, and Coherent plans to launch a VCSEL-based 1.6T transceiver in the second half of calendar 2026.</p><p>VCSELs are a lower-power alternative to InP-based solutions, but with shorter reach. CTO Julie Eng explained the tradeoff at the OFC briefing:</p><blockquote><p>&#8220;The VCSEL actually is an interesting potential for silicon photonics because the power is very, very low&#8230; it&#8217;s basically between 4x and 5x lower power than the silicon photonics solution. But it doesn&#8217;t go as far. It&#8217;s shorter reach.&#8221;</p></blockquote><p>That means VCSELs could be used for in-rack and near-rack scale-up, where power and density are prioritized over distance, typically under 100 meters on multimode fiber. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RNiu!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4858e9be-30bb-4730-b96f-2cf30675b8b8_4001x2250.png" data-component-name="Image2ToDOM"><div class="image2-inset"><img src="https://substackcdn.com/image/fetch/$s_!RNiu!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4858e9be-30bb-4730-b96f-2cf30675b8b8_4001x2250.png" width="1456" height="819" class="sizing-normal" alt="" loading="lazy"></div></a></figure></div><p>Management says Coherent has ample gallium arsenide capacity, which is relevant because <strong>GaAs circumvents the industrywide 
InP bottleneck</strong>. Anderson expects VCSELs and InP-based approaches to coexist rather than compete:</p><blockquote><p>&#8220;In the VCSEL-based solution, you usually have the laser in it. And so there&#8217;s some pluses and minuses of that. But I do think that these will coexist in CPO/NPO just as they have in pluggable transceivers.&#8221;</p></blockquote><p>Coherent also has its <strong>own silicon photonics PIC platform</strong> and <a href="https://www.coherent.com/news/press-releases/coherent-demonstrates-next-gen-pluggable-transceiver-ofc-2026">demonstrated</a> a 400G pure silicon PN-junction Mach-Zehnder Modulator at OFC 2026. This is a pathway to 3.2T transceivers via silicon photonics rather than InP, giving Coherent optionality across both technology approaches.</p><p>At the transceiver module level, Coherent ships full OSFP modules at 800G and 1.6T. At OFC, it showed 1.6T transceivers built with three different DSP solutions from three different industry leaders. That is notable because it <strong>positions Coherent as technology-agnostic at the DSP layer, in contrast to Broadcom,</strong> which makes its own DSPs and can offer a vertically integrated laser-plus-DSP package. Coherent is effectively saying it will work with any DSP partner, giving hyperscalers flexibility to choose.</p><p>Beyond transceivers, Coherent makes <strong>optical circuit switches (OCS)</strong> using digital liquid crystal technology, which is non-mechanical and has no moving parts. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Sk3d!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ddf936c-1ff0-4fc4-918b-f97955ba13c4_4001x2250.png" data-component-name="Image2ToDOM"><div class="image2-inset"><img src="https://substackcdn.com/image/fetch/$s_!Sk3d!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ddf936c-1ff0-4fc4-918b-f97955ba13c4_4001x2250.png" width="1456" height="819" class="sizing-normal" alt="" loading="lazy"></div></a></figure></div><p>Anderson said at Morgan Stanley that Coherent is engaged with over 10 customers and that &#8220;multiple customers have already deployed in real data center applications&#8221;. Revenue shipments began in Q4 FY2025. This is a different technology from Lumentum&#8217;s MEMS-based OCS, and the two approaches are competing for the same emerging market.</p><p>Finally, on the industrial side, Coherent has some interesting datacenter-adjacent materials. Thermadite is a proprietary material with what management describes as exceptional heat-transfer characteristics; large customers are evaluating it as a replacement for copper heat sinks in data centers. Coherent also has a thermoelectric material that can convert waste heat back into electricity. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!26-o!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa28cf64-f119-47ce-967f-319a6145ec15_4001x2250.png" data-component-name="Image2ToDOM"><div class="image2-inset"><img src="https://substackcdn.com/image/fetch/$s_!26-o!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa28cf64-f119-47ce-967f-319a6145ec15_4001x2250.png" width="1456" height="819" class="sizing-normal" alt="" loading="lazy"></div></a></figure></div><p>Both are early-stage, but Anderson highlighted them at Morgan Stanley as longer-term growth opportunities that bridge industrial materials expertise with datacenter demand.</p><p>The <strong>bull case</strong> for all of this vertical integration is greater internal control over cost, supply, and iteration speed. Customers value having a single partner that can build EML-based, VCSEL-based, or silicon-photonics-based solutions, depending on the application.</p><p>Add to that supply chain resilience across 60-plus manufacturing sites in 14 countries, more than 20 of which are in the United States <em>(Q3 FY25 transcript)</em>. </p><blockquote><p><strong>JA:</strong> But the other -- the second point I would make in terms of supply chain resiliency is around vertical integration. 
And this applies to not just our data center business, but also to our industrial business, for instance, our laser business is if you look at a lot of the very key technology in feeds for whether it&#8217;s a data center transceiver or an industrial laser, we make ourselves, <strong>manufacture ourselves a lot of the very key components that go into our transceivers or laser systems or other products. And so that&#8217;s an important part of our supply chain resiliency</strong> and flexibility. So to the extent that there are changes in the landscape, the tariff landscape and to the extent we need to adapt manufacturing, move manufacturing to different places for the benefit of our customers, we certainly feel like we&#8217;ve got a very good, resilient, adaptable supply chain to leverage.</p></blockquote><p>The <strong>bear case</strong> is essentially &#8220;doing everything means doing nothing best&#8221;. Lumentum&#8217;s epitaxy appears to be ahead (Coherent still sources some lasers externally). Broadcom&#8217;s <em>system</em> integration is deeper (laser plus DSP plus switch on-package). The risk is that Coherent ends up as a jack of all trades competing against specialists at every layer.</p><h2><strong>Growth Vectors</strong></h2><p>Coherent frames its growth story in two layers. The first is the existing engines: pluggable transceivers (800G through 3.2T), DCI coherent transceivers, transport/transmission equipment, and optical components, which collectively address a $50 billion-plus SAM per management&#8217;s estimates. 
These are shipping now and growing.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!KACL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82140e7d-b362-42fc-9b5b-4be1811c6669_4001x2250.png" data-component-name="Image2ToDOM"><div class="image2-inset"><img src="https://substackcdn.com/image/fetch/$s_!KACL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82140e7d-b362-42fc-9b5b-4be1811c6669_4001x2250.png" width="1456" height="819" class="sizing-normal" alt="" loading="lazy"></div></a></figure></div><p>On top of that base, Coherent identifies four new growth engines that add over $20 billion in incremental SAM by 2030:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_5qG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02be4a96-e2c4-4ce9-bdfc-848dbcf6c890_2152x674.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_5qG!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02be4a96-e2c4-4ce9-bdfc-848dbcf6c890_2152x674.png 424w, 
https://substackcdn.com/image/fetch/$s_!_5qG!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02be4a96-e2c4-4ce9-bdfc-848dbcf6c890_2152x674.png 848w, https://substackcdn.com/image/fetch/$s_!_5qG!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02be4a96-e2c4-4ce9-bdfc-848dbcf6c890_2152x674.png 1272w, https://substackcdn.com/image/fetch/$s_!_5qG!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02be4a96-e2c4-4ce9-bdfc-848dbcf6c890_2152x674.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_5qG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02be4a96-e2c4-4ce9-bdfc-848dbcf6c890_2152x674.png" width="1456" height="456" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/02be4a96-e2c4-4ce9-bdfc-848dbcf6c890_2152x674.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:368847,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.chipstrat.com/i/192882627?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02be4a96-e2c4-4ce9-bdfc-848dbcf6c890_2152x674.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_5qG!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02be4a96-e2c4-4ce9-bdfc-848dbcf6c890_2152x674.png 
424w, https://substackcdn.com/image/fetch/$s_!_5qG!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02be4a96-e2c4-4ce9-bdfc-848dbcf6c890_2152x674.png 848w, https://substackcdn.com/image/fetch/$s_!_5qG!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02be4a96-e2c4-4ce9-bdfc-848dbcf6c890_2152x674.png 1272w, https://substackcdn.com/image/fetch/$s_!_5qG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02be4a96-e2c4-4ce9-bdfc-848dbcf6c890_2152x674.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The stacking of these new engines on top of an already-growing base is why management says CY2026 is mostly booked, CY2027 is &#8220;filling very, very quickly,&#8221; and FY2027 revenue growth will exceed FY2026. Each new engine is at a different stage of maturity, so inflection points are spread over the next 18 months rather than concentrated in a single quarter.</p><h2><strong>Open Questions</strong></h2><p>So&#8230; Coherent has a very broad photonics stack with promising growth vectors at various stages of inflection. The stock has run from $45 to $250. The sell-side is overwhelmingly bullish.</p><p>Yet Coherent still sources some lasers externally, including from Lumentum. Broadcom has the full-stack CPO lead, even if its CEO says CPO is &#8220;not anytime soon.&#8221; Chinese module makers are winning volume at 800G. And the BIS Huawei investigation is still unresolved.</p><p>Which of these growth vectors holds up under scrutiny? Where is management credible and where are they hand-waving? How does Coherent stack up head-to-head against Lumentum and Broadcom across each product category?</p><p>I went through all of this against Q2 FY2026 earnings, the Morgan Stanley TMT Conference (March 3), OFC 2026 announcements, and sell-side research, then put together a three-way comparison with Lumentum and Broadcom.</p><p>Here&#8217;s what&#8217;s behind the paywall:</p><ul><li><p><strong>COHR vs. LITE vs. AVGO:</strong> Head-to-head across every product category</p></li><li><p><strong>Six things to watch,</strong> each with a bull case, bear case, and what to look for next: the 6-inch InP bet, OCS liquid crystal vs. 
MEMS, CPO positioning, the margin path to 42%, BIS risk, and valuation</p></li><li><p><strong>How the latest quarter stacks up</strong> against each of those</p></li><li><p><strong>What the Street is saying</strong> and where analysts disagree</p></li><li><p><strong>Catalysts</strong> for the rest of CY2026 and into CY2027</p></li></ul><p>and more!</p>
      <p>
          <a href="https://www.chipstrat.com/p/coherents-vertical-integration-strategy">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Agentic AI Needs CPUs. Whose CPUs? ]]></title><description><![CDATA[Nvidia Vera, Arm AGI CPU, Meta, x86, more]]></description><link>https://www.chipstrat.com/p/agentic-ai-needs-cpus-whose-cpus</link><guid isPermaLink="false">https://www.chipstrat.com/p/agentic-ai-needs-cpus-whose-cpus</guid><dc:creator><![CDATA[Austin Lyons]]></dc:creator><pubDate>Sat, 28 Mar 2026 00:53:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!CbOB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09cf701d-d4ad-4742-a362-df5ca3080f05_3080x1836.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>My kids like to make browser-based video games on demand, inspired by books they read (e.g. Super Rabbit Boy) or just random ideas:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!CbOB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09cf701d-d4ad-4742-a362-df5ca3080f05_3080x1836.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!CbOB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09cf701d-d4ad-4742-a362-df5ca3080f05_3080x1836.png 424w, https://substackcdn.com/image/fetch/$s_!CbOB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09cf701d-d4ad-4742-a362-df5ca3080f05_3080x1836.png 848w, https://substackcdn.com/image/fetch/$s_!CbOB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09cf701d-d4ad-4742-a362-df5ca3080f05_3080x1836.png 1272w, 
https://substackcdn.com/image/fetch/$s_!CbOB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09cf701d-d4ad-4742-a362-df5ca3080f05_3080x1836.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!CbOB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09cf701d-d4ad-4742-a362-df5ca3080f05_3080x1836.png" width="1456" height="868" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/09cf701d-d4ad-4742-a362-df5ca3080f05_3080x1836.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:868,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:830525,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.chipstrat.com/i/192360810?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09cf701d-d4ad-4742-a362-df5ca3080f05_3080x1836.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!CbOB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09cf701d-d4ad-4742-a362-df5ca3080f05_3080x1836.png 424w, https://substackcdn.com/image/fetch/$s_!CbOB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09cf701d-d4ad-4742-a362-df5ca3080f05_3080x1836.png 848w, 
https://substackcdn.com/image/fetch/$s_!CbOB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09cf701d-d4ad-4742-a362-df5ca3080f05_3080x1836.png 1272w, https://substackcdn.com/image/fetch/$s_!CbOB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F09cf701d-d4ad-4742-a362-df5ca3080f05_3080x1836.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Building Super Rabbit Boy via the Mac Claude app.</figcaption></figure></div><p>It&#8217;s amazing. 
They just describe what they want, and Claude Code asks some clarifying questions, and then runs off and builds it.</p><p><em>If you haven&#8217;t tried having AI build software for you yet... you gotta try it. Claude Code or Claude Cowork, Cursor, OpenAI&#8217;s Codex, Perplexity Computer, whatever. It&#8217;s easy to get started with something simple on your laptop.</em> </p><p>We often spin up several simultaneous agents and build many things at once. <em>Dad, can we create a black hole simulator while we wait for this game to build? Sure, buddy! </em></p><p>Now, if I spin up a bunch of agents and they are all doing heavy token generation on my behalf, are the deterministic tasks like API fetching and code execution running on server CPUs or just locally on my machine? <em>Great question.</em> </p><p>I personally use the command-line version of Claude to spin up agents, so let&#8217;s see if we can figure out what it does.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!m6m7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86482975-69c4-481b-9c89-57eb4a7b06bb_1320x1468.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!m6m7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86482975-69c4-481b-9c89-57eb4a7b06bb_1320x1468.png 424w, https://substackcdn.com/image/fetch/$s_!m6m7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86482975-69c4-481b-9c89-57eb4a7b06bb_1320x1468.png 848w, 
https://substackcdn.com/image/fetch/$s_!m6m7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86482975-69c4-481b-9c89-57eb4a7b06bb_1320x1468.png 1272w, https://substackcdn.com/image/fetch/$s_!m6m7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86482975-69c4-481b-9c89-57eb4a7b06bb_1320x1468.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!m6m7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86482975-69c4-481b-9c89-57eb4a7b06bb_1320x1468.png" width="556" height="618.339393939394" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/86482975-69c4-481b-9c89-57eb4a7b06bb_1320x1468.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1468,&quot;width&quot;:1320,&quot;resizeWidth&quot;:556,&quot;bytes&quot;:259453,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.chipstrat.com/i/192360810?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86482975-69c4-481b-9c89-57eb4a7b06bb_1320x1468.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!m6m7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86482975-69c4-481b-9c89-57eb4a7b06bb_1320x1468.png 424w, 
https://substackcdn.com/image/fetch/$s_!m6m7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86482975-69c4-481b-9c89-57eb4a7b06bb_1320x1468.png 848w, https://substackcdn.com/image/fetch/$s_!m6m7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86482975-69c4-481b-9c89-57eb4a7b06bb_1320x1468.png 1272w, https://substackcdn.com/image/fetch/$s_!m6m7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F86482975-69c4-481b-9c89-57eb4a7b06bb_1320x1468.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" 
x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Love the Claude Code CLI.</figcaption></figure></div><p>This CLI is open source, so we can read <a href="https://github.com/anthropics/claude-code">the code</a> to figure out how it works. lol jk, we won&#8217;t read the code, we&#8217;ll just have Claude Code read its own source code for us. It turns out the CLI runs almost everything <em>locally</em>, which, in my case, is on an Apple M4 chip. So when Claude Code on my MacBook writes and executes code to generate the ASCII art in the image above, the non-GenAI bits run locally on my MacBook. </p><p>Now, that doesn&#8217;t mean EVERYTHING agentic will run locally. As cool as Claude Code and OpenClaw are, they aren&#8217;t the only way to run agentic AI. A lot of agentic AI will be kicked off via phone and web apps. The user will taps a button or type a sentence in a normal app, and behind the scenes an agent spins up in the cloud, runs tools, makes API calls, executes code, fetches data &#8212; all CPU work on server racks. The apps themselves can use the Claude API, which offers the same primitives the CLI offers locally &#8212; <a href="https://platform.claude.com/docs/en/agents-and-tools/tool-use/bash-tool">code execution</a>, <a href="https://platform.claude.com/docs/en/agents-and-tools/tool-use/web-search-tool">web search</a>, <a href="https://platform.claude.com/docs/en/agents-and-tools/tool-use/programmatic-tool-calling">programmatic tool calling</a>, and so on. But they run in the cloud and can be scaled. </p><p>Cloud-based agents will contribute a ton of agentic AI inference in the coming years, and will continue to be <a href="https://www.saastr.com/anthropics-4b-arr-the-enterprise-ai-growth-playbook-thats-rewriting-saas-economics/">a key driver of the majority of Anthropic&#8217;s revenue</a>.</p><p>In GPU inference servers, the head node's CPU is there to keep GPUs fully utilized. 
Agentic workloads introduce orchestration and tool execution overhead that erodes CPU headroom and constrains GPU utilization. The fix is to extend beyond the head node into a proximate CPU rack, positioned close to the GPU racks to minimize latency and maintain throughput.</p><p><strong>OK then, if we need racks of AI CPUs for agentic AI... which racks?</strong> <em>Like AMD EPYC or Intel Xeon racks? Or Graviton / Cobalt / Axion racks? Or what?</em></p><p>Well, it probably makes sense to start with the same kind of CPUs that are already running the GenAI-related CPU workloads today, right? The existing head node CPUs already handle orchestration, multi-modal fan-out, data pre- and post-processing, and so on.</p><p>Maybe just buy more of those? <em>Like my dad always said, if it ain&#8217;t broke, don&#8217;t fix it.</em></p><p>And interestingly, those are often Arm CPUs these days. </p><p>As a reminder, Hopper systems typically paired GPUs with Intel Xeon CPUs, most often Sapphire Rapids or Emerald Rapids. <em>Nvidia had little incentive to drive AMD CPU share given Instinct competition, though OEMs still offered server configurations with AMD EPYC head nodes.</em> </p><p>But Blackwell-era GPUs were mostly paired with Nvidia&#8217;s Grace CPU (Arm Neoverse V2). <strong>Which means the most important workload of our lifetime to date, LLM inference, was initially deployed on x86 but quickly moved to Arm.</strong></p><p><em>Now I&#8217;m curious. What about XPUs?</em> </p><p>Claude inference on Trainium 2 uses x86 head nodes, <a href="https://newsletter.semianalysis.com/p/amazons-ai-self-sufficiency-trainium2-architecture-networking">purportedly</a> Intel Sapphire Rapids. At roughly one CPU socket per eight Trn2 chips, the <a href="https://www.fool.com/earnings/call-transcripts/2025/10/31/amazon-amzn-q3-2025-earnings-call-transcript/">one-million-accelerator Trn2 deployment</a> implies about 125K x86 CPUs. <em>Nice. </em>But wait!
According to SemiAnalysis, <a href="https://newsletter.semianalysis.com/p/aws-trainium3-deep-dive-a-potential">Trainium3 is moving to Graviton4</a> (Arm Neoverse V2). </p><p><em>Hmm. Another instance of inference clusters swapping CPU sockets to Arm.</em></p><p>And maybe TPUs are heading that direction too? <a href="https://newsletter.semianalysis.com/p/cpus-are-back-the-datacenter-cpu">Per SA</a>, &#8220;in the future, Google will design Axion CPUs for use as head nodes in their TPU clusters powering Gemini&#8221;.</p><p>It sure seems like a lot of the CPU workloads that support LLM inference are already running on Arm, or heading in that direction. </p><p>And thus adding nearby racks of Arm CPUs to create headroom for agentic AI seems very sensible.</p><p><strong>OK then, where does one buy such racks of Arm CPUs anyway?</strong> </p><h3><strong>Option A: Build Your Own</strong></h3><p>The cloud providers (AWS, Google, Microsoft) are already building custom Arm CPUs (Graviton, Axion, Cobalt). But these existing Arm CPUs were designed for traditional cloud server workloads. <em>Ya know, APIs, databases, web servers, and all that jazz used to build SaaS empires. </em>Agentic AI workloads have different requirements than cloud-native ones. <em>They want much more memory bandwidth per core. Low tail latency. Etc.</em></p><p>So CSPs could add new custom CPU SKUs to the roadmap, tuned appropriately. <em>Same team can reuse a lot of the same IP, it wouldn&#8217;t be that bad.</em> But the agentic AI CPU demand/supply imbalance is hitting RIGHT NOW. No one has time to wait for an agentic-flavored Graviton to be designed, validated, and taped out.</p><p>In the meantime, sure, CSPs could just use their existing custom CPUs. They might not be perfectly optimized for the workload, but given we&#8217;re in agentic takeoff, all CPUs on deck. <em>Perfect is the enemy of good. 
But, of</em> <em>course, at massive scale, every inefficiency adds up, so there&#8217;s still going to be a need for the right CPUs designed for the agentic AI workload in the fullness of time.</em></p><p>Btw, this &#8220;good enough&#8221; argument could be applied to Arm-based server SKUs from Qualcomm and Ampere, which were <em>not</em> designed with Agentic AI in mind. <em>But silicon is in short supply... if you&#8217;ve got &#8216;em, sell &#8216;em. </em></p><p>One final observation: future agentic CPUs from hyperscalers will likely adopt Arm&#8217;s Neoverse V3 CSS, which carries higher royalty rates than V2. If so, agentic AI drives both unit volume and Arm&#8217;s average royalty per chip.</p><h3><strong>Option B: Buy Nvidia Vera</strong></h3><p>If you want to stay on Arm and you need racks <em>now</em>, you&#8217;re in luck &#8212; Nvidia is selling them. The Vera CPU, which Nvidia <a href="https://nvidianews.nvidia.com/news/nvidia-launches-vera-cpu-purpose-built-for-agentic-ai">calls</a> &#8220;the world&#8217;s first processor purpose-built for the age of agentic AI,&#8221; is built on 88 custom Olympus cores (Arm Neoverse V2). The liquid-cooled Vera rack has &#8220;256 liquid-cooled Vera CPUs to sustain more than 22,500 concurrent CPU environments, each running independently at full performance&#8221;.</p><p>Nvidia&#8217;s gonna sell a lot of these Vera racks for agentic AI.</p><p>Profit pools don&#8217;t stay uncontested, though. If there&#8217;s money to be made, competitors will appear. And who might that be? The CSPs are the obvious candidates, but their model centers on deploying infrastructure for internal use and rental, not selling merchant silicon. <em>Well, maybe not GCP if the rumors about selling TPUs are true. </em></p><p>OK. I&#8217;ve buried the lede... 
<em>you&#8217;ve surely put this together by now&#8230; </em></p><p>Who else can sell Arm-based agentic AI CPU racks?</p><h3><strong>Option C: Buy from Arm</strong></h3><p>Arm can!</p><p>They&#8217;ve already done most of the heavy lifting with <a href="https://www.arm.com/products/cloud-datacenter/neoverse-compute-subsystems">CSS</a>, the Compute Subsystem that stitches together CPU cores and system IP, and handles the backend physical design too. At that point, why not just take it all the way across the finish line and build the full chip?</p><p><em>And that is exactly what happened.</em></p><p>This week, Arm announced the Arm AGI CPU, the first merchant silicon offering from Arm.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!mqcq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a53190c-c1ca-4790-9855-88dd3835382c_5712x4284.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mqcq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a53190c-c1ca-4790-9855-88dd3835382c_5712x4284.jpeg 424w, https://substackcdn.com/image/fetch/$s_!mqcq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a53190c-c1ca-4790-9855-88dd3835382c_5712x4284.jpeg 848w, https://substackcdn.com/image/fetch/$s_!mqcq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a53190c-c1ca-4790-9855-88dd3835382c_5712x4284.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!mqcq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a53190c-c1ca-4790-9855-88dd3835382c_5712x4284.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mqcq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a53190c-c1ca-4790-9855-88dd3835382c_5712x4284.jpeg" width="1456" height="1092" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9a53190c-c1ca-4790-9855-88dd3835382c_5712x4284.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1092,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7941224,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.chipstrat.com/i/192360810?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a53190c-c1ca-4790-9855-88dd3835382c_5712x4284.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!mqcq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a53190c-c1ca-4790-9855-88dd3835382c_5712x4284.jpeg 424w, https://substackcdn.com/image/fetch/$s_!mqcq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a53190c-c1ca-4790-9855-88dd3835382c_5712x4284.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!mqcq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a53190c-c1ca-4790-9855-88dd3835382c_5712x4284.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!mqcq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9a53190c-c1ca-4790-9855-88dd3835382c_5712x4284.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Source: my iPhone.</figcaption></figure></div><p>For the first time in its 35+ year history, Arm is a merchant 
silicon CPU vendor. </p><p>Some details from the <a href="https://newsroom.arm.com/blog/introducing-arm-agi-cpu">announcement</a>:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!sSEy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4eece33-5a86-40ab-ab9e-3e0cd20a9110_1600x900.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!sSEy!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4eece33-5a86-40ab-ab9e-3e0cd20a9110_1600x900.jpeg 424w, https://substackcdn.com/image/fetch/$s_!sSEy!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4eece33-5a86-40ab-ab9e-3e0cd20a9110_1600x900.jpeg 848w, https://substackcdn.com/image/fetch/$s_!sSEy!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4eece33-5a86-40ab-ab9e-3e0cd20a9110_1600x900.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!sSEy!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4eece33-5a86-40ab-ab9e-3e0cd20a9110_1600x900.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!sSEy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4eece33-5a86-40ab-ab9e-3e0cd20a9110_1600x900.jpeg" width="1456" height="819" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c4eece33-5a86-40ab-ab9e-3e0cd20a9110_1600x900.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!sSEy!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4eece33-5a86-40ab-ab9e-3e0cd20a9110_1600x900.jpeg 424w, https://substackcdn.com/image/fetch/$s_!sSEy!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4eece33-5a86-40ab-ab9e-3e0cd20a9110_1600x900.jpeg 848w, https://substackcdn.com/image/fetch/$s_!sSEy!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4eece33-5a86-40ab-ab9e-3e0cd20a9110_1600x900.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!sSEy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc4eece33-5a86-40ab-ab9e-3e0cd20a9110_1600x900.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>For the workloads driving CPU demand in AI like agentic orchestration (web searches, API calls, agent fan-out), <a href="https://newsletter.semianalysis.com/p/cpus-are-back-the-datacenter-cpu">RL training sandboxes</a> (code compilation, verification, tool use), and data processing, you need lots of performant cores.</p><p>That&#8217;s exactly what the AGI CPU claims to deliver.  
In a liquid-cooled 200kW rack it has over 45,000 cores:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!U_HA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e953cbb-e64a-4cb4-8547-af31aaad22e2_2748x1378.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!U_HA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e953cbb-e64a-4cb4-8547-af31aaad22e2_2748x1378.png 424w, https://substackcdn.com/image/fetch/$s_!U_HA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e953cbb-e64a-4cb4-8547-af31aaad22e2_2748x1378.png 848w, https://substackcdn.com/image/fetch/$s_!U_HA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e953cbb-e64a-4cb4-8547-af31aaad22e2_2748x1378.png 1272w, https://substackcdn.com/image/fetch/$s_!U_HA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e953cbb-e64a-4cb4-8547-af31aaad22e2_2748x1378.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!U_HA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e953cbb-e64a-4cb4-8547-af31aaad22e2_2748x1378.png" width="1456" height="730" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8e953cbb-e64a-4cb4-8547-af31aaad22e2_2748x1378.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:730,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2101206,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.chipstrat.com/i/192360810?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e953cbb-e64a-4cb4-8547-af31aaad22e2_2748x1378.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!U_HA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e953cbb-e64a-4cb4-8547-af31aaad22e2_2748x1378.png 424w, https://substackcdn.com/image/fetch/$s_!U_HA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e953cbb-e64a-4cb4-8547-af31aaad22e2_2748x1378.png 848w, https://substackcdn.com/image/fetch/$s_!U_HA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e953cbb-e64a-4cb4-8547-af31aaad22e2_2748x1378.png 1272w, https://substackcdn.com/image/fetch/$s_!U_HA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e953cbb-e64a-4cb4-8547-af31aaad22e2_2748x1378.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Note that these CPUs connect to GPU racks over the network, not via direct chip-to-chip links. <em>Nvidia&#8217;s NVLink C2C only applies in the integrated Vera Rubin NVL72, where CPUs and GPUs live in the same rack.</em> <em>I had heard some confusion here, so I wanted to clarify.</em></p><p>So&#8230; Arm is now selling chips. If you need agentic Arm racks, you can build your own, buy from Nvidia, or buy from Arm.</p><p>But there are still many questions to think through, including:</p><ul><li><p><strong>Vera vs. 
AGI.</strong> Different strengths, different price points.</p></li><li><p><strong>Where are the x86 agentic AI CPU racks?</strong></p></li><li><p><strong>Why are Meta and OpenAI launch customers for </strong><em><strong>both</strong></em><strong> Vera and AGI CPU?</strong> </p></li></ul><p>I&#8217;ll hit on these and more for paid subscribers.</p>
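<p>For a rough sense of the density claim above, here&#8217;s a back-of-envelope sketch. The 200kW and 45,000-core figures come from the announcement; the eight-way fan-out per agentic request is purely illustrative, not an Arm number.</p>

```python
# Back-of-envelope on the AGI CPU rack claim above.
# Rack figures (200 kW, 45,000+ cores) are from the announcement;
# the per-request fan-out of 8 is an illustrative assumption.

RACK_POWER_KW = 200
RACK_CORES = 45_000

cores_per_kw = RACK_CORES / RACK_POWER_KW             # cores per kW of rack power
watts_per_core = RACK_POWER_KW * 1_000 / RACK_CORES   # W per core, all-in

# If one agentic request fans out into, say, 8 parallel tool calls
# (web searches, API calls, sandboxed verification), each on its own core:
THREADS_PER_REQUEST = 8
concurrent_requests = RACK_CORES // THREADS_PER_REQUEST

print(f"{cores_per_kw:.0f} cores/kW, {watts_per_core:.1f} W/core, "
      f"~{concurrent_requests:,} concurrent fan-out requests per rack")
```

<p>Roughly 225 cores per kW, or under 5W per core all-in, and thousands of fan-out requests pinned to dedicated cores per rack. That is the shape of demand the agentic workloads described above create.</p>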
      <p>
          <a href="https://www.chipstrat.com/p/agentic-ai-needs-cpus-whose-cpus">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[The Multi-Silicon Era Is Here ]]></title><description><![CDATA[Disagg is out of the bag. What it means for Nvidia, CPUs, XPUs, startups, and more.]]></description><link>https://www.chipstrat.com/p/the-multi-silicon-era-is-here</link><guid isPermaLink="false">https://www.chipstrat.com/p/the-multi-silicon-era-is-here</guid><dc:creator><![CDATA[Austin Lyons]]></dc:creator><pubDate>Mon, 23 Mar 2026 16:33:24 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!H5tS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa73cc1a3-45e3-43f3-bf23-c520621eb8b2_1600x926.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>QUICK HITS</strong></p><ul><li><p>Disaggregation is now official Nvidia doctrine. <em>Not just a startup pitch.</em></p></li><li><p>Agentic AI is the killer app driving all of this. <em>Vera CPU racks because CPUs were bottlenecking GPUs. LPUs for ultra-low latency coding agents.</em>  </p></li><li><p>Nvidia&#8217;s strategy hasn&#8217;t changed. The whole system must beat any mix-and-match alternative. <em>Does InferenceX sufficiently measure Agentic AI Factory performance?</em></p></li><li><p>The unbundling of inference workloads is the unbundling of the datacenter. <em>If Groq can slot in, so can Cerebras, Etched, MatX, AMD. 
Conceptually, anyway&#8230;</em> </p></li></ul><div><hr></div><p>Without further ado, the most important slide from GTC 2026, demonstrating that Nvidia GPUs plus AI ASICs lead to a better and expanded Pareto frontier:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!H5tS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa73cc1a3-45e3-43f3-bf23-c520621eb8b2_1600x926.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!H5tS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa73cc1a3-45e3-43f3-bf23-c520621eb8b2_1600x926.png 424w, https://substackcdn.com/image/fetch/$s_!H5tS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa73cc1a3-45e3-43f3-bf23-c520621eb8b2_1600x926.png 848w, https://substackcdn.com/image/fetch/$s_!H5tS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa73cc1a3-45e3-43f3-bf23-c520621eb8b2_1600x926.png 1272w, https://substackcdn.com/image/fetch/$s_!H5tS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa73cc1a3-45e3-43f3-bf23-c520621eb8b2_1600x926.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!H5tS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa73cc1a3-45e3-43f3-bf23-c520621eb8b2_1600x926.png" width="1456" height="843" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a73cc1a3-45e3-43f3-bf23-c520621eb8b2_1600x926.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:843,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!H5tS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa73cc1a3-45e3-43f3-bf23-c520621eb8b2_1600x926.png 424w, https://substackcdn.com/image/fetch/$s_!H5tS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa73cc1a3-45e3-43f3-bf23-c520621eb8b2_1600x926.png 848w, https://substackcdn.com/image/fetch/$s_!H5tS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa73cc1a3-45e3-43f3-bf23-c520621eb8b2_1600x926.png 1272w, https://substackcdn.com/image/fetch/$s_!H5tS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa73cc1a3-45e3-43f3-bf23-c520621eb8b2_1600x926.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">For certain use cases, Rubin GPUs + Groq LPUs better than just Rubin GPUs.</figcaption></figure></div><p>Want to unlock super low latency (or very high tokens/sec) for insanely fast Claude coding? <em>ABC &#8211; Always Be Claudin&#8217;, amiright? </em></p><p>You got it! GPU + LPU. <em>See the far right side of the chart.</em></p><p><strong>Disaggregation unlocks &#8220;right silicon for the workload&#8221;. 
</strong>And the right silicon isn&#8217;t always GPUs; a bunch of Groq&#8217;s chips can tally up much more bandwidth for memory-bandwidth bound workloads:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!odu7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F821bbcea-e890-412f-8efa-014e6c4f709e_1872x1376.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!odu7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F821bbcea-e890-412f-8efa-014e6c4f709e_1872x1376.png 424w, https://substackcdn.com/image/fetch/$s_!odu7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F821bbcea-e890-412f-8efa-014e6c4f709e_1872x1376.png 848w, https://substackcdn.com/image/fetch/$s_!odu7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F821bbcea-e890-412f-8efa-014e6c4f709e_1872x1376.png 1272w, https://substackcdn.com/image/fetch/$s_!odu7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F821bbcea-e890-412f-8efa-014e6c4f709e_1872x1376.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!odu7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F821bbcea-e890-412f-8efa-014e6c4f709e_1872x1376.png" width="1456" height="1070" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/821bbcea-e890-412f-8efa-014e6c4f709e_1872x1376.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1070,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1756188,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.chipstrat.com/i/191876851?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F821bbcea-e890-412f-8efa-014e6c4f709e_1872x1376.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!odu7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F821bbcea-e890-412f-8efa-014e6c4f709e_1872x1376.png 424w, https://substackcdn.com/image/fetch/$s_!odu7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F821bbcea-e890-412f-8efa-014e6c4f709e_1872x1376.png 848w, https://substackcdn.com/image/fetch/$s_!odu7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F821bbcea-e890-412f-8efa-014e6c4f709e_1872x1376.png 1272w, https://substackcdn.com/image/fetch/$s_!odu7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F821bbcea-e890-412f-8efa-014e6c4f709e_1872x1376.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Tons of bandwidth from SRAM-only LPUs. Tons of LPUs too though.</figcaption></figure></div><p>Yet Rubin has way more FLOPs for compute-bound workloads. So put the two together, and you can outperform <em>just</em> GPUs. <em>Of course it will cost you $$$, but for certain customers and points on the Pareto curve it can make economic sense.</em></p><p>To be fair, disagg has been out of the bag for over a year now, but the full ramifications are becoming clear. In the past year we learned of Dynamo and prefill/decode disaggregation. And we even saw Nvidia unveil the Rubin CPX as a SKU specifically for prefill. But that was still just splitting the workload amongst <em>GPUs</em>. 
But now we see further disaggregation, and AI ASICs have entered the picture:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!HBtJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66149f56-b489-429e-8a1e-e11feb269609_1600x983.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!HBtJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66149f56-b489-429e-8a1e-e11feb269609_1600x983.png 424w, https://substackcdn.com/image/fetch/$s_!HBtJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66149f56-b489-429e-8a1e-e11feb269609_1600x983.png 848w, https://substackcdn.com/image/fetch/$s_!HBtJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66149f56-b489-429e-8a1e-e11feb269609_1600x983.png 1272w, https://substackcdn.com/image/fetch/$s_!HBtJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66149f56-b489-429e-8a1e-e11feb269609_1600x983.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!HBtJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66149f56-b489-429e-8a1e-e11feb269609_1600x983.png" width="1456" height="895" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/66149f56-b489-429e-8a1e-e11feb269609_1600x983.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:895,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!HBtJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66149f56-b489-429e-8a1e-e11feb269609_1600x983.png 424w, https://substackcdn.com/image/fetch/$s_!HBtJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66149f56-b489-429e-8a1e-e11feb269609_1600x983.png 848w, https://substackcdn.com/image/fetch/$s_!HBtJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66149f56-b489-429e-8a1e-e11feb269609_1600x983.png 1272w, https://substackcdn.com/image/fetch/$s_!HBtJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F66149f56-b489-429e-8a1e-e11feb269609_1600x983.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Groq is not a GPU.</figcaption></figure></div><p><em>So GPUs aren&#8217;t enough! </em></p><p>The narrative has officially moved past GPU for everything, as said on <a href="https://developer.nvidia.com/blog/inside-nvidia-groq-3-lpx-the-low-latency-inference-accelerator-for-the-nvidia-vera-rubin-platform/">Nvidia&#8217;s blog</a>:</p><blockquote><p>Hardware tuned for peak throughput under large batches isn&#8217;t ideal for the most latency-sensitive execution paths, while hardware optimized for low-latency execution is less efficient for the most compute-intensive phases.</p></blockquote><h2>Multi-Vendor Inference</h2><p><strong>It&#8217;s not a stretch to call this a multi-vendor inference system. </strong>Or a heterogeneous system if you prefer. 
Sure, it&#8217;s the Groq 3 LPU with an Nvidia label on it, but conceptually it&#8217;s an AI ASIC startup rack paired with an Nvidia GPU rack.</p><p><strong>Hence, the corollary: If Groq racks can be slotted in, so can Cerebras, MatX, Etched, AMD, Intel, and so on.</strong></p><p>The unbundling of the workload is the unbundling of the AI inference datacenter.</p><p>That said, although the <em>narrative</em> has changed from &#8220;GPUs for everything&#8221; to &#8220;right silicon for the workload&#8221;, I&#8217;d argue that Nvidia&#8217;s <em>strategy</em> hasn&#8217;t changed one bit.</p><p><strong>I&#8217;d sum up Nvidia&#8217;s strategy as ensuring that the whole inference system is greater than the sum of the parts. </strong>Said another way, Nvidia is betting that full Nvidia inference AI clusters will outperform competitive clusters pieced together from different vendors. </p><p>And Nvidia is betting that outperformance is unlocked through vertical integration. <em>Think Apple, Tesla. </em></p><p>Take Nvidia GPUs vs. AMD Instinct GPUs as an example. When AMD first came on the scene, even though MI300X was a good inference chip, it didn&#8217;t have performant enough software to extract all the value from the chip. So Nvidia&#8217;s whole system (H100s + CUDA) was better than AMD&#8217;s (MI300X + ROCm).</p><p>Grace Blackwell NVL72 took it up a notch. The scale-up NVLink switch enabled Nvidia to have 72 GPUs acting as one big GPU. Thus, Nvidia&#8217;s whole system (accelerator + scale-up networking + software) outperformed AMD MI350, which didn&#8217;t have competitive scale-up networking technology. 
<em>AMD won&#8217;t have a scale-up domain size of 72 until MI450 with Helios, and even then, it&#8217;s UALoE.</em></p><p>And Nvidia is already showing they&#8217;ll be running further ahead with a portfolio of inference-centric offerings like the Vera CPU racks for agentic AI and the STX Storage racks:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0a6T!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2528e22f-eb6b-4711-8dca-8fc819dc55f1_1600x1036.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0a6T!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2528e22f-eb6b-4711-8dca-8fc819dc55f1_1600x1036.png 424w, https://substackcdn.com/image/fetch/$s_!0a6T!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2528e22f-eb6b-4711-8dca-8fc819dc55f1_1600x1036.png 848w, https://substackcdn.com/image/fetch/$s_!0a6T!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2528e22f-eb6b-4711-8dca-8fc819dc55f1_1600x1036.png 1272w, https://substackcdn.com/image/fetch/$s_!0a6T!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2528e22f-eb6b-4711-8dca-8fc819dc55f1_1600x1036.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!0a6T!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2528e22f-eb6b-4711-8dca-8fc819dc55f1_1600x1036.png" width="1456" height="943" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2528e22f-eb6b-4711-8dca-8fc819dc55f1_1600x1036.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:943,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!0a6T!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2528e22f-eb6b-4711-8dca-8fc819dc55f1_1600x1036.png 424w, https://substackcdn.com/image/fetch/$s_!0a6T!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2528e22f-eb6b-4711-8dca-8fc819dc55f1_1600x1036.png 848w, https://substackcdn.com/image/fetch/$s_!0a6T!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2528e22f-eb6b-4711-8dca-8fc819dc55f1_1600x1036.png 1272w, https://substackcdn.com/image/fetch/$s_!0a6T!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2528e22f-eb6b-4711-8dca-8fc819dc55f1_1600x1036.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Nvidia&#8217;s Agentic AI Factory. She&#8217;s a beaut.</figcaption></figure></div><p>Just look at that row of CPU + GPU + LPU + switches + storage. <em>Whatever agentic AI factory you can put together, can it outperform this fully integrated system? And was the opportunity cost of sourcing it all, hooking it up, validating it, and making sure the software works across it all worth it?</em></p><p>Again, even though disagg is out of the bag, Nvidia believes it can outperform an &#8220;open&#8221; modular system with components from different vendors, e.g. CPU and GPU from AMD, AI ASIC from a startup, scale-up switch from Celestica / HPE / Astera Labs / etc.</p><h2>Agentic AI is GenAI&#8217;s Killer App</h2><p>Given that Nvidia is competing on &#8220;full AI factory performance&#8221;, it&#8217;s no wonder Nvidia is shipping Vera CPU racks.</p><p>No, Nvidia isn&#8217;t trying to take down Intel or AMD.</p><p>It&#8217;s much simpler. What was the North Star? 
<em>The whole is greater than the sum of the parts.</em> </p><p><strong>And the &#8220;killer app&#8221; for GenAI has appeared; it&#8217;s agentic AI. </strong><em>Claude Code. OpenClaw. The world will never be the same. </em></p><p>And Nvidia wants to make the best <em>Agentic AI Factory</em>. </p><p>Agentic AI involves a lot of tool calling, data processing, and so on. The head node CPUs can&#8217;t handle it all. So Nvidia added <a href="https://www.viksnewsletter.com/p/beyond-gtc-a-deep-dive-into-compute-lpx">two racks</a> of Vera CPUs in the same row as all the GPUs, optimized for agentic AI. From Ben Thompson&#8217;s <a href="https://stratechery.com/2026/an-interview-with-nvidia-ceo-jensen-huang-about-accelerated-computing/">interview with Jensen</a> after the keynote:</p><blockquote><p><strong>JH:</strong> ... you want the fastest single-threaded computer you can possibly get&#8230; the most important thing is single-threaded performance and the I/O has to be really great&#8230; if the CPU gets throttled, then we&#8217;re holding back a whole bunch of GPUs.</p></blockquote><p>Clearly, Jensen is thinking about making the whole Agentic AI Factory greater than the sum of the parts. CPU was a bottleneck, so CPU racks were included. </p><p>And we want agents to be fast. Generating code is awesome. Generating hours worth of code in minutes is even more awesome. High interactivity is necessary for agentic AI, hence LPUs.</p><h2>Implications</h2><p>So disagg is out of the bag, and now many implications and questions are piling up:</p><ul><li><p>What does &#8220;right silicon for the workload&#8221; mean for the CPU market when agentic AI is the killer app?</p></li><li><p>If the datacenter can be disaggregated, the door is open for other SKUs&#8230; but can they actually slot in? Will Dynamo allow it? Will an open alternative emerge?</p></li><li><p>Where do XPUs fit?  </p></li><li><p>What about other Pareto frontiers? 
Why only the largest models at 1000+ token throughput? What about medium or small models at 1000+ tokens? </p></li></ul><p>I have strong views on all of these, and more. Let&#8217;s go deeper.</p>
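Jensen&#8217;s &#8220;if the CPU gets throttled, then we&#8217;re holding back a whole bunch of GPUs&#8221; point can be made concrete with a toy Amdahl-style throughput model. All numbers here are illustrative assumptions, not Nvidia&#8217;s:

```python
# Toy model of the agentic pipeline: every request needs some CPU-side work
# (tool calls, data prep) to feed the GPU stage. If the CPU pool can't keep
# up, the GPUs sit partially idle. All numbers are made up for illustration.

def gpu_utilization(cpu_ms_per_req: float, gpu_ms_per_req: float,
                    cpu_cores: int, gpus: int) -> float:
    """Fraction of GPU capacity actually used when the CPU pool is the
    serial stage feeding the GPUs."""
    cpu_throughput = cpu_cores / cpu_ms_per_req   # requests/ms the CPUs can feed
    gpu_throughput = gpus / gpu_ms_per_req        # requests/ms the GPUs could serve
    return min(1.0, cpu_throughput / gpu_throughput)

# A small head-node CPU pool vs. a bigger one, feeding 72 GPUs:
few_cpus = gpu_utilization(cpu_ms_per_req=20, gpu_ms_per_req=100, cpu_cores=8, gpus=72)
more_cpus = gpu_utilization(cpu_ms_per_req=20, gpu_ms_per_req=100, cpu_cores=16, gpus=72)
print(f"8 cores:  GPUs {few_cpus:.0%} busy")   # CPU-throttled
print(f"16 cores: GPUs {more_cpus:.0%} busy")  # enough CPU to saturate the GPUs
```

Under these made-up ratios, doubling the CPU pool takes the GPUs from roughly half busy to fully fed, which is the whole-greater-than-the-parts logic behind adding Vera CPU racks to the row.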
      <p>
          <a href="https://www.chipstrat.com/p/the-multi-silicon-era-is-here">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[GTC 2026 Keynote Debrief]]></title><description><![CDATA[Scale up, CPUs, Groq, tiered inference economics, SaaS is not dead]]></description><link>https://www.chipstrat.com/p/gtc-2026-keynote-debrief</link><guid isPermaLink="false">https://www.chipstrat.com/p/gtc-2026-keynote-debrief</guid><dc:creator><![CDATA[Austin Lyons]]></dc:creator><pubDate>Tue, 17 Mar 2026 16:28:23 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7000c046-5ed6-46ab-b03c-d7738c9839b1_1396x640.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This is our emergency episode, recorded the night of the keynote.</em> </p><p>Three things from Jensen&#8217;s GTC 2026 keynote were notable:</p><ol><li><p>Scale-up is going copper <em>and</em> optical</p></li><li><p>Nvidia is selling lots of standalone CPUs, calling it a multi-billion-dollar business</p></li><li><p>The Groq 3 LPX chip is real, fabbed by Samsung and shipping Q3 </p></li></ol><p>We cover the full Vera Rubin data center system, tiered inference economics, CPO in production, and take a detour into whether the vibe-coding era actually kills SaaS (we think not).</p><div id="youtube2-UfaIU6h0YdY" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;UfaIU6h0YdY&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/UfaIU6h0YdY?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p><em>You can find it on podcast players too, e.g. 
on <a href="https://open.spotify.com/show/0Uuu3s1Nw09f6Xmg24rCZm">Spotify</a> and <a href="https://podcasts.apple.com/za/podcast/semi-doped/id1866707196">Apple Podcasts</a>.</em></p><p><em>This interview is lightly edited for clarity. Transcript is available for paid subscribers who prefer to read vs watch.</em></p>
      <p>
          <a href="https://www.chipstrat.com/p/gtc-2026-keynote-debrief">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Meta's MTIA Roadmap]]></title><description><![CDATA[Inference-centric design. ROIC is clear. Implications for the industry.]]></description><link>https://www.chipstrat.com/p/metas-mtia-roadmap</link><guid isPermaLink="false">https://www.chipstrat.com/p/metas-mtia-roadmap</guid><dc:creator><![CDATA[Austin Lyons]]></dc:creator><pubDate>Thu, 12 Mar 2026 21:27:13 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/3ff80f0a-0edf-454f-bceb-542a6e6e75f3_1460x1008.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>QUICK HITS</strong></p><ul><li><p>Four MTIA chips in two years, all inference optimized</p></li><li><p>ROIC story is straightforward</p></li><li><p>Further validates that the industry is past the &#8220;GPUs for everything&#8221; value prop </p></li><li><p>Implications for Broadcom, Nvidia, AMD, HBM suppliers, TSMC, Arista, and inference startups</p></li></ul><div><hr></div><p>The narrative around AI silicon and GPUs has changed. Six months ago, the default assumption was that GPUs are the answer for everything. But as I wrote last October in <a href="https://www.chipstrat.com/p/right-sized-ai-infrastructure-marvell">Right-Sized AI Infrastructure</a>, as AI workloads mature and become better understood, the economics favor purpose-built hardware over general-purpose GPUs. <em>I&#8217;ve long been saying this; see <a href="https://www.chipstrat.com/p/gpu-bloat-stifles-ai">GPU bloat stifles AI</a> from Feb 2024. But that was long before reasoning models and agentic AI, so the industry was still stuck on &#8220;is GenAI even useful?&#8221; and &#8220;what if LLM architectures change?&#8221;</em></p><p>The demand for transformer-based LLM inference is exploding. 
As Ben Thompson <a href="https://stratechery.com/2026/oracle-earnings-oracles-cloud-growth-oracles-software-defense/">stated this week</a>, we&#8217;re in a new era and demand is increasing exponentially:</p><blockquote><p>This [functional agents] <strong>increases the market</strong> <strong>in two directions</strong>: first, humans can run multiple agents, and secondly, agents can leverage reasoning models multiple times to accomplish a task. This isn&#8217;t just an exponential increase in the addressable market for tokens, it&#8217;s <strong>two exponential increases</strong> squared.</p></blockquote><p>In this era of intense demand (<em>NEED MOAR TOKENS FOR MY AGENTS!</em>), developers care about latency (<em>MAKE AGENTS FASTER!</em>) and throughput (<em>DANG OPUS IS EXPENSIVE!</em>).</p><p>The market is responding rationally. In just the past several months we&#8217;ve seen NVIDIA+Groq and OpenAI+Cerebras. <em>And those were pre-ChatGPT chips, meaning their design decisions weren&#8217;t optimized for transformer LLM inference.</em> </p><p>We&#8217;ve also seen maturing XPUs and roadmaps, including Trainium 3/4/5, Microsoft Maia 200/300, and now Meta MTIA 300/400/450/500.</p><p>Meta&#8217;s announcement was this <a href="https://ai.meta.com/blog/meta-mtia-scale-ai-chips-for-billions/">detailed technical blog</a> articulating four iterations of MTIA that have shipped or are planned within roughly two years. </p><p>And they were very transparent with specs, timelines, etc. This is really cool. Of course, a company like AWS with Trainium is going to share XPU specs because its cloud rental business might rent out the XPUs too. But Meta&#8217;s only using them internally; no cloud biz. So Meta doesn&#8217;t HAVE to share as much info. But they shared anyway. 
This is good for investor transparency, for current employees and potential hires, and for us nerds.</p><p>Here are said specs from Meta:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!oPSj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F952b515b-7153-47da-b500-f67a77c2a222_1486x1356.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!oPSj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F952b515b-7153-47da-b500-f67a77c2a222_1486x1356.png 424w, https://substackcdn.com/image/fetch/$s_!oPSj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F952b515b-7153-47da-b500-f67a77c2a222_1486x1356.png 848w, https://substackcdn.com/image/fetch/$s_!oPSj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F952b515b-7153-47da-b500-f67a77c2a222_1486x1356.png 1272w, https://substackcdn.com/image/fetch/$s_!oPSj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F952b515b-7153-47da-b500-f67a77c2a222_1486x1356.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!oPSj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F952b515b-7153-47da-b500-f67a77c2a222_1486x1356.png" width="1456" height="1329" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/952b515b-7153-47da-b500-f67a77c2a222_1486x1356.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1329,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:189085,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.chipstrat.com/i/190771551?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F952b515b-7153-47da-b500-f67a77c2a222_1486x1356.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!oPSj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F952b515b-7153-47da-b500-f67a77c2a222_1486x1356.png 424w, https://substackcdn.com/image/fetch/$s_!oPSj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F952b515b-7153-47da-b500-f67a77c2a222_1486x1356.png 848w, https://substackcdn.com/image/fetch/$s_!oPSj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F952b515b-7153-47da-b500-f67a77c2a222_1486x1356.png 1272w, https://substackcdn.com/image/fetch/$s_!oPSj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F952b515b-7153-47da-b500-f67a77c2a222_1486x1356.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>And the roadmap makes sense.</p><p>As I wrote in <a href="https://chipstrat.substack.com/p/metas-roic-strategy-gem-now-llms-later">Meta&#8217;s ROIC Strategy: GEM Now, LLMs Later</a>, Meta&#8217;s business has <em>always</em> been powered by ML/AI infrastructure. Facebook launched the algorithmic feed in 2011, and ever since then ML recommendation systems have driven what 3.5B daily users see and which ads they&#8217;re shown. <em>To the tune of $150B+ annual advertising revenue.</em> Hence, the custom inference silicon roadmap is driven by workload, starting with clearest ROI (recommendation systems) and moving toward GenAI.</p><p>By the way, Meta and many others are showing that multi-vendor hardware portfolios are no problem. <em>We need compute. 
We can deal with the software implications of building across stacks.</em></p><p>From the Meta Q4 earnings call:</p><blockquote><p><em>&#8220;We extended our Andromeda ads retrieval engine, so it can now run on NVIDIA, AMD, and MTIA. This, along with model innovations, enabled us to nearly triple Andromeda&#8217;s compute efficiency.&#8221;</em></p></blockquote><p>Done right, custom silicon for recommendations is margin expansion on the core business. Custom silicon for GenAI inference is cost reduction on the fastest-growing workloads. <em>A penny saved is a penny earned.</em> <em>Same story AWS tells with Trainium 3+, same story as Microsoft and Maia 200+.</em></p><p>And again, it&#8217;s inference-first. From Meta&#8217;s blog:</p><blockquote><p><em>&#8220;Mainstream GPUs are typically built for the most demanding workload &#8212; large-scale GenAI pre-training &#8212; and then applied, often less cost-effectively, to other workloads such as GenAI inference. <strong>We take a different approach: MTIA 450 and 500 are optimized first for GenAI inference,</strong> and can then be used to support other workloads as needed&#8221;</em></p></blockquote><p>IMO this pushes back on concerns around The Information&#8217;s recent article: <a href="https://www.theinformation.com/articles/metas-internal-chip-design-efforts-hit-roadblocks">Meta&#8217;s Internal Chip Design Efforts Hit Roadblocks</a> </p><blockquote><p>Meta last week scrapped the most advanced chip it was developing for training AI models, after struggling with the chip&#8217;s design.</p></blockquote><p>This generated some hand-wringing for Broadcom and Hock had to address it on the earnings call. But in context, it&#8217;s a bit of a nothing burger, right? Given where we are today, internal custom silicon efforts like Meta&#8217;s <em>ought</em> to prioritize inference. Use Nvidia GPUs for training. AMD Helios racks too. 
<em>IMO there&#8217;s no urgent need to build custom silicon for training.</em></p><p>The ~6-month cadence is possible because&#8230; <a href="https://www.chipstrat.com/p/chiplets-and-the-future-of-system">chiplets</a>!</p><blockquote><p>Because each chiplet can be upgraded separately, we can implement improvements in months rather than years. Moreover, different chiplets can be manufactured at different process nodes that are most cost-effective while meeting performance and power requirements.</p></blockquote><p><em>Disclosure: big chiplet fan here. </em> </p><p>AMD has long punched above its weight thanks to chiplets. And here&#8217;s Meta doing the same. </p><p>And as you would expect, all the systems use the same chassis, rack, and network infra. <em>Yay OCP.</em></p><p><em>Oh, and so much related news.</em> </p><p>Remember Meta&#8217;s Sep 2025 acquisition of Rivos, a &#8220;CUDA compatible RISC-V AI startup&#8221;? From <a href="https://www.reuters.com/business/meta-buy-chip-startup-rivos-ai-effort-source-says-2025-09-30/">Reuters</a>:</p><blockquote><p>&#8220;Our custom silicon work is progressing quickly and this will further accelerate our efforts,&#8221; a Meta spokesperson said when contacted by Reuters.</p></blockquote><p>Rivos was already collaborating with Meta on MTIA before the acquisition, according to reporting from <a href="https://www.nextplatform.com/compute/2025/10/02/meta-buys-rivos-to-accelerate-compute-engine-engineering/1642477">The Next Platform</a>: </p><blockquote><p>Rivos, which was founded in May 2021, was pretty secretive about what it was up to and it had a partnership with Meta Platforms <strong>where it apparently helped in the design of the MTIA 1i and MTIA 2i compute engines</strong> (using the more recent and descriptive way of talking about them). The exact nature of this collaboration was unknown. 
Separate from this, Rivos was working on its own RISC-V CPU and GPU designs.</p></blockquote><p>And four chips in two years requires a large silicon team. Which apparently Rivos had:</p><blockquote><p>With the backing of Walden International with the help of Dell Capital Ventures and Matrix Capital Management, <strong>Rivos started off with more than a hundred employees on day one</strong>, and Tan was named chairman of the board. This has, in part, given Rivos access to advanced EDA tools and to foundry expertise and capacity at Taiwan Semiconductor Manufacturing Co. <strong>Hiring nearly 50 engineers from Apple in 2023 landed it in a lawsuit with Apple</strong>, and Tan negotiated a settlement.</p></blockquote><p><em>Oh interesting. Also, Lip-Bu Tan is EXPERIENCED.</em></p><p>Software seems not to be a problem. Meta&#8217;s MTIA blog emphasizes PyTorch/Triton/vLLM compatibility at the ML framework layer. And Rivos also claimed to have built a &#8220;CUDA-compatible software stack&#8221;. And Meta is good at software.</p><p>Sure seems like MTIA will be successful.</p><p><strong>Behind the paywall:</strong></p><ul><li><p><strong>HBM vs SRAM:</strong> two planes of inference competition, and where MTIA sits</p></li><li><p><strong>Supply chain impact:</strong> Broadcom, NVIDIA, AMD, HBM suppliers, TSMC, Arista, inference chip startups</p></li></ul>
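Ben Thompson&#8217;s &#8220;two exponential increases squared&#8221; line quoted above can be sketched numerically. A toy model, where the 1.5x-per-quarter growth rates are made-up assumptions purely for illustration:

```python
# Toy model of agent-driven token demand growing in two directions at once:
# more agents per user, and more reasoning calls per agent. The 1.5x/quarter
# growth rates are illustrative assumptions, not measured figures.

def token_demand(quarters: int,
                 agent_growth: float = 1.5,
                 calls_growth: float = 1.5,
                 base_tokens: float = 1.0) -> float:
    """Token demand when both factors compound independently each quarter."""
    return base_tokens * (agent_growth ** quarters) * (calls_growth ** quarters)

# One direction alone is exponential; both together compound at the product
# of the two rates (1.5 * 1.5 = 2.25x per quarter here).
one_direction = 1.5 ** 4           # ~5x after a year
both_directions = token_demand(4)  # 2.25^4, ~25.6x after a year
print(f"one direction: {one_direction:.1f}x, both: {both_directions:.1f}x")
```

The point of the sketch: demand compounding along two independent axes multiplies the growth rates, which is why &#8220;two exponential increases&#8221; outruns either one alone.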
      <p>
          <a href="https://www.chipstrat.com/p/metas-mtia-roadmap">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Optics Primer, Part 3: Co-Packaged Optics (CPO)]]></title><description><![CDATA[From EML lasers and DSPs to silicon photonics and external CW lasers. How CPO works and the impact on the optical supply chain.]]></description><link>https://www.chipstrat.com/p/optics-primer-part-3-co-packaged</link><guid isPermaLink="false">https://www.chipstrat.com/p/optics-primer-part-3-co-packaged</guid><dc:creator><![CDATA[Austin Lyons]]></dc:creator><pubDate>Mon, 09 Mar 2026 16:31:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/kS8r7UcexJU" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This series has been walking through the different ways datacenters connect optics to switch silicon, from pluggable transceivers to LRO to LPO:</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;a23db9dc-3bf9-4078-9a05-74228da82408&quot;,&quot;caption&quot;:&quot;This is the first in a series we&#8217;ll return to periodically with clear explainers on optical interconnects and the photonics technologies behind them.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Optics Primer, Part 1: Traditional Pluggable Optics&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:8066776,&quot;name&quot;:&quot;Austin Lyons&quot;,&quot;bio&quot;:&quot;Chipstrat, Creative Strategies, Semi Doped. 
MSEE + MBA.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c180a750-7572-4aff-88e4-317aa435d533_1203x902.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2025-12-23T20:17:16.862Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c6e5bd33-41a4-4130-a855-7caafe3aeb47_1292x772.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.chipstrat.com/p/optics-primer-part-1-traditional&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:182452349,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:85,&quot;comment_count&quot;:0,&quot;publication_id&quot;:2003179,&quot;publication_name&quot;:&quot;Chipstrat&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!rCMl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27769444-42f3-4b43-9683-4fe7826c06b8_608x608.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;9c9aea72-e782-44d8-8908-b6cadebd510b&quot;,&quot;caption&quot;:&quot;This short piece walks through linear receive optics (LRO) and linear pluggable optics (LPO). We&#8217;re stepping incrementally from traditional pluggable optics toward co-packaged optics (CPO). Each step trades flexibility for efficiency.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Optics Primer, Part 2: LRO &amp; LPO&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:8066776,&quot;name&quot;:&quot;Austin Lyons&quot;,&quot;bio&quot;:&quot;Chipstrat, Creative Strategies, Semi Doped. 
MSEE + MBA.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c180a750-7572-4aff-88e4-317aa435d533_1203x902.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2026-01-26T15:00:20.047Z&quot;,&quot;cover_image&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a137b7e9-c8d7-43ce-ba1c-34cb22eb072e_1536x836.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.chipstrat.com/p/linear-optics-trade-offs-lro-and&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:185845887,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:25,&quot;comment_count&quot;:0,&quot;publication_id&quot;:2003179,&quot;publication_name&quot;:&quot;Chipstrat&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!rCMl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27769444-42f3-4b43-9683-4fe7826c06b8_608x608.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p><em>If you haven&#8217;t read those, they are easy reads. I recommend skimming through them first.</em></p><p>Each step trades flexibility for efficiency. 
And the root source of the inefficiency is that long, noisy copper trace between the switch and the optics:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2im1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1de14afa-e6ec-4f2f-aa69-39fbe00a500a_1456x662.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!2im1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1de14afa-e6ec-4f2f-aa69-39fbe00a500a_1456x662.png 424w, https://substackcdn.com/image/fetch/$s_!2im1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1de14afa-e6ec-4f2f-aa69-39fbe00a500a_1456x662.png 848w, https://substackcdn.com/image/fetch/$s_!2im1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1de14afa-e6ec-4f2f-aa69-39fbe00a500a_1456x662.png 1272w, https://substackcdn.com/image/fetch/$s_!2im1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1de14afa-e6ec-4f2f-aa69-39fbe00a500a_1456x662.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2im1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1de14afa-e6ec-4f2f-aa69-39fbe00a500a_1456x662.png" width="552" height="250.97802197802199" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1de14afa-e6ec-4f2f-aa69-39fbe00a500a_1456x662.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:662,&quot;width&quot;:1456,&quot;resizeWidth&quot;:552,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!2im1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1de14afa-e6ec-4f2f-aa69-39fbe00a500a_1456x662.png 424w, https://substackcdn.com/image/fetch/$s_!2im1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1de14afa-e6ec-4f2f-aa69-39fbe00a500a_1456x662.png 848w, https://substackcdn.com/image/fetch/$s_!2im1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1de14afa-e6ec-4f2f-aa69-39fbe00a500a_1456x662.png 1272w, https://substackcdn.com/image/fetch/$s_!2im1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1de14afa-e6ec-4f2f-aa69-39fbe00a500a_1456x662.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>At modern lane rates (e.g 50G, 100G per lane), electrical signals pick up a ton of noise and distortion crossing the copper trace between the switch and the transceiver. Pluggable optics handle this with a DSP that overcomes the noise during transmit and receive. 
And LRO and LPO save power by relocating that DSP into the switch:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3BGO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2835062-db30-4ebd-8f43-a946a2dca527_1326x1744.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!3BGO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2835062-db30-4ebd-8f43-a946a2dca527_1326x1744.png 424w, https://substackcdn.com/image/fetch/$s_!3BGO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2835062-db30-4ebd-8f43-a946a2dca527_1326x1744.png 848w, https://substackcdn.com/image/fetch/$s_!3BGO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2835062-db30-4ebd-8f43-a946a2dca527_1326x1744.png 1272w, https://substackcdn.com/image/fetch/$s_!3BGO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2835062-db30-4ebd-8f43-a946a2dca527_1326x1744.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!3BGO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2835062-db30-4ebd-8f43-a946a2dca527_1326x1744.png" width="428" height="562.9200603318251" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b2835062-db30-4ebd-8f43-a946a2dca527_1326x1744.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1744,&quot;width&quot;:1326,&quot;resizeWidth&quot;:428,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!3BGO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2835062-db30-4ebd-8f43-a946a2dca527_1326x1744.png 424w, https://substackcdn.com/image/fetch/$s_!3BGO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2835062-db30-4ebd-8f43-a946a2dca527_1326x1744.png 848w, https://substackcdn.com/image/fetch/$s_!3BGO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2835062-db30-4ebd-8f43-a946a2dca527_1326x1744.png 1272w, https://substackcdn.com/image/fetch/$s_!3BGO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb2835062-db30-4ebd-8f43-a946a2dca527_1326x1744.png 1456w" sizes="100vw"></picture></div></a></figure></div><p>But the system becomes less modular and harder to mix and match. But why deal with that copper trace at all? What happens when you just... 
put the optics right next to the silicon?</p><h2>NPO</h2><p><strong>Near package optics (NPO)</strong> brings the optics module on the same substrate or very close to the switch package, but not inside it:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!IBNX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d08f1e3-9963-4cd7-9fa8-33581f65a8c1_1456x812.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!IBNX!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d08f1e3-9963-4cd7-9fa8-33581f65a8c1_1456x812.png 424w, https://substackcdn.com/image/fetch/$s_!IBNX!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d08f1e3-9963-4cd7-9fa8-33581f65a8c1_1456x812.png 848w, https://substackcdn.com/image/fetch/$s_!IBNX!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d08f1e3-9963-4cd7-9fa8-33581f65a8c1_1456x812.png 1272w, https://substackcdn.com/image/fetch/$s_!IBNX!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d08f1e3-9963-4cd7-9fa8-33581f65a8c1_1456x812.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!IBNX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d08f1e3-9963-4cd7-9fa8-33581f65a8c1_1456x812.png" width="632" height="352.46153846153845" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5d08f1e3-9963-4cd7-9fa8-33581f65a8c1_1456x812.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:812,&quot;width&quot;:1456,&quot;resizeWidth&quot;:632,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!IBNX!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d08f1e3-9963-4cd7-9fa8-33581f65a8c1_1456x812.png 424w, https://substackcdn.com/image/fetch/$s_!IBNX!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d08f1e3-9963-4cd7-9fa8-33581f65a8c1_1456x812.png 848w, https://substackcdn.com/image/fetch/$s_!IBNX!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d08f1e3-9963-4cd7-9fa8-33581f65a8c1_1456x812.png 1272w, https://substackcdn.com/image/fetch/$s_!IBNX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d08f1e3-9963-4cd7-9fa8-33581f65a8c1_1456x812.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>It&#8217;s close enough to avoid most copper impairments. This is a pragmatic middle ground, but the major players are largely leapfrogging it and going straight to CPO. </p><p><em>Might as well just reduce the copper distance to nearly zero, right?</em></p><h2>CPO</h2><p>Finally! Co-packaged optics (CPO). 
</p><p>The optics move onto (or into) the switch package itself:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!UQl3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe683e823-41a2-45ea-b084-9daf2ef1b840_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!UQl3!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe683e823-41a2-45ea-b084-9daf2ef1b840_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!UQl3!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe683e823-41a2-45ea-b084-9daf2ef1b840_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!UQl3!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe683e823-41a2-45ea-b084-9daf2ef1b840_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!UQl3!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe683e823-41a2-45ea-b084-9daf2ef1b840_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!UQl3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe683e823-41a2-45ea-b084-9daf2ef1b840_1536x1024.png" width="1456" height="971" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e683e823-41a2-45ea-b084-9daf2ef1b840_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!UQl3!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe683e823-41a2-45ea-b084-9daf2ef1b840_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!UQl3!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe683e823-41a2-45ea-b084-9daf2ef1b840_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!UQl3!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe683e823-41a2-45ea-b084-9daf2ef1b840_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!UQl3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe683e823-41a2-45ea-b084-9daf2ef1b840_1536x1024.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption"><em>Cleaned up version of the following low-res image: <a href="https://www.latitudeds.com/post/tutorial-the-emergence-of-co-packaged-optics">Source</a></em></figcaption></figure></div><p>The electrical path between the switch die and the optical engine is now very short (millimeters or less). Since there&#8217;s no long copper trace, there&#8217;s no need for a DSP to compensate for it! 
<em>Less silicon content, and less power.</em> </p><p>There&#8217;s also much less <a href="https://www.chipstrat.com/p/serdes-matters">SerDes</a> power overhead as you only need extra short-reach (XSR) SerDes, the simplest, lowest-power tier:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!C8eo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a5f3888-bf75-477c-8164-47f3e035713a_834x412.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!C8eo!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a5f3888-bf75-477c-8164-47f3e035713a_834x412.png 424w, https://substackcdn.com/image/fetch/$s_!C8eo!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a5f3888-bf75-477c-8164-47f3e035713a_834x412.png 848w, https://substackcdn.com/image/fetch/$s_!C8eo!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a5f3888-bf75-477c-8164-47f3e035713a_834x412.png 1272w, https://substackcdn.com/image/fetch/$s_!C8eo!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a5f3888-bf75-477c-8164-47f3e035713a_834x412.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!C8eo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a5f3888-bf75-477c-8164-47f3e035713a_834x412.png" width="584" height="288.4988009592326" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7a5f3888-bf75-477c-8164-47f3e035713a_834x412.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:412,&quot;width&quot;:834,&quot;resizeWidth&quot;:584,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!C8eo!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a5f3888-bf75-477c-8164-47f3e035713a_834x412.png 424w, https://substackcdn.com/image/fetch/$s_!C8eo!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a5f3888-bf75-477c-8164-47f3e035713a_834x412.png 848w, https://substackcdn.com/image/fetch/$s_!C8eo!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a5f3888-bf75-477c-8164-47f3e035713a_834x412.png 1272w, https://substackcdn.com/image/fetch/$s_!C8eo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a5f3888-bf75-477c-8164-47f3e035713a_834x412.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption"><a href="https://semiengineering.com/one-serdes-solution-doesnt-fit-all/">source</a></figcaption></figure></div><p>The simplest way to think about CPO is that <strong>the transceiver disappears</strong> and the optical engine moves onto the switch package itself.</p><h3>How It Works</h3><p>SemiAnalysis has an <a href="https://newsletter.semianalysis.com/p/co-packaged-optics-cpo-book-scaling">in-depth CPO article</a> with a helpful diagram:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!r5c5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23886490-5277-4d66-ba9a-86a9f43fe36b_1024x333.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!r5c5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23886490-5277-4d66-ba9a-86a9f43fe36b_1024x333.png 424w, https://substackcdn.com/image/fetch/$s_!r5c5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23886490-5277-4d66-ba9a-86a9f43fe36b_1024x333.png 848w, https://substackcdn.com/image/fetch/$s_!r5c5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23886490-5277-4d66-ba9a-86a9f43fe36b_1024x333.png 1272w, https://substackcdn.com/image/fetch/$s_!r5c5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23886490-5277-4d66-ba9a-86a9f43fe36b_1024x333.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!r5c5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23886490-5277-4d66-ba9a-86a9f43fe36b_1024x333.png" width="1024" height="333" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/23886490-5277-4d66-ba9a-86a9f43fe36b_1024x333.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:333,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!r5c5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23886490-5277-4d66-ba9a-86a9f43fe36b_1024x333.png 424w, https://substackcdn.com/image/fetch/$s_!r5c5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23886490-5277-4d66-ba9a-86a9f43fe36b_1024x333.png 848w, https://substackcdn.com/image/fetch/$s_!r5c5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23886490-5277-4d66-ba9a-86a9f43fe36b_1024x333.png 1272w, https://substackcdn.com/image/fetch/$s_!r5c5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F23886490-5277-4d66-ba9a-86a9f43fe36b_1024x333.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption"><a href="https://newsletter.semianalysis.com/p/co-packaged-optics-cpo-book-scaling">Source</a></figcaption></figure></div><p>The optical engine is the core of CPO; it converts between the optical and electrical domains. Since the OE is on-package, fiber runs directly to the package edge. And now the electrical path to the switch is so short that signals stay clean without heavy conditioning. The switch ASIC&#8217;s SerDes handles what little remains.</p><p>Another helpful diagram from Nvidia&#8217;s <a href="https://developer.nvidia.com/blog/scaling-ai-factories-with-co-packaged-optics-for-better-power-efficiency/">blog</a>. The red circles highlight noisy copper channels. 
Notice how CPO eliminates most of them:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!NOjq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F316db97a-6b9f-42e7-95c0-90f37379dca1_2048x931.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!NOjq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F316db97a-6b9f-42e7-95c0-90f37379dca1_2048x931.png 424w, https://substackcdn.com/image/fetch/$s_!NOjq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F316db97a-6b9f-42e7-95c0-90f37379dca1_2048x931.png 848w, https://substackcdn.com/image/fetch/$s_!NOjq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F316db97a-6b9f-42e7-95c0-90f37379dca1_2048x931.png 1272w, https://substackcdn.com/image/fetch/$s_!NOjq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F316db97a-6b9f-42e7-95c0-90f37379dca1_2048x931.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!NOjq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F316db97a-6b9f-42e7-95c0-90f37379dca1_2048x931.png" width="598" height="271.89285714285717" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/316db97a-6b9f-42e7-95c0-90f37379dca1_2048x931.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:662,&quot;width&quot;:1456,&quot;resizeWidth&quot;:598,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!NOjq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F316db97a-6b9f-42e7-95c0-90f37379dca1_2048x931.png 424w, https://substackcdn.com/image/fetch/$s_!NOjq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F316db97a-6b9f-42e7-95c0-90f37379dca1_2048x931.png 848w, https://substackcdn.com/image/fetch/$s_!NOjq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F316db97a-6b9f-42e7-95c0-90f37379dca1_2048x931.png 1272w, https://substackcdn.com/image/fetch/$s_!NOjq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F316db97a-6b9f-42e7-95c0-90f37379dca1_2048x931.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption"><a href="https://developer.nvidia.com/blog/scaling-ai-factories-with-co-packaged-optics-for-better-power-efficiency/">Source</a></figcaption></figure></div><p>CPO cuts down on the overall power consumption, too. 
In this example from Nvidia, power drops from 30W for pluggables to 9W for CPO:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RNVc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F439ef977-4aa5-4724-8509-d8a48fe79c55_2048x948.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RNVc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F439ef977-4aa5-4724-8509-d8a48fe79c55_2048x948.png 424w, https://substackcdn.com/image/fetch/$s_!RNVc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F439ef977-4aa5-4724-8509-d8a48fe79c55_2048x948.png 848w, https://substackcdn.com/image/fetch/$s_!RNVc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F439ef977-4aa5-4724-8509-d8a48fe79c55_2048x948.png 1272w, https://substackcdn.com/image/fetch/$s_!RNVc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F439ef977-4aa5-4724-8509-d8a48fe79c55_2048x948.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!RNVc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F439ef977-4aa5-4724-8509-d8a48fe79c55_2048x948.png" width="632" height="292.56043956043953" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/439ef977-4aa5-4724-8509-d8a48fe79c55_2048x948.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:674,&quot;width&quot;:1456,&quot;resizeWidth&quot;:632,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!RNVc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F439ef977-4aa5-4724-8509-d8a48fe79c55_2048x948.png 424w, https://substackcdn.com/image/fetch/$s_!RNVc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F439ef977-4aa5-4724-8509-d8a48fe79c55_2048x948.png 848w, https://substackcdn.com/image/fetch/$s_!RNVc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F439ef977-4aa5-4724-8509-d8a48fe79c55_2048x948.png 1272w, https://substackcdn.com/image/fetch/$s_!RNVc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F439ef977-4aa5-4724-8509-d8a48fe79c55_2048x948.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><a href="https://developer.nvidia.com/blog/scaling-ai-factories-with-co-packaged-optics-for-better-power-efficiency/">Source</a></figcaption></figure></div><p><em>As I always say, in this power-constrained era, every watt saved is a watt that can be used for computation.</em></p><p>At this point, you should watch this Nvidia CPO video again, as it will make a lot of sense now:</p><div id="youtube2-kS8r7UcexJU" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;kS8r7UcexJU&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/kS8r7UcexJU?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><h2>Lasers and Silicon Photonics</h2><p>Oh yeah, an important call out. Look at the Nvidia diagrams above again. 
The pluggable transceiver uses <strong>externally modulated lasers (EMLs)</strong> for 1.6Tb. These are discrete <strong>InP lasers</strong> + modulators. </p><p>CPO uses lasers differently. Instead of modulating the laser itself, it uses a simple <strong>continuous wave (CW) laser </strong>(just a constant beam of light) and performs the modulation on a silicon photonics chip on the switch substrate:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!SGZo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a02c2f-fc24-46cf-9766-621e5cec2d79_3428x1900.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!SGZo!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a02c2f-fc24-46cf-9766-621e5cec2d79_3428x1900.png 424w, https://substackcdn.com/image/fetch/$s_!SGZo!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a02c2f-fc24-46cf-9766-621e5cec2d79_3428x1900.png 848w, https://substackcdn.com/image/fetch/$s_!SGZo!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a02c2f-fc24-46cf-9766-621e5cec2d79_3428x1900.png 1272w, https://substackcdn.com/image/fetch/$s_!SGZo!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a02c2f-fc24-46cf-9766-621e5cec2d79_3428x1900.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!SGZo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a02c2f-fc24-46cf-9766-621e5cec2d79_3428x1900.png" 
width="1456" height="807" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/46a02c2f-fc24-46cf-9766-621e5cec2d79_3428x1900.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:807,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:4703461,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.chipstrat.com/i/190398630?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a02c2f-fc24-46cf-9766-621e5cec2d79_3428x1900.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!SGZo!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a02c2f-fc24-46cf-9766-621e5cec2d79_3428x1900.png 424w, https://substackcdn.com/image/fetch/$s_!SGZo!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a02c2f-fc24-46cf-9766-621e5cec2d79_3428x1900.png 848w, https://substackcdn.com/image/fetch/$s_!SGZo!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a02c2f-fc24-46cf-9766-621e5cec2d79_3428x1900.png 1272w, https://substackcdn.com/image/fetch/$s_!SGZo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F46a02c2f-fc24-46cf-9766-621e5cec2d79_3428x1900.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg 
role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Nvidia&#8217;s external CW laser</figcaption></figure></div><p><strong>Silicon photonics is an optical circuit built in silicon</strong> using CMOS-compatible fabrication of waveguides, modulators, photodetectors:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!J3DO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F470eab3f-ae8b-4d11-86a8-b62c922134cc_3008x1922.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!J3DO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F470eab3f-ae8b-4d11-86a8-b62c922134cc_3008x1922.png 424w, https://substackcdn.com/image/fetch/$s_!J3DO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F470eab3f-ae8b-4d11-86a8-b62c922134cc_3008x1922.png 848w, https://substackcdn.com/image/fetch/$s_!J3DO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F470eab3f-ae8b-4d11-86a8-b62c922134cc_3008x1922.png 1272w, https://substackcdn.com/image/fetch/$s_!J3DO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F470eab3f-ae8b-4d11-86a8-b62c922134cc_3008x1922.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!J3DO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F470eab3f-ae8b-4d11-86a8-b62c922134cc_3008x1922.png" width="1456" height="930" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/470eab3f-ae8b-4d11-86a8-b62c922134cc_3008x1922.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:930,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:5770527,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.chipstrat.com/i/190398630?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F470eab3f-ae8b-4d11-86a8-b62c922134cc_3008x1922.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!J3DO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F470eab3f-ae8b-4d11-86a8-b62c922134cc_3008x1922.png 424w, https://substackcdn.com/image/fetch/$s_!J3DO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F470eab3f-ae8b-4d11-86a8-b62c922134cc_3008x1922.png 848w, https://substackcdn.com/image/fetch/$s_!J3DO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F470eab3f-ae8b-4d11-86a8-b62c922134cc_3008x1922.png 1272w, https://substackcdn.com/image/fetch/$s_!J3DO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F470eab3f-ae8b-4d11-86a8-b62c922134cc_3008x1922.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" 
stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Nvidia&#8217;s silicon photonics</figcaption></figure></div><p><strong>What about serviceability though? Don&#8217;t lasers fail?</strong></p><p>Yes, and this was one of the earliest objections to CPO. In pluggable optics, a failed laser means swapping the transceiver module, which is easy to access. <em>But if the laser is near the switch&#8230; that&#8217;s a lot harder, right?</em></p><p>Well, don&#8217;t put the laser near the switch! The CW laser is external. If it fails, you replace just the laser source, not the switch. </p><p>And as Nvidia shows here, the silicon photonic engines themselves are designed as detachable sub-assemblies.
Not as easy as swapping a front-panel pluggable, but far better than scrapping the switch:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!4SWP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F992e6d61-d8f5-4f25-b870-d2f65b828c05_2840x1812.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!4SWP!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F992e6d61-d8f5-4f25-b870-d2f65b828c05_2840x1812.png 424w, https://substackcdn.com/image/fetch/$s_!4SWP!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F992e6d61-d8f5-4f25-b870-d2f65b828c05_2840x1812.png 848w, https://substackcdn.com/image/fetch/$s_!4SWP!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F992e6d61-d8f5-4f25-b870-d2f65b828c05_2840x1812.png 1272w, https://substackcdn.com/image/fetch/$s_!4SWP!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F992e6d61-d8f5-4f25-b870-d2f65b828c05_2840x1812.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!4SWP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F992e6d61-d8f5-4f25-b870-d2f65b828c05_2840x1812.png" width="1456" height="929" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/992e6d61-d8f5-4f25-b870-d2f65b828c05_2840x1812.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:929,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:4002938,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.chipstrat.com/i/190398630?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F992e6d61-d8f5-4f25-b870-d2f65b828c05_2840x1812.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!4SWP!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F992e6d61-d8f5-4f25-b870-d2f65b828c05_2840x1812.png 424w, https://substackcdn.com/image/fetch/$s_!4SWP!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F992e6d61-d8f5-4f25-b870-d2f65b828c05_2840x1812.png 848w, https://substackcdn.com/image/fetch/$s_!4SWP!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F992e6d61-d8f5-4f25-b870-d2f65b828c05_2840x1812.png 1272w, https://substackcdn.com/image/fetch/$s_!4SWP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F992e6d61-d8f5-4f25-b870-d2f65b828c05_2840x1812.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Thermal concerns are similar. Lasers are temperature-sensitive and switch ASICs run hot; pluggable EMLs combine a laser and modulator in a single InP device running at high speed, and they run hot and are among the more failure-prone components. </p><p>But with CPO, the laser is just a simple CW source and the high-speed modulation moves to silicon photonics. If the laser sits off-package, you&#8217;ve removed the most temperature-sensitive component from the equation!</p><p>Moving optics closer to the switch looked like a reliability problem, but it may end up <em>improving</em> reliability instead. 
In fact, SemiAnalysis shared this nice slide from Meta that suggests Meta had <em>fewer </em>failures using CPO than with pluggables:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yJAa!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F200a6678-6062-4b81-a7f2-0c7246a4f8c2_2746x2067.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!yJAa!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F200a6678-6062-4b81-a7f2-0c7246a4f8c2_2746x2067.jpeg 424w, https://substackcdn.com/image/fetch/$s_!yJAa!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F200a6678-6062-4b81-a7f2-0c7246a4f8c2_2746x2067.jpeg 848w, https://substackcdn.com/image/fetch/$s_!yJAa!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F200a6678-6062-4b81-a7f2-0c7246a4f8c2_2746x2067.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!yJAa!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F200a6678-6062-4b81-a7f2-0c7246a4f8c2_2746x2067.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!yJAa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F200a6678-6062-4b81-a7f2-0c7246a4f8c2_2746x2067.jpeg" width="1456" height="1096" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/200a6678-6062-4b81-a7f2-0c7246a4f8c2_2746x2067.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1096,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!yJAa!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F200a6678-6062-4b81-a7f2-0c7246a4f8c2_2746x2067.jpeg 424w, https://substackcdn.com/image/fetch/$s_!yJAa!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F200a6678-6062-4b81-a7f2-0c7246a4f8c2_2746x2067.jpeg 848w, https://substackcdn.com/image/fetch/$s_!yJAa!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F200a6678-6062-4b81-a7f2-0c7246a4f8c2_2746x2067.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!yJAa!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F200a6678-6062-4b81-a7f2-0c7246a4f8c2_2746x2067.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><a href="https://newsletter.semianalysis.com/p/co-packaged-optics-cpo-book-scaling">Source</a></figcaption></figure></div><h2>Trade-offs</h2><p>Everything in engineering is about trade-offs, and CPO is no different.</p><p>Manufacturing is non-trivial. The silicon photonics engines sit very close to the switch ASIC, which means integrating optical and electrical components with different materials, process flows, and reliability characteristics. That complicates packaging, thermal design, and testing compared with traditional pluggable optics.</p><p>And CPO also tightens the coupling between the optics and the switch platform. With pluggables, operators can mix transceiver vendors or change optical reaches independently of the switch. 
In a CPO system, the optical engines are designed and qualified as part of the switch platform, which reduces that flexibility.</p><p>But that&#8217;s a manageable trade-off for hyperscalers, who co-design systems with their silicon partners (Broadcom, Marvell), control board layout and qualification, and deploy into environments they fully manage. </p><p>And Nvidia is bringing CPO to its merchant switch lineup (Spectrum-X Photonics, Quantum-X Photonics), which could eventually put it within reach of non-hyperscalers who don&#8217;t have that kind of vertical integration. <em>Well, they do have that kind of vertical integration&#8230; via Nvidia.</em></p><h2>Pluggables Are Not Dead</h2><p>Pluggable transceivers are EML-based InP modules with DSPs. As CPO scales, the industry needs more silicon photonics engines and CW laser sources, and fewer DSP chips for pluggables.</p><p>People like to jump to conclusions. <em>Remember: Copper is dead! Long live optical!</em></p><p>But as Vik and I discussed recently, I&#8217;d sum it up as: <em>The answer is both. The question is when.</em></p><div id="youtube2-47cQTPjDUB8" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;47cQTPjDUB8&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/47cQTPjDUB8?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Yes, the direction of travel is toward tighter integration of optics and silicon. But the debate is how fast. After all, Hock Tan claimed 400G SerDes on the last Broadcom earnings call, which would extend the pluggable runway. But Broadcom is also shipping early-access CPO switches. Nvidia is shipping CPO in 2026.
<a href="https://investor.marvell.com/news-events/press-releases/detail/1000/marvell-to-acquire-celestial-ai-accelerating-scale-up-connectivity-for-next-generation-data-centers">Marvell acquired Celestial AI</a> to get in the game. </p><p>Today, pluggable optics concentrate value in the transceiver module: InP EML lasers, DSPs, driver and TIA chips, optical packaging. Companies like Lumentum, Coherent, Fabrinet, and DSP suppliers like Marvell and Broadcom sit in that value chain. </p><p>But CPO unbundles that value chain. The optical functions move onto the switch package as silicon photonics engines, the laser becomes a separate CW source, and the DSP largely goes away. </p><p>Hence, value shifts toward silicon photonics, CW laser production, and advanced packaging, and away from standalone transceiver DSPs and pluggable module assembly. </p><p>But again, these shifts are happening over time. As I said in the video above, the transition between technologies isn&#8217;t just a binary thing on a particular date, but more of an adoption curve. Even a single hyperscaler can be deploying different technologies at different places at roughly the same time. So pluggables are still a great business.</p><p>I&#8217;ve already started pulling on these value chain threads. If you want to understand the laser side in depth, check out <a href="https://www.chipstrat.com/p/lumentum-and-the-laser-bottleneck">Lumentum and the Laser Bottleneck</a> and <a href="https://www.chipstrat.com/p/broadcom-makes-lasers">Broadcom Makes Lasers</a>, which are both directly connected to the CW laser and silicon photonics shifts we covered here. <em>And more to come!</em> <em>Coherent, Applied Optoelectronics, Astera, Poet, etc. Getting lots of requests from readers :)</em></p><p>If you found this useful, subscribe.
And if you want the deeper company-level analysis as CPO scales, consider going paid.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.chipstrat.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.chipstrat.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[An Interview with Rivian’s Mukund Chavan About RAP1]]></title><description><![CDATA[Rivian designed its own autonomy SoC. Why?]]></description><link>https://www.chipstrat.com/p/an-interview-with-rivians-mukund</link><guid isPermaLink="false">https://www.chipstrat.com/p/an-interview-with-rivians-mukund</guid><dc:creator><![CDATA[Austin Lyons]]></dc:creator><pubDate>Thu, 05 Mar 2026 17:56:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/nHfKyO9Afj0" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I am happy to welcome <a href="https://www.linkedin.com/in/chavanmukund/">Mukund Chavan</a>, VP of ASIC Design at Rivian, to discuss the Rivian Autonomy Processor (RAP-1). <em>This is the custom silicon chip Rivian announced at its Autonomy and AI Day back in December.</em></p><p>Rivian designing its own chip was a genuine surprise. The company built its own zonal ECU architecture, so electronics has been a core competency, but custom silicon is a different level of ambition.</p><p>We start the interview by building up an understanding of the autonomy workload. 
What does it actually mean to do multimodal inference at the edge, when you have 11 cameras, radar, and lidar all streaming into a chip that has to make driving decisions in a 100-millisecond loop?</p><p>From there we get into RAP1 itself, including the custom neural network engine, the safety architecture, and a chip-to-chip interconnect called RivLink that hints at ambitions beyond autonomous vehicles. We discuss build vs. buy decisions, like why Rivian designed its own neural network engine and interconnect but licensed Arm cores. And a very interesting discussion about how silicon economics drove the 800 TOPS compute target.</p><p>Finally, we get to the heart of the matter. Why custom silicon over merchant? How does vertical integration unlock optimizations you simply can&#8217;t get off the shelf?</p><p>We close with the team and timeline, about 2.5 years from project go to silicon in hand. This was fun and educational. Enjoy!</p><div id="youtube2-nHfKyO9Afj0" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;nHfKyO9Afj0&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/nHfKyO9Afj0?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p><em>This interview is lightly edited for clarity. Transcript is available for paid subscribers who prefer to read vs watch.</em></p><h1>An Interview with Rivian&#8217;s Mukund Chavan About RAP1</h1>
      <p>
          <a href="https://www.chipstrat.com/p/an-interview-with-rivians-mukund">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Broadcom Makes Lasers?]]></title><description><![CDATA[Broadcom's InP fab, EMLs at 1.6T, full-stack CPO play, and comparing against Lumentum and Coherent]]></description><link>https://www.chipstrat.com/p/broadcom-makes-lasers</link><guid isPermaLink="false">https://www.chipstrat.com/p/broadcom-makes-lasers</guid><dc:creator><![CDATA[Austin Lyons]]></dc:creator><pubDate>Tue, 03 Mar 2026 23:44:33 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/f6f56088-6646-46dd-835d-5f52468b7faf_779x348.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>After covering <a href="https://www.chipstrat.com/p/lumentum-and-the-laser-bottleneck">Lumentum&#8217;s lasers</a>, I was planning to hit Coherent next... but a friend reached out and inspired me to sharpen up on Broadcom. So let&#8217;s do that together.</p><p>Most people hear Broadcom and think Tomahawk switches, XPUs, VMware, Hock Tan. <em>Not lasers.</em> But Broadcom has actually been the primary 200G EML supplier to date, given its position in Nvidia&#8217;s 1.6T transceivers. It turns out there&#8217;s an interesting history and a great hidden business here. </p><p>Heck, Broadcom owns the former Bell Labs InP fab in Breinigsville, Pennsylvania. 
<em>And they are hiring!</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!SZSK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7032c14a-0961-436e-861d-e223810d4a00_1656x732.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!SZSK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7032c14a-0961-436e-861d-e223810d4a00_1656x732.png 424w, https://substackcdn.com/image/fetch/$s_!SZSK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7032c14a-0961-436e-861d-e223810d4a00_1656x732.png 848w, https://substackcdn.com/image/fetch/$s_!SZSK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7032c14a-0961-436e-861d-e223810d4a00_1656x732.png 1272w, https://substackcdn.com/image/fetch/$s_!SZSK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7032c14a-0961-436e-861d-e223810d4a00_1656x732.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!SZSK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7032c14a-0961-436e-861d-e223810d4a00_1656x732.png" width="1456" height="644" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7032c14a-0961-436e-861d-e223810d4a00_1656x732.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:644,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:148547,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.chipstrat.com/i/189822965?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7032c14a-0961-436e-861d-e223810d4a00_1656x732.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!SZSK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7032c14a-0961-436e-861d-e223810d4a00_1656x732.png 424w, https://substackcdn.com/image/fetch/$s_!SZSK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7032c14a-0961-436e-861d-e223810d4a00_1656x732.png 848w, https://substackcdn.com/image/fetch/$s_!SZSK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7032c14a-0961-436e-861d-e223810d4a00_1656x732.png 1272w, https://substackcdn.com/image/fetch/$s_!SZSK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7032c14a-0961-436e-861d-e223810d4a00_1656x732.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><a href="https://broadcom.wd1.myworkdayjobs.com/External_Career/job/USA-Pennsylvania-Breinigsville-9999-Hamilton-Blvd/Fab-Process-Engineer_R024655">Broadcom Job Board</a></figcaption></figure></div><p>So Broadcom makes its own EMLs, VCSELs, and CW lasers, and pairs them with the industry&#8217;s leading DSPs and switch ASICs.</p><p>I&#8217;m genuinely impressed. </p><p>That&#8217;s arguably the most complete photonics stack in the industry. I had no idea. <em>Broadcom, you should tell this story!</em></p><h2>Broadcom&#8217;s Laser Business</h2><p>Digging in, the laser business looks great:</p><ul><li><p>Total EML capacity is projected to grow from over 40 million units in 2025 to 50 million in 2026. <em>This is across all speeds. 
100G for 800G transceivers, 200G for 1.6T, and other applications</em></p></li><li><p>CW laser capacity is expanding from mid-teens millions in 2025 to roughly 30 million in 2026</p></li><li><p>Broadcom was first to ship high-volume 100G-per-lane EMLs and VCSEL technology</p></li><li><p>Management claims leadership in 200G EMLs for 1.6T transceivers</p></li><li><p>The networking segment, which includes these components, carries Broadcom&#8217;s highest margins outside of software</p></li></ul><p>Note that Broadcom doesn&#8217;t break out its photonics revenue separately. <em>They should!</em> Lumentum trades at ~60x forward earnings as a pure-play laser company. Coherent trades at ~44x on vertical integration. Broadcom&#8217;s laser business... not sure how the Street values it. It&#8217;s buried inside a networking segment, within a semiconductor solutions business unit, inside a $1.5T market cap conglomerate. <em>Investors can&#8217;t price what they can&#8217;t see. I said this about Nvidia&#8217;s autonomy business recently too, and previously about Nvidia&#8217;s networking business.</em></p><p>On the Q4 FY2025 earnings call, CEO Hock Tan said:</p><blockquote><p>&#8220;The demand for our latest 1.6 terabit per second DSPs that enables optical interconnects for scale-out, particularly. It&#8217;s just very, very strong. And by extension, demand for the optical components like lasers, PIN diodes, just going nuts.&#8221;</p></blockquote><p><em>Going nuts.</em></p><p>With EML capacity ramping 25% year-over-year and demand going nuts, a photonics breakout would give the Street something to actually model in a sum-of-the-parts fashion. And Jefferies notes that Broadcom&#8217;s networking segment carries the highest margins outside software.</p><p>InP lasers are high-ASP, high-margin components, and the 100G-to-200G mix shift roughly <em>doubles</em> the ASP per laser. 
<em>Vik and I over at <a href="https://x.com/semidoped">Semi Doped</a> have been very interested in exploring component companies with stacking s-curves, where each new product generation has higher ASPs.</em></p><p>How does Broadcom&#8217;s laser position compare to Lumentum and Coherent?</p><p>Today I dug into the 1.6T competitive dynamics, the CPO timeline, the substrate supply risk (including a good update this week from Lumentum that derisks it), and some bull/bear tensions. </p><p><strong>Plus a three-way comparison between AVGO, LITE, and COHR during this optical supercycle.</strong></p><p><em>I went down the rabbit hole over the past few days; thanks for coming along on the ride and learning with me!</em></p><p><em>Not financial advice. Do your own due diligence.</em></p>
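<p><em>To see why that mix shift compounds with unit growth, here&#8217;s a toy model. Only the 40M-to-50M unit counts and the &#8220;roughly doubles&#8221; ASP step come from above; the $40 base ASP and the mix percentages are purely hypothetical:</em></p>

```python
# Toy model of the 100G -> 200G mix shift. Unit counts (40M -> 50M EMLs)
# and the ~2x ASP step come from the post; the $40 base ASP and the
# 200G mix percentages are purely hypothetical.

ASP_100G = 40.0              # hypothetical $ per 100G EML
ASP_200G = 2 * ASP_100G      # "roughly doubles the ASP per laser"

def revenue(units_millions: float, mix_200g: float) -> float:
    """Revenue at a blended ASP for a given 200G unit-mix fraction."""
    blended = (1 - mix_200g) * ASP_100G + mix_200g * ASP_200G
    return units_millions * 1e6 * blended

r2025 = revenue(40, 0.20)    # 40M units, 20% on 200G (assumed)
r2026 = revenue(50, 0.60)    # 50M units, 60% on 200G (assumed)
print(f"2025: ${r2025 / 1e9:.2f}B   2026: ${r2026 / 1e9:.2f}B")
print(f"units +25%, revenue {r2026 / r2025 - 1:+.0%}")
```

<p><em>Under these assumed mixes, units grow 25% but revenue grows about 67%. That&#8217;s the stacking s-curve effect in one line of arithmetic.</em></p>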
      <p>
          <a href="https://www.chipstrat.com/p/broadcom-makes-lasers">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Lumentum and the Laser Bottleneck ]]></title><description><![CDATA[Lumentum is winning and still can't keep up. Coherent is closing in. Earnings, sell-side sentiment, and the tensions to track.]]></description><link>https://www.chipstrat.com/p/lumentum-and-the-laser-bottleneck</link><guid isPermaLink="false">https://www.chipstrat.com/p/lumentum-and-the-laser-bottleneck</guid><dc:creator><![CDATA[Austin Lyons]]></dc:creator><pubDate>Mon, 02 Mar 2026 15:54:44 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d93bd343-8456-4a66-a2bf-7fc4c7ff155d_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There are so many bottlenecks in the AI cluster value chain. GPUs. CoWoS. HBM. And today, our focus is on another one: lasers.</p><p>Electro-absorption modulated lasers (EMLs) are the component inside optical transceivers that converts electrical data into light. It&#8217;s a laser <em>and</em> a high-speed modulator on a single indium phosphide chip, and every 800G and 1.6T transceiver needs them.</p><p>Lumentum makes more of these lasers than anyone, somewhere between 50-60% of the global market. Yet demand outstrips supply by roughly 30%! </p><p>Lumentum CEO Michael Hurlston said as much on the Q2 earnings call:</p><blockquote><p>&#8220;We&#8217;re undershipping our customers&#8217; demand by somewhere around 30%. Even as we&#8217;ve added 20% additional capacity, the demand-supply imbalance has increased.&#8221;</p></blockquote><p>Dang. All EML capacity is locked under long-term agreements through calendar 2027. Customers coming back for more than their LTAs cover are paying premium prices. Customers who won&#8217;t sign LTAs risk losing supply priority entirely. 
<em>Must be nice!</em></p><p>As CFO Wajid Ali said,</p><blockquote><p>the LTA structure is &#8220;allowing us to have incremental pricing discussions around those incremental units.&#8221;</p></blockquote><p>Of course, this pricing power is showing up in the financials. Revenue hit $665M, up 65% YoY, with <em>$805M</em> guided next quarter. Operating margins went from 7.5% to 25.2% in twelve months, with 30%+ guided next quarter. <em>Whew!</em></p><p>Yeah, that explains why the stock trades at more than 60x forward earnings, and why the current price has run ahead of the average sell-side target.</p><p>But bottlenecks and winners like this attract competition. </p><p>The question, then, is how long this temporary advantage holds, and whether the stock already prices in the answer. <em>NFA, DYDD.</em> </p><p>To get there, we first need to understand why the entire AI data center is going optical, and then why the laser layer is a bottleneck.</p><h2>Why the Data Center Is Going Optical</h2><p>Three shifts are expanding the optical TAM.</p><p><strong>First, scale-out networking at 800G and above is increasingly optical.</strong> At 100G-per-lane speeds, copper alternatives like Credo&#8217;s AECs still work for short reaches (up to ~5-7 meters) and <a href="https://www.chipstrat.com/p/credos-reliability-thesis">ALCs could push that to 30 meters</a>. But anything beyond that requires optical transceivers, and each one needs EML laser chips. As lane speeds jump to 200G for 1.6T, copper reach compresses further, expanding the optical TAM with every speed generation.</p><p><strong>Second, scale-up is going optical.</strong> Copper has dominated inside-the-rack connectivity (NVLink), but it&#8217;s hitting a physical wall at higher speeds. The industry is actively exploring alternatives. Nvidia demonstrated die-to-die optics at ISSCC in February using TSMC&#8217;s COUPE 3D hybrid bonding platform with silicon photonics engines. 
Marvell acquired Celestial AI for its photonic fabric technology. Ayar Labs (backed by Intel and Nvidia) is building optical I/O chiplets for in-package chip-to-chip communication. <a href="https://www.broadcom.com/blog/introducing-ethernet-scale-up-networking-advancing-ethernet-for-scale-up-ai-infrastructure">Broadcom&#8217;s ESUN spec</a> targets Ethernet-based optical scale-up. Even Credo, the copper champion, is hedging with ALCs that use micro-LED light sources for row-scale scale-up.</p><p>This creates an entirely new optical TAM that didn&#8217;t exist two years ago. Lumentum CEO Hurlston on the Q2 call:</p><blockquote><p>&#8220;Copper has long been the gold standard for scale-up... it is hitting a physical wall. An industry pivot is underway. By late calendar 2027, we would expect our first <strong>scale-up</strong> CPO shipments, replacing longer copper connections.&#8221;</p></blockquote><p><em>One note: I see folks conflating Nvidia&#8217;s CPO announcements with scale-up. Nvidia&#8217;s current CPO switches (Quantum-X Photonics, Spectrum-X Photonics) are on the scale-out fabric, not scale-up. NVLink remains copper today. Optical scale-up is a transition that&#8217;s just in its infancy.</em></p><p><strong>Third, Optical Circuit Switching is a new category.</strong> OCS reconfigures optical paths in milliseconds, replacing fixed electrical spine switches. Google pioneered this for AI cluster networking. I think this technology is awesome; it&#8217;s lower-power because it stays in the optical domain, so there&#8217;s no need to convert light into electrical signals for switching and routing.</p><p>Google historically built its own, but the technology is moving to merchant vendors. Lumentum&#8217;s OCS backlog went from zero to over $400 million in roughly a year. Three customers, four use cases (spine replacement, scale-across, scale-up, redundancy). 
Cignal AI projects the OCS market will <a href="https://cignal.ai/2025/12/optical-circuit-switching-market-to-exceed-2-5b-in-2029/">exceed $2.5 billion</a> by 2029.  </p><p>Lumentum&#8217;s approach uses MEMS-based mirrors:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!IrNG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d06a6f0-8bc8-424d-88c4-75dbf714e4d4_1386x785.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!IrNG!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d06a6f0-8bc8-424d-88c4-75dbf714e4d4_1386x785.png 424w, https://substackcdn.com/image/fetch/$s_!IrNG!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d06a6f0-8bc8-424d-88c4-75dbf714e4d4_1386x785.png 848w, https://substackcdn.com/image/fetch/$s_!IrNG!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d06a6f0-8bc8-424d-88c4-75dbf714e4d4_1386x785.png 1272w, https://substackcdn.com/image/fetch/$s_!IrNG!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d06a6f0-8bc8-424d-88c4-75dbf714e4d4_1386x785.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!IrNG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d06a6f0-8bc8-424d-88c4-75dbf714e4d4_1386x785.png" width="1386" height="785" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6d06a6f0-8bc8-424d-88c4-75dbf714e4d4_1386x785.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:785,&quot;width&quot;:1386,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!IrNG!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d06a6f0-8bc8-424d-88c4-75dbf714e4d4_1386x785.png 424w, https://substackcdn.com/image/fetch/$s_!IrNG!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d06a6f0-8bc8-424d-88c4-75dbf714e4d4_1386x785.png 848w, https://substackcdn.com/image/fetch/$s_!IrNG!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d06a6f0-8bc8-424d-88c4-75dbf714e4d4_1386x785.png 1272w, https://substackcdn.com/image/fetch/$s_!IrNG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d06a6f0-8bc8-424d-88c4-75dbf714e4d4_1386x785.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"> <a href="https://cignal.ai/2025/12/optical-circuit-switching-market-to-exceed-2-5b-in-2029/">source</a></figcaption></figure></div><h2>So Where Does Lumentum Stand?</h2><p><strong>Lumentum is in a great spot, with three expanding optical TAMs (scale-out, scale-up, and OCS), and it is shipping lasers into all of them.</strong></p><p>And as icing on the cake, Nvidia just today invested $2 billion in both Lumentum and Coherent, with multibillion-dollar purchase commitments for future capacity. <em>See <a href="https://nvidianews.nvidia.com/news/nvidia-announces-strategic-partnership-with-lumentum-to-develop-state-of-the-art-optics-technology">NVIDIA Announces Strategic Partnership With Lumentum to Develop State-of-the-Art Optics Technology</a></em></p><p>As we said, though, bottlenecks attract competition. Coherent is ramping 6-inch indium phosphide wafers, which could yield 4x as many dies per wafer at significantly lower cost. 
Tower and GlobalFoundries are 5x-ing SiPho capacity, which could eventually route around EML-based transceivers entirely. <a href="https://www.meshoptical.com/blog/introducing-mesh">Mesh Optical Technologies</a>, a startup founded by SpaceX alumni, raised $50M to build 1.6T transceivers using semiconductor-style packaging instead of traditional manual optical assembly. <em>The typical big claims from a new startup, but an interesting one to watch, because it would simultaneously increase capacity and decrease cost.</em></p><p><strong>The nearest-term capacity risk is Coherent</strong>. If Coherent&#8217;s 6-inch yields truly close the gap with Lumentum&#8217;s 3-inch performance (as they claim), the scarcity that drives Lumentum&#8217;s pricing power will end. There&#8217;s still a side-by-side performance comparison to make at that point, not to mention future-generation laser supply constraints. Whoever gets there first can enjoy the same pricing dynamics.</p><p>I broke Lumentum&#8217;s story into some clear points of tension to track, and scored each against Q2 FY2026 earnings and sell-side research. </p><p>Here&#8217;s what&#8217;s behind the paywall:</p><ul><li><p><strong>Lumentum&#8217;s business units:</strong> the component and systems businesses, the epitaxy moat, and why even competitors buy LITE&#8217;s lasers</p></li><li><p><strong>Four Growth Vectors:</strong> EML, OCS, CPO, and optical scale-up. What&#8217;s shipping now, what inflects in H2 2026, and what&#8217;s a 2027+ story</p></li><li><p><strong>Tensions:</strong> bull vs. bear on the EML moat, OCS market creation, CPO timing, supply chain risk</p></li><li><p><strong>Sell-Side Sentiment:</strong> what the Street is saying and where analysts disagree</p></li><li><p><strong>Earnings Scorecard:</strong> each tension scored against Q2 results, management quotes, and sell-side evidence</p></li></ul>
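<p><em>One aside on the 6-inch vs. 3-inch point above: the ~4x figure is just geometry, since die count scales with wafer area, which scales with diameter squared. A quick sketch; the 0.5 mm&#178; EML die size and 3 mm edge exclusion are my own illustrative assumptions, not Coherent&#8217;s actual numbers:</em></p>

```python
# Why 6-inch InP wafers could yield ~4x the dies of 3-inch: usable area
# scales with the square of the diameter. The 0.5 mm^2 die size and the
# 3 mm edge exclusion are illustrative assumptions.
import math

def gross_dies(wafer_mm: float, die_mm2: float, edge_mm: float = 3) -> int:
    """Crude gross-die estimate: usable disc area divided by die area."""
    usable_r = wafer_mm / 2 - edge_mm
    return math.floor(math.pi * usable_r**2 / die_mm2)

d3 = gross_dies(75, 0.5)     # 3-inch wafer is ~75 mm in diameter
d6 = gross_dies(150, 0.5)    # 6-inch wafer is ~150 mm
print(d3, d6, f"ratio = {d6 / d3:.2f}x")
```

<p><em>The pure area ratio is (150/75)&#178; = 4x; a fixed edge exclusion actually favors the larger wafer slightly, so 4x is a fair round number before yield effects.</em></p>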
      <p>
          <a href="https://www.chipstrat.com/p/lumentum-and-the-laser-bottleneck">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Photons as a Service]]></title><description><![CDATA[xLight's plan to swap EUV's light source for a free-electron laser and sell light like a utility.]]></description><link>https://www.chipstrat.com/p/photons-as-a-service</link><guid isPermaLink="false">https://www.chipstrat.com/p/photons-as-a-service</guid><dc:creator><![CDATA[Austin Lyons]]></dc:creator><pubDate>Wed, 25 Feb 2026 14:26:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ISCA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8051981e-557c-4e9a-b8dd-3f84bf10eb14_1268x708.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In January, I wrote about the worsening cost curve of EUV lithography and two startups trying to bend it:</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;07e01de7-5acc-4b92-a7e4-cdd4d6c5e3e4&quot;,&quot;caption&quot;:&quot;&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Lithography Economics&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:8066776,&quot;name&quot;:&quot;Austin Lyons&quot;,&quot;bio&quot;:&quot;Chipstrat, Creative Strategies, Semi Doped. 
MSEE + MBA.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c180a750-7572-4aff-88e4-317aa435d533_1203x902.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2026-01-03T19:04:07.068Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!T77P!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35c4dc10-1679-4734-b3f5-991344ffe0aa_2048x960.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.chipstrat.com/p/lithography-economics&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:183370591,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:29,&quot;comment_count&quot;:2,&quot;publication_id&quot;:2003179,&quot;publication_name&quot;:&quot;Chipstrat&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!rCMl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27769444-42f3-4b43-9683-4fe7826c06b8_608x608.png&quot;,&quot;belowTheFold&quot;:false,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>I teased xLight and Substrate and promised deeper dives on both. </p><p>Today I&#8217;m delivering the first one on xLight. Now that we understand the broader economic problem the industry needs to solve, let&#8217;s dive into technical details and areas for improvement. </p><p>We&#8217;ll start with some fundamental limits of LPP EUV scanners.</p><h2>The Photon Problem</h2><p>Every ASML EUV scanner generates its light the same way. <strong>It&#8217;s called Laser Produced Plasma, or LPP.</strong> It&#8217;s the &#8220;shoot tin droplets and hit &#8216;em each twice with a laser&#8221; light source magic trick. 
<em>I&#8217;m assuming most readers are somewhat familiar, but if not, here are two great YouTube references to get you up to speed, plus a short recap</em></p><div id="youtube2-5Ge2RcvDlgw" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;5Ge2RcvDlgw&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/5Ge2RcvDlgw?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><div id="youtube2-B2482h_TNwg" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;B2482h_TNwg&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/B2482h_TNwg?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p><strong>How EUV&#8217;s light source LPP works:</strong> A high-intensity CO2 laser fires at tiny droplets of molten tin traveling at 100 meters per second, 50,000 times per second. Each droplet is actually hit twice: first a pre-pulse reshapes the 30-micrometer tin sphere into a concave sheet, then the main CO2 pulse vaporizes it into a dense plasma that emits 13.5nm EUV photons. These photons bounce off a series of collector mirrors, reflect off a photomask (EUV masks are reflective, since 13.5nm light would be absorbed passing through), and expose the wafer below. As you can imagine, it took decades to get this concept to work at the reliability and yield needed for scaled manufacturing. <em>It&#8217;s freaking nuts that it works. 
That&#8217;s what I love about the semiconductor industry, by the way: literally everywhere you look, it&#8217;s just mind-blowing technology.</em></p><p>One interesting thing to note is that the industry actually <strong>first tried particle accelerators called synchrotrons</strong> to generate this high-energy, small-wavelength light. But at the time, synchrotrons couldn&#8217;t produce sufficiently directional light. Next the industry tried xenon-based plasma sources, but those hit a fundamental physics ceiling at less than 1% conversion efficiency. Eventually the industry landed on tin back in 2002. Blasting tin particles into a plasma produces EUV light, but it has a debris problem (contaminating everything). Dealing with that debris is an engineering problem, though, not a fundamental physics roadblock.</p><p>Fast forward 20 years and LPP with tin works. Hitting 50,000 falling tin droplets per second with a laser, twice each, 24/7. <em>It&#8217;s genuinely an engineering marvel; it should be discussed in freshman physics and engineering classes to get people excited about science</em></p><p>But... the best LPP achieves only about 6% conversion efficiency. 94% of the energy put in is wasted. And the system has fundamental problems that get worse as chips get smaller.</p><p>For example, EUV photons carry about 14x more energy than the 193nm photons used in previous-generation DUV lithography. That means for the same power output, an EUV source produces 14x fewer photons.</p><p>This is where <em>dose</em> comes in. Dose is the amount of light energy delivered per unit area of the wafer, measured in millijoules per square centimeter (mJ/cm&#178;). Think of it as the photon budget for each exposure. The photoresist (<em>the light-sensitive chemical coating on the wafer</em>) needs a minimum number of photons to trigger the chemical reaction that creates the pattern. 
Too few photons and edges come out rough or some features are missing:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Nt98!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a93db25-6175-4cdd-914f-49d0cf0c4041_1242x898.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Nt98!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a93db25-6175-4cdd-914f-49d0cf0c4041_1242x898.png 424w, https://substackcdn.com/image/fetch/$s_!Nt98!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a93db25-6175-4cdd-914f-49d0cf0c4041_1242x898.png 848w, https://substackcdn.com/image/fetch/$s_!Nt98!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a93db25-6175-4cdd-914f-49d0cf0c4041_1242x898.png 1272w, https://substackcdn.com/image/fetch/$s_!Nt98!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a93db25-6175-4cdd-914f-49d0cf0c4041_1242x898.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Nt98!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a93db25-6175-4cdd-914f-49d0cf0c4041_1242x898.png" width="488" height="352.83735909822866" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7a93db25-6175-4cdd-914f-49d0cf0c4041_1242x898.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:898,&quot;width&quot;:1242,&quot;resizeWidth&quot;:488,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Nt98!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a93db25-6175-4cdd-914f-49d0cf0c4041_1242x898.png 424w, https://substackcdn.com/image/fetch/$s_!Nt98!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a93db25-6175-4cdd-914f-49d0cf0c4041_1242x898.png 848w, https://substackcdn.com/image/fetch/$s_!Nt98!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a93db25-6175-4cdd-914f-49d0cf0c4041_1242x898.png 1272w, https://substackcdn.com/image/fetch/$s_!Nt98!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a93db25-6175-4cdd-914f-49d0cf0c4041_1242x898.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Source: Another great <a href="https://www.youtube.com/watch?v=0igQuerc3J0">Asianometry video</a></em></figcaption></figure></div><p>The required dose isn&#8217;t a knob you can turn down to save time, as it&#8217;s set by the resist chemistry and the feature size. And as features shrink, dose requirements go <em>up</em>. Why? Because smaller features mean each &#8220;pixel&#8221; on the wafer covers fewer atoms of resist. Fewer atoms mean fewer photon interactions, leading to greater statistical randomness in whether the pattern prints correctly. To compensate, you need more photons per pixel  (a higher dose) to keep the randomness under control. The industry calls these randomness problems <em>stochastic effects</em>, and as <a href="https://www.youtube.com/watch?v=0igQuerc3J0">Asianometry puts it</a>:</p><blockquote><p>&#8220;Stochastic print failures are far smaller, completely random and thus non-repeating, and come via a law of nature. The randomness is unavoidable. 
So how do we usually deal with randomness caused by the law of small numbers? We increase numbers. Just send more photons.&#8221;</p></blockquote><p>So every new node needs a higher dose, <strong>but the light source isn&#8217;t getting proportionally brighter.</strong></p><p>For example, a Low-NA EUV tool is rated for about 220 wafers per hour:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!knUt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7c6dac6-afb3-4c35-a876-4c073849fc64_1516x1116.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!knUt!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7c6dac6-afb3-4c35-a876-4c073849fc64_1516x1116.png 424w, https://substackcdn.com/image/fetch/$s_!knUt!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7c6dac6-afb3-4c35-a876-4c073849fc64_1516x1116.png 848w, https://substackcdn.com/image/fetch/$s_!knUt!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7c6dac6-afb3-4c35-a876-4c073849fc64_1516x1116.png 1272w, https://substackcdn.com/image/fetch/$s_!knUt!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7c6dac6-afb3-4c35-a876-4c073849fc64_1516x1116.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!knUt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7c6dac6-afb3-4c35-a876-4c073849fc64_1516x1116.png" width="581" height="427.7692307692308" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a7c6dac6-afb3-4c35-a876-4c073849fc64_1516x1116.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1072,&quot;width&quot;:1456,&quot;resizeWidth&quot;:581,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!knUt!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7c6dac6-afb3-4c35-a876-4c073849fc64_1516x1116.png 424w, https://substackcdn.com/image/fetch/$s_!knUt!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7c6dac6-afb3-4c35-a876-4c073849fc64_1516x1116.png 848w, https://substackcdn.com/image/fetch/$s_!knUt!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7c6dac6-afb3-4c35-a876-4c073849fc64_1516x1116.png 1272w, https://substackcdn.com/image/fetch/$s_!knUt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa7c6dac6-afb3-4c35-a876-4c073849fc64_1516x1116.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>But that rating assumes a dose of 30 mJ/cm&#178;. 
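</p><p>Back-of-envelope, that dose ties directly to photon-counting statistics. Here&#8217;s a sketch in Python, under stated assumptions: each 13.5nm photon carries ~92 eV, and the square resist &#8220;pixels&#8221; at 10nm and 5nm are illustrative sizes, not any node&#8217;s actual dimensions:</p>

```python
import math

EV = 1.602e-19      # joules per electronvolt
E_PHOTON = 92 * EV  # one 13.5nm EUV photon carries ~92 eV

def photons_per_pixel(dose_mj_cm2, pixel_nm):
    """Mean number of photons landing on one square 'pixel' of resist."""
    dose_j_m2 = dose_mj_cm2 * 10.0  # 1 mJ/cm^2 == 10 J/m^2
    area_m2 = (pixel_nm * 1e-9) ** 2
    return dose_j_m2 * area_m2 / E_PHOTON

def shot_noise(dose_mj_cm2, pixel_nm):
    """Relative dose fluctuation (sigma / mean) from Poisson statistics."""
    return 1.0 / math.sqrt(photons_per_pixel(dose_mj_cm2, pixel_nm))

# At the Low-NA rated dose of 30 mJ/cm^2, a 10nm pixel catches ~2000 photons
# (~2% shot noise). Halve the pixel and the count drops 4x, doubling the noise:
for pixel in (10, 5):
    n = photons_per_pixel(30, pixel)
    print(f"{pixel}nm pixel: {n:.0f} photons, {100 * shot_noise(30, pixel):.1f}% noise")
```

<p><em>Quadruple the dose and the 5nm pixel recovers the photon count of the 10nm one, which is exactly why dose requirements climb as features shrink.</em></p><p>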
Move to High-NA which prints finer features and needs a higher dose and look what happens:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0xD9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25be9a0b-1986-4bd5-a3be-bc7eaad856bf_1410x1064.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0xD9!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25be9a0b-1986-4bd5-a3be-bc7eaad856bf_1410x1064.png 424w, https://substackcdn.com/image/fetch/$s_!0xD9!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25be9a0b-1986-4bd5-a3be-bc7eaad856bf_1410x1064.png 848w, https://substackcdn.com/image/fetch/$s_!0xD9!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25be9a0b-1986-4bd5-a3be-bc7eaad856bf_1410x1064.png 1272w, https://substackcdn.com/image/fetch/$s_!0xD9!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25be9a0b-1986-4bd5-a3be-bc7eaad856bf_1410x1064.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!0xD9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25be9a0b-1986-4bd5-a3be-bc7eaad856bf_1410x1064.png" width="580" height="437.6737588652482" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/25be9a0b-1986-4bd5-a3be-bc7eaad856bf_1410x1064.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1064,&quot;width&quot;:1410,&quot;resizeWidth&quot;:580,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!0xD9!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25be9a0b-1986-4bd5-a3be-bc7eaad856bf_1410x1064.png 424w, https://substackcdn.com/image/fetch/$s_!0xD9!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25be9a0b-1986-4bd5-a3be-bc7eaad856bf_1410x1064.png 848w, https://substackcdn.com/image/fetch/$s_!0xD9!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25be9a0b-1986-4bd5-a3be-bc7eaad856bf_1410x1064.png 1272w, https://substackcdn.com/image/fetch/$s_!0xD9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F25be9a0b-1986-4bd5-a3be-bc7eaad856bf_1410x1064.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Notice throughput is only 175 wafers per hour at 50 mJ/cm&#178;.</p><p>The more advanced tool is <em>slower</em> because the higher dose eats into throughput. High-NA can save you total exposure steps by eliminating double patterning, which helps. But the long-term trend is evident. Every step to smaller features demands more dose, and more dose means fewer wafers per hour <em>unless the light source gets brighter.</em> Today&#8217;s LPP sources produce about 500 watts of EUV power. ASML has shown a roadmap to 600-800 watts. <em>Note: per this week&#8217;s <a href="https://www.reuters.com/world/china/asml-unveils-euv-light-source-advance-that-could-yield-50-more-chips-by-2030-2026-02-23/">ASML disclosure in Reuters</a>, ASML says its R&amp;D team has unlocked 1,000+ W which would increase throughput back up to &gt;300 wph.</em></p><p><strong>But it&#8217;s still not enough.</strong> Why not? Because dose requirements are increasing too. 
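</p><p>Those two rated operating points pin down a toy throughput model: a fixed per-wafer overhead plus an exposure time set by the dose and by the EUV power that actually reaches the wafer. The ~10 s overhead and ~3.4 W at-wafer power below are fitted assumptions for illustration, not ASML specs:</p>

```python
import math

WAFER_AREA_M2 = math.pi * 0.15 ** 2  # one 300mm wafer

def wafers_per_hour(dose_mj_cm2, watts_at_wafer, overhead_s):
    """Toy model: per-wafer time = fixed overhead + exposure time, where
    exposure time = (dose x wafer area) / EUV power delivered to the wafer."""
    dose_j_m2 = dose_mj_cm2 * 10.0  # 1 mJ/cm^2 == 10 J/m^2
    t_expose_s = dose_j_m2 * WAFER_AREA_M2 / watts_at_wafer
    return 3600.0 / (overhead_s + t_expose_s)

# Assumed ~3.4 W reaching the wafer and ~10 s overhead, fitted so the model
# roughly reproduces the two rated operating points:
print(round(wafers_per_hour(30, 3.4, 10.0)))  # ~220 wph (Low-NA dose)
print(round(wafers_per_hour(50, 3.4, 10.0)))  # ~175 wph (High-NA dose)
print(round(wafers_per_hour(50, 6.8, 10.0)))  # double the light: better, but overhead caps the gain
```

<p><em>Note the last line: in this sketch, doubling the delivered power raises throughput by only about a third, because the fixed per-wafer overhead starts to dominate.</em></p><p>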
Low-NA needs 30 mJ/cm&#178; and High-NA needs 50+ mJ/cm&#178;. Future nodes will push higher as stochastic effects worsen with each shrink.</p><p>And that&#8217;s just the power problem. Most of the photons you <em>do</em> generate never reach the wafer. Tin debris fogs the collector mirrors. A tin layer just 1.2nm thick cuts collector efficiency by 20%. And each mirror in the optical chain absorbs about 30% of the light that hits it, so after the full series of reflections, less than 10% of source photons make it to the wafer.</p><p>Then there&#8217;s the kicker. <strong>Every scanner has its own dedicated LPP source.</strong> A fab running 15 EUV tools is operating 15 separate tin-burning light sources, each 6% efficient, each losing 90%+ to mirrors, each fighting its own debris.</p><p>Whew. Again, it&#8217;s a miracle it even works. But it&#8217;s pretty inefficient, and the problems mount as we look ahead on the EUV roadmap.</p><p>With LPP light sources, lithography costs will continue to increase. It&#8217;s inevitable. And as we discussed in the first article in this series, those costs have a real impact on the broader industry.</p><p><em>Kind of reminds me of this chart... 
we need to bend the cost per chip of lithography downward over time...like good technology curves do&#8230;</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2mlD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55e54fe2-001e-4a13-9f42-b5ea88107582_1282x1206.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!2mlD!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55e54fe2-001e-4a13-9f42-b5ea88107582_1282x1206.png 424w, https://substackcdn.com/image/fetch/$s_!2mlD!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55e54fe2-001e-4a13-9f42-b5ea88107582_1282x1206.png 848w, https://substackcdn.com/image/fetch/$s_!2mlD!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55e54fe2-001e-4a13-9f42-b5ea88107582_1282x1206.png 1272w, https://substackcdn.com/image/fetch/$s_!2mlD!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55e54fe2-001e-4a13-9f42-b5ea88107582_1282x1206.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2mlD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55e54fe2-001e-4a13-9f42-b5ea88107582_1282x1206.png" width="516" height="485.41029641185645" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/55e54fe2-001e-4a13-9f42-b5ea88107582_1282x1206.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1206,&quot;width&quot;:1282,&quot;resizeWidth&quot;:516,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!2mlD!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55e54fe2-001e-4a13-9f42-b5ea88107582_1282x1206.png 424w, https://substackcdn.com/image/fetch/$s_!2mlD!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55e54fe2-001e-4a13-9f42-b5ea88107582_1282x1206.png 848w, https://substackcdn.com/image/fetch/$s_!2mlD!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55e54fe2-001e-4a13-9f42-b5ea88107582_1282x1206.png 1272w, https://substackcdn.com/image/fetch/$s_!2mlD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55e54fe2-001e-4a13-9f42-b5ea88107582_1282x1206.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>A Different Way to Make Light</h2><p>But what if there were a light source that produced orders of magnitude more power, eliminated tin entirely, and could serve an entire fleet of scanners from a single system?</p><p>That&#8217;s a Free Electron Laser. </p><div id="youtube2-0igQuerc3J0" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;0igQuerc3J0&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/0igQuerc3J0?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p><em>(Shout out Asianometry again! 
A <a href="https://www.youtube.com/watch?v=0igQuerc3J0">great video</a> on the topic if you want the deep dive.)</em></p><p>Remember that the industry first tried synchrotrons for EUV back in the 1980s, but rejected them because the light was too diffuse.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!fZ1y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0776c37f-8228-47c3-8a4d-52a36e6d8413_1268x506.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!fZ1y!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0776c37f-8228-47c3-8a4d-52a36e6d8413_1268x506.png 424w, https://substackcdn.com/image/fetch/$s_!fZ1y!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0776c37f-8228-47c3-8a4d-52a36e6d8413_1268x506.png 848w, https://substackcdn.com/image/fetch/$s_!fZ1y!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0776c37f-8228-47c3-8a4d-52a36e6d8413_1268x506.png 1272w, https://substackcdn.com/image/fetch/$s_!fZ1y!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0776c37f-8228-47c3-8a4d-52a36e6d8413_1268x506.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!fZ1y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0776c37f-8228-47c3-8a4d-52a36e6d8413_1268x506.png" width="1268" height="506" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0776c37f-8228-47c3-8a4d-52a36e6d8413_1268x506.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:506,&quot;width&quot;:1268,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!fZ1y!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0776c37f-8228-47c3-8a4d-52a36e6d8413_1268x506.png 424w, https://substackcdn.com/image/fetch/$s_!fZ1y!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0776c37f-8228-47c3-8a4d-52a36e6d8413_1268x506.png 848w, https://substackcdn.com/image/fetch/$s_!fZ1y!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0776c37f-8228-47c3-8a4d-52a36e6d8413_1268x506.png 1272w, https://substackcdn.com/image/fetch/$s_!fZ1y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0776c37f-8228-47c3-8a4d-52a36e6d8413_1268x506.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path 
d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><em>Synchrotron. <a href="https://www6.slac.stanford.edu/research/slac-science-explained/synchrotrons">Source: SLAC</a></em></figcaption></figure></div><p>A Free Electron Laser is different. Instead of passively emitting radiation as electrons circle, an FEL actively amplifies coherent, directional light through a linear accelerator. <em>Directional light! That distinction solves the problem that killed synchrotrons.</em></p><p>Here&#8217;s the short version of how an FEL works. A linear accelerator shoots electrons to near light speed, then sends them through an <em>undulator</em>, a chain of alternating magnets that wiggles the beam back and forth. That wiggling causes the electrons to emit photons, which interact with the beam itself, causing the electrons to self-organize into coherent bunches that amplify the light exponentially. 
The result is intense, coherent, laser-like light:</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Af9h!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6baf0b38-5c50-4402-bca5-a2a9eb1999f4_1268x482.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Af9h!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6baf0b38-5c50-4402-bca5-a2a9eb1999f4_1268x482.png 424w, https://substackcdn.com/image/fetch/$s_!Af9h!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6baf0b38-5c50-4402-bca5-a2a9eb1999f4_1268x482.png 848w, https://substackcdn.com/image/fetch/$s_!Af9h!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6baf0b38-5c50-4402-bca5-a2a9eb1999f4_1268x482.png 1272w, https://substackcdn.com/image/fetch/$s_!Af9h!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6baf0b38-5c50-4402-bca5-a2a9eb1999f4_1268x482.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Af9h!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6baf0b38-5c50-4402-bca5-a2a9eb1999f4_1268x482.png" width="630" height="239.4794952681388" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6baf0b38-5c50-4402-bca5-a2a9eb1999f4_1268x482.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:482,&quot;width&quot;:1268,&quot;resizeWidth&quot;:630,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Af9h!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6baf0b38-5c50-4402-bca5-a2a9eb1999f4_1268x482.png 424w, https://substackcdn.com/image/fetch/$s_!Af9h!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6baf0b38-5c50-4402-bca5-a2a9eb1999f4_1268x482.png 848w, https://substackcdn.com/image/fetch/$s_!Af9h!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6baf0b38-5c50-4402-bca5-a2a9eb1999f4_1268x482.png 1272w, https://substackcdn.com/image/fetch/$s_!Af9h!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6baf0b38-5c50-4402-bca5-a2a9eb1999f4_1268x482.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a><figcaption class="image-caption"><em><a href="https://en.wikipedia.org/wiki/Free-electron_laser">Source: Wikipedia</a></em></figcaption></figure></div><p>Exactly what EUV scanners need! No tin. No plasma. No debris. 
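</p><p>The wavelength an FEL emits falls out of the undulator resonance condition: the output wavelength scales as the undulator period over 2&#947;&#178;, times (1 + K&#178;/2). A quick Python sketch with an assumed 2cm undulator period and K = 1 (illustrative numbers, not xLight&#8217;s actual design):</p>

```python
import math

M_E_MEV = 0.511  # electron rest energy, MeV

def resonant_wavelength_nm(beam_energy_mev, period_cm, k):
    """FEL undulator resonance: lambda = (lambda_u / 2*gamma^2) * (1 + K^2/2)."""
    gamma = beam_energy_mev / M_E_MEV
    period_nm = period_cm * 1e7
    return period_nm / (2 * gamma ** 2) * (1 + k ** 2 / 2)

def energy_for_wavelength_mev(target_nm, period_cm, k):
    """Invert the resonance condition for the required beam energy."""
    period_nm = period_cm * 1e7
    gamma = math.sqrt(period_nm * (1 + k ** 2 / 2) / (2 * target_nm))
    return gamma * M_E_MEV

# With a 2cm period and K = 1, hitting 13.5nm takes roughly half a GeV:
e = energy_for_wavelength_mev(13.5, 2.0, 1.0)
print(f"{e:.0f} MeV")                       # prints "539 MeV"
print(resonant_wavelength_nm(e, 2.0, 1.0))  # ~13.5 (round trip)
```

<p><em>Nudge the beam energy or the magnet spacing and the wavelength moves with it; hold it at 13.5nm and the output is what today&#8217;s scanners already expect.</em></p><p>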
<em>And you can tune it to 13.5nm and plug this light source directly into existing EUV scanners!</em></p><p>Three properties make FELs really awesome for lithography.</p><ol><li><p><strong>Tunable wavelength.</strong> By adjusting the electron energy and magnet spacing, you can dial the output to any wavelength: 13.5nm for EUV, smaller for future &#8220;Beyond EUV&#8221; nodes, or longer wavelengths for less cutting-edge process nodes.</p></li><li><p><strong>Energy recovery.</strong> Instead of dumping spent electrons, you loop them back through the accelerator in reverse, recovering most of their kinetic energy. FEL light sources can be several times more efficient than LPP.</p></li><li><p><strong>No tin, no debris.</strong> The entire contamination problem goes away. Cleaner light means longer mirror life and less maintenance downtime.</p></li></ol><h2>Enter xLight</h2><p><a href="https://www.xlight.com/">xLight</a> is a venture-backed company in Palo Alto building Free Electron Lasers purpose-built for semiconductor manufacturing. Their team comes from the US National Lab ecosystem, with deep experience designing, building, and operating FELs for research and national security. They&#8217;ve partnered with Fermilab on the superconducting cavity technology at the heart of the accelerator.</p><p>And they&#8217;re not doing this in a garage. Their prototype is being built at <a href="https://ny-creates.org/">NY CREATES</a> in Albany, NY, a semiconductor R&amp;D hub where IBM runs a full 2nm process flow and where Tokyo Electron, Applied Materials, and other major equipment companies have a presence. 
xLight&#8217;s prototype was accelerated by <a href="https://www.xlight.com/company-news/xlight-signs-150-million-letter-of-intent-with-the-us-department-of-commerce">investment from the Commerce Department</a> as part of the CHIPS Act R&amp;D program.</p><h3>Replace the Source, Not the Scanner</h3><p>So&#8230; what&#8217;s the product xLight sells? Well, light. xLight doesn&#8217;t compete with ASML&#8217;s <em>scanners, </em>but rather, replaces the light source.<em> Unplug LPP, plug in xLight&#8217;s FEL.</em></p><p>xLight&#8217;s FEL is backward compatible with ASML&#8217;s existing scanner platforms. From the scanner&#8217;s perspective, the swap is transparent; the light comes in at the exact same angle, same wavelength, through the same intermediate focus point:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ISCA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8051981e-557c-4e9a-b8dd-3f84bf10eb14_1268x708.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ISCA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8051981e-557c-4e9a-b8dd-3f84bf10eb14_1268x708.png 424w, https://substackcdn.com/image/fetch/$s_!ISCA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8051981e-557c-4e9a-b8dd-3f84bf10eb14_1268x708.png 848w, https://substackcdn.com/image/fetch/$s_!ISCA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8051981e-557c-4e9a-b8dd-3f84bf10eb14_1268x708.png 1272w, 
https://substackcdn.com/image/fetch/$s_!ISCA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8051981e-557c-4e9a-b8dd-3f84bf10eb14_1268x708.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ISCA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8051981e-557c-4e9a-b8dd-3f84bf10eb14_1268x708.png" width="578" height="322.73186119873816" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8051981e-557c-4e9a-b8dd-3f84bf10eb14_1268x708.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:708,&quot;width&quot;:1268,&quot;resizeWidth&quot;:578,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ISCA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8051981e-557c-4e9a-b8dd-3f84bf10eb14_1268x708.png 424w, https://substackcdn.com/image/fetch/$s_!ISCA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8051981e-557c-4e9a-b8dd-3f84bf10eb14_1268x708.png 848w, https://substackcdn.com/image/fetch/$s_!ISCA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8051981e-557c-4e9a-b8dd-3f84bf10eb14_1268x708.png 1272w, 
https://substackcdn.com/image/fetch/$s_!ISCA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8051981e-557c-4e9a-b8dd-3f84bf10eb14_1268x708.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The only difference is that this approach doesn&#8217;t have tin splattering around and doesn&#8217;t have out-of-band radiation causing heat in the optics. In fact, because FEL light has a much narrower bandwidth than LPP, there&#8217;s less junk light and less thermal noise. 
Hence, the same scanner that&#8217;s designed for LPP light can actually handle <em>more</em> FEL light.</p><p>Oh, and did you notice in that image how <em>many</em> scanners are all being fed at the same time? This is the economic differentiator.</p><h3>120 Kilowatts</h3><p>xLight&#8217;s system generates 120 kilowatts of EUV power at the source.</p><p>Remember, current LPP sources produce just over 500 watts. ASML&#8217;s roadmap targets 800 watts. xLight is building a source that produces 120 kilowatts. <em>Over 100x more power!</em></p><p>Now, you don&#8217;t get all 120 kW to the scanner. As light is distributed through the fab via vacuum beamlines and grazing-incidence optics, there are transmission losses. Depending on whether it&#8217;s a greenfield or brownfield fab, you deliver somewhere between 25% and 50% to the scanner fleet. Divided across, say, up to 16 scanner connections, that&#8217;s roughly 2-4 kilowatts per scanner. <em>That&#8217;s still several times more than what LPP delivers today&#8230;</em></p><p>Abundant light unlocks the full dose at full productivity. No more running $500M High-NA tools at a slower speed because you can&#8217;t deliver enough photons. Beyond throughput, more light also means better patterning: tighter uniformity and lower line edge roughness, which translates into higher yields. <em>win win win.</em></p><p>xLight&#8217;s approach seems compelling. But there are still a lot of questions, like</p><ul><li><p>How do they plan to sell it?</p></li><li><p>Who are the customers?</p></li><li><p>How will they fund it? <em>Building particle accelerators sounds expensive&#8230;</em></p></li><li><p>What does this mean for the broader lithography ecosystem?</p></li></ul><p>Interestingly, if xLight&#8217;s economics work, it doesn&#8217;t necessarily disrupt ASML, and could arguably <em>expand</em> ASML&#8217;s addressable market. It&#8217;s nuanced though. 
We&#8217;ll get into it, as it has implications for anyone modeling ASML&#8217;s long-term revenue.</p><p>And the business model itself is fascinating. xLight doesn&#8217;t sell light sources. They sell light as a service, like a utility. <em>Your electric bill is really just consumption-based electrons-as-a-service, right?</em> We&#8217;ll dig into this.</p><p>And finally, adoption seems hard, right? Semiconductor manufacturing is famously both super high-tech and quite conservative. Will fabs actually adopt this?</p><p>Let&#8217;s get into it.</p>
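<p><em>A back-of-the-envelope sketch of the delivered-power math above. The 120 kW source, the 25-50% beamline transmission range, and the 16 scanner connections all come from the text; the rest is arithmetic.</em></p>

```python
# Rough power-budget math for an FEL source feeding a scanner fleet.
# Figures from the article: 120 kW at the source, 25-50% of it surviving
# the vacuum beamlines and grazing-incidence optics, up to 16 scanners.
SOURCE_KW = 120
SCANNERS = 16
LPP_KW = 0.5  # today's LPP sources: just over 500 watts

for transmission in (0.25, 0.50):
    delivered = SOURCE_KW * transmission
    per_scanner = delivered / SCANNERS
    print(f"{transmission:.0%} transmission: {delivered:.0f} kW fab-wide, "
          f"{per_scanner:.2f} kW per scanner (~{per_scanner / LPP_KW:.0f}x LPP)")
```

<p><em>The two cases land at roughly 1.9 and 3.75 kW per scanner, which is where the article&#8217;s &#8220;roughly 2-4 kilowatts&#8221; figure comes from.</em></p>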
      <p>
          <a href="https://www.chipstrat.com/p/photons-as-a-service">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Arista's Ethernet Bet]]></title><description><![CDATA[Q4 FY2025 earnings scored against the bull and bear cases]]></description><link>https://www.chipstrat.com/p/aristas-3-billion-ai-bet-and-the</link><guid isPermaLink="false">https://www.chipstrat.com/p/aristas-3-billion-ai-bet-and-the</guid><dc:creator><![CDATA[Austin Lyons]]></dc:creator><pubDate>Mon, 23 Feb 2026 23:37:38 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c6ce29b1-ef43-4ec0-8730-5529220963b6_1200x627.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Arista Networks just printed its first billion-dollar quarter. AI networking revenue doubled to $1.5B in FY2025 and management guided $3.25B for 2026. <em>Doubling again.</em> The stock trades at ~40x forward earnings. </p><p>Is that justified? Or is Arista priced for perfect execution in a market that could stumble?</p><p>The answer depends on six specific debates: Ethernet vs. InfiniBand, white-box competition, customer concentration, gross margins, supply chains, and valuation. </p><p>I scored each one against Q4 earnings evidence. But first, you need to understand why the Ethernet TAM is expanding faster than most investors realize.</p><h3>Ethernet&#8217;s Opening Is Wider Than You Think</h3><p>Nvidia has historically dominated AI cluster networking with a vertically integrated stack. NVLink for scale-up, InfiniBand for scale-out, CUDA libraries like NCCL to make it all programmable. <em>The famous Mellanox acquisition!</em> Nvidia&#8217;s networking segment alone generated <a href="https://nvidianews.nvidia.com/news/nvidia-announces-financial-results-for-third-quarter-fiscal-2026">$8.2B in Q3 FY2026</a>. <em>Up 162% year-over-year!</em></p><p>But two shifts underway are expanding the TAM in Ethernet&#8217;s direction. 
(<em>Read <a href="https://www.chipstrat.com/p/gpu-networking-basics-part-3-scale">GPU Networking Basics Part 3</a> if you need a refresher on scale-up vs. scale-out.</em>)</p><p><strong>First, Ethernet is making inroads in scale-out.</strong> InfiniBand was purpose-built for ultra-low-latency distributed computing, but Ethernet has been closing the gap through RoCE (RDMA over Converged Ethernet) and a new generation of lossless Ethernet fabrics designed for AI. Nvidia itself launched Spectrum-X, its own Ethernet-for-AI platform, and Meta, Microsoft, Oracle, and xAI are all building on it. But Spectrum-X isn&#8217;t the only game in town, as hyperscalers are also choosing vendor-neutral Ethernet fabrics. <em>The Ultra Ethernet Consortium is standardizing this.</em> </p><p><strong>Second, scale-up is no longer Nvidia-only.</strong> Scale-up used to mean NVLink. Only game in town. But as I covered in <a href="https://www.chipstrat.com/p/gpu-networking-part-4-year-end-wrap">GPU Networking Part 4: Year End Wrap</a>, every major XPU vendor now has its own scale-up architecture, and several are building on Ethernet:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!hpiJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7a82cae-5a68-4347-906d-ca85fd087d13_1842x1212.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!hpiJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7a82cae-5a68-4347-906d-ca85fd087d13_1842x1212.png 424w, 
https://substackcdn.com/image/fetch/$s_!hpiJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7a82cae-5a68-4347-906d-ca85fd087d13_1842x1212.png 848w, https://substackcdn.com/image/fetch/$s_!hpiJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7a82cae-5a68-4347-906d-ca85fd087d13_1842x1212.png 1272w, https://substackcdn.com/image/fetch/$s_!hpiJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7a82cae-5a68-4347-906d-ca85fd087d13_1842x1212.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!hpiJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7a82cae-5a68-4347-906d-ca85fd087d13_1842x1212.png" width="1456" height="958" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e7a82cae-5a68-4347-906d-ca85fd087d13_1842x1212.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:958,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:398234,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.chipstrat.com/i/188960909?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7a82cae-5a68-4347-906d-ca85fd087d13_1842x1212.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!hpiJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7a82cae-5a68-4347-906d-ca85fd087d13_1842x1212.png 424w, https://substackcdn.com/image/fetch/$s_!hpiJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7a82cae-5a68-4347-906d-ca85fd087d13_1842x1212.png 848w, https://substackcdn.com/image/fetch/$s_!hpiJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7a82cae-5a68-4347-906d-ca85fd087d13_1842x1212.png 1272w, https://substackcdn.com/image/fetch/$s_!hpiJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe7a82cae-5a68-4347-906d-ca85fd087d13_1842x1212.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>AMD Helios uses Ethernet for both scale-up and scale-out. Microsoft Maia 200 is Ethernet-native for scale-up. AWS Trainium3 uses Ethernet for scale-out. Even Nvidia now offers Spectrum-X Ethernet alongside InfiniBand. As accelerator vendors build on Ethernet and hyperscalers standardize on it, more of the AI cluster fabric shifts into domains served by Ethernet switch vendors.</p><p>Take Microsoft&#8217;s Maia 200. As Saurabh Dighe explained in <a href="https://www.chipstrat.com/p/an-interview-with-microsofts-saurabh">our interview</a>: <em>&#8220;We have not invested in a scale-out network. We have invested in an inference-driven chip. We have invested in taking a scale-up approach more than a scale-out approach. That&#8217;s brought our cost down.&#8221;</em> Microsoft deliberately chose Ethernet-native scale-up and skipped proprietary scale-out entirely, optimizing for inference cost per token. That design choice routes directly through standard Ethernet switches. <em>Pun intended.</em></p><h3>Agentic AI Adds Another Layer of Demand</h3><p>GPUs don&#8217;t operate in isolation. Every GPU needs a host CPU for orchestration, data loading, and network management. Inside a rack, CPUs and GPUs are increasingly tightly coupled; Nvidia&#8217;s GB200/GB300 systems integrate them via NVLink-C2C into a single coherent unit. <strong>But agentic AI adds another layer of Ethernet demand on top of that.</strong></p><p>In agentic workflows, a CPU-heavy orchestration tier sits <em>outside</em> the GPU racks running agent loops, tool calls, sub-agent coordination, and dispatching inference requests to GPU pools over the network. 
Vik Sekar&#8217;s <a href="https://www.viksnewsletter.com/p/the-cpu-bottleneck-in-agentic-ai">the CPU bottleneck in agentic AI</a> illustrates:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7-UJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F760171a6-49f4-4718-a16b-3eccb2082a51_2048x1048.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7-UJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F760171a6-49f4-4718-a16b-3eccb2082a51_2048x1048.png 424w, https://substackcdn.com/image/fetch/$s_!7-UJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F760171a6-49f4-4718-a16b-3eccb2082a51_2048x1048.png 848w, https://substackcdn.com/image/fetch/$s_!7-UJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F760171a6-49f4-4718-a16b-3eccb2082a51_2048x1048.png 1272w, https://substackcdn.com/image/fetch/$s_!7-UJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F760171a6-49f4-4718-a16b-3eccb2082a51_2048x1048.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7-UJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F760171a6-49f4-4718-a16b-3eccb2082a51_2048x1048.png" width="1456" height="745" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/760171a6-49f4-4718-a16b-3eccb2082a51_2048x1048.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:745,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!7-UJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F760171a6-49f4-4718-a16b-3eccb2082a51_2048x1048.png 424w, https://substackcdn.com/image/fetch/$s_!7-UJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F760171a6-49f4-4718-a16b-3eccb2082a51_2048x1048.png 848w, https://substackcdn.com/image/fetch/$s_!7-UJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F760171a6-49f4-4718-a16b-3eccb2082a51_2048x1048.png 1272w, https://substackcdn.com/image/fetch/$s_!7-UJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F760171a6-49f4-4718-a16b-3eccb2082a51_2048x1048.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>That&#8217;s a new class of traffic between CPU server racks and GPU clusters, all flowing over Ethernet. Add in scale-across opportunities (connecting clusters to clusters), and the networking TAM keeps expanding.</p><h3>So Where Does Arista Stand?</h3><p>Ethernet is gaining share in scale-out, making inroads in scale-up, and now serving a new orchestration tier driven by agentic AI. </p><p>AI clusters need more ports per rack, faster speeds (400G &#8594; 800G &#8594; 1.6T), and less oversubscription than traditional cloud networks, <strong>meaning more switches per cluster at higher ASPs.</strong></p><p>And Arista shipped 150 million Ethernet ports in FY25, grew revenue from $5.9B to $9.0B in two years, and just guided for $11.25B in 2026. </p><p>But&#8230; Arista doesn&#8217;t design its own chips. It outsources manufacturing. Its two largest customers account for 42% of revenue. 
And its CEO just called memory costs &#8220;horrendous.&#8221;</p><p>So is the stock a compounding machine riding a structural Ethernet tailwind? Or is it priced for perfection in a business with real concentration risk, margin pressure, and a supply chain that can&#8217;t keep up with demand?</p><p>I identified six specific debates that drive Arista&#8217;s valuation and scored each one against Q4 FY2025 earnings evidence. Here&#8217;s what&#8217;s behind the paywall:</p><ul><li><p><strong>What Arista actually sells</strong>: the business units, the EOS software moat, and why hyperscalers keep buying blue-box switches when white-box is cheaper</p></li><li><p><strong>The Six Debates</strong>: bull vs. bear on Ethernet vs. InfiniBand, white-box competition, customer concentration, gross margins, the &#8220;golden screw&#8221; supply chain, and whether ~40x P/E is justified</p></li><li><p><strong>Earnings Validation Scorecard</strong>: each debate scored against Q4 results, with key management quotes</p></li></ul>
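<p><em>The &#8220;more switches per cluster&#8221; claim above can be sketched with a toy two-tier leaf-spine sizing. The 64-port radix, the 1024-GPU cluster, and the 2:1 oversubscription case are illustrative assumptions of mine, not figures from the article.</em></p>

```python
import math

def leaf_spine_switches(num_gpus: int, radix: int = 64, oversub: float = 1.0):
    """Size a toy two-tier leaf-spine fabric.

    oversub is the downlink:uplink ratio per leaf (1.0 = non-blocking).
    Illustrative only -- real AI fabrics also juggle rails, failure
    domains, and multi-plane designs.
    """
    down = int(radix * oversub / (1 + oversub))  # leaf ports facing GPUs
    up = radix - down                            # leaf ports facing spines
    leaves = math.ceil(num_gpus / down)
    spines = math.ceil(leaves * up / radix)      # spines absorb every uplink
    return leaves, spines

# Non-blocking 1024-GPU cluster on 64-port switches:
print(leaf_spine_switches(1024, 64, 1.0))  # (32, 16) -> 48 switches
# Relaxing to 2:1 oversubscription shrinks the count:
print(leaf_spine_switches(1024, 64, 2.0))
```

<p><em>Less oversubscription means more leaves and spines for the same GPU count, which is the structural &#8220;more switches per cluster at higher ASPs&#8221; tailwind described above.</em></p>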
      <p>
          <a href="https://www.chipstrat.com/p/aristas-3-billion-ai-bet-and-the">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[The Nvidia Business Hiding in Plain Sight]]></title><description><![CDATA[And it's bigger than Wall Street thinks]]></description><link>https://www.chipstrat.com/p/the-nvidia-business-hiding-in-plain</link><guid isPermaLink="false">https://www.chipstrat.com/p/the-nvidia-business-hiding-in-plain</guid><dc:creator><![CDATA[Austin Lyons]]></dc:creator><pubDate>Tue, 17 Feb 2026 12:31:04 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/0308fd26-c264-4ffd-9755-c496908b18de_1070x601.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>I&#8217;ve had a lot of requests to better understand Nvidia&#8217;s autonomy business. Here it is. I meant to get this out sooner, but it took a while. So let&#8217;s get into it! </em></p><p>Wall Street models Nvidia&#8217;s autonomy business as a ~$1-2B automotive SoC competing with Qualcomm and Mobileye for in-car socket wins. But Nvidia&#8217;s current automotive opportunity is actually $5-10B. <em>Why?</em> The data center infra revenue from automotive OEMs is significantly larger than the in-car silicon, but it&#8217;s hidden within the Data Center revenue line. Let&#8217;s dig into that more, and also ask:</p><ul><li><p>Why would Nvidia open-source its VLA models? <em>Expensive to train!</em></p></li><li><p>How does Nvidia&#8217;s autonomy business compare to Mobileye and Qualcomm?</p></li><li><p>Is custom silicon from Tesla and Rivian a threat?</p></li><li><p>Could Chinese OEMs be a tailwind?</p></li><li><p>Does AMD have a play here?</p></li></ul><h2>Three Computers, Not One</h2><p>At <a href="https://www.youtube.com/watch?v=0NBILspM4c4">CES 2026</a>, Jensen explained the &#8220;three computers&#8221; concept:</p><blockquote><p>&#8220;Inferencing the model is essentially a <strong>robotics computer that runs in a car</strong>...  
But there has to be <strong>another computer that&#8217;s designed for simulation</strong>&#8230; One computer, of course, the one that we know that Nvidia builds for <strong>training the AI models.</strong>&#8221;</p></blockquote><p>So three &#8220;computers&#8221; are</p><ol><li><p>Frontier training cluster  <em>(e.g., NVL72 or Rubin)</em></p></li><li><p>Simulation node  <em>(could be on older GPUs like Hopper, or cost optimized GPUs like<a href="https://www.nvidia.com/en-us/data-center/rtx-pro-6000-blackwell-server-edition/"> Blackwell RTX Pro 6000</a>, etc)</em></p></li><li><p>Onboard compute in the car <em>(e.g. Orin, Thor)</em></p></li></ol><p>Nvidia&#8217;s reported automotive revenue is #3 only, which dramatically understates the real business. Jensen <a href="https://research.alpha-sense.com/doc/ET-32307-1976924115?">said</a> that automotive is</p><blockquote><p>already a multibillion-dollar business... <strong>venture to say somewhere between $5 billion to $10 billion</strong>...And by the end of 2030, by the end of the decade, it&#8217;s going to be a very large business</p></blockquote><p>JP Morgan Research confirmed as much in its Oct 15th 2025 report <a href="https://research.alpha-sense.com/search?docid=ASR-JPMR-b2d3581462c9b3b28c39f7e9a9d5693c">From Chips to Cars: Deep Dive into ADAS and Robotaxis</a>:</p><blockquote><p>&#8220;Within its automotive industry vertical, Data Center compute/networking currently represents a more significant revenue stream for Nvidia than its in-car hardware/software business.&#8221;</p></blockquote><p>Thus, Nvidia&#8217;s serviceable addressable market spans all three computers: training infra, simulation, and onboard compute. That&#8217;s a fundamentally larger SAM than any competitor focused only on the in-car socket. 
</p><p>Given Nvidia&#8217;s Ampere/Hopper/Blackwell dominance in data center GPUs, it&#8217;s very likely Nvidia gets paid for computers #1 and #2 for most OEMs regardless of which vendor wins the in-car socket. <em>Nvidia can lose, but still win.</em></p><h2>The Three-Computer Business Model</h2><p>Let&#8217;s go one layer deeper.</p><p><strong>Training is the largest revenue driver for Nvidia.</strong> Every OEM needs massive GPU compute to train AV models on their own sensor data. For example, per JPMorgan&#8217;s February 2026 Korea Autos note, Hyundai alone &#8220;signed a supply contract last October with Nvidia for 50,000 Blackwell GPUs to build its data center.&#8221; At roughly $30-40K per GPU plus networking, that&#8217;s an estimated $1.5-2.5B deal from a single automotive OEM, rivaling Nvidia&#8217;s entire reported annual automotive revenue! But it shows up in the Data Center segment, not Automotive. <em>This is the perfect illustration of why Nvidia&#8217;s reported autonomy business looks undersized: the largest revenue streams are hiding in a different line item.</em></p><p><strong>Simulation in Omniverse is important for safe AV development.</strong> No one should be skipping computer #2. An example workflow: when something fails, the autonomy ML team reconstructs the scene using <a href="https://docs.omniverse.nvidia.com/materials-and-rendering/latest/neural-rendering.html">NeuRec</a> (neural reconstruction) and tests a fix on exactly that failure. </p><p><em>That might seem crazy at first blush. No harm if your reaction is: what, we feel confident putting our lives in the hands of a model that&#8217;s&#8230; tested on computer simulations&#8230;!?</em></p><p>But the fidelity of such a simulation is quite good these days.  
</p><p>Herman Ross, Nvidia&#8217;s Director of Simulation Ecosystem Development, <a href="https://www.youtube.com/watch?v=x5sAVkj9Bjg">noted</a> that reconstructed sensor data is &#8220;pretty much impossible to tell apart from the real sensor. So the gap there is considered to be close to zero.&#8221; </p><div id="youtube2-TRcFBSA-Lnk" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;TRcFBSA-Lnk&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/TRcFBSA-Lnk?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Anyway, after reconstructing the scene, Cosmos Transfer can give you many &#8220;similar but different&#8221; scenarios for testing. So a single recorded driving scene can be transformed via prompt into a different location and different weather pattern in under 60 seconds.</p><p><strong>In-car compute is the final piece.</strong> Nvidia has the AGX Orin and Thor portfolio, from entry-level (~100 TOPS Orin) through dual-Thor for L3/L4. 
</p><p>BYD, for example, uses the entry Orin for its Gods Eye highway L2+ product.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!hydB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F056ef585-2fec-4f36-9adc-ef118404d08f_1276x598.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!hydB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F056ef585-2fec-4f36-9adc-ef118404d08f_1276x598.png 424w, https://substackcdn.com/image/fetch/$s_!hydB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F056ef585-2fec-4f36-9adc-ef118404d08f_1276x598.png 848w, https://substackcdn.com/image/fetch/$s_!hydB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F056ef585-2fec-4f36-9adc-ef118404d08f_1276x598.png 1272w, https://substackcdn.com/image/fetch/$s_!hydB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F056ef585-2fec-4f36-9adc-ef118404d08f_1276x598.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!hydB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F056ef585-2fec-4f36-9adc-ef118404d08f_1276x598.png" width="533" height="249.79153605015674" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/056ef585-2fec-4f36-9adc-ef118404d08f_1276x598.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:598,&quot;width&quot;:1276,&quot;resizeWidth&quot;:533,&quot;bytes&quot;:366692,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.chipstrat.com/i/188194185?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F056ef585-2fec-4f36-9adc-ef118404d08f_1276x598.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!hydB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F056ef585-2fec-4f36-9adc-ef118404d08f_1276x598.png 424w, https://substackcdn.com/image/fetch/$s_!hydB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F056ef585-2fec-4f36-9adc-ef118404d08f_1276x598.png 848w, https://substackcdn.com/image/fetch/$s_!hydB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F056ef585-2fec-4f36-9adc-ef118404d08f_1276x598.png 1272w, https://substackcdn.com/image/fetch/$s_!hydB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F056ef585-2fec-4f36-9adc-ef118404d08f_1276x598.png 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption"><em>Data from <a href="https://research.alpha-sense.com/?docid=ASR-CGSI-610e3810f71b3c0e0153f4601628f183">CGS International</a></em></figcaption></figure></div><p>Rivian&#8217;s Gen2 platform showed how this business model plays out in practice, as it ran on dual Orin chips in the car and accessed &#8220;tens of thousands of GPUs&#8221; in the cloud for training, without owning the clusters.
<em>Read up on it here:</em></p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;8b88d98f-1282-4595-9063-6058cbb214f8&quot;,&quot;caption&quot;:&quot;Let&#8217;s dive deep into Rivian ahead of its Dec 11 AI &amp; Autonomy Day.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;Rivian's Silicon &amp; Physical AI&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:8066776,&quot;name&quot;:&quot;Austin Lyons&quot;,&quot;bio&quot;:&quot;Chipstrat, Creative Strategies, Semi Doped. MSEE + MBA.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c180a750-7572-4aff-88e4-317aa435d533_1203x902.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2025-12-03T17:43:20.376Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!vpH5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fff13a58f-3406-48ef-ba35-c4b6516ba6c5_1730x1030.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.chipstrat.com/p/rivian-silicon-and-physical-ai&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:180614483,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:29,&quot;comment_count&quot;:0,&quot;publication_id&quot;:2003179,&quot;publication_name&quot;:&quot;Chipstrat&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!rCMl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27769444-42f3-4b43-9683-4fe7826c06b8_608x608.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p>So that&#8217;s the framework: three computers, not one. 
Training dwarfs the in-car business. Simulation adds up too. In the long run, though, I think the onboard compute market will also grow significantly; I&#8217;m bullish on that TAM. <em>Won&#8217;t more than half of new cars by 2030 come equipped with autonomy functionality?</em> And that&#8217;s before counting commercial trucking (e.g. <em><a href="https://aurora.tech/">Aurora</a></em>) and robotaxis.  </p><p>Ok, so now you appreciate the Three Computer business model. But that still leaves questions.</p><ul><li><p><strong>Why would Nvidia open-source its AV models?</strong> </p></li><li><p><strong>Nvidia has tons of OEM engagements &#8212; how do you monetize that without building a bunch of custom stacks?</strong> <em>After all, they each have different sensors and compute configurations...</em></p></li><li><p><strong>Qualcomm leaves one of the computers (training) to Nvidia, but what about simulation?</strong></p></li><li><p><strong>Which OEMs are actually software-defined versus just marketing it? </strong><em>And how might that impact adoption?</em></p></li><li><p>What does the L2-to-L4 economic shift mean for <strong>who captures value over the next five years?</strong></p></li><li><p>And <strong>might AMD have a play?</strong></p></li></ul><p>We&#8217;ll get into all of that below the fold. </p><p><em>I&#8217;m aiming for institutional-grade quality, but this is industry analysis, not investment advice.</em></p>
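<p>That onboard-compute question can be sanity-checked with rough arithmetic. In the sketch below, every input is a hypothetical placeholder (the post only poses the &#8220;more than half of new cars&#8221; scenario); this is sensitivity math, not a forecast:</p>

```python
# Rough 2030 onboard-compute TAM sketch. Every input here is a
# hypothetical placeholder, not a forecast.

new_cars_per_year = 90e6       # assumed global light-vehicle production
autonomy_attach_rate = 0.5     # the ">half of new cars" scenario
compute_asp_usd = 500          # assumed avg. autonomy-silicon content per car

tam = new_cars_per_year * autonomy_attach_rate * compute_asp_usd
print(f"Hypothetical onboard-compute TAM: ${tam / 1e9:.1f}B/yr")  # $22.5B/yr
```

<p>Swapping in your own attach rate and silicon content shows how quickly the number moves; the point is that even modest assumptions get to a TAM worth caring about.</p>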
      <p>
          <a href="https://www.chipstrat.com/p/the-nvidia-business-hiding-in-plain">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[The $200 Billion Bet]]></title><description><![CDATA[Why Amazon has the hardest ROIC story in tech, and what to do about it]]></description><link>https://www.chipstrat.com/p/the-200-billion-bet</link><guid isPermaLink="false">https://www.chipstrat.com/p/the-200-billion-bet</guid><dc:creator><![CDATA[Austin Lyons]]></dc:creator><pubDate>Fri, 13 Feb 2026 20:24:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!FNeL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb38355f8-db5e-467f-af74-78c761321da5_1098x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Amazon reported arguably its best quarter ever. Yet shares are down almost 15% since the call. From <a href="https://www.bloomberg.com/news/articles/2026-02-05/amazon-boosts-spending-far-ahead-of-estimates-on-ai-build-out">Bloomberg</a>:</p><blockquote><p>Amazon.com Inc. shares dropped the most in six months after the company announced plans to spend $200 billion this year on data centers, chips and other equipment, worrying investors that its colossal bet on artificial intelligence may not pay off in the long run.</p></blockquote><p>The market reaction is understandable. $200 billion is unprecedented. FCF will almost certainly go negative in 2026. Management didn&#8217;t communicate any guardrails when asked directly.</p><p><strong>And ROIC is structurally harder for Amazon than Meta, Google, and even Microsoft.</strong> AWS runs at a 35% operating margin, but accounts for less than 20% of total revenue. The rest of revenue comes from retail at single-digit margins. So Amazon has to justify the biggest CapEx in corporate history through cloud margins alone. </p><p>On that front though, the market may be underappreciating the custom silicon story. Could Trainium lift AWS margins? 
And should advertising revenue and margins be broken out to show how Trainium can directly impact non-retail margins?</p><p>For paid subscribers, we&#8217;ll dig into all of that and more:</p><ul><li><p><strong>How the custom silicon narrative shifted from defensive to confident in one quarter,</strong> and what the $10B chips ARR disclosure means</p></li><li><p><strong>Why Amazon&#8217;s margin structure makes the ROIC story structurally harder</strong> than Meta or Google, and what Trainium has to deliver to close that gap</p></li><li><p><strong>The CapEx escalation from Q3 to Q4.</strong> How $125B became $200B with no explicit guardrails </p></li><li><p><strong>Where the sell side landed</strong> and why</p></li><li><p><strong>Four things Amazon&#8217;s IR team should be telling investors</strong>, from Trainium margin quantification to the Graviton-agentic AI connection</p></li></ul><p><em>Keep reading for institutional-grade work with an independent perspective.</em> </p>
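<p>To see why the margin mix makes Amazon&#8217;s ROIC story structurally harder, here is a back-of-envelope sketch. The 35% AWS operating margin and $200B CapEx figure come from the discussion above; the 10% hurdle rate and ~5% retail operating margin are illustrative assumptions, not estimates:</p>

```python
# Back-of-envelope ROIC math on the $200B CapEx plan. The 35% AWS
# operating margin is from the post; the 10% hurdle rate and ~5%
# retail operating margin are illustrative assumptions.

capex = 200e9
hurdle_rate = 0.10                     # hypothetical required pre-tax return
required_income = capex * hurdle_rate  # $20B/yr of incremental operating income

def revenue_needed(operating_margin):
    """Incremental revenue required to earn the hurdle income at a given margin."""
    return required_income / operating_margin

print(f"Via AWS (35% margin):    ${revenue_needed(0.35) / 1e9:.0f}B/yr of new revenue")
print(f"Via retail (~5% margin): ${revenue_needed(0.05) / 1e9:.0f}B/yr of new revenue")
```

<p>Under these assumptions, clearing the same hurdle through retail takes roughly seven times the incremental revenue that AWS would need, which is why the bet has to be justified through cloud margins alone.</p>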
      <p>
          <a href="https://www.chipstrat.com/p/the-200-billion-bet">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[Meta’s ROIC Strategy: GEM Now, LLMs Later]]></title><description><![CDATA[Meta doesn't need the smartest AI. It needs the most profitable one.]]></description><link>https://www.chipstrat.com/p/metas-roic-strategy-gem-now-llms</link><guid isPermaLink="false">https://www.chipstrat.com/p/metas-roic-strategy-gem-now-llms</guid><dc:creator><![CDATA[Austin Lyons]]></dc:creator><pubDate>Tue, 10 Feb 2026 22:25:54 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b6448e0e-db81-461e-8549-4d663b5440fa_2516x3046.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The Street has struggled to understand Meta&#8217;s CapEx strategy. Sentiment has been all over the place, and so has the stock. <em>Down 15% after Q3 earnings, up 15% after Q4.</em></p><p>But Meta articulated the same strategy the entire time, and it&#8217;s actually quite simple. GEM now, LLMs later.</p><p><strong>GEM now:</strong> immediate, measurable ROI in the core ads business<br><strong>LLMs later:</strong> incremental core gains today, substantial potential upside later</p><p>Let&#8217;s dig in. Then we&#8217;ll address potential risks.</p><h2><strong>GEM Now</strong></h2><p>Let&#8217;s start with the basics. </p><p>Facebook launched the News Feed in 2006, ads in 2007, and by 2011 the feed was algorithmically curated. From the beginning, what you see in the feed and which ads you are shown have been driven by machine learning. Meta has used AI infrastructure to drive engagement and advertising revenue for nearly two decades, evolving from CPU-based datacenters to GPU-accelerated systems. </p><p>This is not a new pivot. It is the core of the business. <strong>Meta&#8217;s business has always depended on using AI infrastructure to drive engagement and generate advertising revenue</strong>. 
</p><p>Today, that stack includes <a href="https://engineering.fb.com/2024/12/02/production-engineering/meta-andromeda-advantage-automation-next-gen-personalized-ads-retrieval-engine/">Andromeda</a> for large-scale ad retrieval, <a href="https://ai.meta.com/blog/ai-ads-performance-efficiency-meta-lattice/">Lattice</a> as a unified prediction architecture across objectives, and <a href="https://engineering.fb.com/2025/11/10/ml-applications/metas-generative-ads-model-gem-the-central-brain-accelerating-ads-recommendation-ai-innovation/">GEM</a>, Meta&#8217;s foundation model for ad ranking. They are distinct components, but together they form a single system optimized around monetization.</p><p><em>BTW, to learn the basics of Meta&#8217;s core business, I highly recommend <a href="https://mobiledevmemo.com/podcast-metas-ai-advertising-playbook-with-matt-steiner">this podcast</a> episode between Eric Seufert and Matt Steiner, Meta&#8217;s Vice President of Monetization Infrastructure, Ranking &amp; AI Foundations. </em></p><p>An important topic to understand is the <a href="https://engineering.fb.com/2025/11/10/ml-applications/metas-generative-ads-model-gem-the-central-brain-accelerating-ads-recommendation-ai-innovation/">Generative Ads Recommendation Model</a> (GEM), Meta&#8217;s foundation model for ad ranking. Conceptually, it&#8217;s a recommendation system that makes one decision, billions of times per day: <em>which ad should this person see right now?</em></p><p>Again, ML recommendation systems have always driven Meta&#8217;s business, but this latest model, GEM, introduces an important innovation. 
Meta figured out how to unlock scaling laws for recommenders in a similar fashion to LLMs, where more training compute yields better rankings:</p><blockquote><p>&#8220;This is the first time we have found a recommendation model architecture that can scale with similar efficiency as LLMs.&#8221;</p></blockquote><p>Better rankings lead to more revenue.<strong> So when Meta says they want to buy more GPUs, there&#8217;s a clear line of sight to increased revenue. </strong></p><p>This was missed by investors, who early on punished Meta&#8217;s CapEx increase with confused statements like &#8220;well Meta isn&#8217;t a CSP, so it&#8217;ll take longer to get an ROI on that CapEx&#8221;. <em>Ummm&#8230; have you compared cloud margins to Meta&#8217;s advertising margins? Meta&#8217;s gonna have no trouble paying off that incremental compute with incremental advertising dollars from GEM improvements, even if it takes a bit longer than renting out a GPU&#8230;</em></p><p>Investors need to think about the ROI of Meta&#8217;s incremental compute differently. Remember, Meta is building recommendation systems like GEM <em>and </em>training frontier LLMs. The ROI has to account for these separately.</p><p>So, some things to note. LLM AI labs build massive models (e.g. 1T+ params) that run in production at enormous cost (think Claude Opus). GEM doesn&#8217;t work that way. From CFO Susan Li on the recent Q425 earnings call in January,</p><blockquote><p>&#8220;So we don&#8217;t use our larger model architectures like GEM for inference because their size and complexity would make it too cost-prohibitive. The way that we drive performance from those models is by using them to transfer knowledge to smaller, lightweight models that are used at runtime.&#8221;</p></blockquote><p>GEM is a teacher. It absorbs massive user, content, and advertiser signals, then distills that knowledge into small, latency-sensitive models that serve ads. Inference remains cheap. 
Scaling GEM during training does not translate into a proportional increase in runtime model size or serving cost.</p><p>And that efficiency is improving:</p><blockquote><p>&#8220;In Q3, we made improvements to GEM&#8217;s model architecture that doubled the performance benefit we get from adding a given amount of data and compute.&#8221;</p></blockquote><p><em>Double the performance. </em>In other words, the same incremental compute+data is now producing roughly twice the impact.</p><p>Viewed through that lens, the right question for evaluating Meta&#8217;s CapEx is not whether its models beat OpenAI or Anthropic on public benchmarks of how &#8220;smart&#8221; the AI is. Rather, GEM CapEx should be measured as <strong>incremental revenue per dollar of compute. </strong>Put differently, Meta is underwriting CapEx against an internal efficiency curve tied directly to monetization, not against leaderboard performance. </p><p>On a risk-adjusted basis, I think we can all agree GEM is a more defensible place <em>right now</em> to deploy capital than frontier LLMs, whose returns remain uncertain. <em>When do OAI and Anthropic become profitable? Whereas GEM is immediately profitable&#8230; As I said <a href="https://www.chipstrat.com/p/the-four-horsemen-google-microsoft">before</a>,  </em></p><blockquote><p><strong>Wouldn&#8217;t others (cough cough OpenAI) LOVE to have 3.5B DAILY active users&#8230; and &#8230;. 
Wait for it&#8230; make almost $60 PER USER PER YEAR!</strong></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!GBgP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7aa38bfc-f563-45b2-89a9-701dee7f8277_2048x1163.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!GBgP!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7aa38bfc-f563-45b2-89a9-701dee7f8277_2048x1163.png 424w, https://substackcdn.com/image/fetch/$s_!GBgP!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7aa38bfc-f563-45b2-89a9-701dee7f8277_2048x1163.png 848w, https://substackcdn.com/image/fetch/$s_!GBgP!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7aa38bfc-f563-45b2-89a9-701dee7f8277_2048x1163.png 1272w, https://substackcdn.com/image/fetch/$s_!GBgP!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7aa38bfc-f563-45b2-89a9-701dee7f8277_2048x1163.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!GBgP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7aa38bfc-f563-45b2-89a9-701dee7f8277_2048x1163.png" width="1456" height="827" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7aa38bfc-f563-45b2-89a9-701dee7f8277_2048x1163.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:827,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!GBgP!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7aa38bfc-f563-45b2-89a9-701dee7f8277_2048x1163.png 424w, https://substackcdn.com/image/fetch/$s_!GBgP!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7aa38bfc-f563-45b2-89a9-701dee7f8277_2048x1163.png 848w, https://substackcdn.com/image/fetch/$s_!GBgP!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7aa38bfc-f563-45b2-89a9-701dee7f8277_2048x1163.png 1272w, https://substackcdn.com/image/fetch/$s_!GBgP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7aa38bfc-f563-45b2-89a9-701dee7f8277_2048x1163.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div></blockquote><p>As we&#8217;ll see later, of course, Meta is investing in LLMs, and there&#8217;s upside there&#8230; but just wanted to point out that CapEx spent on GEM is a no-brainer.</p><p>Now, you might ask, are we sure additional CapEx spent on GEM is truly driving revenue? Concrete results from the earnings call:</p><ul><li><p>+5% Instagram ad conversions</p></li><li><p>+3% Facebook Feed ad conversions</p></li><li><p>+3.5% Facebook ad clicks in Q4, alongside sequence learning</p></li></ul><p>With a $150B+ ad business, each of those incremental points seriously adds up.</p><p>Again, that&#8217;s a lot more straightforward than investing in larger training clusters to unlock an extra IQ point for a GenAI LLM. <em>I&#8217;m not saying GenAI LLMs aren&#8217;t worth it &#8212; just pointing out the clear line between CapEx and measurable results.</em></p><p>And while incremental revenue from improved ad recommendation is great, so is decreasing the cost of that compute.
After all, a dollar saved goes straight to the bottom line. One way to decrease cost is through compute diversification. Like with custom silicon (MTIA):</p><blockquote><p>&#8220;We extended our Andromeda ads retrieval engine, so it can now run on NVIDIA, AMD, and MTIA. This, along with model innovations, enabled us to nearly triple Andromeda&#8217;s compute efficiency.&#8221;</p><p> &#8220;In Q1, we will extend our MTIA program to support our core ranking and recommendation training workloads, in addition to the inference workloads it currently runs.&#8221;</p></blockquote><p>This looks a lot like the Microsoft Maia conversation, where Saurabh discussed building custom silicon to optimize known workloads:</p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;1fc05e28-6623-4c55-84d5-c0c361ccf34c&quot;,&quot;caption&quot;:&quot;Hello readers! This is another Chipstrat interview, where I speak with builders and operators about the decisions and trade-offs shaping their strategy.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;sm&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;An Interview with Microsoft's Saurabh Dighe About Maia 200&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:8066776,&quot;name&quot;:&quot;Austin Lyons&quot;,&quot;bio&quot;:&quot;Chipstrat, Creative Strategies, Semi Doped. 
MSEE + MBA.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c180a750-7572-4aff-88e4-317aa435d533_1203x902.jpeg&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:100}],&quot;post_date&quot;:&quot;2026-01-28T16:25:00.664Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/youtube/w_728,c_limit/6R-oMCdnLiI&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://www.chipstrat.com/p/an-interview-with-microsofts-saurabh&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:186082249,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:7,&quot;comment_count&quot;:0,&quot;publication_id&quot;:2003179,&quot;publication_name&quot;:&quot;Chipstrat&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!rCMl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27769444-42f3-4b43-9683-4fe7826c06b8_608x608.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><p><em>I hope to talk with Meta about MTIA in the future so we can dig into the rationale and ROIC of the CapEx invested in the MTIA roadmap.</em></p><p>And one interesting note. While GEM focuses on ads, Meta is explicitly unifying ads ranking with organic content:</p><blockquote><p>&#8220;We&#8217;re also going to start validating the use of ad signals in organic content recommendations as we continue to work towards having a more shared platform for organic and ads recommendations over time.&#8221;</p></blockquote><p>One model ranking both organic and paid content across 3.5B daily users. <em>Sounds like a good use of CapEx.</em></p><h2><strong>Part II: LLMs Later</strong></h2><p>So GEM is the sure thing &#8212; CapEx with a clear, measurable payoff. But Meta isn&#8217;t stopping there. 
Zuck is investing heavily in Meta Superintelligence Labs, co-led by Alexandr Wang and Nat Friedman, to build frontier LLMs. And this is where most of the Street&#8217;s anxiety lives, because it pattern-matches to the metaverse spending era. But the framing of &#8220;Meta needs to beat OpenAI and Anthropic&#8221; misses the point.</p><p>The way I think about it: GEM alone can deliver serious returns on invested capital. LLMs are layered on top. And some of that LLM upside is already showing up in the core business today. Some of it is more speculative &#8212; future products and features that could expand engagement in ways we can&#8217;t fully predict yet. </p><p>Let&#8217;s start with the straightforward stuff that LLMs enable:</p><p><strong>Better understanding of what people actually want.</strong> Traditional recommendation systems are mostly pattern-matching on past behavior. LLMs bring something different, as they can reason about what someone might be interested in, even without a ton of historical data:</p><blockquote><p>&#8220;Building new model architectures from the ground up that will work on top of LLMs, leveraging the world knowledge and reasoning capabilities of an LLM to better infer people&#8217;s interests.&#8221;</p></blockquote><p><strong>Making it easier for businesses to create ads.</strong> One of the biggest constraints in digital advertising is creative (photos, videos, etc). Small businesses don&#8217;t have design teams cranking out polished video ads or the means to outsource. GenAI changes that. If anyone can quickly and cheaply generate ad variations, more businesses can run better ads. That&#8217;s good for advertisers and good for Meta&#8217;s revenue. From <a href="https://about.fb.com/news/2026/01/2026-ai-drives-performance/">2026: AI Drives Performance</a>:</p><blockquote><p>AI is also powering stronger ad creative.
In Q4 2025, the combined revenue run-rate of our video generation tools hit $10 billion, with quarter-over-quarter growth nearly three times faster than overall ads revenue.</p></blockquote><p><strong>Solving the cold start problem. </strong>Recommendation systems struggle with new content because there&#8217;s no engagement data to work with yet. LLMs can fill that gap because they can infer what a piece of content is about and who might care before anyone&#8217;s even seen it:</p><blockquote><p>&#8220;We will work on more deeply incorporating LLMs into our existing recommendation systems&#8230; useful for content that has been more recently posted since there&#8217;s less engagement data.&#8221;</p></blockquote><p>These are all additive to GEM. But there&#8217;s also the less predictable upside of future features that expand the surface area of engagement itself. One that caught my eye is AI dubbing. Meta is auto-translating videos into nine languages, and hundreds of millions of people are watching them every day.</p><blockquote><p>&#8220;One area we&#8217;re already seeing promise is with AI dubbing of videos into local languages. We are now supporting nine different languages, with hundreds of millions of people watching AI-translated videos every day. This is already driving incremental time spent on Instagram, and we plan to launch support for more languages over the course of this year.&#8221;</p></blockquote><p>That&#8217;s content that simply wouldn&#8217;t have reached those users before. More content people want to watch means more time spent, which means more ad inventory.</p><p>IMO the bull case is clear. But what about the risks? 
</p><p>Behind the paywall, we dig into the noise surrounding Meta&#8217;s AI strategy: why Meta is spending billions on frontier models when it could just use OpenAI or Anthropic as teachers, whether this is really different from the metaverse spending era that has yet to pay off, and whether there&#8217;s a hard ceiling on engagement gains when you already reach half the planet every day. We also cover a less obvious near-term catalyst for incremental time spent on Meta&#8217;s platforms.</p>
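<p>One footnote on the GEM discussion above: the teacher-to-student pattern Susan Li describes is classic knowledge distillation, where a large model&#8217;s softened output distribution becomes the training target for a small runtime model. A minimal, generic sketch of the loss, not Meta&#8217;s implementation; the logits and temperature are hypothetical:</p>

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened probability distribution over logits."""
    scaled = [z / T for z in logits]
    m = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in standard knowledge distillation."""
    p = softmax(teacher_logits, T)         # soft targets from the big teacher
    q = softmax(student_logits, T)         # small runtime model's predictions
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical ranking logits over three candidate ads:
teacher = [4.0, 1.0, 0.5]
aligned_student = [3.8, 1.1, 0.4]          # mimics the teacher -> low loss
random_student = [0.2, 0.1, 0.3]           # hasn't learned anything -> high loss

print(distillation_loss(teacher, aligned_student)
      < distillation_loss(teacher, random_student))  # True
```

<p>Minimizing this loss pushes the cheap student toward the teacher&#8217;s rankings, which is why scaling the teacher&#8217;s training compute doesn&#8217;t inflate serving cost.</p>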
      <p>
          <a href="https://www.chipstrat.com/p/metas-roic-strategy-gem-now-llms">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[The MI450 Waiting Game]]></title><description><![CDATA[AMD delivered its best quarter ever. Yet the market wants to know why to own it right now.]]></description><link>https://www.chipstrat.com/p/the-mi450-waiting-game</link><guid isPermaLink="false">https://www.chipstrat.com/p/the-mi450-waiting-game</guid><dc:creator><![CDATA[Austin Lyons]]></dc:creator><pubDate>Fri, 06 Feb 2026 17:21:33 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/afcffd21-d776-42c2-99c5-399bd4b3f41d_1480x758.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>AMD just hit $10 billion in quarterly revenue for the first time. Margins are expanding. Lisa Su reaffirmed the 35% revenue CAGR target. And yet, the stock sold off by ~15%. </p><p>Investors are telling us they don&#8217;t know why to own AMD <em>right now</em>. </p><p>The MI450 inflection doesn&#8217;t hit until Q4 2026, and investors won&#8217;t see those numbers until early 2027. That&#8217;s a long time to wait. Server CPU is strong, but is it enough? Client faces a softening TAM. China's revenue was helpful to the beat but might not be repeatable.</p><p>Meanwhile, Nvidia is describing an expanding inference datacenter portfolio, with recent announcements including <a href="https://nvidianews.nvidia.com/news/nvidia-unveils-rubin-cpx-a-new-class-of-gpu-designed-for-massive-context-inference">Rubin CPX</a>, <a href="https://developer.nvidia.com/blog/introducing-nvidia-bluefield-4-powered-inference-context-memory-storage-platform-for-the-next-frontier-of-ai/">Context Memory Storage</a>, and Groq. <em>What&#8217;s AMD&#8217;s counterpositioning?</em></p><p>Here&#8217;s my breakdown of the quarter. What the numbers say, what management said on the call, and what they didn&#8217;t say. 
Including:</p><ul><li><p><strong>MI450 timing: </strong>why Q4 2026 is perceived as the only 2026 quarter that matters, and the buy-side&#8217;s <em>why now?</em> problem</p></li><li><p><strong>OpenAI:</strong> the leading indicator investors should watch and AMD should talk about</p></li><li><p><strong>Inference strategy:</strong> AMD vs Nvidia&#8217;s portfolio</p></li><li><p><strong>Server CPU</strong>: a bright spot. But 2026?</p></li><li><p><strong>China: </strong>strip out $390M in MI308 sales and the beat looks different</p></li><li><p><strong>Client and Embedded: </strong>where margins are heading in 2027</p></li></ul><p>Let&#8217;s dig in.</p>
      <p>
          <a href="https://www.chipstrat.com/p/the-mi450-waiting-game">
              Read more
          </a>
      </p>
   ]]></content:encoded></item><item><title><![CDATA[This Month in Review (Jan '26)]]></title><description><![CDATA[Infra Economics, Post-GPT Architectures, Autonomy & LiDAR, Intel, Semi Doped]]></description><link>https://www.chipstrat.com/p/this-month-in-review-jan-26</link><guid isPermaLink="false">https://www.chipstrat.com/p/this-month-in-review-jan-26</guid><dc:creator><![CDATA[Austin Lyons]]></dc:creator><pubDate>Fri, 30 Jan 2026 13:03:59 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!T77P!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35c4dc10-1679-4734-b3f5-991344ffe0aa_2048x960.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This was a busy month. Here&#8217;s a summary of what we covered in case you missed some of it.</em></p><h2>Infrastructure Economics</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!T77P!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35c4dc10-1679-4734-b3f5-991344ffe0aa_2048x960.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!T77P!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35c4dc10-1679-4734-b3f5-991344ffe0aa_2048x960.png 424w, https://substackcdn.com/image/fetch/$s_!T77P!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35c4dc10-1679-4734-b3f5-991344ffe0aa_2048x960.png 848w, https://substackcdn.com/image/fetch/$s_!T77P!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35c4dc10-1679-4734-b3f5-991344ffe0aa_2048x960.png 1272w, 
https://substackcdn.com/image/fetch/$s_!T77P!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35c4dc10-1679-4734-b3f5-991344ffe0aa_2048x960.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!T77P!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35c4dc10-1679-4734-b3f5-991344ffe0aa_2048x960.png" width="1456" height="683" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/35c4dc10-1679-4734-b3f5-991344ffe0aa_2048x960.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:683,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!T77P!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35c4dc10-1679-4734-b3f5-991344ffe0aa_2048x960.png 424w, https://substackcdn.com/image/fetch/$s_!T77P!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35c4dc10-1679-4734-b3f5-991344ffe0aa_2048x960.png 848w, https://substackcdn.com/image/fetch/$s_!T77P!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35c4dc10-1679-4734-b3f5-991344ffe0aa_2048x960.png 1272w, 
https://substackcdn.com/image/fetch/$s_!T77P!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F35c4dc10-1679-4734-b3f5-991344ffe0aa_2048x960.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><strong><a href="https://www.chipstrat.com/p/lithography-economics">Lithography Economics</a></strong>: EUV has a cost problem. 
Two US-based startups, xLight and Substrate, are taking different paths to bend the cost curve.</p><p><strong><a href="https://www.chipstrat.com/p/linear-optics-trade-offs-lro-and">LRO/LPO Optics</a></strong>: At datacenter scale, every watt consumed by networking is a watt not available for compute. LRO and LPO offload transceiver DSP to the switch, trading modularity for power efficiency. They&#8217;re a stepping stone toward co-packaged optics.</p><p><strong><a href="https://www.chipstrat.com/p/credos-reliability-thesis">Credo&#8217;s Reliability Thesis</a></strong>: AECs made Credo. But Credo is more than AECs. </p><h2>Post-GPT Architecture</h2><p><strong><a href="https://www.chipstrat.com/p/an-interview-with-microsofts-saurabh">Microsoft Maia 200</a></strong>: Maia 200 is inference-first by design with a large on-die SRAM, 6,144-accelerator scale-up via Ethernet, 750W TDP, and no scale-out network. These decisions unlock a 30% improvement in price-performance. Deployed through Azure services, not bare metal. </p><p><strong><a href="https://www.chipstrat.com/p/right-systems-for-agentic-workloads">Agentic Workloads</a></strong>: Speed isn&#8217;t enough for coding agents. Context grows with each iteration, and the KV cache must stay hot. Can pre-GPT accelerators built for stateless inference handle memory hierarchy? What about post-GPT designs (Etched, MatX)?</p><p><strong><a href="https://www.chipstrat.com/p/when-frontier-ai-goes-from-cloud">Frontier AI From Cloud to Desk in 5 Years</a>: </strong>Frontier models have been migrating from the cloud to the desk on a 4 to 5-year cadence. GPT-4-class capabilities are now reaching workstations. What&#8217;s the implication? Think minicomputers in the mainframe era: TAM expansion.</p><h2>Autonomy and LiDAR</h2><p><strong><a href="https://www.chipstrat.com/p/lidar-explained-how-it-works-and">LiDAR Primer</a></strong>: A primer on how LiDAR works. 
Wavelengths (905nm silicon vs 1550nm InGaAs), sensing methods (ToF vs FMCW), and why the technology is now viable at scale.</p><p><strong><a href="https://www.chipstrat.com/p/why-lidar-is-consolidating-now">LiDAR Market</a></strong>: Waymo&#8217;s momentum has triggered an L4 gold rush. L3 programs are reaccelerating now that OEMs are decoupling ADAS from stalled EV platform transitions. Behind-the-windshield is the winning form factor. The market is consolidating toward one or two Western suppliers.</p><h2>Intel</h2><p><strong><a href="https://www.chipstrat.com/p/intel-q425-back-to-reality">Intel Q4&#8217;25: Back to Reality</a>: </strong>The stock run-up was vibes; the correction was reality. Supply constraints were telegraphed already, though. The turnaround is execution-bound, not quarter-driven. And don&#8217;t expect anything dramatic next quarter.</p><p><strong><a href="https://www.chipstrat.com/p/intels-product-marketing-at-ces-b">Intel&#8217;s Product Marketing at CES: B-</a></strong>: Intel led with real value props (battery, graphics) instead of AI hype. The AI PC pitch still lacks compelling GenAI use cases. Bright spot: Panther Lake on 18A shipping to edge and consumer simultaneously.</p><h2>Semi Doped Podcast</h2><p>I launched <a href="https://www.semidoped.fm/">Semi Doped</a> this month with <a href="https://www.viksnewsletter.com/">Vik Sekar</a>. The first seven episodes have surpassed 150K cumulative views on X. Follow along <a href="https://x.com/semidoped">on X</a>, any podcast player, or YouTube.</p>]]></content:encoded></item></channel></rss>