GP BULLHOUND'S WEEKLY REVIEW OF THE LATEST NEWS IN PUBLIC MARKETS

Tech Thoughts Newsletter – 8 December 2023.

This week’s update covers AMD’s and Google’s AI events, both of which drove significant stock reactions during the week.

Market: The market broadly has taken a breather after a strong November, but there’s still upside left in tech – AMD’s AI event drove its shares up 9% yesterday, and Alphabet rose 5% following its LLM launch (both owned; more details below).

Portfolio: We made no major changes this week. 

Google Gemini and the battle of LLMs

  • In surprising timing, Google announced Gemini, its new and largest language model. The headline is that Gemini Ultra (though not yet released) rivals OpenAI’s GPT-4 on benchmark performance. 
  • In our weekly letter on 29th September, we argued that there’s a significant chance that LLMs end up being totally commoditised, and that an LLM’s ultimate success is much more likely to come down to distribution and to whether it becomes the underlying foundation on which other apps are built. 
  • Google’s advantage in this respect is that it does have a consumer/device business (its smallest Gemini Nano model will power features in the Pixel 8 Pro smartphone) not to mention among the largest customer bases in tech for its search business. 
  • Google says that Gemini is coming to Search next year, though we’ve said before that it’s unclear how LLMs will work with search – first, they add a significant cost to serve to every search made, and second, it’s not obvious they work from a core business model perspective. 
  • One of the problems Google has is that its business model doesn’t monetise per query – advertisers pay only if and when a user clicks on an ad. That means the high incremental cost per query of adding inference (when most of those queries aren’t monetisable) creates a cost/revenue mismatch, which is a fundamental issue. 
  • And if we get an answer generated by AI, do we stop clicking blue links? Google has a separate brand “Bard”, and maybe it’s hoping that the 20-year-old habit of going straight to Google.com is too hard to break (which so far has been the case).

Portfolio view: For us in the portfolio, commoditisation of LLMs isn’t bad – it ultimately means we’ll likely get to those new feasible revenue and business models much more quickly, which will drive more cloud compute and more sustainable investment in AI chips. We own Microsoft, Amazon and Google, which should see more demand within their (highly profitable) cloud services businesses. We also own Nvidia, the company most obviously downstream of their capex (and TSMC and semicap, which also benefit). 

And there is upside for the enterprise software companies that can effectively connect their own vast proprietary data sets to pre-trained LLMs, and offer AI tools and features to drive ARPU growth. 

On Google and its core search business, margin erosion remains a possible (probable?) scenario. However, it will still be a business with a ROIC well over the cost of capital – for now, we see Google as being in a position to defend its incumbent position. We continue to watch it closely.

AMD (owned) held its AI event, formally introducing the MI300X GPU competitor

  • Three things we’d highlight: (1) the sizing of the market – $400bn by 2027 (AMD’s prior forecast, given only four months ago, was for $150bn); (2) the importance of memory/HBM; and (3) new hyperscaler partners. 
  • Firstly, CEO Lisa Su outlined a new $400bn target for AI:  “We’re now expecting that the data center accelerator TAM will grow more than 70% annually over the next four years to over 400 billion in 2027”.
  • It’s the sort of increase that would usually be scoffed at, were it not that AMD and Nvidia both have tremendous visibility thanks to the current supply constraints across the industry. 
  • Secondly, we need to note the memory content of the MI300X (192GB in total, topping the 141GB of Nvidia’s latest H200). We’ve spoken before about the importance of memory in the AI world, given the need to store and retrieve large amounts of data. Su commented:

“Now let’s talk about some of the performance and why it’s so great. For generative AI, memory capacity and bandwidth are really important for performance. If you look at MI300X, we made a very conscious decision to add more flexibility, more memory capacity and more bandwidth. And what that translates to is 2.4x more memory capacity and 1.6x more memory bandwidth than the competition.”
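A quick reconciliation of the “2.4x” claim (our own arithmetic, not AMD’s): the multiple only works if the comparison is against Nvidia’s shipping H100 at its public 80GB spec – a figure we’re assuming here, as the letter only quotes the H200’s 141GB.

```python
# Sanity-checking AMD's "2.4x more memory capacity" claim.
# MI300X and H200 capacities are from the letter; the H100's 80GB
# is our assumption from Nvidia's public spec sheet.
mi300x_gb, h100_gb, h200_gb = 192, 80, 141

print(f"vs H100: {mi300x_gb / h100_gb:.1f}x")   # 2.4x – matches Su's claim
print(f"vs H200: {mi300x_gb / h200_gb:.2f}x")   # only ~1.36x
```

In other words, the marketing comparison is against the card customers can buy today, not the not-yet-shipping H200.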

  • Finally, a swathe of new hyperscaler announcements – joining AMD were senior cloud executives from each of Microsoft, Oracle and Meta. 
  • We’ve said before that there are many reasons why AI players will not want to be tied into Nvidia. Everyone is motivated to look for a credible alternative, particularly in the bigger inference market, to avoid being tied to one very powerful supplier. More importantly, Nvidia is currently supply-constrained, so many customers struggle to get their hands on Nvidia H100 chips. AMD is the most credible alternative here.
  • Doing some maths around potential supply, we think AMD’s 2024 MI300 revenue could be much higher than the $2bn it has guided to (closer to $4-5bn) – given Nvidia will be supply-constrained on both the H100 and the new B100 until Q4 next year. It’s good news for TSMC, which makes AMD’s (and Nvidia’s) chips. 
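The $400bn TAM figure can be sanity-checked against the growth rate Su quoted. A minimal sketch, using only numbers from the letter (the back-solved ~$48bn 2023 base is our own derivation, not an AMD disclosure):

```python
# Back-of-envelope check of AMD's accelerator TAM maths.
tam_2027 = 400   # $bn, AMD's new 2027 data-center accelerator TAM
cagr = 0.70      # "more than 70% annually over the next four years"

# Implied 2023 starting point if $400bn is reached via 70% CAGR over 4 years:
base_2023 = tam_2027 / (1 + cagr) ** 4
print(f"Implied 2023 base: ~${base_2023:.0f}bn")        # ~$48bn

# Cross-check: growing the ~$75bn 2024 market (quoted below) at 70% to 2027:
implied_2027 = 75 * (1 + cagr) ** 3
print(f"$75bn in 2024 at 70% CAGR: ~${implied_2027:.0f}bn by 2027")  # ~$368bn
```

The two paths roughly agree – a ~$75bn 2024 market compounding at 70% lands in the region of AMD’s “over $400bn” 2027 claim, so the numbers are at least internally consistent.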

Portfolio view: We own Nvidia, whose position as the leader in GPUs is clear (with the benefits of its CUDA ecosystem), but we also see benefits for players across the AI value chain. We’ve commented before that the extent to which Nvidia is currently supply-constrained is very helpful for AMD’s entry into the GPU space, and no customer will want to be entirely tied to one powerful supplier (and it’s no secret that Nvidia chips are very expensive). TSMC makes both Nvidia and AMD chips, and will benefit too. Relatedly, we saw KLA management this week at a conference, where they commented on utilisation rates improving in leading-edge logic (5nm and 3nm). 

The $400bn figure is important not only for companies like AMD and Nvidia, which will take a majority share of that revenue and growth (the market will be ~$75bn next year), but also specifically for the semicap equipment providers. Their forecasts have been based on a doubling of the semiconductor market from $500bn to $1trn by 2030 – the reality is that if the $400bn number is correct, that $1trn figure is too low, and semicap equipment providers will see continued strength in demand. 

Taiwanese monthly numbers – some volatility remains

  • TSMC’s November sales were the ones we were waiting for this morning – after a record October, sales were down 15% mth/mth. We don’t know what’s causing the volatility – it could be a weaker 3nm Apple build (the strong Hon Hai numbers below likely relate to TSMC’s October builds). Nevertheless, TSMC’s 2023 full-year guide continues to look achievable/beatable, and we believe it’s benefitting from Nvidia GPU demand filling its 7nm (A100) and 5nm (H100/H200) capacity.
  • Separately, TSMC held its supplier awards this week where the CEO announced the expectation for AI compute to triple over the next two years (circling back to AMD’s $400bn market comment).
  • Hon Hai reported November revenue up 18% yr/yr, driven by the smart consumer segment (which, in short, is Apple). Recall the shutdowns that began at its Zhengzhou factory a year ago – it is lapping easy comps – but Hon Hai also revised up its Q4 outlook (which was originally for “significant growth”), suggesting a stronger market more broadly (Hon Hai also has a dominant share in Nvidia AI GPU modules).
  • Largan – the sole supplier of the iPhone 15 Pro Max periscope lens – saw sales continue their ramp to what we think is an all-time monthly high. The camera continues to be one of the standout features every year for the iPhone – and in the context of the usually commoditised smartphone supply chain, Largan continues to do a decent job of staying ahead of the competition. 
  • UMC November sales are still hovering at NT$19bn, down slightly month on month and still tracking 20% below year-ago levels. That speaks to bouncing along the bottom (though hopefully with clearer inventory levels).
  • Vanguard reported monthly sales down 9% mth/mth – we think it’s suffering significantly from competition from Chinese 8-inch foundries.

Portfolio view: We were surprised TSMC’s October strength didn’t continue into November – we have to wait for its Q4 results for a node-by-node breakdown. Our best guess right now is around Apple build volatility at 3nm, and we think its 5nm node remains fully utilised driven by Nvidia/AMD demand. 

AI driving semis beyond Nvidia and AMD

  • Broadcom (owned) sales and earnings came broadly in line with expectations.
  • Much more interesting were the comments around guidance: 2024 semiconductor revenue is expected to grow mid-to-high single digits yr/yr, with the growth mainly coming from its hyperscaler business – connecting AI clusters of GPUs and CPUs – while the rest of the business experiences a soft landing.
  • Specifically, they expect networking revenue to accelerate, with AI networking revenue doubling in 2024 (as they also said last quarter) to account for ~25% of total chip sales. 

Portfolio view: We’ve commented before that AI and semis are more than an Nvidia/AMD story – players like Broadcom and Arista (both owned) also benefit from increasing networking requirements. It is worth noting that Broadcom is a TSMC customer, too.

Another month and more stellar Chinese EV numbers

  • China auto sales were up 25.5% yr/yr in November, driven by EV/PHEV, which were up 40% yr/yr and reached an all-time high monthly sales number of 841k, bringing China EV penetration to over 40%, helped by continued subsidies. 
  • BYD sold over a third of those – 302k – setting another new record. Li Auto, Xpeng and Leapmotor all had record deliveries. 
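As a quick check of the share maths in the bullets above (our own arithmetic on the letter’s figures):

```python
# BYD's slice of China's record EV/PHEV month (figures from the letter).
ev_phev_nov = 841_000   # all-time-high monthly EV/PHEV sales in China
byd_nov = 302_000       # BYD's record November

share = byd_nov / ev_phev_nov
print(f"BYD share of EV/PHEV sales: {share:.1%}")  # ~35.9% – "over a third"
```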

Portfolio view: The EV shift and the increased semiconductor requirements in EV continue to be a key exposure in the portfolio for us – we own Infineon and NXP, both of whom sell into BYD. 

October semiconductor numbers – memory pricing driving upside

  • Circling back to the MI300X’s HBM memory content, the SIA reported October monthly semiconductor sales above seasonality (again): down 6.3% mth/mth vs usual seasonality of -8.3%, driven by pricing – in particular DRAM pricing (which we commented on alongside Micron’s revenue forecast upgrade last week). 
  • After the worst memory downcycle since the financial crisis, DRAM pricing has started its upturn after production cuts earlier in the year began to feed through to better utilisation, compounded by the beginning of the ramp of HBM alongside AI demand. 

Portfolio view: We don’t own memory players given their commoditised nature, but an upward pricing environment is very good for capex spend, and we’d expect to benefit through our semicap equipment holdings, Lam Research in particular. 

Mixed software – billings dynamics and consumption trends

  • MongoDB reported continued stabilisation (though not necessarily improvement) in consumption (we commented last week on Datadog and Elastic seeing similar trends).
  • Box reported its quarterly results, which are worth a quick note. Box IPO’d back in 2015, when its revenue was growing by 80%. Now it’s growing revenue at 7% (Microsoft, which has 200x the revenue of Box, is growing at 10%, and that’s before we start to look at GAAP margins). Box is a fallen darling of cloud software, which hasn’t lived up to expectations. 
  • Docusign showed a dynamic similar to what we’ve seen elsewhere around billings and revenue – this time driven not by shrinking contract durations, but by lapping early renewals, which companies look less keen on in this interest rate environment (why pay cash before they need to, when a hefty refinancing commitment may lie ahead?).

Portfolio view: We don’t own any of the names above, and though we’re generally seeing a reasonably robust picture for software demand, our preference continues to be for the platform beneficiaries of spend consolidation and potential ARPU upside from AI monetisation. 

Post Script: We will send one more letter before the end of the year, which will also include our look ahead for 2024. 

For enquiries, please contact:
Inge Heydorn, Partner, at inge.heydorn@gpbullhound.com
Jenny Hardy, Portfolio Manager, at jenny.hardy@gpbullhound.com
Nejla-Selma Salkovic, Analyst, at nejla-selma.salkovic@gpbullhound.com

About GP Bullhound
GP Bullhound is a leading technology advisory and investment firm, providing transaction advice and capital to the world’s best entrepreneurs and founders. Founded in 1999 in London and Menlo Park, the firm today has 14 offices spanning Europe, the US and Asia.
