GP BULLHOUND'S WEEKLY REVIEW OF THE LATEST NEWS IN PUBLIC MARKETS

Tech Thoughts Newsletter – 29 September 2023.

29 September - This week’s update covers the LLM market landscape, Workday and ASMI’s capital markets days, and Huawei’s launch event.

Market: September volatility was happily less present this week – the market is still in a bit of wait-and-see mode ahead of Q3 numbers while still digesting the higher-for-longer comments from the US Fed.

Portfolio: We made no significant changes to the portfolio this week. 

AI arms race – LLMs, data, chips – what really matters?

Lots of AI news this week – Amazon announced that it would invest in Anthropic; the Meta Connect event showcased its Llama LLM; ChatGPT announced new features; and OpenAI is raising money from SoftBank’s Masa Son to build the “iPhone of AI” with Jony Ive. It meant we spent a lot of time thinking about the AI landscape – large language models, AI business models, what it means for chips, cloud providers and the stocks we own. And, importantly, where in the value chain should we think about investing (short answer: chips).

Anthropic is a foundational model company – its most famous competitor is OpenAI. They both own and operate large language models. The relationship between the cloud providers (Amazon AWS, Microsoft Azure, Google GCP) and the large language model businesses (like OpenAI and Anthropic) is twofold: the cloud providers want to offer access to LLMs for their customers, and the LLMs need to use the compute capacity and chips of the cloud services companies to train their models as well as to then use the model (inference). 

Microsoft is perhaps the most clear cut: it has its partnership and investment with OpenAI – that means it generates revenue from OpenAI/ChatGPT via its cloud business, and Microsoft Azure’s customers have access to OpenAI. And Microsoft can implement OpenAI into its products (we think Copilot is still predominantly based on OpenAI). For OpenAI, it has critical access to Microsoft’s compute capacity – and the current hottest commodity: Nvidia GPUs.

Meta has its own LLM, Llama, trained on Nvidia GPUs (after its extraordinary couple of years of capex spend, Meta is very likely one of the biggest owners of Nvidia chips). And to the extent that Meta has been able to apply its AI capability to its extensive existing ads business (to mitigate ATT), it has been one of the biggest AI beneficiaries in the short term.

Llama is open source – available for free for commercial and research use, except that any company with more than 700 million monthly active users needs to request a license from Meta, which we believe each of Microsoft, Amazon and Google has now done. 

Google is undoubtedly the most advanced in terms of its chips. Its TPUs and TPU architecture are the most competitive with Nvidia’s. Google has its own LLM, the basis for Bard, which it integrates into its Google services.

Amazon develops its own chips, Trainium and Inferentia. However, the performance metrics are very far from Nvidia’s GPUs and Google’s TPUs, and we know Amazon is still doing all it can to get its hands on Nvidia GPUs. What makes the Amazon/Anthropic deal interesting is that from an LLM perspective, AWS has primarily sold itself so far as being LLM agnostic – so that customers and developers just need to bring their datasets to any of the pre-trained models that are available on AWS. CEO Andy Jassy said the following on the earnings call back in August:

“This is what our service Bedrock does and offers customers all of these aforementioned capabilities with not just one large language model but with access to models from multiple leading large language model companies like Anthropic, Stability AI, AI21 Labs, Cohere and Amazon’s own developed large language models called Titan. If you think about these first two layers I’ve talked about, what we’re doing is democratising access to generative AI, lowering the cost of training and running models, enabling access to large language model of choice instead of there only being one option, making it simpler for companies of all sizes and technical acumen to customise their own large language model and build generative AI applications in a secure and enterprise-grade fashion, these are all part of making generative AI accessible to everybody and very much what AWS has been doing for technology infrastructure over the last 17 years.”

It’s worth noting OpenAI and Llama are absent from that list – Amazon doesn’t have direct access to OpenAI (despite the name, OpenAI is not open source), but it announced yesterday Llama would now be available on Bedrock (since Amazon has more than 700 million customers, we assume it has signed a licensing deal/revenue share with Meta).

There are still many unknowns around large language models – how many will there be, will there be a winner? One assumption might be that the underlying data the model has been trained on might be a competitive advantage, and so Google and Meta might be at an advantage over OpenAI with their proprietary data. In reality, we’re not as sure this is the case – even those very vast proprietary data sets are likely lost in the sea of internet history LLMs are trained on.

Much more likely is that by effectively letting the pretrained model access your own proprietary data (whether that’s enterprise data, or customer data), companies can gain a competitive advantage. That’s how we think about the likes of Microsoft, Salesforce, ServiceNow et al. as being able to offer AI services to their customers. 

For Anthropic and the other standalone LLMs, their ultimate success will likely depend on whether they are the underlying source models for other apps. And that’s where the open source LLMs – like Llama – are likely to benefit from being applied to a whole swathe of new use cases and systems on which they can run in the short term – simply because they’re free.

Meta’s motivation to make its model available for free (rather than explicitly monetise it like OpenAI does with ChatGPT) isn’t totally clear – it could ultimately be supporting its future competitors. But there are benefits – its business is built on content, and Llama will ultimately produce a lot of content; and if its hardware business (the VR headset – which relaunched this week) comes back into focus, owning the developer LLM of choice (can Meta be to AI what Apple is to smartphones?) will surely be valuable.

The other point is that Llama 2 likely leads to total commoditisation of LLMs – Llama 2 is as good as GPT 3.5, but not as good as GPT 4. Llama 2, though, is free. This means that the whole LLM space is likely to experience pricing pressure.

In that context, OpenAI’s reported $90bn valuation (or about 80x revenue) this week requires it to be a consumer app and not just an LLM. And Amazon’s investment in Anthropic (it’s investing up to $4bn, though it’s not clear at what valuation) may well be more about (1) guaranteeing Anthropic availability for Bedrock and (2) running LLMs in Amazon end devices (like Alexa) – and for Anthropic it is much more about access to cheap money, compute capacity and chips, and perhaps finding that use case that enables mass distribution.

For us in the portfolio, commoditisation of LLMs isn’t bad – it ultimately means we’ll likely get to those new feasible revenue and business models much more quickly, which will drive more cloud compute and more sustainable investment in AI chips. We own Microsoft, Amazon and Google, which we think should see more demand within their (highly profitable) cloud services businesses. We also own Nvidia, which is the company most obviously downstream of that capex (and TSMC and semicap, which also benefit).

There is upside for the enterprise software companies that are able to effectively connect their own vast proprietary data sets to pre-trained LLMs, and offer AI tools and features to drive ARPU growth. And on Jony Ive’s “iPhone of AI”, it goes without saying that any mass-adopted AI consumer product, which we assume would be built with some hefty smartphone GPUs, would be very positive indeed for TSMC and semicap equipment.

A long intro this week – onto the newsflow.

Enterprise spend still experiencing headwinds 

  • Workday (owned) capital markets day was mixed – slightly confusing for us after such a strong report in May (where the new management team upgraded FY24 numbers).
  • Now, we have new management resetting numbers – it’s a bit odd given the positive momentum they say they are seeing, but we have a new co-CEO, CFO, CMO and CIO who all likely want to start with a fresh sheet of paper (and, perhaps, a resetting of compensation hurdles).
  • Its mid-term subscription growth target goes from 20% to 17-19%; the 25% operating margin guide is maintained, but it is lowering the FCF guide (predominantly on higher cash taxes).
  • Management commented that deal scrutiny is continuing – especially in the medium enterprise segment (and focused on tech and banks). But there is a significant opportunity for Workday to take share from SAP as SAP stops supporting its on-prem installed base.
  • Elsewhere, AI was clearly a big focus (as it has been for everyone) – interestingly, with regard to the LLM discussion above, Workday is using multiple models and LLMs, including domain-specific and smaller models, as well as its own 40bn parameter model.
  • Its Workday AI Marketplace will effectively allow customers to access AI apps integrated with Workday data (we think the bulk of these workloads will in turn be run on AWS, which Workday partners with). 
  • While some software companies have been explicit about the ARPU upside opportunity in AI (Microsoft, Salesforce, ServiceNow have all fairly explicitly called out ARPU uplift), for Workday the focus is on better win rates and retention. 
  • Accenture (not owned) reported a damp set of results. It missed on the top line in the quarter (though EPS beat) and guidance was slightly below consensus (2-5% growth vs consensus at 4%), with the critical new bookings number down 10% yr/yr.
  • North America was weaker, as were Communications, Media and Tech (CMT). Accenture has a habit of guiding conservatively and beating, but it is clearly seeing late cycle softness – commenting on a continued challenging macro environment. 

Portfolio view: On Workday, which we own, a mid-term target reset is never great (a reminder to us about the will of new management teams), but taking a step back, it is still compounding earnings at around 20%. And we still view Workday as being one of the sticky platform businesses able to add AI features into its product and drive continued sustained growth in customers (Workday continues to gain share from legacy on-premise) and ARPU. Accenture is a late-cycle business rolling over (after two years of double-digit growth) – and while we view it as a long-term quality compounder (it continues to deliver margin growth and strong FCF), we think there are better opportunities elsewhere in tech.

Memory market bottoming but still painful write-downs to get through 

  • Micron (not owned) reported revenue that beat consensus with better-than-expected DRAM sales, but guidance was weaker – next quarter revenue is guided up 10% qtr/qtr, better than expected, but gross margins were guided at -5.5%. We’ve commented before how hurt margins have been given negative pricing, lower utilisation and inventory write-downs – those headwinds still haven’t gone away.
  • We don’t own any memory players, but worth noting a few bits from the call for the read-across to the broader space/semicap:
  • Management is calling the bottom of the memory market (and DDR4 spot prices are increasing): “We believe pricing has now bottomed. Ongoing demand growth, customer inventory normalisation, and industry-wide supply reductions have set the stage for increased revenue, along with improved pricing and profitability throughout fiscal 2024. We continue to expect record industry TAM in calendar 2025 with more normalised levels of profitability”.
  • HBM (High Bandwidth Memory – which is required in AI given higher processing speeds) die sizes are twice the size of equivalent-capacity D5 – that’s important because larger die sizes naturally limit industry supply growth (which both supports pricing and ultimately requires more semicap equipment).
  • Capex is going higher, although it is unclear how much higher given CHIPS grants will result in a higher gross capex spend than they will report in the numbers. 
  • 2024 capex for HBM capacity (all of the semicap players spoke to that driving growth) has “substantially increased” vs prior plans “in response to strong customer demand”.

Portfolio view: This memory downturn is the worst the market has seen in over 10 years. While much of this is the result of extraordinary circumstances (pandemic, inflation), some of it is also the nature of the industry – we don’t own any memory players in the portfolio as the commoditised nature and reliance on each player staying rational on supply mean that for us it doesn’t match with our sustainable return on invested capital process.

There is limited new news on capex for our semicap equipment exposure (Lam Research is particularly memory-spend exposed) – we continue to expect a relatively weak year for memory spend, but we increasingly think there’s upside risk of a stabilisation of DRAM spend helped by the AI build out (and the increased die sizes of HBM). The commentary around a bottoming of the inventory correction and the cycle is a clear positive for the broader semis industry. Things might not be improving, but hopefully, they’ve stopped worsening.

Semicap debate continues, but long-term drivers remain intact

  • ASM International held its investor day, and though it raised guidance, the new targets are still seen as relatively conservative.
  • Its ALD (atomic layer deposition) market significantly increases with the introduction of the new Gate-All-Around (GAA) technology at TSMC for N2. One of the big debates in the industry is when Apple moves to GAA and 2nm (and whether it skips N3E).
  • There continues to be discussion around 2024 and TSMC’s capex plans. As we noted last week, there are reasons (mainly Intel’s competitiveness) that will likely mean TSMC needs to continue to spend. 
  • The latest reports on the iPhone 15 Pro’s performance and N3 chip problems are something to keep watching – having an underperforming node can either result in a pull forward of capex (because TSMC may decide to jump more quickly to the next GAA node), or it can equally push back mass production of a new node – it will largely depend on whether Apple decides to jump.

Portfolio view: We own ASML, Lam Research, Applied Materials and KLA – all peers of ASM International (all have very high market shares in each stage of the semiconductor manufacturing process). There are clearly moving parts in semicap and capex spend – by its nature capex is cyclical and semicap is exposed to those trends. However, we continue to believe in the long-term growth drivers of the industry – we spoke above about AI and LLMs, and we need more, faster, bigger GPUs and CPUs – that’s a long-term structural driver, as well as the tech leadership race between Intel, TSMC and Samsung which we spoke about last week. Today, Intel will livestream its European fab opening – the first European fab to use ASML’s EUV tools in mass production. And while spend can be cyclical in the short term, these are all businesses that have shown an ability to generate strong FCF and profitability in a downturn – which is key in our investment process.

Huawei is back – what happens to China’s smartphone market shares?

  • Huawei held its Autumn product launch event this week. 
  • It announced pricing for its Mate 60 Pro (using Huawei’s proprietary 7nm chipset) – RMB 6,999 vs RMB 7,999 for the cheapest iPhone 15. Chinese smartphone market shares after Huawei’s return to the market will be one of the most interesting things to watch at the end of the year. Especially given the very mixed benchmarking tests for Apple’s A17 Pro (there are some – notably Chinese – articles that pitch the Kirin chip as outperforming the A17).
  • Huawei used to have 16% smartphone market share in China, which declined to ~2% by 2022 (after the US blacklist).  
  • Huawei also previewed the Luxeed S7, a car that will be launched in November to compete with Tesla’s Model S; we assume it will significantly undercut Tesla on price, possibly igniting another price war in China.
  • Separately, Tesla released a video in China effectively showing much of the R&D/engineering effort for the Model 3 happening inside China – it adds to the debate around technology capability and the ability of China to develop its own high-end tech.

Portfolio view: We own a small position in Apple, where Huawei represents a credible threat in its home market (also to the extent that the political situation creates a local bias). Another new EV entrant will likely mean the aggressive price competition continues. We own semi suppliers into electric vehicles – Infineon and NXP – which we view as benefiting from the move to EV (semiconductor content significantly increases) and, unlike the auto OEMs, have the pricing power to sustain high levels of margin/returns over time.

Meta Connect – what a difference a year makes

  • Meta (not owned) held its Connect 2023 conference (worth noting that Connect 2022 was all about the Metaverse and Mark Zuckerberg’s Avatar). This year, of course, was all about AI.
  • It announced a new bot called Meta AI, which will be powered by Llama and Emu, its image generation engine. 
  • On Llama, Meta announced more than 30m downloads of Llama models, 10m in the last month, and 7000 developer releases, which have improved model performance by nearly 10%. It’s another reason why Meta might have wanted to open source Llama. 
  • There was also new hardware – the Quest headset and Ray-Ban smart glasses.

Portfolio view: We don’t own Meta, given the headwinds we perceive from new competition in digital advertising (though we do recognise it has done a great job of applying AI to its advertising business to mitigate the ATT impacts and drive back growth). As discussed above, Llama as an open source LLM is interesting (and might make Meta’s end devices interesting too). It remains to be seen whether Meta will be to AI what Apple is to smartphones – for now, we watch from the sidelines.

Amazon Prime ads – a slight offset to content wars

  • Last Friday, Amazon announced that it would include adverts in Prime Video and offer an ad-free option for an additional $2.99/month for US Prime members.
  • We’ve spoken about the streaming platforms moving into advertising to drive both subscribers (by offering a lower-priced offering) and ARPU growth. For Amazon, the advertising addition is likely more about revenue than subscribers – trying to offset some of the recent content costs (it is spending ~$1bn a year for its NFL Thursday Night Football).

Portfolio view: We sold Disney in the portfolio earlier this year and don’t own Netflix. For us in the portfolio, we’re still sceptical about the long-term returns in streaming, and it’s hard not to view the presence of Amazon, Apple, and YouTube as a negative to the path to profitability and the sustainable level of profitability. Advertising revenue is an upside but will take time to ramp up and will be a minimal offset to overall ballooning content costs. Now, our only real exposure to streaming is through Sony, which is the perfect non-streaming streaming player, benefitting from the content inflation in its Pictures business without running a cash flow-negative streaming platform.

For enquiries, please contact:
Inge Heydorn, Partner, at inge.heydorn@gpbullhound.com
Jenny Hardy, Portfolio Manager, at jenny.hardy@gpbullhound.com
Nejla-Selma Salkovic, Analyst, at nejla-selma.salkovic@gpbullhound.com

About GP Bullhound
GP Bullhound is a leading technology advisory and investment firm, providing transaction advice and capital to the world’s best entrepreneurs and founders. Founded in 1999 in London and Menlo Park, the firm today has 14 offices spanning Europe, the US and Asia.
