
Tech Thoughts Newsletter – 15 December 2023.

This week’s update covers the last of the year’s results season, as well as more AI chip news and Google/Epic news.

Market: a volatile week with the Fed back in focus (the rates curve coming down – good news for tech). The biggest “triple witching” expiry happens today, so perhaps some more volatility before the end of the year.

Portfolio: we started building positions in Datadog and Snowflake this week – we’ve commented over the last few weeks on the greater stability in cloud consumption, and longer term we think AI workloads will be a significant tailwind for both.

There is one more letter still to come from us this year which will include a look ahead into 2024. 

Nvidia shot back at AMD – the GPU debate continues 

  • Last week we spoke about the AMD AI day, where their MI300X GPU was formally launched – along with some performance metrics showing them as on par with Nvidia in training and beating Nvidia on inference. This week Nvidia put out a “No, we’re the best” response in a blog post:
  • “At a recent launch event, AMD talked about the inference performance of the H100 GPU compared to that of its MI300X chip. The results shared did not use optimized software, and the H100, if benchmarked properly, is 2x faster.” 
  • There’s not a huge amount to say on this – as we’ve commented before, the reality is that Nvidia has the best chip hardware, but more importantly the best ecosystem in CUDA. 
  • AMD’s advantage is that everyone is motivated to look for an alternative to Nvidia so as not to be tied into one very powerful supplier. At its AI event this week (more below) Intel CEO Pat Gelsinger was also out with an even bolder proposition that: “the entire industry is motivated to eliminate the CUDA market” 
  • AMD is the most credible alternative here, as evidenced by the long roster of customers it announced last week. 

Portfolio view: we own both Nvidia and AMD and see them as likely to share a very large and growing market in AI GPUs (with in-house hyperscaler solutions suited to very specific internal workloads and unlikely to be adopted widely).

Intel “AI Everywhere” – all about on device and edge 

  • Intel held its “AI Everywhere” event this week, launching its Core Ultra processors, which include a neural processing unit (NPU) that will effectively allow more AI applications to run on the end device.
  • Despite Gelsinger’s aggressive comments above on CUDA/Nvidia, the reality is that Intel is focusing more on capturing the Edge/end-device opportunity than on competing directly with Nvidia/AMD, which makes sense given how far behind it now is in the GPU/LLM space.
  • Intel does have a GPU alternative to Nvidia – Gaudi – but that will ship only $1bn in 2024, a fraction of Nvidia/AMD, which shows just how far behind it is. It’s the edge opportunity – given Intel’s PC dominance – that makes the most sense.
  • And shifting AI workloads to the edge where you can (either at the network edge or end device) makes sense too – both from a cost and a latency perspective. 
  • Edge/on-device AI goes back to last week’s Google AI event and its Gemini Nano model, which is available on the Pixel 8 Pro – the Pixel 8 Pro uses Google’s Tensor G3 chip (which includes Google’s TPU), meaning Google will control both the model and the hardware.
  • Hardware/software integration as an advantage remains a key factor in tech – and continues to factor into our thinking on AI. That brings into focus Apple, which clearly controls its own CPU and GPU silicon, and which announced (last week) its own framework for LLMs to run more effectively on Apple devices. But it still doesn’t have its own models – yet.
  • Somewhat relatedly for Intel, November notebook shipments (important for both AMD and Intel) were in line – up 13% month on month. But that means Q4 will likely come down 10-15% quarter on quarter, below typical seasonality – and, while the inventory correction looks to be behind us, fundamental demand is still not recovering. That hurts Intel (and, to a lesser degree, AMD).

Portfolio view: that Intel appears to have nearly given up on the cloud GPU space shows Nvidia’s clear leadership. On the CPU opportunity, we still back AMD to be the winner in CPU workloads within AI, where it has a significant performance lead over Intel. We own both Nvidia and AMD (both of whom make their chips at TSMC). We don’t hold Intel – though, interestingly, a large part of Intel’s Meteor Lake processor will also be made at TSMC.

Semicap wars 

  • Samsung and ASML this week announced that they will jointly invest KRW1trn (~$800m) to build a research fab in South Korea. Importantly, the announcement included the detail that they would introduce ASML’s High NA EUV – at least one more (>€250m) tool in the order book…
  • In more geopolitics, the US CHIPS Act doled out its first award this week – $35m to BAE Systems. It’s the first news we’ve had for a while around the CHIPS Act, after the US in March outlined restrictions on recipients which added some wrinkles for companies looking to receive the subsidies. The new measures included: (1) a $100k spending cap on investments to add capacity in China; and (2) a ban on recipients adding more than 5% capacity to existing facilities (10% for legacy – 28nm and older – nodes).
  • As we’ve said before, we’ll see how these get implemented in reality. It means that if TSMC takes its subsidies for the Arizona fab (which it will – the fabs make marginal economic sense even with the subsidies, so without them they would be a disaster), it must essentially commit to not expanding its Nanjing fab in China – which would presumably be a fairly high-return investment. Not super helpful for TSMC.
  • In other semicap news, after a budget deadlock, Germany has now confirmed the rumoured subsidies for the Intel and TSMC fabs – with Intel expected to receive €10bn and TSMC €5bn.

Portfolio view: we still don’t know when fab construction in Germany for Intel/TSMC will start, but it could still benefit the semicap equipment names’ order books this year (orders we think have largely been taken out of estimates so far, given the uncertainty of timing).

Semis and geopolitics continue to be intertwined. We think we’re close to peak globalisation as it relates to chip manufacturing. That is part of what informs our positive view on semicap equipment – more localised fabs means more equipment (which remains one of the drivers of our above-consensus view on the space) – even if it might make very little sense economically (chips from TSMC’s Arizona fab will cost ~40% more than those from its Hsinchu fabs).

Oracle and GPU supply – “Gold rush”

  • Oracle (not owned) missed with its results this week, but most interesting for us were its comments on capex and GPU availability – cash capex came in $1bn below expectations, for the simple reason that Oracle wasn’t able to get hold of enough Nvidia chips. (Bear in mind Oracle’s bias for bullishness in the comments below!)
  • CEO Safra Catz: “And as you can see in my CapEx guidance, we expect OCI to just grow astronomically, frankly. It is the ideal infrastructure for so much use. And of course, also as more GPUs become available, and we can put those in, we have just a really unlimited amount of demand.”
  • And Larry’s follow up: “Let me give you one example of that, what Safra is describing, is we got enough Nvidia GPUs for Elon Musk’s company xAI to bring up the first version, the first available version of their large language model called Grok. They got that up and running. But boy did they want a lot more. Boy did they want a lot more GPUs than we gave them. We gave them quite a few, but they wanted more and we are in the process of getting them more. So, the demand, we got that up pretty quickly. They were able to use it, but they want dramatically more. There’s this gold rush towards building the world’s greatest large language model… Oracle is in the process of expanding 66 of our existing cloud data centers and building 100 new cloud data centers. We have to build 100 additional cloud data centers because there are billions of dollars more in contracted demand than we currently can supply.”

Portfolio view: like all the hyperscalers, Oracle needs to invest in AI to keep up with competitors doing the same. We don’t own Oracle, but importantly for us, its capex will grow significantly next year. That’s exactly what we’re expecting for all of the hyperscalers (Oracle is clearly the smallest in magnitude, but the supply constraints it is seeing will be reflected everywhere).

It’s important for our thesis on Nvidia, AMD and TSMC – cloud players need to spend on GPU compute capacity and AI features to maintain share – that ultimately means more high-end servers and more chips. 

Software ticking along, but perhaps not meeting high AI-driven expectations…

  • Adobe (owned, but we’ve trimmed our position quite aggressively on strength over H2) reported this week – our last portfolio company to report this year. 
  • It was a fairly unexciting report – the price increases announced earlier in the year (effective 1 November) are coming through, though Adobe is lapping two price increases in the prior year, which creates some volatility in the reported growth rates.
  • Adobe’s share price has benefited from it being amongst the earliest of the enterprise software plays to announce very explicit pricing around AI and its Firefly product. The reality is that this isn’t feeding through into growth rates yet – which is why we have continued to reduce our conviction and position.
  • Salesforce (owned) held its World Tour this week which included expanded capabilities of its AI Einstein and Data Cloud businesses – Einstein will reach general availability in February. 

Portfolio view: The question for us in software and AI has been when and if AI starts to be a meaningful revenue generator for software companies, rather than the current cost of doing business. The bull case is that it creates more durable growth rates in software, the bear case is that it only hurts margins – all of these businesses will likely need to either increase capex to build out their own infrastructure, or buy AI compute capacity from cloud providers, which we’ll likely see impact their gross margin. 

The added variable for Adobe is its Figma acquisition. Adobe is still trying to get the deal through the US and European regulators and, while we hated the price paid, we think that without Figma, Adobe might still be at risk of abstraction/commoditisation in an “AI agent” world.

We still think Microsoft will show the most meaningful and immediate revenue opportunity – some news this week that the Swedish municipality of Uddevalla (a town of 35,000 people) will spend $100k per year on Microsoft’s Copilot. Quite extraordinary.

Google loses its Epic battle 

  • Google lost its battle with Epic this week. The details of the case are pretty damning for Google – the core comes down to Google making some very large and very opaque deals with games developers (paying them off not to launch on competing app stores) and OEMs (paying them off to stop shipping devices with alternative stores) in order to effectively maintain its monopoly power.
  • The most asked question was why Apple won its antitrust case and Google lost, when on the surface they seem to be very similar cases. 
  • The key is that Apple is fundamentally different in its business model – it is an integrated hardware and software device manufacturer which inherently controls its own after-market (of which the App Store is part).
  • What happens next? The judge will come up with remedies and Google will appeal – so, in honesty, no big changes. Still, it isn’t insignificant that this is the first antitrust loss for any of the big tech players. The reality, though, is that regulators have historically had little impact on the monopoly power of big tech – and we’re not exactly sure this will be any different. Ultimately, customer habit and convenience are in Google’s favour, even if Epic is able to launch its own app store.

Portfolio view: We own Alphabet, and though this is certainly an additional datapoint to consider when we think about the regulatory tail risk (for all of big tech) it’s not enough for us to meaningfully change our long term returns expectations and we haven’t made any changes to our position in the portfolio. 

Netflix and content arms dealers 

  • Netflix released an 18,000-row Excel file showing hours watched for each of its titles. It’s the first time it has released that level of detail around viewer engagement.
  • The streaming industry has been notoriously opaque in its disclosure (which was one of the features of the recent actors’ strikes).
  • Much more than that though, there were clearly some key takeaways Netflix wanted to get across to its investors – and perhaps more importantly – to its competitors…
  • Unsurprisingly, the 80/20 rule around the most popular shows holds true – 15% of titles represented 80% of view time.
  • 55% of viewing was original content – so, importantly, close to half of viewing is licensed content. That follows last quarter’s earnings-call focus on Suits and the success Netflix has had in generating a “new hit” from an effectively fully depreciated show.
  • That is a very effective way to persuade content owners to licence their content to Netflix – in a world where content owners like Disney, Paramount, Universal and Comcast are struggling to get to profitability and scale in streaming – are they forced into a corner where the most obvious thing to do is to licence content to their biggest competitor?

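The 80/20 concentration stat above is easy to reproduce from a title-level hours file like the one Netflix released. A minimal sketch – the function is ours and the sample data is entirely hypothetical, not taken from the actual report:

```python
# Hedged sketch: measuring how concentrated view time is across titles.
# The data below is a made-up toy example, not Netflix's real numbers.

def top_share(hours, top_frac):
    """Share of total view time captured by the top `top_frac` of titles."""
    ranked = sorted(hours, reverse=True)           # biggest titles first
    k = max(1, int(len(ranked) * top_frac))        # how many titles count as "top"
    return sum(ranked[:k]) / sum(ranked)

# Toy data: a few "hit" titles plus a long tail of small ones (20 titles).
viewing_hours = [9000, 5000, 3000] + [100] * 17

share = top_share(viewing_hours, 0.15)  # top 15% of titles = top 3 here
print(f"Top 15% of titles account for {share:.0%} of view time")
```

Run against the real 18,000-row file, the same calculation is what produces the “15% of titles = 80% of view time” kind of figure.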
Portfolio view: there are still plenty of bear and bull arguments out there for Netflix (we don’t own any streaming players, given we’ve struggled a bit with the long-term streaming trajectory) – but this release certainly adds credence to the bull case of Netflix as the single powerful buyer in a world where all the content owners turn into arms dealers (and – perhaps ultimately – shut down their own returns-dilutive streaming businesses…)

Power hungry AI drives the need for node transition

  • Microsoft is training an AI to do the paperwork for setting up nuclear power plants – which it needs to power its electricity-hungry AI chips…
  • A bit of fun, but it also brings into focus the importance of chips’ power efficiency. This week was also IEDM (the International Electron Devices Meeting), where many of the semis companies were presenting. TSMC in particular was out detailing its 2nm node and, for the first time, talking about its A14 (1.4nm) node.
  • The important point, as it relates back to Microsoft’s power problem, is that, while node-on-node scaling is clearly costly, the critical benefit – perhaps even more so than raw performance – is the improvement in performance per watt it brings.

Portfolio view: there is a bear argument around chips that Moore’s law is slowing and node transitions are becoming too costly to make sense – ASML’s High NA tools are ~€250m each – does it become like Concorde? The best, but too expensive? The clear counter is that node transitions – however expensive – are a necessity, given power consumption and availability.

For enquiries, please contact:
Inge Heydorn, Partner, at
Jenny Hardy, Portfolio Manager, at
Nejla-Selma Salkovic, Analyst, at

About GP Bullhound
GP Bullhound is a leading technology advisory and investment firm, providing transaction advice and capital to the world’s best entrepreneurs and founders. Founded in 1999 in London and Menlo Park, the firm today has 14 offices spanning Europe, the US and Asia.
