KINGS LANGLEY, U.K. — British IP provider Imagination Technologies has secured $100 million in funding through a convertible term loan from Fortress Investment Group. Imagination will use the funds to support the development of its technologies for graphics, compute and AI at the edge, as well as to fund its ambitious growth objectives.
Imagination was established in the U.K. almost 40 years ago, and today its IP can be found in 13 billion devices around the world. In recent years, the company has been through a turbulent period: it was bought by Chinese-backed private equity firm Canyon Bridge in November 2017 after losing its then-biggest customer, Apple, and sold MIPS, which it had recently acquired, around the same time. The company has been working to turn itself around since then, and while the process is ongoing with a new leadership team in place, there have been hiccups along the way.
“The China market has been a bit more challenging than we’d have thought, because of the export control regime here in the U.K. and also the entity list in China that has had an impact on us and our revenues,” Imagination CEO Simon Beresford-Wylie told EE Times in an exclusive interview. “The consequence was some downsizing we did last year to get costs in sync with revenue.”
China represented a third of Imagination’s business in 2019, then-CEO Ron Black told EE Times at the time. Alongside a tougher regulatory environment, China’s Covid-19 response weighed on the market there, and Chinese customers went into “cash preservation mode,” Beresford-Wylie said. Together, these factors cost Imagination some of its key Chinese customers in the data center and desktop segments. This led to flat revenues, Beresford-Wylie said, which in turn led to a 30% cut in Imagination’s headcount at the end of 2023, a move he described as “painful.”
“That brings pain to everybody, particularly those who have left us, and for those still here, there’s a sense of grieving that happens,” Beresford-Wylie said.
Six months on, the company has had a good first half of the year, he said, with more clarity on the regulatory environment regarding China than in the last few years, and several blue-chip deals “close to closing.” Overall, Beresford-Wylie said he is pleased with how the company has raised its profile since what he calls the “trauma” of 2016-2020; it is now stable and attracting customers, including hyperscalers on both sides of the Pacific.
As part of a multi-year “reboot” of the company, Imagination has hired a new leadership team, including new chief of innovation and engineering, Tim Mamtora.
“[Mamtora] has worked on fundamental processes and getting the team in place and we’ve invested in infrastructure to help, otherwise there’d have been an impact on our ability to scale,” Beresford-Wylie said. “Our product team has also done a great job sharpening our segment focus and what that means in terms of product flow and product families.”
While the reboot has been drastic, some things have not changed.
“We are still laser-focused on three segments: automotive, data center and desktop, with mobile/consumer important enough now for us to think of that as a fourth segment,” Beresford-Wylie said. “From a strategy perspective, there is no pivot. We are on the same path [that we were in 2021], but what’s changed since then is the importance of AI.”
Imagination has also “rebooted” its AI technology strategy in the last 18 months. The company had three generations of standalone neural network accelerator IP under its belt, but has discontinued development on this line.
“Like pretty much every other company, we saw convolutional neural network accelerators and thought, ‘Yeah, we should do one of those,’” Mamtora told EE Times. “And then you start developing hardware which is fantastic at MatMul operations, and then you find, oh, what’s the software stack to go with it? How do we expose that to our customers and developers?”
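The MatMul focus Mamtora describes reflects how convolutions are typically lowered to hardware: a 2D convolution can be rewritten as a single matrix multiply via the im2col transform, so an accelerator that is fast at MatMul covers the dominant CNN operation. A minimal pure-Python sketch of the idea (illustrative only, not Imagination’s implementation):

```python
def conv2d_direct(img, kern):
    """Direct 2D valid convolution (cross-correlation, as used in CNNs)."""
    H, W = len(img), len(img[0])
    kh, kw = len(kern), len(kern[0])
    out = [[0] * (W - kw + 1) for _ in range(H - kh + 1)]
    for i in range(H - kh + 1):
        for j in range(W - kw + 1):
            out[i][j] = sum(img[i + di][j + dj] * kern[di][dj]
                            for di in range(kh) for dj in range(kw))
    return out

def conv2d_as_matmul(img, kern):
    """Same result via im2col: unfold patches into rows, then one MatMul."""
    H, W = len(img), len(img[0])
    kh, kw = len(kern), len(kern[0])
    oh, ow = H - kh + 1, W - kw + 1
    # im2col: each output position becomes one row of flattened patch values.
    patches = [[img[i + di][j + dj] for di in range(kh) for dj in range(kw)]
               for i in range(oh) for j in range(ow)]
    kvec = [kern[di][dj] for di in range(kh) for dj in range(kw)]
    # Matrix multiply of the (oh*ow, kh*kw) patch matrix by the kernel vector.
    flat = [sum(p * k for p, k in zip(row, kvec)) for row in patches]
    return [flat[r * ow:(r + 1) * ow] for r in range(oh)]
```

Both routes produce identical outputs; the im2col route is what lets one dense MatMul engine serve every convolutional layer.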
This was difficult to develop as an IP supplier, Mamtora said, especially considering the rate of evolution in AI models and frameworks and the fragmented state of AI software infrastructure. It was primarily the challenges of developing an AI software stack that led to the decision to stop developing Imagination’s AI accelerator.
“Many [customers] said, ‘You’ve shown me MobileNet or ResNet, great, here’s something we developed in-house, get that running!’” Mamtora said. “Then we’d be in a race with various competitors for two or three weeks with our team taking something as immature as a research paper or home-grown network, trying to get it working on hardware.”
Migrating and optimizing for end customers’ algorithms and supporting them over the product’s lifetime became very challenging. Imagination’s customers wanted to customize software for their own customers, and they wanted to push support for that back onto Imagination.
“It became a huge software task, which morphed from hardware we could develop in-house to having to have a big team of software engineers to support an ever-evolving and changing software stack, with customers needing so many levels of support,” Mamtora said. “We didn’t see that as sustainable.”
Imagination’s new AI strategy is focused on bringing edge AI to the edge GPU. Imagination believes it has already solved key AI challenges, like power efficiency and efficient data movement, in its GPU IP lines. Technology developed for the accelerator, including its graph compiler, could be reused in its GPU stack.
“There are two ways to approach [AI],” Mamtora said. “Either take something NPU-like and make it more general, sacrificing some PPA, or start with a GPU, which is inherently flexible and then add to its PPA by adding dense, low-precision compute capability.”
The key, he said, is getting the right balance between flexible and fixed-function compute in a solution software developers can access. GPUs are already efficient parallel processors, and offer efficient orchestration of workloads and efficient use of memory, he argued, highlighting efficient memory use as particularly important—since memory does not scale like logic does at the most advanced process nodes.
“The way we’ve developed our GPU gives you the right foundational building blocks,” Mamtora said. “If you start with an NPU, you start with something PPA-efficient, then grow to address flexibility. Whereas with a GPU, you’ve solved the scheduling problem, you’ve thought about the memory problem in a mobile-heritage GPU, and you’ve already got some efficient flexible compute. The fixed function compute is actually the easy thing to [add]. It’s all the other stuff around managing the movement of data, and managing power effectively that’s hard, and GPUs do that quite well already.”
GPUs suit all but the smallest edge systems—those with an application processor and above—and Imagination customers in this space typically already have an on-chip GPU for graphics workloads, enabling flexibility and asset reuse. AI and graphics are also not readily separable, with increasing use of AI techniques in graphics workloads.
And then there’s software. In contrast to the fragmented AI software ecosystem, GPU software stacks are already mature. Imagination also works with the UXL Foundation on SYCL.
“The key problem we’re trying to solve is to develop something to go up against [Nvidia’s] CUDA,” Mamtora said. “Developers want [their applications] running on a platform, they just want to translate it and for it to work—forget performance to start with, it just has to work. The fundamental problem we had with our AI accelerator was getting things running in the first place.”
This is where UXL/SYCL comes in. Developers use SYCLomatic to convert CUDA kernels into SYCL code that can run on different hardware.
“Then we need to think about performance profiling and how to make it work really well, and that’s where we can focus our investment,” Mamtora said. “Rather than focusing on all the layers of the stack to get down to something that’s running on our hardware, we focus on the smarts that get you to 70% plus utilization of our ALU cores and make maximum use of our memory bandwidth. That allows us to focus on where we really do differentiate—between graph level and GPU.”
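Mamtora’s framing, pairing ALU utilization with memory bandwidth, maps onto the standard roofline model: a kernel’s attainable throughput is capped either by peak compute or by bandwidth times arithmetic intensity. A back-of-the-envelope sketch with hypothetical device numbers (the figures below are illustrative placeholders, not Imagination specs):

```python
# Roofline-style check: is a kernel compute-bound or memory-bound?
# All device numbers below are illustrative placeholders, not vendor specs.

def attainable_gflops(flops, bytes_moved, peak_gflops, bw_gbs):
    """Attainable throughput = min(peak compute, bandwidth * arithmetic intensity)."""
    intensity = flops / bytes_moved          # FLOPs per byte moved from memory
    return min(peak_gflops, bw_gbs * intensity)

# Hypothetical edge GPU: 2,000 GFLOP/s peak, 50 GB/s DRAM bandwidth.
PEAK, BW = 2000.0, 50.0

# Matmul of two 1024x1024 fp32 matrices: 2*N^3 FLOPs, 3*N^2*4 bytes
# (assuming each matrix crosses the memory bus once, i.e. good on-chip reuse).
N = 1024
mm = attainable_gflops(2 * N**3, 3 * N * N * 4, PEAK, BW)   # compute-bound

# Elementwise add of the same matrices: N^2 FLOPs, 3*N^2*4 bytes.
ew = attainable_gflops(N * N, 3 * N * N * 4, PEAK, BW)      # memory-bound
```

In this sketch the matmul hits the compute ceiling while the elementwise op is pinned far below it by bandwidth, which is why tuning effort concentrates on data movement rather than raw ALU count.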
Standardizing some of the higher layers, and sharing development between dozens of companies, make software a more practical problem to solve, Mamtora said.
The problem of having to support customers’ in-house algorithms is less acute with GPUs, since the most sophisticated customers can program them down to the metal if required. Offering that level of access to the AI accelerator was difficult because the space was still emerging when it was being developed. The maturation of MLIR, the multi-level compiler intermediate representation now widely used in AI compilers, and some consolidation around PyTorch as a framework now allow the AI software stack to be broken into cleaner abstractions.
If AI software infrastructure develops sufficiently, would Imagination consider making another dedicated AI accelerator further down the line?
“We’re building some quite focused AI acceleration within a GPU wrapper,” Mamtora said. “We could choose to develop that further, or take bits of the GPU out, it depends how things evolve. We’ve got all the bits and pieces. But as it stands today, so much of what you have to solve is GPU-like that [an AI accelerator] would seem like the wrong direction.”
From EE Times