THE LATEST NEWS
Wave Computing Chooses MIPS 64-bit RISC

PARIS — MIPS, a storied but beleaguered RISC processor core company, is coming back to life. Breathing new life into MIPS are a new customer — Wave Computing — and a number of existing clients, including Intel/Mobileye, NetSpeed, Fungible, Thinci and Denso. All have pledged to use MIPS' 64-bit multithreaded processor core to handle device management and control functions inside their respective AI processors, many of which are either in development or ready for rollout.

Wave Computing is the designer of a massively parallel dataflow architecture for deep learning called the Dataflow Processing Unit (DPU). The company, which plans to roll out a beta system built on its first-generation processor in the next few weeks, has decided to use a MIPS 64-bit CPU in its second-generation DPU, Wave Computing CEO Derek Meyer, a MIPS veteran, told EE Times.

In the first-generation DPU, Wave Computing used a 32-bit RISC processor core developed by Taiwan’s Andes Technology Corp. Replacing it with a 64-bit RISC processor was Wave Computing’s plan all along, said Meyer. The question, however, was which 64-bit RISC core to choose. “Obviously, during our research, we looked at RISC-V, and a whole bunch of others,” Meyer said. 

But when the issue comes down to a “RISC processor with hardware multi-threading architecture and cache coherence,” Meyer said, “MIPS is the only one. There are no other RISC processors that can do that today.”

MIPS' future looked uncertain after the company was acquired by Imagination Technologies in 2013; under the new management it was widely seen as having lost focus and momentum, and choosing MIPS became too risky a bet by many SoC companies' standards. That changed when Tallwood Venture Capital bought MIPS late last fall. The deal brought Dado Banatao, Tallwood's managing partner, into MIPS as chairman of the board.

“With Dado [Banatao] heading the company, we see the stability is coming back to MIPS,” said Meyer. “I’ve always loved MIPS and love it more with Dado involved. He’s a real visionary.” Banatao is an investor in both MIPS and Wave Computing.

Why multithreading and cache coherence are important
Wave Computing's Meyer sees MIPS' multithreading technology as a key reason his team chose MIPS. In Wave Computing's dataflow processing, "when we load, unload and reload data for machine learning agent, hardware multithreading architecture is effective," said Meyer.

Kevin Krewell, principal analyst at Tirias Research, told us, “Multithreading is a way to efficiently add many threads with a smaller number of cores. It's very effective for workloads that have a lot of short tasks.”
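Krewell's point — a few cores efficiently servicing many short, wait-heavy tasks — has a loose software analogy in a thread pool that interleaves tasks while they stall. The sketch below is purely illustrative (it is not Wave's or MIPS' code, and the task sizes are assumptions): while one task sleeps, as a hardware thread would during a memory stall, another runs in its place.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def short_task(i):
    # Simulate a short task that mostly waits (a stand-in for a memory
    # stall or I/O wait); while it sleeps, other threads can run.
    time.sleep(0.01)
    return i * 2

# Many short tasks, few workers: the pool interleaves waiting tasks,
# much as hardware multithreading interleaves stalled threads on a core.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(short_task, range(64)))
elapsed = time.perf_counter() - start

# Run serially, 64 tasks x 10 ms would take ~0.64 s; interleaved across
# 8 workers the wall-clock time is far shorter.
print(len(results), elapsed < 0.64)
```

The analogy is inexact — hardware multithreading switches threads in a core's pipeline within cycles, not via an OS scheduler — but the efficiency argument is the same: overlap the waits so the cores stay busy.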

Cache coherence is another plus Wave's team sees in MIPS. "Because our DPU is 64-bit, it only makes sense both MIPS and DPU talk to the same memory in 64-bit address space," said Meyer.

Paul Teich, principal analyst at Tirias Research, explained, "Cache coherence means that the results of a convolution are available to all other threads on a chip." He noted, "As a layer of neurons in a model is processed, larger on-chip caches mean more of the layer can stay resident in cache, and maybe even multiple layers. That means fewer latency-inducing accesses to system memory and better performance."
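Teich's cache-residency argument comes down to simple arithmetic: does a layer's activation data fit in the on-chip cache? The check below uses entirely illustrative numbers (the 2 MB cache size, the 56×56×256 layer shape, and fp16 activations are assumptions, not figures for any specific chip):

```python
# Back-of-envelope check: does one layer's activation map stay resident
# in on-chip cache? All sizes here are illustrative assumptions.

def layer_bytes(h, w, channels, bytes_per_elem=2):
    # fp16 activations -> 2 bytes per element
    return h * w * channels * bytes_per_elem

cache_bytes = 2 * 1024 * 1024        # assume a 2 MB shared on-chip cache
layer = layer_bytes(56, 56, 256)     # a mid-network conv layer's output

fits = layer <= cache_bytes
print(layer, fits)                   # ~1.53 MB, so it fits in 2 MB
```

When the layer fits, every thread can read the convolution's results from cache without a round trip to system memory — which is exactly the latency saving Teich describes.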

Still early in AI market growth
While MIPS rattled off a host of AI processor companies who have adopted MIPS, the AI market is still in an early phase.

Teich told us he sees several different AI accelerator camps. First, there is the GPU gang consisting of Nvidia and AMD, plus Arm, Qualcomm and others for mobile. Krewell added, "Nvidia rules the market from here."

Then there is an FPGA posse, including Intel and Xilinx.

There is also a DSP camp consisting of Qualcomm, Ceva and a few others.

Finally, there is a camp working on new architectures, said Teich. This group includes Arm, Fungible, Mobileye, Thinci, Wave Computing, and others. Google's TPU is a member, Teich added.

When all is said and done, Teich concluded, Tirias Research believes AI will contribute to most workloads in the future.

“The industry is just at the start of this ride, so there is plenty of upside for the foreseeable future," he said. "It's unlikely Nvidia’s competitors will impede its current growth, but we're still early in AI market growth.”

Teich added, "It is not a zero-sum game." There will be plenty of market opportunities, and a lot of experimentation will be going on. "MIPS can benefit from that," Teich said.

