WikiChip Fuse
Arm Launches the Cortex-M55 and Its MicroNPU Companion, the Ethos-U55

Arm launches two new IPs for deeply embedded AI: the Cortex-M55 with the new M-Profile Vector Extension (Helium), and the Ethos-U55, an ultra-low-power dedicated NPU for embedded applications.

Arm Ethos Is for Ubiquitous AI at the Edge

Arm’s Ethos family takes aim at ubiquitous AI with NPUs spanning ultra-low-power IoT devices to high-performance smartphones and AR/VR.

Centaur’s New x86 Server Processor Packs an AI Punch

A look at Centaur’s new server-class x86 SoC with an integrated neural processor.

A Look At Celerity’s Second-Gen 496-Core RISC-V Mesh NoC

A look at the 496-core manycore array, the network-on-chip, and the digital PLL of the Celerity open-source RISC-V tiered accelerator.

A Look At The Habana Inference And Training Neural Processors

A look at the Habana inference and training neural processors designed for the acceleration of data center workloads.

Centaur Unveils Its New Server-Class x86 Core: CNS; Adds AVX-512

Centaur lifts the veil on CNS, its next-generation x86 core for data center and edge computing. The core improves performance in many areas and adds support for the AVX-512 extension.

Intel Starts Shipping Initial Nervana NNP Lineup

Intel starts shipping its initial Nervana NNP lineup for both inference and training acceleration, with four models in three different form factors.

Groq Tensor Streaming Processor Delivers 1 PetaOPS of Compute

AI startup Groq makes an initial disclosure of its Tensor Streaming Processor (TSP), a single chip capable of 1 petaOPS or 250 teraFLOPS of compute.

Intel Announces Keem Bay: 3rd Generation Movidius VPU

Intel announces Keem Bay, its 3rd-generation Movidius VPU edge inference processor.

Intel Spring Hill: Morphing Ice Lake SoC Into A Power-Efficient Data Center Inference Accelerator

First detailed at Hot Chips 31, Intel Spring Hill morphs the Ice Lake SoC into a highly power-efficient data center inference accelerator.
