WikiChip Fuse
Arm Launches the Cortex-M55 and Its MicroNPU Companion, the Ethos-U55

Arm launches two new IPs for deeply embedded AI: the Cortex-M55 with the new M-Profile Vector Extension (Helium), and the Ethos-U55, an ultra-low-power dedicated NPU for embedded applications.

Arm Ethos is for Ubiquitous AI At the Edge

Arm’s Ethos family takes aim at ubiquitous AI with NPUs ranging from ultra-low-power IoT devices to high-performance smartphones and AR/VR.

Intel Axes Nervana Just Two Months After Launch

Intel axes Nervana in favor of Habana.

Centaur’s New x86 Server Processor Packs an AI Punch

A look at Centaur’s new server-class x86 SoC with an integrated neural processor.

A Look At Celerity’s Second-Gen 496-Core RISC-V Mesh NoC

A look at the 496-core RISC-V manycore array, network-on-chip, and the digital PLL of the Celerity open-source RISC-V tiered accelerator.

A Look At The Habana Inference And Training Neural Processors

A look at the Habana inference and training neural processors designed for the acceleration of data center workloads.

Centaur Unveils Its New Server-Class x86 Core: CNS; Adds AVX-512

Centaur lifts the veil on CNS, its next-generation x86 core for data center and edge computing. The core improves performance in many areas and adds support for the AVX-512 extension.

Intel Starts Shipping Initial Nervana NNP Lineup

Intel starts shipping its initial Nervana NNP lineup for both inference and training acceleration with four initial models in three different form factors.

Japanese AI Startup Preferred Networks Designed A Custom Half-petaFLOPS Training Chip

Japanese AI startup Preferred Networks has been working on a custom training chip with a peak performance of half a petaFLOPS, as well as a supercomputer with a peak performance of 2 exaFLOPS (half precision).

A Look at Cerebras Wafer-Scale Engine: Half Square Foot Silicon Chip

A look at the Cerebras Wafer-Scale Engine (WSE), a chip the size of a wafer, packing over 400K tiny AI cores and 1.2 trillion transistors on half a square foot of silicon.
