WikiChip Fuse
AI
Intel Starts Shipping Initial Nervana NNP Lineup

Intel starts shipping its initial Nervana NNP lineup for both inference and training acceleration, with four models across three different form factors.

Japanese AI Startup Preferred Networks Designed A Custom Half-petaFLOPS Training Chip

Japanese AI startup Preferred Networks has been working on a custom training chip with a peak performance of half a petaFLOPS, as well as a supercomputer with a peak performance of 2 exaFLOPS of half-precision (HP) compute.

A Look at Cerebras Wafer-Scale Engine: Half Square Foot Silicon Chip

A look at the Cerebras Wafer-Scale Engine (WSE), a chip the size of a wafer, packing over 400,000 tiny AI cores and 1.2 trillion transistors on half a square foot of silicon.

Groq Tensor Streaming Processor Delivers 1 PetaOPS of Compute

AI startup Groq makes an initial disclosure of its Tensor Streaming Processor (TSP), a single chip capable of 1 petaOPS or 250 teraFLOPS of compute.

Intel Announces Keem Bay: 3rd Generation Movidius VPU

Intel announces Keem Bay, its 3rd-generation Movidius VPU edge inference processor.

A Look at Spring Crest: Intel's Next-Generation DC Training Neural Processor

A look at the microarchitecture of Intel's next-generation Nervana data center training neural processor, codenamed Spring Crest.

Intel Spring Hill: Morphing Ice Lake SoC Into A Power-Efficient Data Center Inference Accelerator

First detailed at Hot Chips 31, Intel Spring Hill morphs the Ice Lake SoC into a highly power-efficient data center inference accelerator.

Analog AI Startup Mythic To Compute And Scale In Flash

A look at the IPU architecture of analog AI startup Mythic, which aims to significantly reduce power consumption by computing directly in analog inside flash memory.

Alibaba Launches DC Inference Accelerators

Alibaba launches its homegrown inference accelerator for its own cloud.

Inside Tesla’s Neural Processor In The FSD Chip

A deep dive into the custom-designed Tesla neural processing units integrated inside the company's full self-driving (FSD) chip, based on Tesla's Hot Chips 31 talk.
