WikiChip Fuse
Japanese AI Startup Preferred Networks Designed A Custom Half-petaFLOPS Training Chip

Japanese AI startup Preferred Networks has been working on a custom training chip with a peak performance of half a petaFLOPS, as well as a supercomputer with a peak performance of 2 exaFLOPS (half-precision).

Samsung M5 Core Details Show Up

Samsung details the high-level changes to the Exynos M5 core found in the Exynos 990.

SC19: Aurora Supercomputer To Feature Intel's First Exascale Xe GPGPU, 7nm Ponte Vecchio

Intel unveils the node architecture of the Aurora supercomputer; the system will feature Intel's first Xe GPGPU for HPC, the 7nm Ponte Vecchio.

A Look at the Cerebras Wafer-Scale Engine: Half-Square-Foot Silicon Chip

A look at the Cerebras Wafer-Scale Engine (WSE), a chip the size of an entire wafer, packing over 400,000 tiny AI cores and 1.2 trillion transistors onto half a square foot of silicon.

Groq Tensor Streaming Processor Delivers 1 PetaOPS of Compute

AI startup Groq makes an initial disclosure of its Tensor Streaming Processor (TSP), a single chip capable of 1 petaOPS or 250 teraFLOPS of compute.

A Look at Spring Crest: Intel's Next-Generation DC Training Neural Processor

A look at the microarchitecture of Intel's Nervana next-generation data center training neural processor, codenamed Spring Crest.

IBM Adds POWER9 AIO, Pushes for an Open Memory-Agnostic Interface

IBM adds a third variant of POWER9, the POWER9 Advanced I/O (AIO) processor which incorporates the Open Memory Interface (OMI), a new open memory-agnostic interface.

Intel Unveils the Tremont Microarchitecture: Going After Single-Thread Performance

Intel unveils the Tremont microarchitecture, its next-generation low-power small x86 core.

Intel Spring Hill: Morphing Ice Lake SoC Into A Power-Efficient Data Center Inference Accelerator

First detailed at Hot Chips 31, Intel's Spring Hill morphs the Ice Lake SoC into a highly power-efficient data center inference accelerator.

Analog AI Startup Mythic To Compute And Scale In Flash

A look at the IPU architecture of analog AI startup Mythic, which attempts to significantly reduce power consumption by computing in analog directly in flash.
