WikiChip Fuse
Ayar Labs Realizes Co-Packaged Silicon Photonics

Integrated photonics has long been considered a holy grail for communication. Ayar Labs' TeraPHY chiplet represents a major step forward by co-packaging the optical interface with an SoC.

A Look At The Habana Inference And Training Neural Processors

A look at the Habana inference and training neural processors designed to accelerate data center workloads.

A Look at Cerebras Wafer-Scale Engine: Half Square Foot Silicon Chip

A look at the Cerebras Wafer-Scale Engine (WSE), a chip the size of an entire wafer, packing over 400K tiny AI cores and 1.2 trillion transistors onto half a square foot of silicon.
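
For a rough sense of scale, the sketch below works those headline figures into per-core averages; the square-foot conversion is standard, but the per-core numbers are back-of-the-envelope arithmetic, not values quoted from the article.

```python
# Back-of-the-envelope arithmetic on the WSE headline figures above.
# The area conversion is exact; the per-core numbers are rough averages,
# not quoted specifications.
die_area_mm2 = 0.5 * 92_903      # half a square foot (1 ft^2 = 92,903 mm^2)
cores = 400_000                  # "over 400K tiny AI cores"
transistors = 1.2e12             # "1.2 trillion transistors"

print(f"die area             ~{die_area_mm2:,.0f} mm^2")         # ~46,452 mm^2
print(f"area per core        ~{die_area_mm2 / cores:.2f} mm^2")  # ~0.12 mm^2
print(f"transistors per core ~{transistors / cores:,.0f}")       # ~3,000,000
```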

A Look at Spring Crest: Intel Next-Generation DC Training Neural Processor

A look at the microarchitecture of Intel's Nervana next-generation data center training neural processor, codenamed Spring Crest.

IBM Adds POWER9 AIO, Pushes for an Open Memory-Agnostic Interface

IBM adds a third variant of POWER9, the POWER9 Advanced I/O (AIO) processor, which incorporates the Open Memory Interface (OMI), a new open, memory-agnostic interface.

Intel Spring Hill: Morphing Ice Lake SoC Into A Power-Efficient Data Center Inference Accelerator

First detailed at Hot Chips 31, Intel Spring Hill morphs the Ice Lake SoC into a highly power-efficient data center inference accelerator.

Inside Tesla’s Neural Processor In The FSD Chip

A deep dive, based on Tesla's Hot Chips 31 talk, into the custom-designed neural processing units integrated inside the company's full self-driving (FSD) chip.

A Look at NEC’s Latest Vector Processor, the SX-Aurora

A look at the NEC SX-Aurora, the company's latest vector processor, which increases compute while maintaining a high bytes-per-flop (B/F) ratio through six HBM2 modules that leverage TSMC's 2nd-generation CoWoS technology. The SX-Aurora introduces a new form factor, system architecture, and execution model.
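
For context, B/F is simply the ratio of memory bandwidth (bytes per second) to compute throughput (floating-point operations per second). A minimal sketch of the calculation follows, using illustrative placeholder figures rather than specifications taken from the article.

```python
# Minimal sketch of the bytes-per-flop (B/F) ratio mentioned above.
# Both figures are illustrative assumptions, not quoted SX-Aurora specs.
mem_bandwidth_Bps = 1.2e12   # assumed aggregate HBM2 bandwidth, bytes/s
peak_flops = 2.4e12          # assumed peak FP64 throughput, FLOP/s

bf = mem_bandwidth_Bps / peak_flops
print(f"B/F = {bf:.2f}")     # ~0.50 bytes of bandwidth per floating-point op
```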

Intel Cascade Lake Brings Hardware Mitigations, AI Acceleration, SCM Support

A look at Cascade Lake, Intel's microarchitecture for next-generation Xeon microprocessors, featuring process enhancements, persistent memory support, and AI acceleration.

POWER9 Scales Up To 1.2 TB/s of I/O, Targets NVLink 3, OpenCAPI Memory for 2019

A look at the IBM POWER9 scale-up design recently disclosed at Hot Chips 30, along with IBM's plans for a third POWER9 derivative in 2019.
