A Look at Cerebras Wafer-Scale Engine: Half Square Foot Silicon Chip
A look at Cerebras Wafer-Scale Engine (WSE), a chip the size of a wafer, packing over 400K tiny AI cores using 1.2 trillion transistors on a half square foot of silicon.
AI startup Groq makes an initial disclosure of its Tensor Streaming Processor (TSP), a single chip capable of 1 petaOPS or 250 teraFLOPS of compute.
Intel announces Keem Bay, its 3rd-generation Movidius VPU edge inference processor.
A look at the microarchitecture of Intel's Nervana next-generation data center training neural processor, codenamed Spring Crest.
First detailed at Hot Chips 31, Intel Spring Hill morphs the Ice Lake SoC into a highly power-efficient data center inference accelerator.
A look at the IPU architecture of analog AI startup Mythic, which attempts to significantly reduce power consumption by computing directly in analog inside flash memory.
Alibaba launches its own homegrown inference accelerator for its own cloud.
A deep dive into the custom-designed Tesla neural processing units integrated into the company's full self-driving (FSD) chip, based on Tesla's Hot Chips 31 talk.
Intel announces Pohoiki Beach and Pohoiki Springs, two new research neuromorphic systems that scale from 64 to 768 Loihi chips, with 8 million to 100 million neurons per system.
An initial look at Intel's upcoming Nervana Neural Network Processor (NNP) accelerators.