On February 24 (local time), MatX, an AI chip company focused on large language model (LLM) workloads, announced the close of a $500 million (approximately RMB 3.445 billion) Series B funding round. Backers include AIchip and Marvell.

Founded by two former Google TPU engineers, MatX is developing a chip called MatX One. The chip uses a scalable systolic array architecture intended to combine the low-latency advantage of SRAM-based designs with the long-context capacity of HBM-based ones, targeting industry-leading LLM throughput at low latency. MatX One is positioned to serve training, prefill, inference decoding, and reinforcement learning workloads, and promises to significantly reduce the cost of running LLMs.
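MatX has not published architectural details, but the systolic array it cites is a well-known design (also used in Google's TPUs): a grid of processing elements through which operands flow in lockstep, each element multiplying and accumulating as data passes by. As a rough, hypothetical illustration of the general idea only, here is a minimal cycle-by-cycle simulation of an output-stationary systolic matrix multiply; all names and the timing scheme are this sketch's assumptions, not MatX's design:

```python
import numpy as np

def systolic_matmul(A, B):
    """Simulate an output-stationary systolic array computing C = A @ B.

    PE (i, j) holds accumulator C[i, j]. Rows of A stream in from the
    left and columns of B from the top, each skewed by one cycle per
    row/column, so the k-th operand pair reaches PE (i, j) at cycle
    t = i + j + k. This is a timing sketch, not any vendor's design.
    """
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    # Last operand pair reaches PE (n-1, m-1) at cycle (n-1)+(m-1)+(k-1).
    for t in range(n + m + k - 2):
        for i in range(n):
            for j in range(m):
                step = t - i - j  # index of the operand pair arriving now
                if 0 <= step < k:
                    C[i, j] += A[i, step] * B[step, j]
    return C
```

The skewed arrival times are the point of the design: every processing element does one multiply-accumulate per cycle with only nearest-neighbor data movement, which is what makes the structure scale well in hardware.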
