$676 million! SoftBank leads the round: what makes this cloud AI chip unicorn so impressive?


On April 14, SambaNova Systems, a star American AI chip unicorn, announced a $676 million (roughly 4.4 billion yuan) Series D round led by SoftBank Vision Fund 2, with Intel Capital, Google Ventures, Walden International, and others participating.

With this round, the company's valuation exceeds $5 billion (roughly 32.7 billion yuan), making it one of the world's most valuable artificial intelligence (AI) startups.

This AI chip startup, born out of Stanford University, has long been favored by well-known investors: Intel Capital, Google Ventures, Walden International, BlackRock, and Redline Capital have all participated in multiple rounds of its financing. To date, SambaNova Systems has raised a total of $1.132 billion.


Just last December, the startup launched its most high-profile product, the integrated software and hardware platform SambaNova DataScale, which offers terabyte-scale memory capacity and petaflops of compute over a low-latency interconnect, enabling it to handle large numbers of complex data models.

Argonne National Laboratory, the U.S. Department of Energy's National Nuclear Security Administration (NNSA), Lawrence Livermore National Laboratory (LLNL), and Los Alamos National Laboratory (LANL) have all applied DataScale to accelerate AI computing in their research.

  01.

Co-founded by an Oracle veteran and Stanford professors,

with Walden International's chairman as chairman of the board

Founded in 2017 by Oracle and Sun Microsystems veteran Rodrigo Liang and Stanford University professors Kunle Olukotun and Chris Ré, the Palo Alto, California-based startup focuses on building infrastructure for AI workloads, offering systems that run AI and data-intensive applications from the data center to the edge.

Liang is SambaNova's CEO, Olukotun is its chief technologist, and Lip-Bu Tan, founder and chairman of Walden International and CEO of EDA giant Cadence, serves as SambaNova's chairman.


▲ SambaNova's founders: Kunle Olukotun (left), Rodrigo Liang (center), and Chris Ré (right)

Olukotun, known as the “father of multicore processors,” recently received the IEEE Computer Society’s Harry H. Goode Memorial Award.

He directs the Stanford Hydra Chip Multiprocessor (CMP) research project, which developed a chip design that combines four processors and their caches with a shared L2 cache on a single chip.

Ré is an associate professor in Stanford University's Department of Computer Science and a member of the Stanford InfoLab, and a recipient of the MacArthur Fellowship, one of the most prestigious interdisciplinary awards in the United States. He is also affiliated with the Statistical Machine Learning Group, the Pervasive Parallelism Lab, and the Stanford AI Lab.


▲ SambaNova's board members

02.

A self-developed reconfigurable dataflow unit:

40 billion transistors on TSMC's 7nm process

SambaNova's AI chips and customer list remain largely under wraps, but the company has previously revealed that it is developing “software-defined” devices inspired by DARPA-funded research into efficient AI processing.

Through a combination of algorithmic optimization and custom hardware, SambaNova claims to significantly improve the performance and capabilities of most AI applications. Co-founder and CEO Rodrigo Liang has said that rather than adopting the mainstream Arm or x86 architectures, SambaNova developed its own chip architecture.

The Cardinal SN10 reconfigurable dataflow unit (RDU) developed by SambaNova contains 40 billion transistors, is built on TSMC's 7nm process, and consists of an array of reconfigurable compute, memory, and switching nodes.
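For readers unfamiliar with the dataflow idea, the toy Python sketch below illustrates the general principle: a computation is expressed as a graph of operations, and each operation fires as soon as its inputs are available, rather than following a sequential instruction stream. This is only a conceptual illustration of dataflow execution in general; it does not reflect SambaNova's actual RDU hardware or SambaFlow software, whose internals are not public.

```python
# Toy illustration of dataflow execution (NOT SambaNova's actual design):
# a computation is a graph of operations, and each operation "fires" once
# all of its inputs are available, instead of following an instruction stream.

class Node:
    def __init__(self, name, fn, inputs=()):
        self.name, self.fn, self.inputs = name, fn, list(inputs)
        self.value = None

    def ready(self):
        # A node is ready when every input node has produced a value.
        return all(dep.value is not None for dep in self.inputs)

    def fire(self):
        self.value = self.fn(*(dep.value for dep in self.inputs))


def run(graph):
    """Repeatedly fire any node whose inputs are ready until all have values."""
    pending = [n for n in graph if n.value is None]
    while pending:
        for node in [n for n in pending if n.ready()]:
            node.fire()
        pending = [n for n in pending if n.value is None]


# Example: compute (a + b) * (a - b) as a dataflow graph.
a = Node("a", lambda: 3.0)
b = Node("b", lambda: 2.0)
s = Node("sum", lambda x, y: x + y, [a, b])
d = Node("diff", lambda x, y: x - y, [a, b])
p = Node("prod", lambda x, y: x * y, [s, d])
run([a, b, s, d, p])
print(p.value)  # 5.0
```

The appeal of this model for AI hardware is that operators can be laid out spatially and data can stream between them with little central control overhead.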

Rodrigo Liang mentioned that, to head off any adverse effects of the chip shortage, SambaNova invested early last year to secure production capacity at TSMC.

Each Cardinal chip has six memory controllers supporting 153 GB/s of bandwidth; eight chips are connected in an all-to-all configuration through a switching network, which also lets the system scale to more chips.


SambaNova does not sell Cardinal on its own, but as part of a solution installed in the data center.

In December 2020, SambaNova Systems announced the general availability of SambaNova Systems DataScale, a fully integrated software and hardware platform built on the Reconfigurable Dataflow Architecture (RDA) and optimized for dataflow from algorithm to silicon. It comprises the SambaFlow software stack and eight Cardinal SN10 RDUs. Each RDU supports seamless parallel processing of large models, enabling businesses to bring new services and products to market faster than with today's state-of-the-art solutions.

The base unit offered by SambaNova, called the DataScale SN10-8R, pairs an AMD processor with eight Cardinal chips and 12 TB of DDR4 memory, or 1.5 TB per Cardinal chip.
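As a quick sanity check on the figures above, here is a minimal Python sketch; the chip count and memory size are taken directly from the article, and everything else is simple arithmetic:

```python
# Back-of-the-envelope check of the DataScale SN10-8R figures quoted above:
# 8 Cardinal chips in an all-to-all fabric sharing 12 TB of DDR4.

num_chips = 8
total_ddr4_tb = 12

# Memory attached per Cardinal chip: 12 TB / 8 chips = 1.5 TB, matching the text.
memory_per_chip_tb = total_ddr4_tb / num_chips
print(f"DDR4 per chip: {memory_per_chip_tb} TB")               # 1.5 TB

# An all-to-all configuration of n chips needs n*(n-1)/2 point-to-point links.
links = num_chips * (num_chips - 1) // 2
print(f"All-to-all links between {num_chips} chips: {links}")   # 28
```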


▲ SambaNova DataScale SN10-8R running workloads

03.

Benchmarking against NVIDIA:

record-breaking performance in four key areas

Designed for efficient deep learning inference and training, SambaNova Systems DataScale achieves record-breaking performance metrics at multi-rack scale in four key areas when compared with NVIDIA's latest flagship A100 GPUs:

(1) Performance: DLRM inference throughput is 7 times that of the A100, with latency only 1/7 as long; BERT-Large training is 1.4 times faster than on a DGX A100 system.

(2) Accuracy: Compared with the DGX A100 system, it reaches 90.23% accuracy on high-resolution computer vision; compared with the NVIDIA A100 GPU, its DLRM recommendation engine reaches 80.46% accuracy.

(3) Scale: Breaking BERT-Large training and accuracy records at multi-rack scale.

(4) Ease of use: From loading dock to data center, DataScale can be quickly integrated into any existing infrastructure and be running customer workloads within 45 minutes, and thousands of pre-trained Hugging Face Transformer models can be downloaded and run on DataScale at state-of-the-art accuracy within seconds, without code changes.


On the software side, SambaNova has its own graph optimizer and compiler, which customers using TensorFlow, PyTorch, and other machine learning frameworks can use to recompile their workloads for Cardinal.
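SambaFlow's own API is not described in the article, so the sketch below only shows the kind of standard PyTorch and Hugging Face Transformers code such a graph compiler would take as its starting point; the model name and sample sentence are arbitrary examples, and no SambaNova-specific calls are used.

```python
# A typical PyTorch + Hugging Face workload of the kind a graph compiler
# would recompile for custom hardware. Uses only public transformers/PyTorch
# APIs; it does not call SambaNova's SambaFlow stack.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"  # arbitrary example model, not tied to SambaNova
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

inputs = tokenizer("AI chips are in high demand.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state
print(hidden.shape)  # (batch, sequence_length, 768)
```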

SambaNova aims to support natural language processing, high-resolution computer vision, and recommendation models containing over 100 billion parameters, with a larger memory footprint and higher accuracy.

Along with the new SN10-8R product, SambaNova will offer two cloud-like service options:

The first is the SambaNova AI cloud platform, which allows research institutions to access and use SambaNova’s hardware resources for free;

The second is the industry’s first DataFlow as a Service, for business customers who want flexible cloud services without paying for hardware.

  04.

Products on the market for more than a year,

deployed at multiple U.S. national laboratories

Reportedly, the first-generation Cardinal chip was taped out in the spring of 2019, and the first chip samples have already been used in customer servers.

SambaNova's products have now been sold to customers for more than a year.

Argonne National Laboratory, the National Nuclear Security Administration (NNSA) under the U.S. Department of Energy, Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), and other research institutions have deployed the DataScale platform to accelerate AI computing in their research.

For example, LLNL is using DataScale in its Corona supercomputer to develop Covid-19 drugs.

Bronis de Supinski, LLNL's chief technology officer, said SambaNova's platform is being used to explore a technique called cognitive simulation, in which AI is used to speed up parts of the simulation processing. He claims this yields roughly a 5x performance improvement compared with running the same models on GPUs.

 05.

Behind SambaNova's financing:

AI chips have become a battleground among nations

An AI chip is hardware dedicated to accelerating AI applications; it improves the performance of large-scale AI algorithms through techniques such as low-precision computing and in-memory computing. According to a study by Statista, a global data platform, application-specific integrated circuits (ASICs) are expected to account for a growing share of inference-stage AI edge computing processing power, reaching 70 percent by 2025.
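As a concrete illustration of the "low-precision computing" mentioned above, here is a minimal, vendor-neutral sketch using PyTorch's built-in dynamic int8 quantization; it demonstrates the general technique, not any particular AI chip's implementation.

```python
# Minimal illustration of low-precision (int8) inference via PyTorch's
# dynamic quantization. A generic example of the technique, not specific
# to any vendor's AI chip.
import torch
import torch.nn as nn

fp32_model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
fp32_model.eval()

# Convert Linear layers to int8 weights; activations are quantized on the
# fly at inference time.
int8_model = torch.quantization.quantize_dynamic(
    fp32_model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
with torch.no_grad():
    print(fp32_model(x).shape, int8_model(x).shape)  # both torch.Size([1, 10])
```

Dropping from 32-bit floating point to 8-bit integers cuts memory traffic and lets hardware pack more multiply-accumulate units into the same silicon area, which is one reason dedicated AI chips lean on low precision.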

SambaNova's new financing is just one example: AI chip startups in multiple countries are raising money, and the AI chip market faces fierce competition. According to PR Newswire, the market is expected to reach $91.18 billion by 2025.

  

▲By 2025, ASIC will account for 70% of AI edge computing processing power in the inference stage (Source: Statista)

Recently, Chinese cloud AI chip startups have been riding a new wave of financing. Cloud chip startups such as Tianshu Zhixin, Biren Technology, Suiyuan Technology, and Muxi have announced large new rounds; Baidu's AI chip unit was recently valued at $2 billion after financing; and Cambricon (Zhongke Cambrian) last year became the first pure-play AI chip company listed on the STAR Market (Science and Technology Innovation Board).

In the race to develop and commercialize cloud and edge AI chips, several star startups in other countries have also drawn attention.

For example, Hailo, an Israeli AI chip startup that is developing hardware to speed up AI inference at the edge, received $60 million in venture capital in March 2020.

California-based startup Mythic has raised $85.2 million to develop a custom in-memory computing architecture.

British AI chip unicorn Graphcore is committed to developing large-scale IPU processor chips and systems to accelerate AI workloads. The company has hundreds of millions of dollars in capital reserves.

SambaNova is also one of the beneficiaries of the unprecedented and ongoing customer demand for AI chips.

A surge in demand for cars and electronics during the coronavirus pandemic has exacerbated a growing chip shortage. Meanwhile, U.S. President Joe Biden recently pledged $180 billion for R&D in advanced computing and semiconductor manufacturing, with AI and quantum computing placed at the center of the U.S. national science and technology strategy.

 06.

Conclusion: the AI chip market

enters a battle over deployment and ecosystems

Since 2020, AI chip financing events have been reported in many countries. Taking China's domestic market as an example, according to ICBC Investment Bank data, investment in the domestic AI chip field reached 5.857 billion yuan in 2019, while total AI chip financing from January to October 2020 had already exceeded that of all of 2019. Beyond the corporate level, many countries and regions have also moved at the policy level to support the growth of AI chip companies.

Behind all this, AI chip startups have moved past the early period of technological pioneering, and the market may be entering a new battle over commercial deployment and ecosystem building.

  
