Cray Announces New, AI-Targeted Supercomputers

Deep learning, self-driving cars, and AI are all big topics these days, with companies like Nvidia, IBM, AMD, and Intel all throwing their hats into the ring. Now Cray, which helped pioneer the very idea of a supercomputer, is bringing its own solutions to market.

Cray announced a pair of new systems: the Cray CS-Storm 500GT and the CS-Storm 500NX. Both are designed to work with Nvidia's Pascal-based Tesla GPUs, but they offer different feature sets and capabilities. The CS-Storm 500GT supports up to 8x 450W or 10x 400W accelerators, including Nvidia's Tesla P40 or P100 GPU accelerators. Add-in boards like Intel's Knights Landing and FPGAs built by Nallatech are also supported in this system, which uses PCI Express for its peripheral interconnect. The 500GT platform uses Intel's Skylake Xeon processors.

The Cray CS-Storm 500NX supports up to eight P100 GPUs and taps Nvidia's NVLink connector rather than PCI Express. Xeon Phi and Nallatech devices aren't listed as being compatible with this system architecture. Full specs on each are listed below:


The CS-Storm 500NX uses NVLink, which is why Cray can list it as supporting up to eight P100 SXM2 GPUs without needing eight PCIe slots (just in case that was unclear).

“Customer demand for AI-capable infrastructure is growing rapidly, and the introduction of our new CS-Storm systems will give our customers a powerful solution for tackling a broad range of deep learning and machine learning workloads at scale with the power of a Cray supercomputer,” said Fred Kohout, Cray’s senior vice president of products and chief marketing officer. “The exponential growth of data sizes, coupled with the need for faster time-to-solutions in AI, dictates the need for a highly scalable and tuned infrastructure.”


Nvidia’s NVLink fabric can be used to connect GPUs without using PCI Express.
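To see why the interconnect matters for deep learning workloads, some back-of-envelope arithmetic helps. The figures below are nominal, per-direction rates (PCIe 3.0 x16 at roughly 16GB/s; NVLink 1.0 on the P100 at 20GB/s per link across four links); real-world throughput will be lower, and the buffer size is purely illustrative.

```python
# Illustrative comparison of nominal interconnect bandwidth for a
# single GPU-to-GPU transfer, e.g. a gradient exchange during training.
PCIE3_X16_GBPS = 16.0         # GB/s, approximate PCIe 3.0 x16 rate
NVLINK1_PER_LINK_GBPS = 20.0  # GB/s per NVLink 1.0 link, per direction
P100_NVLINK_LINKS = 4         # links per Tesla P100

def transfer_seconds(size_gb: float, bandwidth_gbps: float) -> float:
    """Time to move size_gb gigabytes at bandwidth_gbps GB/s."""
    return size_gb / bandwidth_gbps

buffer_gb = 1.0  # hypothetical 1GB buffer
pcie_t = transfer_seconds(buffer_gb, PCIE3_X16_GBPS)
nvlink_t = transfer_seconds(buffer_gb, NVLINK1_PER_LINK_GBPS * P100_NVLINK_LINKS)

print(f"PCIe 3.0 x16:      {pcie_t * 1000:.1f} ms")   # 62.5 ms
print(f"NVLink (4 links):  {nvlink_t * 1000:.1f} ms") # 12.5 ms
```

On paper, aggregating all four NVLink links gives roughly a 5x advantage over a single PCIe 3.0 x16 connection, which is the kind of gap that shows up directly in multi-GPU scaling.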

The surge in self-driving cars, AI, and deep learning technology could be a huge boon to companies like Cray, which once dominated the supercomputing industry. Cray went from an early leader in the field to a shadow of its former self after a string of acquisitions and unsuccessful products in the late 1990s and early 2000s. From 2004 onwards, the company has enjoyed more success, with a number of high-profile design wins using AMD, Intel, and Nvidia.

So far, Nvidia has emerged as the overall leader in HPC workload accelerators. Of the 86 systems listed as using an accelerator on the TOP500 list, 60 of them use Fermi, Kepler, or Pascal (Kepler is the clear winner, with 50 designs). The next-closest competitor is Intel, which has 21 Xeon Phi wins.

AMD has made plans to enter these markets with deep learning accelerators based on its Polaris and Vega architectures, but these chips haven't actually launched in-market yet. By all accounts, these are the killer growth markets for the industry as a whole, and they help explain why even some game developers like Blizzard are looking to get in on the AI craze. As compute resources shift toward Amazon, Microsoft, and other cloud service providers, the companies that can provide the hardware these workloads run on will be best positioned for the future. Smartphones and tablets didn't really work out for Nvidia or Intel (making AMD's decision to stay out of those markets look very, very smart in retrospect), but both are well positioned to capitalize on these new dense server trends. AMD is clearly playing catch-up on the CPU and GPU front, but Ryzen should deliver strong server performance when Naples launches later this quarter.