Intel, Lenovo Announce Multi-Year Collaboration on AI and HPC

Lenovo and Intel have announced a multi-year collaboration to accelerate the convergence of artificial intelligence (AI) and high-performance computing (HPC) to solve the world’s most challenging problems. The global collaboration, which builds on the companies’ long-standing data center partnership, will speed the convergence of AI and HPC and create solutions for organizations of all sizes.

As the leading global provider of TOP500 supercomputer systems, Lenovo will optimize its offerings around the full portfolio of Intel’s AI and HPC hardware and software solutions.

Lenovo’s Neptune liquid cooling technology, combined with the second-generation Intel Xeon Scalable platform, is already helping customers unlock new insights from their data, drawing on joint engineering and a distinctive combination of HPC intellectual property from the two companies.

Kirk Skaugen, executive vice president of Lenovo and president of the Lenovo Data Center Group, said: “Our aim is to further expedite innovation into the exascale era; by aggressively providing these solutions to businesses and scientists of all sizes, we can enhance outcomes and discoveries.”

An estimated 173 of the world’s TOP500 supercomputers, spanning 19 markets, run on Lenovo servers.

In addition, 17 of the world’s top 25 universities rely on Lenovo infrastructure.

Navin Shenoy, executive vice president and general manager of Intel’s Data Center Group, said: “Through the convergence of HPC and AI, Intel is laser-focused on helping and encouraging customer discovery and innovation.”

He added: “Our expanded alliance with Lenovo combines the best innovations of both companies to drive customer progress at an even faster pace.”

The collaboration will focus on three major areas: software optimization for AI and HPC convergence; systems and solutions; and ecosystem enablement.