Nvidia is acquiring Run:ai, a Tel Aviv-based company that makes it easier for developers and operations teams to manage and optimize their AI hardware infrastructure. Terms of the deal aren’t being disclosed publicly, but two sources close to the matter tell TechCrunch that the price tag was $700 million.
CTech reported earlier this morning that the companies were in “advanced negotiations” that could see Nvidia pay upwards of $1 billion for Run:ai. Evidently, the negotiations concluded without a hitch, though apparently at a lower price than rumored.
Nvidia says that it’ll continue to offer Run:ai’s products “under the same business model” and invest in Run:ai’s product roadmap as part of Nvidia’s DGX Cloud AI platform, which gives enterprise customers access to compute infrastructure and software for training generative and other forms of AI models. Customers of Nvidia’s DGX servers and workstations and of DGX Cloud will also gain access to Run:ai’s capabilities for their AI workloads, Nvidia says — particularly for generative AI deployments running across multiple data center locations.
“Run:ai has been a close collaborator with Nvidia since 2020 and we share a passion for helping our customers make the most of their infrastructure,” Omri Geller, Run:ai’s CEO, said in a statement. “We’re thrilled to join Nvidia and look forward to continuing our journey together.”
Geller co-founded Run:ai in 2018 with Ronen Dar after the two studied together at Tel Aviv University under professor Meir Feder, Run:ai’s third co-founder. Geller, Dar and Feder sought to build a platform that could “break up” AI models into fragments that run in parallel across hardware, whether on-premises, in public clouds or at the edge.
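Run:ai’s actual implementation isn’t public, but the general idea of breaking a workload into fragments that run in parallel across devices can be sketched in a few lines of plain Python. Here each “device” is just a worker thread, and the model is a stand-in function; the names `split` and `parallel_infer` are illustrative, not Run:ai APIs.

```python
# Toy sketch of fragmenting one workload across parallel "devices"
# (worker threads here; GPUs on-prem, in a cloud or at the edge in
# practice). This is an illustration of the concept, not Run:ai's code.
from concurrent.futures import ThreadPoolExecutor

def run_model(fragment):
    # Stand-in for real inference: square each input value.
    return [x * x for x in fragment]

def split(batch, n_devices):
    """Break a batch into n_devices roughly equal fragments."""
    k, r = divmod(len(batch), n_devices)
    fragments, i = [], 0
    for d in range(n_devices):
        size = k + (1 if d < r else 0)
        fragments.append(batch[i:i + size])
        i += size
    return fragments

def parallel_infer(batch, n_devices=2):
    """Run each fragment on its own worker, then recombine results."""
    with ThreadPoolExecutor(max_workers=n_devices) as pool:
        results = pool.map(run_model, split(batch, n_devices))
    return [y for part in results for y in part]

print(parallel_infer([1, 2, 3, 4, 5]))  # [1, 4, 9, 16, 25]
```

The recombination step works because `pool.map` returns results in fragment order, so outputs line up with the original batch.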
While Run:ai has few direct competitors, other companies are applying the concept of dynamic hardware allocation to AI workloads. For example, Grid.ai offers software that allows data scientists to train AI models across GPUs, processors and more in parallel.
But relatively early in its life, Run:ai managed to establish a large customer base of Fortune 500 companies — which in turn attracted VC investments. Prior to the acquisition, Run:ai had raised $118 million in capital from backers including Insight Partners, Tiger Global, S Capital and TLV Partners.
In a blog post announcing the deal, Alexis Bjorlin, Nvidia’s VP of DGX Cloud, noted that customer AI deployments are becoming increasingly complex and that there’s a growing desire among companies to make more efficient use of their AI computing resources.
A recent survey of organizations adopting AI, conducted by ClearML, the machine learning model management company, found that the biggest challenge in scaling AI for 2024 so far has been compute limitations in terms of availability and cost, followed by infrastructure issues.
“Managing and orchestrating generative AI, recommender systems, search engines and other workloads requires sophisticated scheduling to optimize performance at the system level and on the underlying infrastructure,” Bjorlin said. “Nvidia’s accelerated computing platform and Run:ai’s platform will continue to support a broad ecosystem of third-party solutions, giving customers choice and flexibility. Together with Run:ai, Nvidia will enable customers to have a single fabric that accesses GPU solutions anywhere.”
Run:ai is among Nvidia’s biggest acquisitions since its purchase of Mellanox for $6.9 billion in March 2019.