Run:ai launches optimised AI solution with NVIDIA support


Israeli AI solutions firm Run:ai has launched an MLOps Compute Platform (MCP) powered by NVIDIA’s DGX systems.

The platform is a full-stack AI tool designed to help enterprises sidestep common obstacles in AI model deployment.

Run:ai described the MCP as an end-to-end infrastructure for simplifying the hardware and software complexities of AI development and deployment into a single solution.

The company has a history of collaboration with NVIDIA, and the MCP is no exception. The solution comes with NVIDIA Base Command and can be installed with support from NVIDIA and Run:ai enterprise specialists.

The platform also gathers compute resources into a centralised pool that one team can manage while others consume. Through system integrations, developers will be able to use Airflow, MLflow, Kubeflow, and other tools.

“AI offers incredible potential for enterprises to grow sales and reduce costs, and simplicity is key for businesses seeking to develop their AI capabilities,” said Matt Hull, vice president of Global AI Data Center Solutions at NVIDIA.

“As an integrated solution featuring NVIDIA DGX systems and the Run:ai software stack, Run:ai MCP makes it easier for enterprises to add the infrastructure needed to scale their success,” he added.

In Run:ai’s view, having multiple teams sharing GPU compute leads to organisational inefficiencies. On the other hand, “shadow AI”, where each team has its own infrastructure, leads to idle resources and increased expenses. The MLOps Compute Platform is designed as a comprehensive solution to these potential roadblocks to AI deployment.

The company claims its clients have recorded up to a 500 percent improvement in GPU utilisation and return on investment, highlighting the platform’s ability to address bottlenecks in development.

Omri Geller, co-founder and CEO of Run:ai, said: “This is a unique, best-in-class hardware/software AI solution that unifies our AI workload orchestration with NVIDIA DGX systems – the universal AI system for every AI workload – to deliver unprecedented compute density, performance and flexibility.”

Run:ai and NVIDIA also recently collaborated on the Israeli firm’s Atlas Platform certification, giving clients confidence in NVIDIA’s AI and data analytics software for optimising AI model production.

