Saturday, July 20, 2024

    Gcore, Graphcore and UbiOps collaborate to boost ML and AI workloads

    Gcore, a European provider of high-performance, low-latency international cloud and edge solutions, has partnered with UbiOps and Graphcore to offer powerful on-demand computing resources designed for the growing requirements of modern AI workloads.

    By partnering with Graphcore and UbiOps, Gcore Cloud is taking a significant step forward in empowering AI teams with a unique service offering that combines the best of Graphcore’s Intelligence Processing Units (IPU) hardware, powerful machine learning operations (MLOps) platform UbiOps, and cloud infrastructure.

    Andre Reitenbach, CEO at Gcore, says: “The collaboration between Gcore, Graphcore, and UbiOps brings a seamless experience for AI teams. This enables effortless utilization of Gcore’s cloud infrastructure with Graphcore’s IPUs on the UbiOps platform. This means that users can take advantage of the exceptional computational capabilities of IPUs for their specific AI tasks. Also, users can leverage UbiOps’ out-of-the-box MLOps features such as model versioning, governance, and monitoring.

    “These features help teams to accelerate time to market with AI solutions, save on computing resource costs, and efficiently use them with on-demand hardware scaling. We’re thrilled about this partnership’s potential to enable AI projects to succeed and reach their goals.”

    To demonstrate the benefits of IPUs compared to other devices, Gcore benchmarked workloads on three different compute resources: CPU, GPU, and IPU. Gcore trained a Convolutional Neural Network (CNN), a model designed for image analysis, on the CIFAR-10 dataset containing 60,000 labeled images, on each of the three devices. They then compared how fast training ran at different batch sizes.

    By measuring training speeds at different batch sizes, Gcore found that training on CPUs was slow, even for a relatively simple CNN and a small dataset, while IPUs and GPUs significantly accelerated the process. With minimal optimization, the IPU achieved even shorter training times than the GPU.
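    The shape of such a benchmark can be sketched as follows. This is an illustrative reconstruction, not Gcore's actual code: random CIFAR-10-shaped arrays stand in for the real dataset, and a simple NumPy softmax classifier stands in for the CNN, but the timing loop over different batch sizes mirrors the experiment described above.

```python
import time
import numpy as np

def time_epoch(batch_size, n_samples=2048, seed=0):
    """Return the seconds taken to train one epoch at a given batch size.

    Random CIFAR-10-shaped data (32x32x3 images flattened to 3072 features,
    10 classes) stands in for the real dataset; a linear softmax model
    stands in for the CNN. Numbers are illustrative only.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_samples, 32 * 32 * 3)).astype(np.float32)
    y = rng.integers(0, 10, n_samples)
    w = np.zeros((32 * 32 * 3, 10), dtype=np.float32)
    lr = 0.01

    start = time.perf_counter()
    for i in range(0, n_samples, batch_size):
        xb, yb = x[i:i + batch_size], y[i:i + batch_size]
        logits = xb @ w
        # Numerically stable softmax over the 10 classes.
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        # Gradient of cross-entropy w.r.t. logits: softmax minus one-hot.
        p[np.arange(len(yb)), yb] -= 1.0
        w -= lr * (xb.T @ p) / len(yb)
    return time.perf_counter() - start

if __name__ == "__main__":
    for bs in (8, 50):
        print(f"batch size {bs}: {time_epoch(bs):.3f} s/epoch")
```

    Smaller batches mean more optimizer steps per epoch, which is why the table below shows markedly longer training at batch size 8 than at 50 on every device.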

    Type     | Effective batch size* | Graph compilation (s) | Training duration (s) | Time per epoch (s) | Unit cost ($/h)
    IPU-POD4 | 50                    | ~180                  | 472                   | 8.1                | From $2.5
    IPU-POD4 | 8                     | ~180                  | 1,420                 | 26.0               | From $2.5
    GPU      | 50                    | 0                     | 443                   | 8.6                | From $4
    GPU      | 8                     | 0                     | 2,616                 | 51.7               | From $4
    CPU      | 4                     | 0                     | 10+ hours             | 10+ minutes        | From $1.3
    CPU      | 50                    | 0                     | ~5 hours              | 330                | From $1.3

    Thanks to the collaboration between Gcore Cloud, Graphcore and UbiOps, AI teams can now easily access powerful hardware specifically designed for demanding AI and ML workloads. The integration of Gcore Cloud, Graphcore’s IPUs and UbiOps’ MLOps platform helps teams work more efficiently and cost-effectively, enabling more and more AI projects to become successful and achieve their goals.

    Geraldine O'Sullivan
    Championing Mobile Billing & Engagement Solutions That Monetise Content, Drive In App Billing & Increase On-Boarding For mVAS