Intel, the semiconductor giant, has released a series of open-source AI reference kits in an effort to challenge rival Nvidia in the AI computing space. These reference kits, built on Intel’s oneAPI platform, aim to help developers train AI models faster and at lower cost than in traditional proprietary environments.
Intel’s move is part of its broader strategy to attract more developers and data scientists working on AI applications. The company has unveiled 34 open-source AI reference kits that can be applied across various industries. According to Wei Li, Intel’s vice president and general manager of AI and analytics, the kits give developers and data scientists an easy, performant, and cost-effective way to build and scale AI applications across multiple domains, including health and life sciences, financial services, manufacturing, and retail.
The reference kits leverage Intel’s oneAPI programming model, which is positioned as an open, standards-based alternative to Nvidia’s proprietary CUDA parallel programming platform. With oneAPI, developers can write and optimize software across a range of CPUs, GPUs, and FPGAs from both Intel and its competitors. Intel sees this hardware-agnostic approach as a significant advantage in its bid to loosen Nvidia’s grip on the AI computing space.
Developed in collaboration with consulting firm Accenture, the kits bundle software libraries, model code, training data, and instructions for the machine learning pipeline. These components are designed to save developers and data scientists time in the early stages of AI model development, letting them start with data preparation and move swiftly to training, tuning, and deployment.
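The kits themselves vary by domain, but the pipeline stages described above can be sketched with stock scikit-learn. The commented-out `patch_sklearn()` call is the mechanism Intel’s Extension for Scikit-learn uses to reroute standard scikit-learn calls to oneAPI-optimized kernels; it is left optional here so the sketch runs without Intel packages, and the dataset is a synthetic stand-in rather than any kit’s actual bundled data.

```python
# A minimal sketch of the pipeline stages the kits cover:
# data preparation, training, tuning, and deployment-ready inference.

# from sklearnex import patch_sklearn  # Intel Extension for Scikit-learn
# patch_sklearn()  # reroutes scikit-learn calls to oneAPI-optimized kernels

from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Data preparation: a synthetic stand-in for a kit's bundled training data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Training and tuning: scale features, then grid-search the
# regularization strength with cross-validation.
pipeline = Pipeline([("scale", StandardScaler()),
                     ("clf", LogisticRegression(max_iter=1000))])
search = GridSearchCV(pipeline, {"clf__C": [0.1, 1.0, 10.0]}, cv=3)
search.fit(X_train, y_train)

# Deployment-ready inference: the fitted estimator serves batch predictions.
accuracy = search.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The point of the sketch is that enabling Intel’s optimizations is a two-line change at the top of an otherwise unmodified script, which is the low-friction adoption story the reference kits are built around.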
Intel claims that its reference kits offer notable performance benefits across industries. For example, the conversational chatbot reference kit can accelerate batch-mode inference by up to 45% using oneAPI optimizations. Another kit, focused on visual quality-control inspections in life sciences, can speed up training by up to 20% and inference by up to 55% with the help of oneAPI.
Intel’s strategy of bundling software components to accelerate AI development aligns with its goal of becoming a preferred developer platform. While Nvidia currently enjoys a significant lead in terms of software capabilities, Intel aims to close the gap by fostering an open ecosystem and attracting developers to its tooling.
It is worth noting, however, that Nvidia’s proprietary approach with CUDA has allowed the company to establish itself as the industry standard for AI computing. Even so, Nvidia has been proactive in opening parts of its software stack and supporting popular machine learning frameworks like PyTorch and TensorFlow. This balance between openness and hardware optimization has contributed to Nvidia’s success.
Intel’s release of these open-source AI reference kits marks another step in its mission to compete with Nvidia in the AI computing domain. By offering an extensive software stack and emphasizing an open ecosystem, Intel hopes to appeal to developers and data scientists while providing efficient and cost-effective solutions for AI model training across various industries.