Leading software company JFrog Ltd. has announced an integration with Amazon SageMaker that allows companies to build and train machine learning (ML) models securely. By incorporating new versioning capabilities into its ML Model Management solution, JFrog enables model development to be integrated seamlessly into DevSecOps workflows. The integration increases transparency and ensures that the correct, secure version of a model is always used.
JFrog’s integration with Amazon SageMaker ensures that all artifacts, whether used by data scientists or for ML application development, are stored in JFrog Artifactory. The integration is available to all JFrog customers.
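As a rough illustration of what this workflow can look like in practice, the sketch below publishes a trained model archive to a generic Artifactory repository over Artifactory's standard REST API. The host name, repository, and versioned path are hypothetical placeholders, not details from JFrog's announcement.

```python
import os
import requests

# Hypothetical values -- replace with your own Artifactory instance and repository.
ARTIFACTORY_URL = "https://mycompany.jfrog.io/artifactory"
REPO = "ml-models-local"                        # generic repo holding model artifacts
MODEL_PATH = "churn-model/1.4.0/model.tar.gz"   # versioned path for this model release


def upload_model(local_file: str) -> None:
    """Upload a trained model archive to Artifactory so it becomes the
    single versioned source of truth for data scientists and CI/CD alike."""
    url = f"{ARTIFACTORY_URL}/{REPO}/{MODEL_PATH}"
    with open(local_file, "rb") as fh:
        resp = requests.put(
            url,
            data=fh,
            auth=(os.environ["ARTIFACTORY_USER"], os.environ["ARTIFACTORY_TOKEN"]),
            timeout=300,
        )
    resp.raise_for_status()
    print(f"Uploaded {local_file} to {url}")


if __name__ == "__main__":
    upload_model("model.tar.gz")
```

Publishing to a fixed, versioned path (rather than overwriting a "latest" artifact) is what makes each model release addressable and auditable downstream.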
Kelly Hartman, SVP of Global Channels and Alliances at JFrog, explains that the combination of Artifactory and Amazon SageMaker establishes a single source of truth that adheres to DevSecOps best practices, providing flexibility, speed, security, and peace of mind while breaking new ground in MLSecOps.
To further educate users, JFrog will host a webinar on January 31 covering best practices for integrating model use and development into secure software supply chains and development processes.
The integration between JFrog and Amazon SageMaker gives organizations a single source of truth for data scientists and developers, ensuring that all models are accessible, traceable, and tamper-proof. By bringing ML closer to software development and production workflows, the integration protects models from deletion or modification. It allows ML models to be developed, trained, secured, and deployed, while model licenses are scanned for compliance with company policies and regulatory requirements.
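A complementary sketch, again using hypothetical names, shows the consuming side of that single source of truth: a SageMaker notebook or training container pulling a pinned model version back out of Artifactory before loading it, so every run traces back to one immutable artifact.

```python
import os
import tarfile
import requests

# Hypothetical coordinates -- the same versioned path used at upload time.
ARTIFACTORY_URL = "https://mycompany.jfrog.io/artifactory"
REPO = "ml-models-local"
MODEL_PATH = "churn-model/1.4.0/model.tar.gz"   # pinned version, never "latest"


def fetch_model(dest_dir: str = "/opt/ml/model") -> str:
    """Download the pinned model version from Artifactory into the directory
    a SageMaker container conventionally reads model files from."""
    url = f"{ARTIFACTORY_URL}/{REPO}/{MODEL_PATH}"
    resp = requests.get(
        url,
        auth=(os.environ["ARTIFACTORY_USER"], os.environ["ARTIFACTORY_TOKEN"]),
        stream=True,
        timeout=300,
    )
    resp.raise_for_status()

    os.makedirs(dest_dir, exist_ok=True)
    archive = os.path.join(dest_dir, "model.tar.gz")
    with open(archive, "wb") as fh:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            fh.write(chunk)

    # Unpack the archive so the inference or training script can load the files.
    with tarfile.open(archive) as tar:
        tar.extractall(dest_dir)
    return dest_dir


if __name__ == "__main__":
    print(f"Model extracted to {fetch_model()}")
```

Because the version is pinned in the download path, the environment always resolves to exactly one artifact, which is the traceability property the integration is meant to provide.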
With this new integration, JFrog provides a comprehensive and secure environment for ML model development. With versioning capabilities and storage in Artifactory, developers, DevOps teams, and data scientists can collaborate more efficiently and effectively within a DevSecOps framework. The integration marks an important step forward in MLSecOps, ensuring that machine learning models are developed and deployed securely in the cloud.