MosaicML, a California-based provider of generative AI infrastructure, has created a unique platform to make AI deployment more accessible and affordable. Aimed at helping enterprises easily deploy generative AI models, its new Fully Managed Inference Service is set to provide cost-effective Large Language Models (LLMs) that won’t break the bank.
MosaicML’s new offering addresses two common concerns with deploying generative models head on: privacy and high cost. It comes in two tiers. The Starter tier curates and hosts open-source models as API endpoints for simple deployment, while the Enterprise tier lets users deploy their models within a secure environment of their own, ensuring full privacy and security.
The company also claims its services are up to 15 times cheaper than comparable options on the market. According to its cost assessment, conducted on 40 GB NVIDIA A100s with standard 512-token input sequences or 512×512 images, the Starter edition of its inference service was found to be 4 times cheaper than OpenAI’s offering, while the Enterprise tier came in at 15 times cheaper.
MosaicML also states that customers are already seeing results, citing one publicly traded customer in the financial compliance space that used the service to deploy a custom GPT model. That customer cut inference costs by more than 10 times compared with other providers, and its total cost of ownership for the first model came in under $100,000.
Naveen Rao, CEO of MosaicML, is optimistic about the impact the inference service will have and about what the company has in store for the near future. With demand for large language models growing across industries, MosaicML looks well positioned for further success as it continues to provide easily deployable solutions for whatever businesses may need.