OpenAI Enhances Chatbot API for Developers

OpenAI, the artificial intelligence research laboratory, has announced an update to its large language model API offerings, including the introduction of function calling, significant cost reductions, and a 16,000-token context window option for gpt-3.5-turbo. The context window acts like short-term memory, determining how much of the prompt and surrounding conversation the model can hold at once. The new gpt-3.5-turbo-16k variant offers four times the context length of the standard model, allowing it to process up to 16,000 tokens, or roughly 20 pages of text, in a single request.

OpenAI also introduced more ‘steerable’ versions of GPT-4 and gpt-3.5-turbo with improved reliability. The company is cutting prices substantially as well, including a 75% cost reduction for its text-embedding-ada-002 embeddings model. Earlier versions of these models will be deprecated, and developers are encouraged to migrate to the updated versions. Access to the GPT-4 API still requires joining a waitlist, but OpenAI says it will become broadly available soon.
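Function calling lets developers describe functions to the chat models and have the model return a JSON object with the arguments it wants to pass, which the application then executes itself. The snippet below is a minimal sketch of that flow, assuming the openai Python package in its pre-1.0 form and a hypothetical get_current_weather function the application would implement; it is illustrative rather than an excerpt from OpenAI's documentation.

```python
import json
import openai

openai.api_key = "sk-..."  # placeholder; set your real API key

# Describe a function the model is allowed to request (hypothetical example).
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather for a given city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. Paris"}
            },
            "required": ["city"],
        },
    }
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",   # function-calling-capable snapshot
    messages=[{"role": "user", "content": "What's the weather like in Paris?"}],
    functions=functions,
    function_call="auto",         # let the model decide whether to call a function
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model returns the chosen function name and JSON-encoded arguments;
    # actually executing the function is the application's responsibility.
    name = message["function_call"]["name"]
    args = json.loads(message["function_call"]["arguments"])
    print(f"Model asked to call {name} with {args}")
```

In the same call, switching the model parameter to gpt-3.5-turbo-16k is how a request would opt into the larger 16,000-token context window.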