OpenAI’s Highly Anticipated Advanced Voice Mode Now Available to Early Users
OpenAI has officially launched the alpha program for its ChatGPT advanced voice mode, allowing a select group of users to experience the groundbreaking technology firsthand. While not all users have immediate access to this enhanced feature, the alpha version is now live and ready for testing.
What sets the new advanced voice mode apart from its predecessor is that the audio is generated directly by the AI model. Where the earlier voice mode simply read ChatGPT’s text output aloud, the new mode produces audio natively. The result is faster voice responses with a wider range of speed, tone, and emotion, as OpenAI demonstrated earlier this year.
Although the video and screen-sharing capabilities showcased in earlier demos are not included in this alpha release, OpenAI says all ChatGPT Plus subscribers will have access to advanced voice mode by the fall.
While some critics have raised concerns about the delayed rollout of these features, the alpha program marks a significant step toward making the technology available to a wider audience. As users gain hands-on experience with advanced voice mode, we can expect to see real-world applications that go beyond curated sample videos.
As the industry eagerly awaits the full release of ChatGPT’s enhanced capabilities, early users are now exploring the potential of this cutting-edge voice technology firsthand. Stay tuned for more updates on how ChatGPT’s advanced voice mode delivers on the hype in the coming months.