OpenAI has introduced process-supervised reward models (PRMs), a new approach to improving AI reasoning. Rather than judging only a model's final answer, a PRM evaluates each individual step of the model's reasoning process, providing more detailed assessment and feedback. Traditional reinforcement learning from human feedback (RLHF) typically scores only the overall output a model produces; with process supervision, a separate reward model scores every step of the primary model's solution, flagging erroneous steps for more granular assessment.

To train these models, OpenAI curated a dataset of 800,000 step-level human labels on solutions to math problems, and the company highlights its commitment to developing similarly high-quality datasets for other domains. OpenAI's process-supervised reward models were themselves fine-tuned from GPT-4, the latest iteration of the GPT series.
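To make the step-level idea concrete, here is a minimal sketch of how a PRM's per-step scores might be combined and used for best-of-N answer selection. The function names and the choice of multiplying per-step correctness probabilities into a solution-level score are illustrative assumptions, not OpenAI's published implementation.

```python
def prm_score(step_probs):
    """Combine per-step correctness probabilities into one solution-level
    score; multiplying them is one common, illustrative choice."""
    score = 1.0
    for p in step_probs:
        score *= p
    return score

def best_of_n(solutions):
    """Pick the candidate solution whose reasoning steps the PRM trusts most.
    `solutions` maps a solution id to its list of per-step probabilities."""
    return max(solutions, key=lambda s: prm_score(solutions[s]))

candidates = {
    "a": [0.90, 0.95, 0.90],  # every step looks sound
    "b": [0.99, 0.20, 0.99],  # one dubious step sinks the whole solution
}
print(best_of_n(candidates))  # -> a
```

The key property the sketch illustrates is that a single low-confidence step drags down the whole solution's score, which is exactly the granularity an outcome-only reward model cannot provide.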
OpenAI is a research organization dedicated to developing AI for broad social benefit, with the aim of making the technology safe and accessible to all.