OpenAI, the leading AI research organization, has announced the creation of its Collective Alignment Team, which aims to gather public opinion on how AI models should behave. Through this initiative, OpenAI intends to design systems that incorporate public input to better steer its AI models while tackling several challenges, including the digital divide, polarized groups, diverse representation, and broader concerns about AI governance.
The newly formed Collective Alignment Team will collaborate with external advisors and the grant teams, and will run pilots to incorporate the grant prototypes into the steering of its AI models. OpenAI is seeking exceptional research engineers from diverse technical backgrounds to join the team and contribute to this work.
This development is a direct outgrowth of OpenAI’s Democratic Inputs to AI grant program, launched in May 2023, through which it awarded 10 grants of $100,000 each to fund experiments in sourcing democratic input to AI. The program’s objective was to establish a democratic process for determining the rules AI systems should follow. OpenAI stated that its aim was to involve a broadly representative group of people who would exchange opinions, engage in deliberative discussion, and ultimately decide on outcomes through a transparent decision-making process.
As part of its commitment to transparency, OpenAI has shared the code created by the grant program participants, along with summaries of their work. The company acknowledges that some participants expressed concerns about AI’s role in policymaking and the need for transparency regarding when and how AI is applied in democratic processes. However, through post-deliberation sessions, many teams reported that participants became more hopeful about the public’s ability to contribute to guiding AI.
By creating the Collective Alignment Team and incorporating public input, OpenAI is taking a significant step toward democratizing AI development. The move reflects its commitment to responsible AI development and to ensuring that diverse perspectives shape the behavior of AI models, with the broader goal of fostering trust, transparency, and inclusivity in how AI technologies are built and deployed.
As OpenAI recruits research engineers for the Collective Alignment Team, the initiative points toward a more inclusive, diverse, and responsive approach to AI development and governance. By incorporating public input, OpenAI aims to address the societal impacts of AI and build systems that serve the needs and values of the communities they interact with.
Overall, OpenAI’s effort to crowdsource public opinion on AI behavior through the Collective Alignment Team is an important step toward a more democratic and inclusive AI development process. With transparency, collaboration, and public input at its core, OpenAI aims to build AI systems that reflect the values and aspirations of diverse communities and to ensure a fair and equitable future for AI.