This week, respected cognitive scientist, author, and founder Gary Marcus was invited to take part in a Senate Judiciary Committee hearing alongside OpenAI CEO Sam Altman and IBM's Chief Privacy and Trust Officer Christina Montgomery. Over three hours, the group discussed how artificial intelligence (AI) might be regulated. Marcus has become well-known recently for co-founding AI companies, teaching at NYU, and hosting the popular podcast and newsletter "Humans vs. Machines."
Given the significant implications of the hearing, there has been a great deal of interest in Marcus' views on the topic, and especially in his experience of the hearing itself. He shared that he is still in Washington discussing possible solutions with lawmakers and their staff members.
Regarding AI, Marcus and Meta's chief AI scientist Yann LeCun have debated multiple issues over the years. One of their primary recent disagreements concerns LeCun's belief that it is acceptable to release language models that may carry unintended risks, a viewpoint Marcus disputes.
Weapons were not discussed in any depth during the hearing, but Marcus notes the topic could surface in later discussions. Open source versus closed systems was another area that received little attention. It remains to be seen what the right balance is between allowing a fair degree of open source development and imposing limits on what can be built and how it can be deployed.
As for Meta's strategy of releasing its language model, Marcus believes it was a careless move, noting that the genie is now out of the bottle. He suggests that government and scientists play a larger role in review and oversight, similar to the FDA's safety-approval process, and that any large-scale deployment be carefully weighed with cost/benefit analysis.
Gary Marcus is from New York and holds a PhD from MIT. He has been featured in the New York Times Sunday Magazine, Wired, and on Bloomberg TV, and has made numerous appearances in AI discussions. Marcus is committed to ensuring AI can be trusted and feels strongly that an impartial authority overseeing its use is essential to that goal. He aims to play a larger role in securing a good outcome for humanity, and advocates for nonprofit, global, and neutral regulation.