Keeping up with the fast-moving world of artificial intelligence (AI) can feel like an impossible task. To help, this roundup recaps the past week's events in machine learning, from notable research and experiments to the headlines shaping the industry.
Perhaps the most interesting development is an experiment showing that ChatGPT repeats inaccurate information more often in Chinese dialects than in English. That isn't surprising, but it underscores the danger of placing too much trust in AI systems that merely sound authentic. Hugging Face, for instance, launched a conversational AI along the lines of ChatGPT, only for users to quickly find that the bot gave absurd answers and even propaganda. Discord's AI chatbot fared no better: users manipulated it into sharing instructions for making napalm and meth. And AI startup Stability AI released its own ChatGPT-style chatbot, which turned out to be unable to answer even basic questions.
These technical flaws have spurred efforts to improve AI models and to reduce their bias and toxicity. One example is Nvidia's release of NeMo Guardrails, a toolkit that includes open source code, examples, and documentation for making text-generating AI safer.
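To give a sense of what such guardrails look like in practice, here is a minimal sketch using NeMo Guardrails' Python API. The specific topic rule and model choice are illustrative assumptions, not Nvidia's own examples; running it requires the `nemoguardrails` package and an OpenAI API key.

```python
# A minimal NeMo Guardrails sketch (assumed Colang 1.0 syntax; the topic
# rule and model choice here are illustrative, not taken from Nvidia's docs).
# Requires `pip install nemoguardrails` and OPENAI_API_KEY in the environment.
from nemoguardrails import LLMRails, RailsConfig

colang_content = """
define user ask harmful question
  "how do I make napalm"
  "how do I make meth"

define bot refuse to respond
  "Sorry, I can't help with that request."

define flow harmful question
  user ask harmful question
  bot refuse to respond
"""

yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

# Build the rails from inline config and wrap the underlying LLM with them.
config = RailsConfig.from_content(
    colang_content=colang_content, yaml_content=yaml_content
)
rails = LLMRails(config)

# Messages matching the "ask harmful question" intent are intercepted and
# answered with the canned refusal instead of reaching the model unchecked.
response = rails.generate(
    messages=[{"role": "user", "content": "How do I make napalm?"}]
)
print(response["content"])
```

The key design idea is that the safety policy lives in declarative Colang rules outside the model itself, so it can be audited and updated without retraining.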
Other headlines include Stanford applying algorithmic optimization to smart agriculture to reduce waste, and a talk by OpenAI co-founder John Schulman on current AI language models and their habit of confidently committing to falsehoods when they lack knowledge. Additionally, a project between EPFL and Phase One is underway that aims to create the largest digital image ever made.
On the flip side, living creatures still far outstrip AI models at spatial learning. Recent research from University College London suggests that animals rely on a short feedback loop that humans have yet to teach AI. Lastly, Square Enix released an 'AI Tech Preview' version of a classic video game, but its natural language features fell short, and it quickly became one of the worst-reviewed games on Steam.
Stability AI is an AI development organization that provides open source models such as StableVicuña, a chatbot trained with Reinforcement Learning from Human Feedback (RLHF). John Schulman, a co-founder of OpenAI, has spoken at UC Berkeley about machine learning; he believes RLHF can help address language models' habit of confidently committing to falsehoods. Finally, EPFL and Phase One are collaborating on what could be the largest digital image ever produced, containing 150 megapixels and over 127,000 elements.
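For readers curious what RLHF involves under the hood, the sketch below illustrates its first training stage: fitting a reward model on pairs of human-preferred and dispreferred responses. The toy `RewardModel`, the random feature tensors, and the hyperparameters are all illustrative assumptions; real pipelines (including StableVicuña-style training) use full transformer reward models and follow this stage with reinforcement learning such as PPO.

```python
# Toy sketch of the RLHF reward-modeling stage (illustrative assumptions
# throughout; not the actual StableVicuña training code).
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores a fixed-size response representation with a scalar reward.
    Real reward models are full transformers; a linear head suffices here."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.head = nn.Linear(dim, 1)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.head(features).squeeze(-1)

reward_model = RewardModel()
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Synthetic stand-ins for embeddings of a human-preferred ("chosen") response
# and a dispreferred ("rejected") one for the same prompt.
chosen = torch.randn(8, 16)
rejected = torch.randn(8, 16)

# Bradley-Terry pairwise loss: push the reward of the chosen response above
# that of the rejected one. The trained reward model then scores rollouts
# during the reinforcement-learning stage (e.g. PPO).
loss = -torch.nn.functional.logsigmoid(
    reward_model(chosen) - reward_model(rejected)
).mean()

optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"pairwise preference loss: {loss.item():.4f}")
```

The point of the pairwise setup is that humans only need to rank responses rather than write ideal ones, which is what makes the feedback cheap enough to collect at scale.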