In the future, there might be a technology that makes it possible to expunge the experience of trauma. The technology might also apply to wiping depression, regret, guilt, anxiety and other emotions. While the innovation would matter for mental health, the ways it could go wrong abound.
In close to three decades of the global internet and the permeation of tech into daily life, it is clear that the ways they can be misapplied keep producing surprises. The assumption that technology simply solves problems, without considering the inverse, should by now be stale.
Generative AI has liftoff. Several people minimize it, but it is not social media. As a tool, it makes various previously expert endeavors possible with simple prompts. As a seemingly thinking machine, it takes a seat in the domain of human productivity in a way that no other organism has.
Departments of philosophy across universities should converge on paths toward answers for the different ways generative AI can go wrong. This simultaneous effort should take on risks that are already apparent, while prospecting for others.
Research would include neurotechnology, especially how to place checks on products and users, limiting its use to brain disorders only. The effort from philosophy, in a world faced with a new entrant into the human domain, should be intense.
The good that AI can do does not mean the rest should be left to chance. There are technical approaches to some of the risks, but they might be better suited to countering misuse broadly, not just to guardrailing one platform.
All the ways that generative AI has been misused in recent months are directions in which to seek ethical solutions, which may involve a hierarchy of teams, reporting, awareness, balance and containment, modeled by philosophy. Technology corporations may be willing to cooperate if the recommendations are good enough.
Generative AI is an opportunity for a further renaissance of philosophy as a field of knowledge that solves problems in the world. It remains for university departments to decide whether they want to stay on the sidelines or lead the way against the unknowns and the uncertainties already accruing.
The problems that generative AI poses fall into categories, so answers from different departments can address them, especially for local or global AI applications. Philosophy of mind can also play a role in explaining how AI outputs take shape in the mind.
Article contributed by David Stephen, who does research in conceptual brain science. He was a visiting scholar in medical entomology at the University of Illinois Urbana-Champaign, UIUC, IL. He did computer vision research at Rovira i Virgili University, URV, Tarragona.