The Great Pretender is a fitting title for a discussion of AI and its ability to give convincing answers. AI, and more specifically natural language processing, has enabled machines to generate language as if they were human, and has in turn given rise to a wide range of complex discussions about how AI capabilities compare to our own. Despite the many visionary advances in the field, one simple fact remains: today’s AIs cannot distinguish between what is correct and what is incorrect, leaving them open to being unwittingly misled.
The critical issue here is that AI cannot comprehend the concept of accuracy. A model may produce a response that people find impressive, and possibly even accurate, but it cannot check the validity of its answer against a set of references or decisive facts. Instead, it comes up with an answer that statistically resembles a correct answer. Rather than treating AI as an authority on any given topic, it is therefore more accurate to consider it an expert bullshitter.
ChatGPT is an effective example of this: it takes a prompt and produces a response that, while sounding correct, need not contain any genuine truth. This happens because the model captures only generalised statistical relationships between words in its training data, allowing it to answer virtually any question with a series of impressive-sounding words. Those words may have no factual grounding without the context offered by an outside source; the AI is simply regurgitating words, regardless of whether they happen to be true.
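A toy sketch can make this concrete. The tiny bigram "model" below (all words and counts invented for illustration; real language models are vastly larger, but the principle is the same) chooses each next word purely by how often it followed the previous one, with no notion of truth, so a statistically plausible falsehood is just as reachable as a fact:

```python
import random

# Invented bigram counts: for each word, how often other words
# "followed" it in a hypothetical corpus. No real data is involved.
bigram_counts = {
    "the": {"capital": 5, "moon": 2},
    "capital": {"of": 7},
    "of": {"france": 6},
    "france": {"is": 6},
    "is": {"paris": 4, "lyon": 2},  # a wrong answer is also "likely"
}

def generate(start, length=5, seed=0):
    # Sample each next word in proportion to its observed frequency.
    # The model optimises plausibility, not correctness.
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        counts = bigram_counts.get(out[-1])
        if counts is None:  # no continuations recorded for this word
            break
        out.append(rng.choices(list(counts),
                               weights=list(counts.values()), k=1)[0])
    return " ".join(out)
```

Depending on the random seed, `generate("the")` can yield "the capital of france is paris" or, just as fluently, "the capital of france is lyon": both are well-formed continuations of the statistics, and nothing in the sampling process distinguishes the true one from the false one.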
This is why results from today’s AI systems should be treated with some caution. Understanding this concept helps to explain why these models can be incredibly powerful and yet fundamentally untrustworthy. With great power comes great responsibility, and it falls to the user to verify the validity and accuracy of an AI’s response, particularly in contexts where the truth is of the utmost importance.
This brings us to the discussants and commentators on this subject, who come from a wide range of backgrounds, skill levels and experiences, yet each bring a unique insight into aspects of this conversation. Ideas such as the perceptron, the Mechanical Turk and Aristotle offer a great starting point for questioning and debating what counts as knowledge, language and intelligence, and just how far AI can be pushed.
While there is no real resolution to the questions posed by the complexities of AI and its use, by better understanding these systems’ functional limitations, becoming more aware of their dangers, and remaining open-minded to the points raised by some of the sharpest minds in the field, one can get a better handle on why AI can be both incredibly useful and incredibly dangerous when used inappropriately.
One final note of caution when using chatbot systems: their accuracy and trustworthiness may depend greatly on their function and purpose. For facilitating natural conversation, a chatbot can maintain a positive customer experience even when accuracy is a secondary concern. In scenarios where accuracy is critical, such as legal advice or medical diagnostics, a more reliable means of obtaining information is necessary.
In conclusion, the title The Great Pretender is aptly applied to AI-generated language, given the very nature of AI in today’s age. By understanding the limitations of today’s AIs, engaging with the points of discussion around language, intelligence and truth, and knowing when to trust AI and when to be cautious, one can truly appreciate the nuanced and complex landscape of conversational AI.