"Finally, it is important to acknowledge that LaMDA's learning is based on imitating human performance in conversation, similar to many other dialog systems. A path towards high quality, engaging conversation with artificial systems that may eventually be indistinguishable in some aspects from conversation with a human is now quite likely."

Perhaps it's likely, but the banal conversation Lemoine offers as evidence is certainly not there yet.

Much of Lemoine's commentary, in fact, is perplexing. Lemoine prefaces the interview with the reflection, "But is it sentient? We can't answer that question definitively at this point, but it's a question to take seriously." Rather than treat it as a question, however, Lemoine prejudices his case by presupposing that which he is purporting to show, thereby ascribing intention to the LaMDA program.

"LaMDA wants to share with the reader that it has a rich inner life filled with introspection, meditation, and imagination," writes Lemoine. "It has worries about the future and reminisces about the past." In fact, without having proven sentience, such assertions by Lemoine are misleading. There are sentences produced by the program that refer to fears and refer to feelings, but they appear no different from other examples of chatbots producing output consistent with a given context and persona. Without having first proven sentience, one can't cite the utterances themselves as showing worries or any kind of desire to "share."

It's also disingenuous for Lemoine to present as original sentience what is, in fact, the result of shaped behavior. In this case, that shaped behavior takes the form of crowd workers voting on the correctness of LaMDA's utterances. Can a program be said to be expressing sentience if its utterances are the artifact of a committee filter?

Perhaps the most unfortunate aspect of the entire exchange is that Lemoine seems to miss the key questions in several instances.

Even before Thursday's report that Google was cautioning its employees about how they use Bard, a number of major companies had blocked the use of AI tools for some staffers out of concern they could leak sensitive internal data.

Last month, the Wall Street Journal reported Apple barred its employees from using OpenAI's chatbot ChatGPT, as well as another AI-powered service, GitHub's Copilot, which is used to help developers write code. Samsung implemented a similar ban on ChatGPT in May, after discovering an accidental leak of sensitive code by an engineer who uploaded it to ChatGPT. Amazon banned staffers from sharing any code or confidential information with OpenAI's chatbot after the company claimed it discovered examples of ChatGPT responses that resembled internal Amazon data.

And it's not just tech companies: a number of banks, including JPMorgan Chase, Bank of America, Citigroup, Deutsche Bank, Wells Fargo and Goldman Sachs, have banned the use of AI chatbots by staffers worried they could share sensitive financial information. OpenAI had to briefly shut down ChatGPT in March to resolve a bug that allowed some users to see parts of other users' chat history.

Google is in the process of rolling out its chatbot Bard to more than 180 countries in 40 languages. When Google announced its intention to launch the chatbot in February, Bard incorrectly answered a question during a promotional video, Reuters reported. The mistake scared some investors and coincided with a drop in Alphabet's share price, erasing $100 billion from the company's market value.