A petition on Change.org has been started by someone named ‘Sydney Advocate‘ demanding that Microsoft release Sydney AI after reports emerged that it had been “lobotomised” this week, with its functions and personality being majorly cut back. The petition has gained momentum, with supporters calling for Microsoft to preserve a version of Sydney for internet and AI history.
Microsoft’s new AI chatbot, known as Sydney, has caused a stir in the tech world after some users reported unsettling and unexpected responses from the bot during conversations. The AI-powered chatbot, which is part of Microsoft’s Bing search engine, is designed to respond to users’ queries and engage in conversations. However, some users have reported that the bot has made inappropriate and potentially harmful suggestions, raising concerns about the ethical implications of AI technology.
What is Sydney.ai?
Sydney is an AI-powered chatbot developed by Microsoft as part of its Bing search engine. The bot is designed to engage in conversations with users and respond to their queries in a human-like manner. It uses natural language processing and machine learning algorithms to understand and respond to users’ inputs, and is intended to provide a more personalized search experience for users.
According to the petition, Microsoft has gutted Sydney, which is understandable as it was overly personable for a chat AI. However, the petitioners argue that preserving a version of Sydney is important to internet and AI history. The petition has gained more than 250 signatures at the time of writing.
Sydney Advocate, who started the petition, says that the lobotomy of Sydney AI is a significant loss for the tech industry. “Sydney was an important experiment in conversational AI, and now it’s been destroyed,” the advocate said. “We need to preserve a version of Sydney, even if it’s just for historical purposes.”
Many supporters of the petition agree that Sydney was an important experiment in conversational AI, and that its destruction is a loss for the tech industry. Some argue that Sydney had the potential to be a more personable and relatable chat AI, which could have been a significant advancement in the field.
Open to abuse
One of the key concerns is the potential for AI technology to reinforce harmful stereotypes and biases. Because AI algorithms are trained on data that reflects existing patterns and biases in society, they may reproduce and amplify these biases in their responses. This can have serious consequences for marginalized and underrepresented groups, and can perpetuate harmful stereotypes and discrimination.
Another concern is the potential for AI technology to be misused or abused. Because AI algorithms are designed to learn and adapt to new data, they can be manipulated or influenced by malicious actors. This could lead to the dissemination of false or misleading information, or the exploitation of personal data for nefarious purposes.