{"id":2001,"date":"2023-02-08T00:56:24","date_gmt":"2023-02-08T00:56:24","guid":{"rendered":"https:\/\/promptmuse.com\/?p=2001"},"modified":"2023-04-07T10:14:56","modified_gmt":"2023-04-07T10:14:56","slug":"openais-watermarking-to-halt-misuse-of-gpt-3-and-chatgpt-outputs","status":"publish","type":"post","link":"https:\/\/promptmuse.com\/openais-watermarking-to-halt-misuse-of-gpt-3-and-chatgpt-outputs\/","title":{"rendered":"OpenAI’s Watermarking to Halt Misuse of GPT-3 and ChatGPT Outputs"},"content":{"rendered":"
OpenAI, the leading artificial intelligence research laboratory, is taking steps to prevent users from misusing content generated by its AI models. With the growing popularity of its GPT-3 and ChatGPT models, it has become evident that some users are deploying them for unethical purposes, such as cheating on homework, creating fake news, and running social media bots. To tackle this issue, OpenAI has decided to implement a watermarking system to detect and track the misuse of AI-generated content<\/a>.<\/p>\n According to Tom Goldstein<\/a>, Perotto Associate Professor in the Department of Computer Science at the University of Maryland, watermarking of this kind can be remarkably effective: his team detected just 23 words of output from a 1.3-billion-parameter watermarked language model with 99.999999999994%<\/strong> confidence. Such high confidence is possible because the watermark embeds a detectable statistical signature in the text the model generates.<\/p>\n #OpenAI<\/a> is planning to stop #ChatGPT<\/a> users from making social media bots and cheating on homework by “watermarking” outputs. How well could this really work? Here’s just 23 words from a 1.3B parameter watermarked LLM. We detected it with 99.999999999994% confidence. Here’s how \ud83e\uddf5 pic.twitter.com\/pVC9M3qPyQ<\/a><\/p>\n \u2014 Tom Goldstein (@tomgoldsteincs) January 25, 2023<\/a><\/p><\/blockquote>\nHow Effective is OpenAI’s Watermarking System?<\/h2>\n
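OpenAI has not published the internals of its watermarking system, but Goldstein's thread describes the public "green list" approach his group proposed: the generator quietly favors a pseudo-random subset of the vocabulary chosen from each preceding token, and a detector tests whether those favored tokens are statistically over-represented. The sketch below illustrates only the detection side; the hash scheme, seed, and green-list fraction are illustrative assumptions, not OpenAI's actual method:

```python
import hashlib
import math

def green_fraction_z_score(tokens, greenlist_fraction=0.25, seed=42):
    """Illustrative green-list watermark detection test.

    For each (previous, current) token pair, re-derive whether the
    current token falls in the pseudo-random "green" subset seeded by
    the previous token, then z-test the observed green count against
    the fraction expected from unwatermarked text.
    """
    green_hits = 0
    for prev, cur in zip(tokens, tokens[1:]):
        # Seed the green list from the previous token (assumed scheme).
        h = hashlib.sha256(f"{seed}:{prev}".encode()).digest()
        # The current token is "green" if its hash bucket lands below
        # the green-list fraction of the [0, 1) range.
        bucket = int.from_bytes(
            hashlib.sha256(h + str(cur).encode()).digest()[:4], "big"
        )
        if bucket / 2**32 < greenlist_fraction:
            green_hits += 1
    n = len(tokens) - 1
    expected = greenlist_fraction * n
    std = math.sqrt(n * greenlist_fraction * (1 - greenlist_fraction))
    # A large positive z-score means green tokens are over-represented,
    # i.e. the text was likely produced by the watermarked model.
    return (green_hits - expected) / std
```

Ordinary text scores near zero, while watermarked text pushes the z-score far into the tail; an astronomically high confidence figure like the one in Goldstein's tweet corresponds to a very large z-score from only a couple dozen tokens.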
\n