Oops, Samsung employees accidentally leaked trade secrets via ChatGPT.

Never forget that anything you share with ChatGPT can be retained and used to further train the model. Samsung employees learned this the hard way after accidentally leaking top-secret Samsung data.

Samsung employees accidentally shared confidential information while using ChatGPT for help at work. Samsung's semiconductor division had allowed engineers to use ChatGPT to check source code.

But The Economist Korea reported three separate instances in which Samsung employees unwittingly leaked sensitive data to ChatGPT. In one case, an employee pasted confidential source code into the chat to check it for errors. In another, an employee shared code with ChatGPT and requested code optimization. In a third, an employee shared a recording of a meeting to convert into notes for a presentation. That data is now out in the wild, available to feed ChatGPT's training.

The leak is a real-world example of the hypothetical scenarios privacy experts have long been concerned about. Other scenarios include sharing confidential legal documents or other sensitive material for the purpose of summarizing or analyzing lengthy text, which could then be used to improve the model. Experts warn that this may violate GDPR compliance, which is why Italy recently banned ChatGPT.

In response, Samsung has limited ChatGPT upload capacity to 1,024 bytes per person and is investigating the people involved in the leak. It is also considering building its own in-house AI chatbot to prevent future mishaps. But Samsung is unlikely to be able to recall any of the leaked data. ChatGPT's data policy states that it uses conversation data to train its models unless you request to opt out, and ChatGPT's usage guide explicitly warns users not to share sensitive information in conversations.

Consider this a cautionary tale to remember the next time you turn to ChatGPT for help. Samsung certainly will.
