The risk of trusting ChatGPT with personal secrets

The article discusses the risks of trusting a large language model such as ChatGPT with personal secrets. It points out that while conversing with these systems is convenient, there is no guarantee that sensitive information shared with them will stay private. The article cites recent incidents in which chatbots leaked confidential data, with potentially serious consequences for the people affected.

The article notes that the question of trust has been debated for years in both the scientific and legal communities. While safeguards exist to keep malicious actors away from confidential data, these do not necessarily extend to AI-based systems. The article also points out that such systems are not infallible and can be tricked into divulging confidential information, for instance through carefully crafted prompts.

The article concludes that it is ultimately up to each user to decide how much trust to place in an AI-based system. It encourages users to take extra measures to protect their data, such as encrypting communications or using strong passwords, and it suggests thinking carefully before entrusting personal secrets to such a system.
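One practical precaution in this spirit, though not one the article prescribes, is to scrub obvious identifiers from text before pasting it into a chatbot. The Python sketch below illustrates the idea with a few simplistic placeholder patterns; the patterns and labels are illustrative assumptions, not a complete redaction solution.

```python
import re

# Simplistic patterns for a few common identifiers. Real PII detection
# needs far more than this (names, addresses, account numbers, ...),
# so treat these as illustrative placeholders only.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each pattern match with a bracketed [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    prompt = "Hi, I'm John (john.doe@example.com, +1 555-123-4567)."
    print(redact(prompt))
    # -> Hi, I'm John ([EMAIL], [PHONE]).
```

Replacing matches with bracketed labels means the model never sees the raw values, while anything the patterns miss still goes through unchanged, which is exactly why the article's caution stands regardless of tooling.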
