Top 2 things to know about OpenAI's privacy rules before using ChatGPT

OpenAI's ChatGPT has grown exponentially since its launch and has shown it can handle a wide range of user queries, from writing emails to generating code, giving rise to numerous use cases. With Italy banning ChatGPT after raising concerns about a recent data breach and the legal basis for using personal data to train the model, it has become important for users to understand OpenAI's privacy rules so they can protect their personal data. In this article, we highlight the top 2 things users should keep in mind to safeguard their personal data privacy.

1. ChatGPT may collect user data for system improvement. Before first use, ChatGPT shows a pop-up asking users not to share sensitive data and warning that conversations may be used to re-train the models.
    • SetuServ's solution: Users can opt out of having their content used to improve OpenAI services at any time by filling out this form. Given this data privacy challenge, it is currently safer to use ChatGPT only on public data. Alternatively, anonymize any internal data on your end before loading it into ChatGPT.

2. Starting March 1st, 2023, OpenAI will not use data submitted by customers via the OpenAI API to train or improve its models unless a customer explicitly opts in to share that data.

    • The good news is that companies can now use the OpenAI API, rather than the ChatGPT interface, to access the GPT models while keeping their data private. Reaching out to companies like SetuServ, which have a thorough understanding of LLMs, their limitations, and the data privacy and security issues involved, can help companies implement LLMs on their data while addressing any privacy concerns.
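The anonymization advice in point 1 above can be sketched in code. The snippet below is a minimal illustration, not a production anonymizer: it masks two common identifier types (email addresses and phone numbers) with regular expressions before text would be sent to an external service. The function name `anonymize` and the regex patterns are our own illustrative choices; real PII removal should use a vetted tool and a broader set of patterns.

```python
import re

# Illustrative patterns only -- real anonymization needs a vetted PII tool
# and far more coverage (names, addresses, IDs, etc.).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def anonymize(text: str) -> str:
    """Replace emails and phone numbers with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

masked = anonymize("Contact Jane at jane.doe@example.com or +1 (555) 123-4567.")
print(masked)  # Contact Jane at [EMAIL] or [PHONE].
```

Masking on your own side, before any text leaves your systems, means the privacy guarantee does not depend on the downstream service's retention policy.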