The legal situation regarding the use of generative AI
As teachers and education professionals, we have many doubts about the legal issues surrounding ChatGPT and other forms of generative artificial intelligence (AI), even though students and workers alike are already using them widely.
The European, Spanish and Catalan data protection authorities do not recommend using ChatGPT or other generative AI until the legal context surrounding these technologies has been clarified. The chief concerns relate to data protection, intellectual property and confidentiality.
Data protection
The data in our chats, which may be confidential, are used to train the system. Although ChatGPT offers a setting to indicate that we do not want our data used in this way, it is not clear how this works or whether it is actually honoured.
Until the European Data Protection Board issues a ruling on the matter, the Catalan Data Protection Authority recommends that ChatGPT not be used in the public (and university) sector when personal data are involved.
Intellectual property
The European authorities provide for the possibility of using existing digital material for data-mining purposes. However, it is not clear whether the material used to train ChatGPT was gathered lawfully, and rights holders must also have been given the opportunity to object to their content being used for this training.
Many questions remain as to whether the content used to train the AI is protected by intellectual property legislation and whether its use infringes the associated rights, since the system draws on third-party data and information without the authors' authorization and without citing its sources. In short, what we obtain from ChatGPT may infringe copyright.
Confidentiality
ChatGPT offers no guarantees about the security of the information it stores, and the settings that tell the tool not to use these data do not guarantee that the information will not reach third parties. It is therefore not advisable to include any strategic or confidential documentation, or anything the organization has classified as restricted.
In practice
Since its use is already widespread, the recommendation not to use it is difficult to follow, so a set of guidelines is needed. Bearing this in mind, if you use ChatGPT, take the following precautions:
- Before entering any content (questions for assignments or student answers), analyse it to make sure it does not infringe any intellectual property, data protection or confidentiality rights.
- Do not provide any content that shares personal data or information that could help identify a person, any content over which usage rights are not held, or any information that is reserved, confidential or of strategic value to the university (a minimal redaction sketch follows this list).
- Data subjects and other interested parties must be duly informed of the use made of the content, and their express consent, authorization and/or the necessary licences obtained.
- Although we cannot know for sure how the content we submit to ChatGPT will be used, it is best to opt out of that content being used to improve the system and to turn off chat saving, so that our sessions are not used to train the AI.
- It may be worth considering whether it is really necessary to share student answers and whether, in certain cases, fictitious answers could be used instead.
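As an illustration of the first two precautions, the sketch below pre-screens a piece of text for a few obvious personal identifiers before it is pasted into a generative AI tool. It is only a hypothetical example: the patterns, names and the Spanish-style ID format are assumptions, anything outside those patterns (names, addresses, student numbers) passes straight through, and it is no substitute for careful human review.

```python
import re

# Illustrative patterns only: real personal data takes many more forms
# (names, addresses, student numbers, etc.), so this is a first filter,
# not a guarantee of compliance.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d .-]{7,}\d"),
    # Hypothetical Spanish DNI/NIE-style identifier: optional X/Y/Z, 7-8 digits, check letter.
    "id_number": re.compile(r"\b[XYZ]?\d{7,8}[A-Z]\b", re.IGNORECASE),
}

def redact(text: str) -> str:
    """Replace anything matching the patterns above with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

if __name__ == "__main__":
    sample = ("Answer submitted by a student "
              "(jane.doe@uni.example, +34 612 345 678, DNI 12345678Z).")
    print(redact(sample))
    # -> Answer submitted by a student ([EMAIL REMOVED], [PHONE REMOVED], DNI [ID_NUMBER REMOVED]).
```

Even with a filter of this kind, the safest course remains the one described above: do not paste confidential or personal material into these tools at all.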
If you plan to introduce ChatGPT or other generative AI into activities and assignments (for example, by having students critique and improve the tool's answers, by using it for gamification, or through other approaches or methodologies), we recommend analysing all possible risks before doing so.