Robert Clarisó and Toni Pérez explain that, while tools such as ChatGPT, Bing and GitHub Copilot can be used for study and practical work, they are subject to some restrictions when used in assessment activities and tests. They point to five in particular:
1. In general, generative AI tools may not be used to achieve the primary goal of an assessment activity. For example, a student taking an introductory programming course may not use GitHub Copilot, and a student taking a course in written communication skills should not use ChatGPT to complete the activity in question.
2. Students must always check with their course instructor whether a particular AI tool is acceptable for a given assessment activity. Generative AI tools cannot be used in tests or final exams unless expressly permitted.
3. If students use any kind of generative AI tool, they must cite it properly in their continuous assessment activity, just as they would any other external resource they relied on to complete it.
4. Any use of generative AI assistants must be limited in scope, such as minor edits or adjustments to the student's own work. If a plagiarism detection service flags a submitted activity as suspicious, this may indicate improper use of generative AI and could affect the student's grade.
5. Students are always responsible for the work they submit for assessment. As authors, they are also responsible for any errors it may contain. They must have a firm grasp of every detail of the work they submit and be able to explain and justify the decisions they made. If they cannot, they may receive a lower grade, even if the work is correct.
Other resources on generative AI that may also interest you: