
AI Guidelines for Students

Generative AI Basics

From the UNESCO Guidance for Generative AI in Education and Research (2023):

“Generative AI (GenAI) is an artificial intelligence (AI) technology that automatically generates content in response to prompts written in natural-language conversational interfaces. Rather than simply curating existing webpages by drawing on existing content, GenAI actually produces new content. The content can appear in formats that comprise all symbolic representations of human thinking: texts written in natural language, images (including photographs, digital paintings and cartoons), videos, music and software code.

GenAI is trained using data collected from webpages, social media conversations and other online media. It generates its content by statistically analysing the distributions of words, pixels or other elements in the data that it has ingested and identifying and repeating common patterns (for example, which words typically follow which other words).  While GenAI can produce new content, it cannot generate new ideas or solutions to real-world challenges, as it does not understand real-world objects or social relations that underpin language. Moreover, despite its fluent and impressive output, GenAI cannot be trusted to be accurate”  (UNESCO, 2023, p. 8). 
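
To make the phrase "which words typically follow which other words" concrete, here is a minimal, hypothetical Python sketch (not how any production GenAI system is actually built) that counts word pairs in a tiny sample text and "predicts" the most likely next word. Real models learn from billions of words and far more complex patterns, but the basic idea of repeating statistical regularities found in training data is the same.

from collections import Counter, defaultdict

# A tiny "training corpus" (real systems ingest billions of words).
text = "the cat sat on the mat and the cat chased the dog"
words = text.split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

# "Generate" the statistically most likely word after "the".
print(next_word_counts["the"].most_common(1))  # prints [('cat', 2)]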

Terms & Jargon

Algorithm - A sequence of instructions for solving a problem or performing a task. Algorithms define how an artificial intelligence system processes input data to recognize patterns, make decisions, and generate outputs.

Artificial Intelligence (AI) - Computer systems designed to perform tasks associated with human intelligence, such as pattern recognition or decision making.

Bias - In the context of large language models, systematic errors that result from the training data. Biased training data can lead a model to falsely attribute certain characteristics to particular races or groups based on stereotypes.

Chatbot - A program that communicates with humans through a text-based interface, typically built on top of a large language model. Examples include OpenAI's ChatGPT and Google's Gemini (formerly Bard). While many people use "chatbot" and "LLM" interchangeably, the chatbot is technically the user interface built on top of an LLM.

Data Mining - The process of discovering new knowledge from large amounts of data. It is a subset of AI that uses machine learning and mathematical techniques to extract knowledge that can be applied to fraud detection, risk management, and more.

Deep Learning - A method of AI, and a subfield of machine learning, that uses artificial neural networks with many layers to recognize complex patterns in images, sound, and text. The approach is loosely inspired by the structure of the human brain.

Emergent Behavior - When an AI model exhibits capabilities that it was not explicitly trained or designed to have.

Ethical/Responsible AI - The ethical and thoughtful development and use of AI systems in which fairness, transparency, privacy, and societal impact are primary considerations, with the goal of ensuring that AI benefits society while minimizing potential harms.

Generative Artificial Intelligence (GenAI) - A subfield of artificial intelligence referring to models capable of generating content such as language, images, or music. The output of GenAI models is based on patterns learned from extensive training datasets.

Hallucination - In the context of AI, a falsehood presented as truth by a large language model. For example, the model may confidently fabricate details about an event, provide incorrect dates, create false citations, or dispense incorrect medical advice.

Large Language Model (LLM) - A type of neural network that learns skills — including generating prose, conducting conversations and writing computer code — by analyzing vast amounts of text from across the internet. The basic function is to predict the next word in a sequence, but these models have surprised experts by learning new abilities.

Machine Learning - A field of computer science in which a system learns patterns or trends from underlying data. Machine learning algorithms perform tasks like prediction or decision making.

Natural Language Processing (NLP) - A branch of AI that deals with the interaction between computers and human (natural) languages. It encompasses a wide range of tasks, including text analysis, chatbots, speech recognition, and more.

Neural Network - A mathematical system, modeled on the human brain, that learns skills by finding statistical patterns in data. It consists of layers of artificial neurons: the first layer receives the input data, and the last layer outputs the results (a small illustrative sketch follows this glossary). Even the experts who create neural networks don’t always understand what happens in between.

Pattern Recognition - A field of computer science concerned with the automatic discovery of predictive information from data. It is an area of machine learning that develops algorithms that can recognize patterns in data and use them to make predictions.

Plugins - Software components or modules that can be added to existing programs or systems to extend their functionality, allowing for customization without altering the core software.

Prompt - In the context of AI, the input text written by a human and given to a generative AI model. The prompt often describes what you are looking for, but may also give specific instructions about style, tone, or format.

Training Data - The content used to teach a machine learning system how to perform a particular task. Training data gives the system a knowledge base from which the model can make predictions or identify patterns. Training data might include images, text, code, or other types of media.

 

Adapted from: Glossary of Terms, Artificial Intelligence (A.I.) & Information Literacy, Wolfgram Memorial Library, Widener University, Accessed 24 Feb. 2025; AI Jargon/Terms, Student Guide to Generative AI, UNC University Libraries, Accessed 24 Feb. 2025.
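
For illustration only, the toy Python sketch below (a hypothetical example, not taken from the sources above) shows the arithmetic inside a single artificial neuron, the building block mentioned in the Neural Network entry: inputs are multiplied by weights, summed with a bias, and passed through a simple activation function. A neural network stacks many such neurons into layers, and the weights are learned from training data rather than written by hand.

# One artificial neuron: a weighted sum of its inputs followed by an activation.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, total)  # ReLU activation: negative totals become 0

# Hypothetical numbers; in a real network the weights and bias are learned, not hand-picked.
print(neuron(inputs=[0.5, 0.8], weights=[0.9, -0.4], bias=0.1))  # prints roughly 0.23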

GenAI Tools

Prompt Engineering & Design

Use the following mnemonic frameworks, summarized by Park and Choo (2024, cited below), to design effective AI prompts: IDEA, PARTS, CLEAR, and REFINE.

Source: Park, J., & Choo, S. (2024). Generative AI Prompt Engineering for Educators: Practical Strategies. Journal of Special Education Technology. https://doi.org/10.1177/01626434241298954
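
As a generic illustration (not drawn from the Park and Choo frameworks referenced above), an effective prompt typically spells out the task, the audience, the tone, and the desired format rather than leaving those choices to the model. For example:

"Act as a writing tutor. Summarize the attached article in three bullet points, using a formal tone suitable for a first-year psychology course, and end with one open-ended question for class discussion."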

Citing GenAI

Ethical Concerns

"AI Detector tools vary a great deal in terms of accuracy and consistency and have generated many 'false positives' flagging students' original writing as being produced by generative AI.  While traditional plagiarism detector tools scan student writing for copied and uncited sections of text, AI detector tools cannot do the same because of the way LLMs work.  Generative AI creates new patterns of language in each writing sample that are devoid of context and source and, therefore, cannot easily be detected by a tool." - AI Detector Tools, A.I. (Artificial Intelligence) & Information Literacy, Wolfgram Memorial Library, Widener University
"One of the most pressing ethical concerns of AI is algorithmic bias. Algorithmic bias occurs when the data used to train AI systems reflects the biases and prejudices of society, resulting in discriminatory outputs." - Teaching AI Ethics: Bias and Discrimination, Leo Furze
"Copyright is a hugely contentious aspect of the current wave of Artificial Intelligence, particularly in the field of AI image generation. As AI continues to advance and both artists and laypeople produce creative works, questions are cropping up about who owns the copyright to those works. With AI it’s possible to create “original” art, music, and literature, but the line between what human-generated and AI-generated is increasingly blurred..." -Teaching AI Ethics: Copyright, Leo Furze
"The concept of 'truth' is a significant ethical issue related to AI systems like ChatGPT. Since its launch...there have been two main concerns: first, the likelihood of AI models generating false or fabricated content, and second, the potential for individuals to exploit them for dishonest purposes, including academic cheating and intentional dissemination of false information." - Teaching AI Ethics: Truth and Academic Integrity, Leo Furze
"AI has become a potential threat to academic integrity, as tools like ChatGPT make it easier for students to access and use generated content for cheating purposes. The ease of generating human-like content could tempt students to bypass the hard work of research and writing." - Teaching AI Ethics: Truth and Academic Integrity, Leo Furze
"There are growing concerns about the impact of Artificial Intelligence technologies on our privacy. AI systems are often 'black boxes,' making it hard to understand how they arrive at their decisions and raising questions about transparency...The use of personal data in AI training data and the potential for data breaches and cyber attacks also pose significant privacy risks to individuals and organisations..." - Teaching AI Ethics: Privacy, Leo Furze