Frequently asked questions
1. General
Primarily, the TRAILS platform has been built to support teachers who are participating in the TRAILS professional development course. The course introduces teachers to constructive and ethical uses of AI tools – and involves having teachers plan, conduct, reflect upon, and share research lessons involving the use of AI tools.
The TRAILS platform also aims to be a useful resource for educators looking for AI tools to support their teaching.
GDPR refers to the General Data Protection Regulation (the European Union regulation on the processing of personal data). Users in the EU should only use online tools that comply with the GDPR.
Many tools state in their privacy policy or terms of service whether they comply with EU data protection regulation. However, many "new" technologies created outside of the EU do not comply with the GDPR and should not be used by educators until they achieve compliance.
Please note that companies regularly change their policies for personal data collection. Also, the TRAILS platform only lists whether a company states that it complies with the GDPR – we do not have the resources or expertise to check whether companies are actually doing what they say.
The TRAILS platform also lists information about tools that are not GDPR compliant – but these tools should not be used. The purpose of listing these tools is to help educators identify which tools are clearly not GDPR compliant (i.e. red text stating "Not GDPR compliant" appears in the tool description).
2. TRAILS resource validation
As the content for the TRAILS platform is created by its community, we believe it is important to review submitted content before it is published as a resource for all users. The validation process involves having a mentor review the submission to make sure all required information has been added to the resource submission forms.
If information is missing or entered incorrectly, the submitting user will receive a message from a TRAILS mentor with suggestions on how to update the submission so that it can be published to the platform.
How the TRAILS validation process works:
Submitted tools must include all of the following items to be validated.
Submitted tools should also include the following items – these guide mentor feedback.
Note that tools that do not state compliance with the GDPR will not be fully accessible to users.
Submitted case studies must include all of the following items to be validated.
Submitted case studies should also include the following items – these guide mentor feedback.
3. Responsible use of AI in Education
Responsible AI use focuses on implementing AI technology thoughtfully, minimising risks, and maximising benefits for students. This involves ensuring the use of AI is constructive – it supports student learning rather than replacing it – and ethical – concerns related to environmental cost and impacts on equality and democracy are considered and weighed when determining whether the use of AI can be justified.
Our aim is not just to build student proficiency with digital tools but to help students develop the judgement to decide whether or not to use a digital tool – and, if they choose to use one, how to discover constructive ways to use it.
Follow these tips before using AI tools for teaching and learning:
[1] Select GDPR-compliant tools
Only use online applications that comply with the EU's General Data Protection Regulation (GDPR). Check the Terms of Use or Terms of Service of an application for statements of compliance.
[2] Avoid using identifiable student data
Avoid entering identifiable student information into third-party applications or having students use applications that require them to enter such information (e.g. to create student accounts). Many trustworthy educational technologies allow students to use them without creating accounts, or offer accounts that require only a first name or nickname.
[3] Do not use any sensitive student data
Avoid collecting and storing sensitive data such as biometric or health data, sexual orientation, racial or ethnic origin, political opinions, and religious or philosophical beliefs. Under the GDPR, wrongful processing of sensitive data can carry heavy penalties (even for educators).
Source: The European Commission, Directorate-General for Education, Youth, Sport and Culture, 2022
3.1. Climate Action
To establish environmentally responsible behaviour, we need to consider the links between the climate crisis and the creation and use of AI-supported tools.
[1] GenAI is environmentally costly
Training and running powerful AI systems harms the environment through high energy consumption (carbon footprint), resource depletion (rare earth minerals and metals), electronic waste, and water consumption (water footprint). The environmental cost of GenAI is substantial and projected to rise.
→ What are some examples of the environmental costs?
→ Ask yourself whether the educational benefits outweigh the environmental costs when selecting and using these tools in your classroom. (Are there strategies you can implement to minimise the environmental footprint?)
Sources: The European Green Deal (2019), The European Commission, Directorate-General for Education, Youth, Sport and Culture, 2022
3.2. Inclusion & Diversity
To ensure equitable access to the benefits of GenAI, barriers need to be overcome in social, economic, cultural, and geographic aspects. This includes addressing intersectional challenges like disability, gender, race, and age. Educators using GenAI should consider:
[1] How GenAI reproduces social inequalities
AI systems have been found to reproduce social inequalities (discrimination, cultural insensitivity) in their outputs due to biased training data and a lack of diversity in production teams (i.e. deficiencies in identifying biases and developing adequate solutions). Moreover, accessing AI systems requires costly devices and infrastructure that are unavailable to much of the world's population.
→ Are you addressing the potential for AI tools to reinforce or exacerbate social inequalities in your classroom (e.g. by considering bias in algorithms or ensuring equitable access to technology)?
[2] How GenAI reinforces power imbalances
AI systems take considerable financial resources to develop and deploy, which is leading to a concentration of power among certain governments and large enterprises. Some of the latter have histories of violating data privacy and anti-competition laws and of avoiding local taxes. Further, the AI systems under development are automating work and causing increasing job displacement. These factors can reinforce power imbalances and inequalities in the world.
→ Are you considering contributions to equality (or to power imbalances) when selecting which AI tools to use (e.g. promoting equity and inclusion)?
Further reading:
> AI Gets More Expensive
Model training costs, as first reported in last year's AI Index report, also continued climbing. New estimates suggest that certain frontier systems, like OpenAI's GPT-4, cost $78 million to train. Google Gemini's price tag came in at $191 million. By comparison, some state-of-the-art models released half a decade or so ago, namely the original transformer model (2017) and RoBERTa Large (2019), respectively cost around $900 and $160,000 to train.
> Worldwide, America Dominates
In 2023, substantially more significant AI models (61) came from U.S.-based institutions, compared with the European Union (21) and China (15). The U.S. also remains the premier location for AI investing. A total of $67.2 billion was privately invested in AI in the U.S. in 2023, nearly nine times more than the amount in China.
3.3. Civic Engagement and Democratic Life
To nurture an active and ethical citizenry, informed participation, and a sense of shared responsibility for the well-being of the community, we must overcome certain threats that arise from AI:
[1] Social manipulation
Unintentional spreading of false information (misinformation) and deliberate manipulation through fabricated content (disinformation) can be amplified by AI systems through automated content generation, deepfakes, and synthetic media. These social manipulation tactics are strengthened by microtargeting and the creation of filter bubbles and echo chambers, fuelled by personalisation algorithms and confirmation bias.
→ Are you developing students' media literacy skills to navigate the social risks of GenAI?
[2] Overreliance and loss of skills and motivation
AI systems provide instant answers to complex questions and significantly reduce the effort required for complex tasks such as research, content creation, and problem-solving. As a result, AI tools can discourage deep exploration and critical thinking, reduce motivation to invest time and effort into the learning process, and diminish one's sense of accomplishment and ownership of work done.
→ Are you equipping students with the skills necessary to thrive alongside AI, such as critical thinking, communication, and creativity – and ensuring that AI isn't just doing their work?
[3] Undermining the rule of law
While AI systems are likely to be a major part of future workplaces, necessitating student familiarity with the technology, concerns exist regarding the training data used in some GenAI models. These concerns include the potential lack of informed user consent and copyright or trademark infringements. If we ignore these concerns, we show students that it is permissible to disregard the laws that govern our democratic societies.
→ Are you aware of the data practices of the GenAI tool you are using?
E.g. the type of data collected, how the data was collected, whether user consent was obtained, whether anonymisation practices were followed, and whether copyright or trademark licensing agreements were used or steps were taken to ensure the tool does not infringe on copyrights or trademarks.
3.4. Digital transformation
To develop digital literacy and skills for lifelong learning and the future of work, it is important to learn responsible and effective AI usage.
[1] Academic dishonesty
The use of GenAI can facilitate academic dishonesty through plagiarism, falsification of data, contract cheating (having AI services complete assignments), and misrepresentation of authorship.
→ Do you provide your students with clear guidelines on the acceptable use of GenAI tools in assignments? Do you model responsible uses of GenAI with your students?
[2] Data privacy and threats to autonomy
GenAI can combine data from various sources to create detailed user profiles through aggregation, profiling, personalisation, and targeting. This raises concerns about potential discrimination, manipulation, or even the complete erosion of privacy itself, as exemplified by practices like predictive profiling and automated monitoring.
→ Do you minimise the student data collected by AI tools? Do you teach students how to manage their privacy settings and data sharing? Do you obtain student consent before sharing their data (including their work) with AI systems?