
Frequently asked questions

1. General

Who is this website intended for?

Primarily, the TRAILS platform has been built to support teachers who are participating in the TRAILS professional development course. The course introduces teachers to constructive and ethical uses of AI tools – and involves having them plan, conduct, reflect upon, and share research lessons that use AI tools.

The TRAILS platform also aims to be a useful resource for educators looking for AI tools to support their teaching.

What does GDPR compliance refer to?

GDPR refers to the General Data Protection Regulation (the European Union regulation on the processing of personal data). Users in the EU should only use online tools that comply with the GDPR.

Many tools state in their privacy policy or terms of service whether they comply with EU data protection regulation. However, many "new" technologies created outside of the EU do not comply with the GDPR and should not be used by educators until they reach compliance.

Please note that companies regularly change their policies for personal data collection. Also, the TRAILS platform only lists whether a company states that it complies with the GDPR – we do not have the resources or expertise to verify that companies are actually doing what they say.

The TRAILS platform also lists information about tools that are not GDPR compliant – but these tools should not be used. The purpose of listing these tools is to help educators identify which tools are clearly not GDPR compliant (i.e. red text stating "Not GDPR compliant" appears in the tool description).

2. TRAILS resource validation

What is the purpose of the TRAILS resource validation process?

As the content for the TRAILS platform is created by its community, we believe it is important to review submitted content before it is published as a resource for all users. The validation process involves having a mentor review the submission to make sure all required information has been added to the resource submission forms.

If information is missing or entered incorrectly, the submitting user will receive a message from a TRAILS mentor with suggestions on how to update the submission so that it can be published to the platform.

What is the TRAILS resource validation process? How does it work?

How the TRAILS validation process works:

  • Teachers submit a tool or case study for validation by clicking on the "Validate" button in the submission form. 
  • Administrators (mentors) receive a notification that a resource (tool or case study) needs to be reviewed.
  • A mentor accesses the resource and is shown a validation form with a checklist of items and space to add comments.
  • The mentor assesses the resource and adds comments for any improvements that are needed.
  • The teacher receives a notification that feedback has been given. 
  • The feedback appears at the top of their submitted resource. The teacher can respond to the mentor's comments.
  • If not validated, the teacher receives the checklist showing where the deficiencies are. (They must make corrections and resubmit for validation – once the checklist/report is sent, it is removed from the administrators' list of resources to review.)
  • If validated by the mentor, the teacher is notified and the resource is published.
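
For readers who prefer a compact summary, here is a minimal sketch of the lifecycle these steps describe, written as a short Python sketch. It is purely illustrative – the state names and the review function are our own shorthand, not the platform's actual implementation.

  from enum import Enum, auto

  class ResourceState(Enum):
      SUBMITTED = auto()   # teacher clicks "Validate" in the submission form
      RETURNED = auto()    # checklist/feedback sent; item leaves the mentors' review queue
      PUBLISHED = auto()   # validated by a mentor and visible to all users

  def review(checklist_passed: bool) -> ResourceState:
      """One mentor review cycle for a submitted resource."""
      if checklist_passed:
          return ResourceState.PUBLISHED
      # Otherwise the teacher corrects the submission and resubmits,
      # which moves the resource back to SUBMITTED.
      return ResourceState.RETURNED
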
What are the validation criteria for tools?

Submitted tools must have all these items to be validated.

  1. The submission is unique (not a duplicate of an existing tool in the platform).
  2. The name of the tool is accurate.
  3. Education level, language, GDPR compliance, and account requirements are marked appropriately.
  4. The link to the tool is a working public link (and not a referral link).
  5. The tool description is understandable and relevant to educators. 
  6. Student and teacher functions are marked appropriately.
  7. The comment on compliance is clear and referenced.
  8. An appropriate image has been added. 
  9. The submission appears to be free of ethical issues (no IP/copyright violations, no personal data).

The submitted tools should also have the following items – these are not strictly required for validation, but they guide mentor feedback.

  1. The comment on compliance includes a link to the tool’s terms of use (terms of service).
  2. Teacher comments include recommended best practices for the tool.
  3. The submission is clearly relevant to educators and includes example use cases.

Note that tools that do not state compliance with the General Data Protection Regulation (the European Union regulation on the processing of personal data) will not be fully accessible to users.

What are the validation criteria for case studies?

The submitted case studies must have these items to be validated.

  1. The submission relates to the use of an AI tool with students.
  2. The submission appears to be free of IP/Copyright issues.
  3. The submission does not include identifiable student data.
  4. The title is descriptive and unique.
  5. Education level, duration, subject and language are marked appropriately.
  6. The public summary is understandable and relevant to educators.
  7. A tool is linked.
  8. Learning objectives and lesson actions are appropriately described.
  9. Teacher insights are described.

The submitted case studies should also have the following items – these are not strictly required for validation, but they guide mentor feedback.

  1. Motivation for conducting the case study lesson is described.
  2. Ethical considerations are selected and elaborated on.
  3. Evaluation activities are detailed.
  4. Quotes from participating teachers or students are added.
  5. Teacher recommendations are included.
  6. Images are added to the submission that help users better understand it.

3. Responsible use of AI in Education

What do you mean by the responsible use of AI in education?

Responsible AI use focuses on implementing AI technology thoughtfully, minimising risks, and maximising benefits for students. This involves ensuring the use of AI is constructive – it supports student learning rather than replacing it – and ethical – concerns related to environmental cost and impact on equality and democracy are considered and weighed when determining whether the use of AI can be justified.

Our aim is not just to build student proficiency with digital tools but to help students develop their decision making: deciding whether or not to use a digital tool and, when choosing to use one, discovering constructive ways to use it.

GenAI Safety Tips for Educators

Follow these tips before using AI tools for teaching and learning:

[1] Select GDPR-compliant tools
Only use online applications that comply with the EU’s General Data Protection Regulation (GDPR). Check the Terms of Use or Terms of Service of an application for a statement of compliance.

[2] Avoid using identifiable student data
Avoid entering identifiable student information into third-party applications or having students use applications that require them to enter such information (e.g. to create student accounts). Many trustworthy educational technologies allow students to use them without creating accounts, or offer student accounts that require sharing only a first name or nickname.

[3] Do not use any sensitive student data
Avoid collecting and storing sensitive data such as biometric or health data, sexual orientation, racial or ethnic origin, political opinions, or religious or philosophical beliefs. Under the GDPR, wrongful processing of sensitive data can carry heavy penalties (even for educators).

Source: The European Commission, Directorate-General for Education, Youth, Sport and Culture, 2022

3.1. Climate Action

What does GenAI have to do with climate action?

To establish environmentally responsible behaviour, there is a need to consider the links between the climate crisis and the creation and use of AI-supported tools.

[1] GenAI is environmentally costly
Training and running powerful AI systems negatively impacts the environment: high energy consumption (carbon footprint), resource depletion (rare earth minerals and metals), electronic waste, and water consumption (water footprint). The environmental cost of GenAI is substantial and projected to rise. 

What are some examples of the environmental costs?

  • (1) The high environmental costs of training AI
    Large models produce large carbon emissions – driven by the number of parameters in the model, the power usage effectiveness of the data center, and even grid efficiency. Of the models compared, GPT-3 was by far the heaviest carbon emitter, but even the relatively more efficient BLOOM took 433 MWh of power to train, which would be enough to power the average American home for 41 years.
     
  • (2) Sustainable AI: AI for sustainability and the sustainability of AI
    A well-known study by Strubell et al. illustrated that the process of training a single deep learning natural language processing (NLP) model on a GPU can lead to approx. 600,000 lb of carbon dioxide emissions [17]. Compare this to familiar consumption and you’re looking at roughly the same amount of carbon dioxide as five cars emit over their lifetimes. Other studies have shown that ‘Google’s AlphaGo Zero generated 96 tonnes of CO2 over 40 days of research training which amounts to 1000 h of air travel or a carbon footprint of 23 American homes’ [1, 15].
     
  • (3) Reducing the Carbon Impact of Generative AI Inference (today and in 2035)
    For example, generative AI-backed search can cost 5 times more compute per request [53], requiring billions of dollars of computing infrastructure [56], and increasing associated embodied and operational carbon emissions.

    > A ChatGPT-like application with estimated use of 11 million requests/hour produces emissions of 12.8k metric ton CO2/year, 25 times the emissions for training GPT-3. Inference is critical to environmental and power cost.
     
  • (4) A.I. tools fueled a 34% spike in Microsoft’s water consumption
    Building a large language model requires analyzing patterns across a huge trove of human-written text. All of that computing takes a lot of electricity and generates a lot of heat. To keep it cool on hot days, data centers need to pump in water — often to a cooling tower outside their warehouse-sized buildings.

    In its latest environmental report, Microsoft disclosed that its global water consumption spiked 34% from 2021 to 2022 (to nearly 1.7 billion gallons, or more than 2,500 Olympic-sized swimming pools).

    Google reported 20% growth in water use over the same period, which Shaolei Ren, a researcher at the University of California, Riverside, also largely attributes to its AI work.

    Ren’s team estimates that ChatGPT gulps up 500 milliliters of water (close to what’s in a 16-ounce bottle) for every series of 5 to 50 prompts or questions you ask it.

    (Li, P., Yang, J., Islam, M. A., & Ren, S. (2023). Making AI less “thirsty”: Uncovering and addressing the secret water footprint of AI models. arXiv preprint arXiv:2304.03271.)
     
  • (5) AI drives 48% increase in Google emissions
    https://www.bbc.com/news/articles/c51yvz51k2xo

→ Ask yourself: do the educational benefits outweigh the environmental costs when selecting and using these tools in your classroom? (Are there strategies you can implement to minimize the environmental footprint?)
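
These headline numbers are easier to interpret with a quick back-of-envelope check. The short Python sketch below re-derives the comparisons quoted in examples (1) to (4); the reference values for household electricity use, car-lifetime emissions, and pool volume are our own assumptions, not figures taken from the sources above.

  # Back-of-envelope checks of the figures quoted in the examples above.
  US_HOME_MWH_PER_YEAR = 10.6     # assumption: average US household electricity use
  CAR_LIFETIME_LB_CO2 = 126_000   # assumption: average car incl. fuel (Strubell et al.'s reference value)
  OLYMPIC_POOL_GALLONS = 660_000  # assumption: volume of an Olympic-size pool in US gallons

  # (1) BLOOM's 433 MWh of training power, expressed in home-years:
  print(433 / US_HOME_MWH_PER_YEAR)      # ~40.8, i.e. the "41 years" quoted

  # (2) ~600,000 lb of CO2 from training one NLP model, in car lifetimes:
  print(600_000 / CAR_LIFETIME_LB_CO2)   # ~4.8, i.e. the "five cars" quoted

  # (3) If 12.8k metric tons of CO2/year from inference is 25x GPT-3's
  #     training emissions, GPT-3's training emitted roughly:
  print(12_800 / 25)                     # 512 metric tons of CO2

  # (4) Microsoft's 1.7 billion gallons of water, in Olympic pools:
  print(1.7e9 / OLYMPIC_POOL_GALLONS)    # ~2576, i.e. "more than 2,500" pools
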

Sources: The European Green Deal (2019), The European Commission, Directorate-General for Education, Youth, Sport and Culture, 2022

3.2. Inclusion & Diversity

How does GenAI impact inclusion and diversity efforts?

To ensure equitable access to the benefits of GenAI, social, economic, cultural, and geographic barriers need to be overcome. This includes addressing intersectional challenges related to disability, gender, race, and age. Educators using GenAI should consider:

[1] How GenAI reproduces social inequalities
AI systems have been found to reproduce social inequalities (discrimination, cultural insensitivity) in their outputs due to biased training data and a lack of diversity in production teams (i.e. deficiencies in identifying biases and developing adequate solutions). Moreover, accessing AI systems requires costly devices and infrastructure that are not available to much of the world’s population.
→ Are you addressing the potential for AI tools to reinforce or exacerbate social inequalities in your classroom (e.g., by considering bias in algorithms or ensuring equitable access to technology)?

[2] How GenAI reinforces power imbalances
AI systems require considerable financial resources to develop and deploy, which is leading to a concentration of power among certain governments and large enterprises. Some of the latter have histories of violating data privacy and anti-competition laws, and of avoiding payment of local taxes. Further, the AI systems under development are automating work and leading to an increasing number of job displacements. These factors can reinforce power imbalances and inequalities in the world.
→ Are you considering contributions to equality (or power imbalance) when selecting which AI tools to use (e.g. promoting equity and inclusion)?

Further reading:

  • AI Gets More Expensive
    Model training costs, as first reported in last year’s AI Index report, also continued climbing. New estimates suggest that certain frontier systems, like OpenAI’s GPT-4, cost $78 million to train. Google Gemini’s price tag came in at $191 million. By comparison, some state-of-the-art models released half a decade or so ago – namely the original transformer model (2017) and RoBERTa Large (2019) – respectively cost around $900 and $160,000 to train (a quick ratio check follows this list).

  • Worldwide, America Dominates
    In 2023, substantially more significant AI models (61) came from U.S.-based institutions, compared with the European Union (21) and China (15). The U.S. also remains the premier location for AI investing. A total of $67.2 billion was privately invested in AI in the U.S. in 2023, nearly nine times more than the amount in China. 
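
The scale of this cost growth is easier to grasp as ratios. Here is a quick calculation using the AI Index estimates quoted above (the figures come from the report; the arithmetic is ours):

  # Estimated training costs (USD) quoted from the AI Index report above.
  costs = {
      "Transformer (2017)": 900,
      "RoBERTa Large (2019)": 160_000,
      "GPT-4 (2023)": 78_000_000,
      "Gemini (2023)": 191_000_000,
  }
  baseline = costs["Transformer (2017)"]
  for model, cost in costs.items():
      print(f"{model}: ${cost:,} (~{cost / baseline:,.0f}x the 2017 transformer)")
  # GPT-4's estimated training cost is roughly 87,000x that of the original transformer.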

Sources: 

  1. Implementation guidelines Erasmus+ and European Solidarity Corps Inclusion and Diversity Strategy (2021)
  2. The European Commission, Directorate-General for Education, Youth, Sport and Culture, 2022
  3. Akter, S., McCarthy, G., Sajib, S., Michael, K., Dwivedi, Y. K., D’Ambra, J., & Shen, K. N. (2021). Algorithmic bias in data-driven innovation in the age of AI. International Journal of Information Management, 60, 102387. https://doi.org/10.1016/j.ijinfomgt.2021.102387 
  4. Nguyen, A., Ngo, H. N., Hong, Y., Dang, B., & Nguyen, B. P. T. (2023). Ethical principles for artificial intelligence in education. Education and Information Technologies, 28(4), 4221-4241. https://doi.org/10.1007/s10639-022-11316-w 

3.3. Civic Engagement and Democratic Life

How can GenAI impact Civic Engagement and Democratic Life?

To nurture an active and ethical citizenry, informed participation, and a sense of shared responsibility for the well-being of the community, we must overcome certain threats that arise from AI:

[1] Social manipulation
Unintentional spreading of false information (misinformation) and deliberate manipulation through fabricated content (disinformation) can be amplified by automated content generation, deepfakes, and synthetic media – in other words, the use of AI systems. These social manipulation tactics are strengthened by microtargeting and the creation of filter bubbles and echo chambers, fueled by personalization algorithms and confirmation bias.
→ Are you developing students' media literacy skills to navigate the social risks of GenAI?

[2] Overreliance and loss of skills and motivation
AI systems provide instant answers to complex questions and significantly reduce the effort required for complex tasks such as research, content creation, and problem-solving. As a result, AI tools can discourage deep exploration and critical thinking, reduce motivation to invest time and effort in the learning process, and diminish one’s sense of accomplishment and ownership of work done.
→ Are you equipping students with the skills necessary to thrive alongside AI, such as critical thinking, communication, and creativity – and ensuring that AI isn’t just doing their work for them?

[3] Undermining the rule of law
While AI systems are likely to be a major part of future workplaces, necessitating student familiarity with the technology, concerns exist regarding the training data used in some GenAI models. These concerns include the potential lack of informed user consent and copyright or trademark infringements. If we ignore these concerns, we show students that it is permissible to disregard the laws that govern our democratic societies.
→ Are you aware of the data practices of the GenAI tool you are using? 
E.g. the type of data collected, how the data was collected, whether user consent was obtained, whether anonymization practices were followed, and whether copyright or trademark licensing agreements were used or steps were taken to ensure the tool does not infringe on copyrights or trademarks.

Further reading:

  • More AI, More Problems
    According to the AI, Algorithmic, and Automation Incidents and Controversies (AIAAIC) repository, the number of reported issues in 2021 was 26 times greater than in 2012. Chalk that up to both an increase in AI use and a growing awareness of its misuse. Reported issues have included a deepfake of Ukrainian President Volodymyr Zelenskyy surrendering, face recognition technology used to track gang members and rate their risk, and surveillance technology that scans and determines the emotional states of students in a classroom.
     
  • The Disinformation Machine: How Susceptible Are We to AI Propaganda?
    AI propaganda is here. But is it persuasive? Recent research published in PNAS Nexus, conducted by Stanford political scientist Michael Tomz, Josh Goldstein of the Center for Security and Emerging Technology at Georgetown University, and three Stanford colleagues – master’s student Jason Chao, research scholar Shelby Grossman, and lecturer Alex Stamos – examined the effectiveness of AI-generated propaganda. They found, in short, that it works.
    “Deepfakes are probably more persuasive than text, more likely to go viral, and probably possess greater plausibility than a single written paragraph,” Tomz said. “I’m extremely worried about what’s coming up with video and audio.”

Sources:

  1. Youth participation strategy (2020)
  2. The European Commission, Directorate-General for Education, Youth, Sport and Culture, 2022
  3. Understand the risks and harms of AI, algorithms, and automation (AIAAIC Repository)
  4. Managing Misinformation, Harvard University
  5. Zhou, J., Zhang, Y., Luo, Q., Parker, A. G., & De Choudhury, M. (2023, April). Synthetic lies: Understanding AI-generated misinformation and evaluating algorithmic and human solutions. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1-20). https://doi.org/10.1145/3544548.3581318
  6. Lucchi, N. (2023). ChatGPT: a case study on copyright challenges for generative artificial intelligence systems. European Journal of Risk Regulation, 1-23. https://doi.org/10.1017/err.2023.59 

3.4. Digital transformation

How does GenAI impact digital transformation efforts?

To develop digital literacy and skills for lifelong learning and the future of work, it is important to learn responsible and effective AI usage.

[1] Academic dishonesty
The use of GenAI can facilitate academic dishonesty through plagiarism, falsification of data, contract cheating (having AI services complete assignments), and misrepresentation of authorship.
→ Do you provide your students with clear guidelines on the acceptable use of GenAI tools in assignments? Do you model responsible uses of GenAI with your students?

[2] Data privacy and threats to autonomy
GenAI can be used to combine data from various sources and create detailed user profiles through a process of aggregation, profiling, personalization, and targeting. This raises concerns about potential discrimination, manipulation, or even the complete erosion of privacy itself, as exemplified by practices like predictive profiling and automated monitoring.
→ Do you minimize student data collected by AI tools? Do you teach students how to manage their privacy settings and data sharing? Do you gain student consent for sharing their data (including work) with AI systems?

Further reading:

  • Increasing AI Labor Demand
    This year saw an increase in job postings seeking AI skills across all sectors, and the number of AI job postings overall was notably higher in 2022 than in the prior year. The information sector dominated. California posted the most AI-related jobs by far (142,154), followed by Texas (66,624) and New York (43,899).
     
  • Industry Invests in AI
    Global private investment in generative AI skyrocketed, increasing from roughly $3 billion in 2022 to $25 billion in 2023. Nearly 80 percent of Fortune 500 earnings calls mentioned AI, more than ever before.

Sources:

  1. Digital Education Action Plan (2021-2027)
  2. The European Commission, Directorate-General for Education, Youth, Sport and Culture, 2022
  3. Padillah, R. (2023) ‘Ghostwriting: A reflection of academic dishonesty in the Artificial Intelligence Era’, Journal of Public Health [Preprint]. https://doi.org/10.1093/pubmed/fdad169
  4. Habib, S., Vogel, T., Anli, X., & Thorne, E. (2024). How does generative artificial intelligence impact student creativity? Journal of Creativity, 34(1), 100072. https://doi.org/10.1016/j.yjoc.2023.100072
  5. Sharples, M. (2022). Automated essay writing: An AIED opinion. International Journal of Artificial Intelligence in Education, 32(4), 1119-1126. https://doi.org/10.1007/s40593-022-00300-7  
  6. Gupta, M., Akiri, C., Aryal, K., Parker, E., & Praharaj, L. (2023) ‘From ChatGPT to ThreatGPT: Impact of generative AI in cybersecurity and privacy’, IEEE Access, 11, pp. 80218–80245. http://doi.org/10.1109/access.2023.3300381
  7. Tang, A., Li, K. K., Kwok, K. O., Cao, L., Luong, S., & Tam, W. (2023). The importance of transparency: Declaring the use of generative artificial intelligence (AI) in academic writing. Journal of Nursing Scholarship. https://doi.org/10.1111/jnu.12938
  8. Wu, X., Duan, R., & Ni, J. (2023). Unveiling security, privacy, and ethical concerns of ChatGPT. Journal of Information and Intelligence. https://doi.org/10.1016/j.jiixd.2023.10.007