AI Literacy Lab

THE QUESTION

Can an AI-powered interactive learning platform effectively teach citizens in the Baltics to identify and critically evaluate deepfakes and synthetic media?


LOCATION: Lithuania
SECTOR: Human Rights and Democracy
TECH: AI
TIMELINE: November 2025 - Present
PIONEER: TBC
PARTNERS: Civic Resilience Initiative

 
 

The Challenge

Across the Baltics, as in many parts of the world, citizens are increasingly exposed to AI-generated misinformation in the form of synthetic text, images, audio, and video. As generative AI becomes more sophisticated, it is becoming harder for the public to distinguish genuine content from manipulated or entirely fabricated media.

Social media platforms are increasingly dominated by AI-generated content, but existing digital literacy initiatives are not keeping pace with this shift. There is growing demand for accessible, trusted, and locally relevant training that helps people recognise and understand AI-generated content, delivered in ways that feel engaging rather than didactic.

The Idea

This pilot will harness AI to build an interactive learning platform that teaches citizens to detect and critically evaluate the misinformation AI itself creates. This meta approach, using AI to teach about AI, aims to create dynamic, engaging educational experiences that evolve alongside the threats they address.

Key components include:

  • Interactive AI Architecture: Two LLM systems working in tandem, one generating synthetic content (fake images, audio, text) on demand while another guides learners through detection and verification techniques (see the sketch after this list)

  • School and Community Integration: Leveraging CRI's established networks across Lithuanian, Latvian, and Estonian schools to pilot the platform with captive audiences in structured learning environments

  • Scalable Digital Platform: Moving beyond one-time workshops to create replayable, self-guided learning that remains fresh through AI-generated content rather than static programming
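To make the two-LLM idea concrete, here is a minimal sketch of how the loop might work. It assumes a generic chat-completion API (the OpenAI Python client is used for illustration); the model names, prompts, and function names are placeholders, not the pilot's actual implementation.

# Hypothetical sketch of the dual-LLM loop described above.
# Model names, prompts, and the client library are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

GENERATOR_PROMPT = (
    "You create short, clearly labelled synthetic news snippets for a media "
    "literacy exercise. Include subtle tell-tale signs of fabrication."
)
COACH_PROMPT = (
    "You are a tutor. Given a synthetic news snippet and a learner's assessment, "
    "ask guiding questions and explain which verification techniques expose the fake."
)

def generate_synthetic_snippet(topic: str) -> str:
    """First LLM role: produce a fabricated snippet for the learner to analyse."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": GENERATOR_PROMPT},
            {"role": "user", "content": f"Write a fake local-news snippet about: {topic}"},
        ],
    )
    return response.choices[0].message.content

def coach_learner(snippet: str, learner_answer: str) -> str:
    """Second LLM role: guide the learner through detection and verification."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": COACH_PROMPT},
            {"role": "user", "content": f"Snippet:\n{snippet}\n\nLearner's assessment:\n{learner_answer}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    snippet = generate_synthetic_snippet("a city council election")
    print(snippet)
    print(coach_learner(snippet, "I think this is real because it names officials."))

In a real deployment the generated material would be clearly labelled as synthetic and kept within the learning environment, and the coaching model would track a learner's progress across exercises rather than responding one snippet at a time.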

The pilot will test whether this interactive approach can create more effective and engaging AI literacy education than traditional methods, providing critical insights for scaling information integrity initiatives.

"I don't see why AI shouldn’t be used to learn about AI. We could add an interactive layer where users chat with LLMs and even generate fake images, stories and audios on the spot. It’s a two-way street to understand an AI’s approach." - Lukas Andriukaitis, CRI

Image credit: Hanna Barakat & Cambridge Diversity Fund

 

Our learnings and stories so far

This pilot hasn't published any learnings yet. Check back soon!

Frontier Tech Hub

The Frontier Tech Hub works with UK Foreign, Commonwealth and Development Office (FCDO) staff and global partners to understand the potential for innovative tech in the development context, and then test and scale their ideas.

https://www.frontiertechhub.org/