A student-led global community of ambassadors committed to promoting ethical, transparent, and inclusive artificial intelligence for the benefit of all.
Everything we do is grounded in six commitments that guide our ambassadors, partnerships, and advocacy work.
AI systems must be designed to reduce — not amplify — systemic bias and inequity across communities, cultures, and contexts.
Decisions made by AI should be explainable, auditable, and understandable to those they affect.
Personal data must be protected rigorously, and AI must never be weaponized for surveillance or control.
The environmental cost of AI development must be acknowledged and minimized as part of responsible innovation.
Humans must retain meaningful control over consequential AI-driven decisions, especially in healthcare, justice, and governance.
AI should be built with and for diverse global communities — not exported as a one-size-fits-all solution.
Senator Edward J. Markey's Youth AI Privacy Act would establish the first federal guardrails to protect minors from AI chatbot exploitation — banning addictive design features, prohibiting the training of AI models on minors' data, and requiring transparency about AI interactions. This bill affects every young person using AI today.
"AI chatbots pose grave new risks to kids' privacy and safety, but Big Tech continues to speak only one language: profit." — Sen. Edward J. Markey, March 25, 2026
Curated readings, research, policy papers, and tools for anyone navigating the AI ethics and youth privacy landscape.
New legislation would require AI companies to implement privacy safeguards in chatbots, ban targeted advertising to minors, and prohibit addictive design features targeting young users.
A practical breakdown of the new regulatory framework and how teams can prepare for compliance without stifling innovation, with special attention to educational contexts.
Who consented to having their creative work used to train foundation models, and what rights do creators and students have now that these systems are embedded in daily life?
A comprehensive look at the GUARD Act, CHAT Act, SAFE BOTs Act, and COPPA 2.0: what each bill does, what it leaves out, and what young people should know about their rights right now.
Sen. Markey and Rep. Lee reintroduce legislation requiring every federal agency using AI to maintain a civil rights office focused on combating algorithmic bias and discrimination.
The Senate unanimously passed the Children and Teens' Online Privacy Protection Act, the most significant update to youth online privacy law in over 25 years. Here is what changes and what comes next.
A call to school boards and legislators: ethical AI cannot be responsibly designed without the voices of the generation it will shape most.
I'm 17. I have never known a world without the internet. I grew up on algorithms that decided what I saw, what I liked, and what I thought I wanted. Now artificial intelligence is moving into my classroom, my college application, and my future workplace. In many places it is already shaping decisions about whether I qualify for financial aid, how I'm disciplined at school, or whether I get a job interview at all.
So when I hear adults debate ethical AI in rooms where no student has a seat at the table, I'm not just frustrated. I'm alarmed. We are not a future problem to be managed. We are a present voice being ignored.
Ethical AI is about values: what we believe fairness looks like, who gets to be seen, and who gets left in the blind spot. These aren't abstract philosophy questions. For students like me, they're personal. When an AI proctoring tool flags a student's eye movements as suspicious, that is an ethical failure. When a predictive algorithm decides a kid from a low-income ZIP code is a "high risk" student before they've set foot in a classroom, that is an ethical failure. The principles matter, but only if the people writing them actually understand whose lives are on the line.
Youth don't just consume AI. We test it in real time, in our schools, our apps, and our social feeds. We see its failures up close in ways that policy documents miss. That lived experience is data, and right now it is being wasted.
The "How" Needs Our Hands

Here is what I have come to understand: ethical AI is the what and the why. Responsible AI is the how: the checklists, the audits, the policies, and the accountability structures. That is exactly where youth voice is being shut out the most. We are told the principles but not invited into the process of putting them into practice.
To school boards considering AI-powered learning tools or surveillance systems: we are the users. Consult us before procurement, not after harm is done. Ask us what transparency means to us. Ask us whether we feel safe. Ask us what fairness should look like in an automated grade appeal system, because we have sat in that chair.
To legislators drafting AI policy: the generation most affected by algorithmic decision-making is also the generation most digitally fluent. We can explain how a recommendation engine nudges behavior in ways adults cannot always see. We can tell you what it feels like when a chatbot gives a struggling peer dangerous advice. We are not too young to testify. We are exactly young enough to know the truth.
I am not asking for symbolic representation: a student on a panel who gets three minutes and no follow-up. I am asking for structural inclusion: youth advisory seats on school technology committees, student feedback loops built into AI pilot programs before full rollout, high school and college student representatives with real voting input on state AI task forces, and AI literacy education that teaches students not just to use these tools, but to question and critique them.
Ethical AI without youth voice isn't just incomplete. It's a contradiction. You cannot build a moral compass for a future you won't live in while drawing only on the perspectives of the people it will shape the least.
We are ready. We are informed. We are here. The question is whether the adults making these decisions are ready to listen, not just once, not just symbolically, but as genuine partners in governance.
The world you're building is the world we will inherit. Give us a hand in building it right.
Ambassadors are the backbone of our society: students, educators, advocates, and community members who carry this mission into their schools, communities, and beyond. No technical degree is required.
Tell us about yourself and why Responsible AI matters to you. We welcome applicants from all backgrounds.
A free online course covering AI ethics fundamentals, the policy landscape, and how to advocate effectively in your community.
Get a full resource toolkit, access to our ambassador community, and real opportunities to speak, publish, and shape policy.
Webinars, legislative briefings, community meetups, and workshops open to all members and the public.
Online · 2:00 PM EST · Free · Open to All
Online · 11:00 AM EST · Free · Open to All
Online · 10:00 AM EST · Members Only
Multiple cities + Online · 6:30 PM Local
The Ethical AI Society is a youth-led, non-partisan nonprofit founded to ensure that the development and deployment of artificial intelligence reflect the values, rights, and needs of all people, especially the generation that will be shaped by it the most.
Our network of student ambassadors, young advocates, and youth allies spans schools, communities, and policy spaces across the country. We believe that the most AI-affected generation deserves a seat at the table.
Read Our Full Story