A global community of ambassadors committed to promoting ethical, transparent, and inclusive artificial intelligence for the benefit of all.
Everything we do is grounded in six commitments that guide our ambassadors, partnerships, and advocacy work.
AI systems must be designed to reduce — not amplify — systemic bias and inequity across communities, cultures, and contexts.
Decisions made by AI should be explainable, auditable, and understandable to those they affect.
Personal data must be protected rigorously, and AI must never be weaponized for surveillance or control.
The environmental cost of AI development must be acknowledged and minimized as part of responsible innovation.
Humans must retain meaningful control over consequential AI-driven decisions, especially in healthcare, justice, and governance.
AI should be built with and for diverse global communities — not exported as a one-size-fits-all solution.
Curated readings, research, policy papers, and tools for anyone navigating the AI ethics landscape.
A practical breakdown of the new regulatory framework and how teams can prepare for compliance without stifling innovation.
New Stanford research reveals the gap between vendor claims and real-world performance of AI-driven recruitment platforms.
Who consented to having their creative work used to train foundation models — and what rights do creators have now?
A call to school boards and legislators: ethical AI cannot be responsibly designed without the voices of the generation it will shape most.
I'm 16. I've never known a world without the internet. I've grown up on algorithms that decided what I saw, what I liked, what I thought I wanted. And now, artificial intelligence is moving into my classroom, my college application, my future workplace — and in many places, into decisions about whether I qualify for financial aid, how I'm disciplined at school, or whether I get a job interview at all.
So when I hear adults debate ethical AI in rooms where no student has a seat at the table, I'm not just frustrated. I'm alarmed. Because we're not a future problem to be managed. We're a present voice being ignored.
Ethical AI is about values — what we believe fairness looks like, who gets to be seen, and who gets left in the blind spot. These aren't abstract philosophy questions. For students like me, they're personal. When an AI proctoring tool flags a Black student's eye movements as suspicious, that's an ethical failure. When a predictive algorithm decides a kid from a low-income ZIP code is a "high risk" student before they've set foot in a classroom, that's an ethical failure. The principles matter — but only if the people writing them actually understand whose lives are on the line.
Youth don't just consume AI. We test it in real time — in our schools, our apps, our social feeds — and we see its failures up close in ways that policy documents miss. That lived experience is data. And right now, it's being wasted.
The "How" Needs Our Hands

Here's what I've come to understand: ethical AI is the what and the why. Responsible AI is the how — the checklists, the audits, the policies, the accountability structures. And that's exactly where youth voice is being shut out the most. We are told the principles, but not invited into the process of putting them into practice.
To school boards considering AI-powered learning tools or surveillance systems: we are the users. We should be consulted before procurement, not surveyed after harm is done. Ask us what transparency means to us. Ask us whether we feel safe. Ask us what fairness should look like in an automated grade appeal system — because we've sat in that chair.
To legislators drafting AI policy: the generation most affected by algorithmic decision-making is also the generation most digitally fluent. We can explain to you how a recommendation engine nudges behavior in ways adults can't always see. We can tell you what it feels like when a chatbot gives a struggling peer dangerous advice. We are not too young to testify — we are exactly young enough to know the truth.
I'm not asking for symbolic representation — a student on a panel who gets three minutes and no follow-up. I'm asking for structural inclusion. That means youth advisory seats on school technology committees. It means pilot programs for AI tools in schools must include student feedback loops before full rollout. It means state AI task forces should include high school and college student representatives with real voting input. It means AI literacy education that teaches us not just to use these tools, but to question and critique them.
Ethical AI without youth voice isn't just incomplete — it's a contradiction. You cannot build a moral compass for a future you won't live in using only the perspectives of the people it will shape least.
We are ready. We are informed. We are here. The question is whether the adults making these decisions are ready to listen — not just once, not just symbolically, but as partners in governance.
The world you're building is the world we will inherit. Give us a hand in building it right.
Ambassadors are the backbone of our society — educators, technologists, policymakers, and advocates who carry our mission into their communities.
Tell us about your background and why ethical AI matters to you. No technical degree required.
A free 4-hour online course covering AI ethics fundamentals, policy landscape, and communication strategies.
Get a toolkit of resources, access to our ambassador community, and opportunities to speak and publish.
Webinars, conferences, community meetups, and workshops — open to all members and the public.
Online · 2:00 PM EST · Free
Online · 10:00 AM EST · Members Only
New York, NY · Full-Day Conference
Multiple cities + Online · 6:30 PM Local
The Ethical AI Society is a non-partisan, global nonprofit founded to ensure that the development and deployment of artificial intelligence reflects the values, rights, and needs of all people — not just those building it.
Our network of ambassadors spans academia, civil society, government, and industry. We believe that diverse voices — especially those historically excluded from tech — must shape the AI systems that will shape all of us.
Read Our Full Story