Kenyan teenagers are interrogating artificial intelligence with a precision that embarrasses most boardrooms. Before Kenya finalises its national AI policy, a group of young people sat down to tell the government exactly what they want and what they refuse to accept.

The question lands like a stone in still water. A facilitator turns to a room of fourteen-to-seventeen-year-olds and asks them to rate, on a scale of one to ten, how deeply artificial intelligence has already embedded itself in their education. Without hesitation, a hand goes up.

“Seven,” the participant says. “Because if you are given homework and you don’t have time to do it, you will just use AI. You are not really learning. You are just waiting for an answer.”

The room has a great deal more to say.

Kenya is developing its national AI and Emerging Technologies Policy, one of the first such frameworks on the continent. The process is being spearheaded by the Ministry of Information, Communications and the Digital Economy (MICDE), with KICTANet as the lead implementing partner, and is supported by the British High Commission (BHC) in Nairobi through its governance and digital transformation portfolio. This support forms part of the broader UK–Kenya strategic partnership on responsible AI and emerging technologies, underscoring the UK’s commitment to advancing ethical, inclusive, and transparent AI governance frameworks across Africa.

A cohort of young Kenyans aged 9 to 17 was invited not merely to observe the process, but to interrogate it. What followed across two hours was a rigorous dissection of AI in education, healthcare, data privacy, and governance.

Education

The Exam That AI Could and Couldn’t Mark Fairly

A debate has taken root in Kenya’s education sector: should AI mark national examinations? The young people in this consultation declined to be cautious about it.

The efficiency argument came first. “In my school, we are 78 students,” one participant said. “When it’s time to mark the papers, some students have to help the teacher. In this context, people consider friendships, and some people bribe. But there’s no way you could bribe an AI.”

“AI has no attitude. Let’s say you had a marker who was once beaten by students at a school, then transferred and now they’re marking papers from that same school. AI has no attitude. It’s also more accurate, and fatigue has been mentioned,” said one consultation participant.

The group’s enthusiasm carried a sharp condition: creative work. “AI is not effective in marking essays,” one participant argued. “The answer the AI expects may differ from what a student wrote, because we define creativity differently. A teacher might give marks, AI will just mark it wrong.”

The sharpest provocation cut both ways: if teachers are permitted to use AI tools to assist with marking, why are students penalised for using AI to answer? Nobody in the room had a satisfying response. It is the kind of ethical inconsistency that a national policy framework cannot afford to leave unaddressed.

Healthcare

Monitor Everything. Decide Nothing.

Healthcare produced the consultation’s most nuanced arguments, because it is where the stakes of getting AI wrong are most visceral.

The use case that drew the most consensus was practical: AI-powered health monitoring for elderly and isolated individuals, wearables that track vital signs and automatically contact a pre-designated person if the user stops responding. The jobs concern surfaced quickly. “If we introduce AI to do the night shift, that will be minus fifty nurses,” one participant calculated. “Where will those fifty nurses go?”

“I prefer AI to be more of a conclusion thing, but not the decider. AI identifies, AI concludes, but humans acquire the results, humanly, at least through doctors or nurses or pharmacists,” said another consultation participant.

The group drew a firm line: AI handles data collection and pattern recognition; licensed clinicians retain authority over diagnosis and treatment. Then a participant shared something personal. At twelve, she fractured her arm in a car accident. The attending doctor, focused entirely on the more visibly distressed patient, dismissed her without examination. Her observation cut through the room: “Sometimes AI can be used in that first diagnostic stage, where it checks everyone equally. There’s no part where it dismisses someone because of a certain prejudice.”

The counterpoint came immediately: “When a human speaks to you, they might say they had the same issue. They share their experience, a story. It’s different. It’s hard to trust AI.” Both observations are correct. The division of labour the group reached — AI for consistent initial assessment, humans for the therapeutic relationship — reflects a balance that global health systems are still struggling to formalise.

Data Privacy

Your FYP Knows What You Whispered

If there was a moment that should unsettle any technology company’s legal team, it was this one.

“That conversation you wanted to keep secret: the next time you go to TikTok, your For You Page is about that conversation. This is happening,” one participant noted.

Participants catalogued the mechanisms of commercial data extraction with fluency: terms of service designed to be unread, cross-platform behavioural tracking, telecom metadata collection. Meta, parent of WhatsApp, Instagram, and Facebook, was cited specifically as a cross-platform data architecture that operates with minimal transparency to its youngest users.

When asked to draft three rules for a Kenyan children’s data code, a framework Kenya does not yet have, participants produced principles that map closely onto the UK’s Age Appropriate Design Code and the EU’s GDPR provisions on minors. That they arrived there without legal training is, itself, a data point.

The essentials: informed consent proxied through a trusted adult; plain-language privacy policies mandatory before account creation; granular cross-platform consent controls; and a prohibition on using children’s data for targeting or surveillance. One participant went further, proposing age-linked restrictions built into phone hardware at manufacture, with permissions expanding automatically as the user ages. “There’s no way,” they noted, “you can rely on platforms to self-enforce age restrictions they have commercial incentives to overlook.”

AI Governance

Not Another Photo Op

The governance section produced the consultation’s most politically charged exchange. Should young people hold a permanent seat in Kenya’s AI policy structures? Consensus formed quickly: yes. The reasoning was not sentimental.

“You are making rules to protect me and you don’t involve me,” one participant said. “How sure are you that those rules are actually helping?”

“Every time we come here, we talk about the same issues. Once we are done talking, it ends there. We need feedback. We need to know: what did you do with what we said?” added another participant.

Participants were precise about what meaningful participation requires: year-round engagement rather than ceremonial World Internet Day appearances; age-disaggregated representation, because teenagers cannot speak for six-year-olds; decision-makers in the room rather than note-takers; mandatory written government responses to formal recommendations within a defined timeframe; and follow-up evaluation at three months to check whether consultation inputs appeared in actual policy text.

Dr George Musumba, the Technical Chairperson, confirmed that a validation workshop is planned for May, at which a draft-zero policy will be reviewed against the inputs gathered during the children’s consultation and sectoral roundtables. Several participants indicated they intended to hold the organisers to that timeline.

Analysis

What Kenya’s AI Policy Needs to Hear

Across every domain — education, healthcare, data, governance — the same architecture surfaced: AI as tool not replacement, localisation as prerequisite not afterthought, enforceable rights not aspirational ones, and structural youth participation with consequences for non-compliance.

When participants tested AI image generators live, prompting the tools with “African doctor” and “African city”, some platforms initially returned images incongruous with modern Kenyan reality. The observation one participant drew is worth quoting in full:

“When you’re making the AI, maybe you should have input from the western point of view, from our point of view, from other people’s point of view so that everything is on your table, and then choose what you want.”

“We are changing Kenya here, actual policy. What you’re doing is going to inform the real laws of the country. You have a right to ask: what did you do with it? Your time is precious. Don’t forget that,” noted Dr Musumba.