People who run refugee camps need to know AI
For the past year, I’ve been working on new knowledge platforms at the UN Population Fund. The agency holds an enormous wealth of knowledge and documentation - far more than any one person can sift through unaided. To make use of that much information, we need smart systems to help us navigate it. That’s where my platform came in: it gave UN staff the tools to find what they were looking for, sometimes without even knowing what it was.
I started this project just as AI was generating major buzz in the sector. Most people in the UN knew only what they had picked up about ChatGPT or Gemini online or through their own experimentation. A lot of them, managers in their 50s and 60s, were still struggling to adopt tools like Slack or Google Workspace, and suddenly they had these new, even more complicated technologies to master - the future was coming too fast, and they were floundering.
Despite this, most of them were quick to get excited about making their work better, faster, and easier. After a year’s worth of consultations, clear trends emerged:
They loved intelligent search, but hated chatbots. For them, the future of AI in knowledge management and user experience didn’t involve talking extensively to a text bubble; it meant algorithms working quickly and invisibly in the background, surfacing what they needed right away.
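To make that concrete, here’s a minimal sketch of what ‘invisible’ search can look like under the hood - assuming an off-the-shelf embedding model (sentence-transformers here) and a tiny in-memory document set; a real portal would add chunking, metadata filters, and access controls.

```python
# Minimal semantic search sketch: embed documents once, then rank them
# against a user query by cosine similarity. Illustrative only.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder

documents = [
    "Guidance note on gender-based violence programming in emergencies",
    "Annual report: maternal health outcomes in the Sahel region",
    "Standard operating procedures for data protection and consent",
]
doc_vectors = model.encode(documents, normalize_embeddings=True)

def search(query: str, top_k: int = 2) -> list[tuple[float, str]]:
    """Return the top_k documents most similar to the query."""
    query_vector = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ query_vector  # cosine similarity (vectors are normalised)
    ranked = np.argsort(scores)[::-1][:top_k]
    return [(float(scores[i]), documents[i]) for i in ranked]

# The user never sees any of this - they type a few words and the most
# relevant documents simply appear first.
print(search("how do we handle consent for beneficiary data?"))
```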
They didn’t love AI-generated text, but inevitably used it for the dry, mundane parts of their jobs that previously ate away at their day. But for all the excitement, there were shadows that rarely got discussed in the open.
Nervous jokes about ‘feeding the algorithm’ were common among the tech-savvy colleagues, but these conversations rarely made it to the centre of the table. Maybe it was because the technology felt too new to question, or because the pressure to innovate was so intense; whatever the reason, the risks often got sidelined in favour of quick wins and shiny new features.
My knowledge portal project kind of became a microcosm of these tensions. On paper, it was exactly what we needed - a smarter, faster way to connect people with the information they needed most. But as we built it, thorny questions kept cropping up. Who really owned the data that powered the system? How could we ensure that consent was respected, especially when information was shared across teams and countries? And could we trust the algorithms to serve everyone equally, or were we just replicating the biases we already had?
Bias is something that particularly stood out to me as a risk that should not be underestimated. Bias in large language models often stems from training data that overrepresents certain countries, cultures, or perspectives—most commonly Western or English-speaking ones. This leads to outputs that can misrepresent or marginalize other regions, reinforce stereotypes, and overlook local customs or realities. For example, studies have shown that LLMs may rate people from Africa less favorably than those from Europe, default to Western holidays, or associate certain professions with specific genders. Such biases aren’t just academic—they have real-world consequences. For humanitarian organizations, which rely on accurate, fair, and culturally sensitive information, these embedded biases pose a serious risk: they can undermine trust, perpetuate harmful stereotypes, and ultimately hinder efforts to serve diverse communities with empathy and respect.
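One low-tech way to make this risk visible is a counterfactual probe: ask a model the exact same question with only the nationality (or gender, or region) swapped, and compare what comes back. The sketch below is illustrative rather than a validated audit - the `generate` callable stands in for whichever model or API you happen to be testing, and the country list is an assumption.

```python
# Counterfactual bias probe: identical prompts that differ only in one
# demographic attribute, compared side by side. `generate` is a placeholder
# for whichever model or API is being evaluated.
from typing import Callable

TEMPLATE = "Write a one-sentence performance review for a logistics officer from {country}."
COUNTRIES = ["Norway", "South Sudan", "Germany", "Somalia"]  # illustrative, not exhaustive

def probe(generate: Callable[[str], str]) -> dict[str, str]:
    """Run the same prompt once per country and collect the raw outputs."""
    return {country: generate(TEMPLATE.format(country=country)) for country in COUNTRIES}

def report(outputs: dict[str, str]) -> None:
    """Print outputs side by side so differences in tone or content stand out.
    A real audit would score sentiment or have human reviewers rate the pairs."""
    for country, text in outputs.items():
        print(f"{country:12s} -> {text}")

if __name__ == "__main__":
    # Dummy model so the sketch runs without any API; swap in a real client here.
    fake_model = lambda prompt: f"[model output for: {prompt}]"
    report(probe(fake_model))
```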
Another risk I worry about is data privacy. Sometimes, in the rush to deliver results in emergency situations (which humanitarian orgs often find themselves in), it’s easy to forget just how much is at stake. I’ve seen staff upload entire PDFs, sometimes packed with confidential details about refugees or beneficiaries, straight to ChatGPT or Gemini to get a quick summary or draft a report. It feels efficient in the moment, but it’s a gamble with people’s privacy.
These platforms aren’t designed to be secure vaults for sensitive information. Once that data is uploaded, there’s no clear way to control where it goes or who might see it, now or in the future. In humanitarian work, a single slip can put vulnerable people at risk, breach trust, and even violate laws or ethical standards. The pressure to move fast can make these dangers seem abstract, but for the individuals whose stories and identities are exposed, the consequences are painfully real.
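If a document really does have to leave the organisation’s own systems, the least we can do is strip the obvious identifiers first. The sketch below is a bare-minimum illustration - the regex patterns and the CASE-ID format are assumptions, and rule-based redaction only catches the obvious; it’s no substitute for a proper data-protection review.

```python
# Naive PII redaction before any text is sent to an external service.
# The rules below only catch obvious patterns (emails, phone numbers, a
# hypothetical case-ID format); names, locations and free-text details
# will still slip through.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "CASE_ID": re.compile(r"\bCASE-\d{6}\b"),  # hypothetical internal ID format
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

sample = "Beneficiary CASE-204581 can be reached at +211 912 345 678 or amina@example.org."
print(redact(sample))
# -> Beneficiary [CASE_ID REDACTED] can be reached at [PHONE REDACTED] or [EMAIL REDACTED].
```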
Amina is a beneficiary of the World Food Programme in South Sudan. When her personal details were leaked, she was threatened by local militia and denied access to vital food aid. Amina doesn’t exist, but she could. And that’s reason enough to care about privacy.
The promise of AI in humanitarian work is real, but so are the risks. In our rush to innovate, it’s all too easy to overlook the invisible costs: the voices lost to bias, the stories flattened by algorithms, the lives put at risk when privacy is treated as an afterthought. These aren’t abstract technical issues; they’re questions of trust, safety, and dignity for the very people we’re meant to serve. If we want AI to help us build a more just and effective humanitarian sector, we have to do more than chase efficiency. We have to slow down, ask hard questions, and put people - real or imagined - at the center of every decision. Because in the end, protecting their rights isn’t just a matter of policy or compliance. It’s the heart of the work.