Can we use AI-generated video for social good?
There’s no question about it - Artificial Intelligence is here, and it’s here to stay. Every day I log on to Twitter, and it seems another new tool has been launched that lets users create digital products seemingly out of thin air. I tested one of them last year when I experimented with AI-generated audio to create my own podcast - the results honestly astounded me. Content production is about to change forever, and we are just getting started.
When OpenAI announced their upcoming project, Sora, I knew that another significant step in the evolution of AI-generated content had been made. We’ve gone beyond ChatGPT and Midjourney and are now venturing into AI-generated video that looks more realistic than ever. It seems unbelievable, but it’s happening - and now it’s up to us as creators, communicators, and consumers to decide what this means for the future of humanitarian storytelling and the role AI will play in shaping our narratives.
A Double-Edged Sword
As a humanitarian communicator, I find it hard to ignore the potential implications of Sora for my field. The advantages are certainly tempting - for a small communications team with little to no resources, it would be easy to turn to a service like Sora to create compelling video content for storytelling and awareness-raising without the financial, logistical, and safety challenges of traditional video production. Need a shot of a Kenyan farmer harvesting produce? You can now customise everything from what they look like to the fruits and vegetables in their garden. That kind of creative freedom is enticing and would make for a more polished product at a lower price - but sometimes, especially in a humanitarian context, ‘polished’ is not the ultimate goal.
We turn to humanitarian communication channels for an authentic connection to people from all over the world. We want the stories and images to be raw and unfiltered, reflecting the reality of communities in need and of crisis zones. They remind us of our shared humanity and the need for compassion and solidarity in the face of adversity. When we use humanitarian channels to put out videos that don’t depict real people and situations, we risk eroding that sense of authenticity and connection. Even worse, since AI is still in its infancy, its algorithms are imperfect and tend to over-generalise, which can lead to the perpetuation of stereotypes or the oversimplification of complex issues. In less qualified hands, AI content can also present an overly sanitised version of reality that fails to capture the nuances and challenges of humanitarian work.
Ultimately, all of this can lead to losing the trust of audiences, which would be devastating for humanitarian organisations. It goes against the very reason we engage with people online: trust is essential for maintaining the reputation of NGOs and INGOs everywhere, as it is what allows them to mobilise support, influence decision-makers, and drive meaningful change. If followers suspect that the stories an organisation shares are not genuine, or that it is prioritising high production values over authentic messaging, they may question its integrity and the validity of its cause. In a world that already struggles with misinformation and scepticism, particularly online, the humanitarian sector cannot afford to contribute to these problems through artificial content.
Transparency Above All
With all that being said, I do see several potential uses for AI-generated video in the future that would be both creative and innovative while also preserving the audience’s trust. Before we delve into a few examples, I think it’s essential to address the underlying reason why we, as consumers, are often uncomfortable with AI-generated content in our digital lives.
With how far AI content has come in such a short time, it feels like we’re getting collective whiplash and losing our ability to discern what’s real and what isn’t. It gets progressively harder to tell whether an image is authentic, and that makes us uncomfortable. While I personally find the evolution and development of AI technology fascinating, I firmly believe that we must draw the line at presenting AI-generated content as genuine, especially in the context of humanitarian communications.
How do we approach the issue, then? For me, the answer is simple - we have to be transparent about the use of AI-generated content. By being upfront about the nature of the content, we can harness the power of AI to enhance our storytelling and engagement efforts without compromising the trust that our audiences place in us. At this stage of AI-generated video, though, even content labelled as such sits on ethically dubious ground - we need to go a step further and use it only for materials where traditionally captured footage would be impossible.
“Sora is at its most powerful when you’re not replicating the old but bringing to life new and impossible ideas we would have otherwise never had the opportunity to see.”
Take, for example, the most recent use of Sora by members of the creative community - visual artists, designers, creative directors, and filmmakers all had a chance to put the product to the test, and they saw Sora as a supplement to, rather than a replacement for, real stories and experiences. Embracing the surreal worlds that AI can open up for us is, in my opinion, the right way to leverage artificially generated content.
In the end, the key to successfully integrating AI into humanitarian communications lies in being intentional, transparent, and ethical in our approach. We don’t know how we will see AI video in 5 or 10 years, but right now we are still wary and distrustful of it. The way forward (to me, at least) lies in creating visuals that would otherwise be ‘unfilmable’ - it’s up to us to decide what those visuals will be and how to use them in a way that amplifies our impact and strengthens the bonds of trust that are so essential to our work.