To slop or not to slop?
I’ve been running my blog for nearly two years - longer than any other blog I’ve maintained throughout my online life. Part of the reason is the rise of GenAI: conceptualising, planning, and writing a new blog post feels a lot less daunting when I can turn to ChatGPT or Claude for suggestions and edits. In the summer of 2023, I genuinely felt like I had a reliable content-making machine at my fingertips. All I needed was a vague idea of an issue I wanted to highlight, and the LLMs could take it from there.
The first few posts on my blog were mostly AI-generated. I knew this was less than ideal, but in my mind, as long as I was getting my point across, who cared who wrote the text? It felt efficient, even liberating. And I wasn’t alone. With these shiny new tools at our disposal, it became easier to ask, “Why bother writing at all?” Somewhere, a machine was already generating endless content, faster and more fluently than I ever could. So much of the internet already felt like it was written by a bot. And yet, I still wanted my voice to matter.
Two years later, AI content is everywhere - and I feel responsible, as a communicator, for holding the fort against what’s increasingly being called ‘AI slop’.
Clearly, it’s already becoming a problem.
The origin of the term is unclear, but to most of us ‘AI slop’ is the new internet spam - low-quality text, videos, and images generated by AI. It’s easier than ever to make content in seconds, and users have been taking advantage of ever-improving LLMs since their public introduction in 2022. An analysis of more than 300 million documents, including consumer complaints, corporate press releases, and job postings, suggests that the web is being swamped with AI-generated slop. By late 2024, generative AI had quietly embedded itself into the fabric of public communication: around 18% of financial complaints, 24% of corporate press releases, and nearly 14% of UN press releases showed signs of LLM generation or assistance - a clear sign of the “growing institutional adoption of AI for regulatory, policy, and public outreach efforts,” the researchers wrote.
Resistance seems futile. If I want to grow The Context Window and have Google index it so I can get more impressions, I need to churn out content. I’m already competing against all the other blogs and websites focused on the intersection of AI and humanitarian communications, and those sources would have no qualms about publishing fully AI-generated articles optimised for the algorithms. I could try to do the right thing - stick to human-written posts and set aside the time and energy once or twice a month to do the research, painstakingly write an article, and create the images for it - but I’d be buried under a mountain of AI slop. How can I compete?
GenAI models will only get better. They’ll get faster, slicker, more persuasive - better at mimicking nuance, emotion, even humour. When it becomes impossible to discern whether a blog post was written by a human or a machine, will we even care? The algorithms will reward speed, and that’s a race we will lose to AI models. And this is what scares me: not just the volume of AI-generated content, but the way it resets expectations. Readers are being conditioned to want speed over substance, density over depth, cleverness over clarity. In a world like that, taking the time to craft something thoughtful starts to feel… naive.
It is incredibly hard to say no to an omniscient content producer that can often make better points than I can. Even as I write this very article, I keep checking in with ChatGPT to make sure I’m getting my point across clearly, or asking it for suggestions when I hit writer’s block. But this reliance comes at a cost. We have a responsibility to be mindful of the content we produce - otherwise, we risk not only weakening our marketing efforts but also eroding audience trust and engagement.
And for me, the issue is not just about quality; I worry about contributing to a future where the internet, the place I’ve grown up in and have grown to love, is overrun by AI slop. Turning my back on this incredible new technology that can remix, repackage, and regenerate is no solution either, so I’ll continue my check-ins and copy a phrase here and there. I won’t be able to compete on speed, but when I share my latest article, I want to feel like I’m sharing something authentic and sincere.
This blog may never beat the machines on volume, and I’m okay with that. I didn’t start The Context Window to win the content race - I started it to make sense of the world, to ask better questions, and to share what I learn along the way. If the internet is going to be flooded with noise, maybe the best I can do is make sure my little corner of it still sounds human.