Gilbertiodendron dewevrei flowers from Korup National Park, Cameroon - a tree species that forms dominant forests in the Congo Basin where fires have doubled in recent decades.

From Cellulose to Colonial Bias: AI's Hidden Hierarchies

The thing about AI that keeps bugging me isn't the technology itself. It's that we don't know what we don't know.

I've been thinking a lot lately about how different AI development is from traditional software engineering. When you're building 'normal' software, you know exactly what your code will do.

But AI systems develop their own ways of making decisions.

AI systems are making choices about what to prioritise, what to remember, and what to ignore, and they can do it in ways we didn't exactly 'programme'. It's like watching my toddler learn, except this toddler might be making decisions that affect millions of people.


The blind spots are real. They're bigger than you think.


I've been researching wildfires recently (after witnessing one in the UK), and this is what I found:

Using an AI-powered research tool (not an LLM), I get different results depending on my initial inputs. Garbage in, garbage out. Some regions don't appear at all, while others, particularly on the North American continent, dominate the outputs. Yet wildfires happen on every continent except Antarctica.

The African continent actually experiences the most ‘wildfire’ activity globally. But when was the last time you heard about any of these wildfires in the news? AI systems have become really good at detecting California fires while underperforming in data-sparse regions (like much of Africa and Asia).

This is a perfect example of how AI systems perpetuate existing inequalities. The regions with the most comprehensive fire monitoring systems get the most AI attention. This creates a loop that marginalises communities already facing the greatest climate impacts with the fewest resources.
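That loop is easy to sketch. Here's a toy simulation (every number is invented for illustration, not real monitoring statistics): labelled training data only comes from fires that existing sensors detect, and new monitoring investment follows the region that already has the most data.

```python
import random

random.seed(42)

# Hypothetical regions with unequal sensor coverage (0..1) and
# illustrative annual fire counts - not real figures.
coverage = {"California": 0.9, "Congo Basin": 0.2}
fires_per_year = {"California": 100, "Congo Basin": 300}

training_data = {region: 0 for region in coverage}

for year in range(10):
    # Only detected fires become labelled training data.
    for region in coverage:
        detected = sum(
            random.random() < coverage[region]
            for _ in range(fires_per_year[region])
        )
        training_data[region] += detected
    # Reinvest: the best-documented region gets even better coverage.
    best = max(training_data, key=training_data.get)
    coverage[best] = min(1.0, coverage[best] + 0.05)

print(training_data)
```

Even though the Congo Basin has three times as many fires in this sketch, California ends up with far more training data, and the gap widens every year.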

This pattern connects directly to issues of knowledge representation and validation. It's not just about the possibility of biased datasets: worldviews are embedded in algorithms. When AI systems perpetuate Western-centric thinking while dismissing other knowledge systems, we're digitising colonialism (often unconsciously) but with very real consequences.

This bias becomes even more problematic when we consider the differences in fire behaviour across regions. An AI model trained on California chaparral fires won't transfer directly to Siberian peat fires or Mediterranean scrub fires - the composition of the vegetation is different, for a start. Anyone who's looked at plant and wood ash analyses knows exactly what I'm talking about.

As a descendant of well-known carpenters in the Middle East, I learnt the differences between hardwood and softwood as a child: mahogany burns differently from pine, ebony takes much longer to sand to perfection than beech. These same principles apply to wildfire AI systems, but on a global scale. California’s shrubs are packed with flammable oils and resins that create intense, fast-moving fires. Siberian peat is completely different - it’s like compressed organic matter that can smoulder underground for months, almost like a slow-burning coal. A model trained to recognise California’s resin-fuelled blazes isn’t going to understand the wet forest fires of the Congo Basin (particularly in the Democratic Republic of Congo, Cameroon, and Gabon) because it’s essentially looking at different materials. These Central African wet forest fires have doubled in recent decades. Each vegetation type has distinct chemical signatures that affect ignition temperatures, burn rates, and combustion patterns.

These technical limitations are compounded by detection issues. Satellite fire detection systems are good at spotting surface temperature anomalies, but they can't differentiate between a wildfire and a controlled burn. Controlled burns in South Sudan or Scotland are part of established land management practices, yet they are detected as 'wildfires'. Classification based on thermal imaging alone is simplistic and misses the cultural and ecological context that we understand when studying plant combustion properties.
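A crude sketch shows why thermal thresholding alone can't tell these apart. The site names, brightness temperatures, and threshold below are all invented for illustration; real active-fire products are far more sophisticated, but the core limitation is the same.

```python
# Hypothetical thermal detections - values are illustrative only.
detections = [
    {"site": "California chaparral", "brightness_k": 360, "context": "wildfire"},
    {"site": "South Sudan savanna", "brightness_k": 355, "context": "controlled burn"},
    {"site": "Scottish moorland", "brightness_k": 350, "context": "controlled burn"},
]

THRESHOLD_K = 340  # hypothetical brightness-temperature cutoff


def classify(pixel):
    # Thermal imaging alone: anything hot enough gets flagged "wildfire".
    return "wildfire" if pixel["brightness_k"] > THRESHOLD_K else "no fire"


for pixel in detections:
    print(f"{pixel['site']} -> {classify(pixel)} (actually: {pixel['context']})")
```

All three detections come back as "wildfire", because a temperature cutoff carries no information about land management intent.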

Based on this experience and having studied ISO 42001, I recognise three areas where international standards would now require us to act:

  • We need to get comfortable with transparency. ISO 42001 requires organisations to establish processes for AI system transparency and explainability. If we can’t explain why an AI system made a decision, or why it’s showing us fires in one region but not another, we’re not just failing ethically - we’re failing to meet emerging international standards for AI governance.
  • We need diverse voices in the room, not just as an afterthought, but as core decision-makers throughout the entire development process. ISO 42001 emphasises stakeholder engagement and inclusive AI development practices. The people most affected by the blind spots need to be the ones helping to identify and fix them.
  • We need to keep ‘humans-in-the-loop’ for big decisions. ISO 42001 addresses human oversight requirements for AI systems, particularly for high-risk applications. AI can process information faster than we can, but human judgement, especially from diverse perspectives, still matters (international standards recognise this as essential).
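The third point can be sketched as a simple decision gate. The risk levels and review step below are my own illustrative assumptions, not ISO 42001's actual mechanism - the standard describes governance requirements, not code.

```python
from dataclasses import dataclass


@dataclass
class Decision:
    action: str
    risk: str  # "low" or "high" - a hypothetical, simplified risk label


def require_review(decision: Decision) -> bool:
    # High-risk AI decisions are routed to a human; low-risk ones pass through.
    return decision.risk == "high"


queue = [
    Decision("dispatch fire crew to region X", "high"),
    Decision("log thermal anomaly for later analysis", "low"),
]

for d in queue:
    if require_review(d):
        print(f"HOLD for human review: {d.action}")
    else:
        print(f"Auto-approved: {d.action}")
```

The point isn't the code, it's the design choice: the system never acts on a high-risk decision without a person in the loop.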

Organisations that get this right build better products, earn deeper trust, and avoid the disasters that come from poorly governed AI systems.


My wildfire research taught me that AI systems are mirrors: they reflect the world not as it is, but as it has been documented by those with the resources to do so.

We have a choice: we can keep building AI that replicates existing inequalities, or we can build AI that actively works toward justice.

AI developers need to be able to see the forest and the trees. Global perspectives set the foundation.


Image credit: 

Gilbertiodendron dewevrei, Leguminosae - Flowers from a tree in Korup National Park, Cameroon.

ID:1592950 © RBG Kew https://creativecommons.org/licenses/by/3.0/

Xander van der Burgt
