A reflection on how an abandoned 'unethical chatbot' project sparked a deeper understanding of AI ethics in global materials systems.
Five years ago, I began exploring how technology intersects with ethics. Only recently did I realise that this was the beginning of something deeper: that experiment became an unlikely entry point into what is now my main focus. Can ethical frameworks be embedded in AI systems, especially those that shape complex global supply chains and materials extraction?
The Unlikely Beginning: An Art Project That Never Was
My journey into AI ethics began unintentionally. Through a series of "what ifs...?" I deliberately tried to create something unethical, as a creative exploration of the intersection of technology and society. I conceptualised a project to build an 'unethical chatbot' designed to provoke discussion. It was a test of how far boundaries could be pushed, of how long it would take to provoke a negative response from the user, and ultimately a way to question the responsibilities of artificial intelligence. The goal wasn't to cause harm, but to use the project as an interactive lens for examining the darker possibilities in AI system design.
Although I never launched the chatbot, the process of thinking through its implications became my gateway into understanding the ethical dimensions of AI system development. That artistic provocation became a catalyst for deeper inquiry about AI's role in the wider world.
Ethical considerations in AI aren't abstract philosophical concepts, especially in live AI systems. They become very real when you're designing systems that impact people, ecosystems and trade.
The Evolution: From Provocation to Purpose
My awareness deepened as I moved into work focused on industrial AI solutions. The stakes in this field (AI tools for materials researchers) go well beyond accuracy or performance metrics. These systems intersect with water extraction rights, emissions, mineral access, and global employment chains. If we want systems to support equitable development or align with frameworks like the UN SDGs, they need to be engineered with much more ethical scrutiny.
Today I find myself at the intersection of machine learning, materials science, and ethics. AI models used in this space aren't just helping discover compounds or forecast corrosion. Models are also redirecting entire flows of energy, capital, and raw inputs. In regions like the Lithium Triangle of South America or cobalt-producing zones in the DRC, decisions made by algorithms determine who benefits and who pays the price. This becomes even more critical as I begin exploring areas like mining, where ethical questions unfold in highly visible ways, impacting water access, land rights and livelihoods.
Understanding Ethical Challenges in AI-Driven Materials Systems
Systematic Bias in AI Systems
AI systems can either reinforce existing inequities or contribute to more equitable resource allocation, depending on governance approaches. The UNCTAD Technology and Innovation Report 2025 warns that
"AI technologies trained on skewed or discriminatory data are likely to ignore particular social, economic, environmental and cultural contexts, with the risk of deepening existing data divides." (p. 144).
The Stanford HAI AI Index Report documents industry concentration in AI development. In materials discovery and extraction, technical systems often downplay or ignore the perspectives of directly impacted communities. As Birhane argues, this can result in algorithmic processes that appear neutral but are in fact systematically biased.
Lithium: Efficiency vs. Water Justice
- The region known as the Lithium Triangle (in parts of Argentina, Bolivia, and Chile) holds around 56% of the world's lithium reserves.
- Lithium in this region is extracted from salt flats using brine evaporation methods, which can consume between 51,000 and 135,000 litres of water per tonne of lithium carbonate equivalent (LCE). In some high-altitude desert operations, the water demand has been estimated at around 2 million litres per extracted tonne when accounting for freshwater drawn from fragile aquifers.
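To put those per-tonne figures in context, a rough back-of-envelope calculation shows the scale of water involved. The annual output of 20,000 tonnes of LCE below is a hypothetical plant capacity chosen only for illustration; the per-tonne range comes from the figures above.

```python
# Back-of-envelope water footprint for a hypothetical lithium brine operation.
# Per-tonne figures are the cited range for brine evaporation; the annual
# output of 20,000 tonnes LCE is an illustrative assumption, not a real plant.

LITRES_PER_TONNE_LOW = 51_000      # lower bound of the cited range
LITRES_PER_TONNE_HIGH = 135_000    # upper bound of the cited range
ANNUAL_OUTPUT_TONNES = 20_000      # hypothetical plant capacity

low = LITRES_PER_TONNE_LOW * ANNUAL_OUTPUT_TONNES
high = LITRES_PER_TONNE_HIGH * ANNUAL_OUTPUT_TONNES

# Express in gigalitres (1 GL = 1e9 litres) for readability.
print(f"Annual water use: {low / 1e9:.2f}-{high / 1e9:.2f} GL")
```

Even at the low end, a single mid-sized operation would draw over a gigalitre of water a year from some of the driest aquifers on Earth.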
Industry and academic sources confirm that advanced machine learning algorithms are actively used to optimise processes in lithium extraction. An industrial report by SLB (2024) highlights the use of AI/ML for real-time process optimisation. An academic study by Fujita et al. (2023) demonstrates how long short-term memory (LSTM) based deep learning models can enhance brine recovery.
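The time-series framing behind such models can be sketched in a few lines. The example below substitutes a simple linear autoregression for the LSTM (to stay dependency-light), and the brine-concentration series is synthetic; it only illustrates the sliding-window, next-step forecasting setup that work like Fujita et al.'s builds on.

```python
import numpy as np

# Toy sketch of sliding-window forecasting for a brine-concentration series.
# A linear autoregression stands in for the LSTM used in the literature;
# the concentration data below is synthetic (a noisy upward trend).

rng = np.random.default_rng(0)
t = np.arange(200)
concentration = 1.0 + 0.005 * t + 0.02 * rng.standard_normal(200)

WINDOW = 10  # predict the next reading from the previous 10

# Build a (samples, window) design matrix and next-step targets.
X = np.array([concentration[i:i + WINDOW] for i in range(len(concentration) - WINDOW)])
y = concentration[WINDOW:]

# Fit linear weights by least squares; an LSTM would learn a nonlinear map here.
coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

# One-step-ahead forecast from the most recent window.
next_pred = np.r_[concentration[-WINDOW:], 1.0] @ coef
print(f"Forecast next concentration: {next_pred:.3f}")
```

The ethical point is that nothing in this objective (minimise prediction error, maximise recovery) says anything about aquifers or communities; those concerns have to be added deliberately.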
While these systems can maximise output and lower energy costs, research from the Stockholm Environment Institute notes that they are rarely aligned with local Indigenous governance. In lithium-rich regions, Indigenous communities have been “historically excluded from decision-making processes,” despite their traditional reliance on the affected ecosystems.
Decisions about extraction, water use, and ecological consequences are often made without community input. This exclusion creates a form of digitally mediated environmental injustice, a concept examined in Barandiarán's analysis of lithium mining in Chile, Argentina, and Bolivia.
Cobalt mining illustrates many of the same ethical tensions around automation, community exclusion, and the balance between innovation and justice.
Cobalt: Formalising Artisanal Livelihoods
- The Democratic Republic of the Congo (DRC) produces approximately 73% of the world's cobalt, essential for batteries in AI hardware and electric vehicles.
- Artisanal and small-scale mining (ASM) once accounted for around 10% of the DRC's cobalt output, supporting livelihoods for around 100,000–200,000 people.
- However, industrial scaling and market pressures had reduced ASM's share to below 2% of national cobalt production by 2024.
- Efforts to ensure ethical sourcing through blockchain traceability and automated scoring systems can unintentionally exclude artisanal and small-scale miners by imposing complex compliance requirements. These producers often lack the technical capacity and financial resources (p. 31) to comply.
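The exclusion mechanism is easy to see in miniature. The toy scoring function below is entirely hypothetical (the requirement names and threshold are invented for illustration), but it shows how a checklist-style automated score filters out producers who lack formal documentation, rather than those who behave unethically:

```python
# Hypothetical compliance score: each requirement is pass/fail, and a sourcing
# platform admits only producers above a threshold. Requirement names and the
# threshold are invented for illustration only.

REQUIREMENTS = [
    "digital_record_keeping",
    "third_party_audit",
    "gps_tagged_shipments",
    "registered_export_licence",
]
THRESHOLD = 0.75  # fraction of requirements that must be met

def compliance_score(producer: dict) -> float:
    """Fraction of checklist requirements the producer satisfies."""
    return sum(producer.get(req, False) for req in REQUIREMENTS) / len(REQUIREMENTS)

def admitted(producer: dict) -> bool:
    return compliance_score(producer) >= THRESHOLD

# An industrial mine with back-office capacity passes easily...
industrial = {req: True for req in REQUIREMENTS}
# ...while an artisanal cooperative mining responsibly, but without audit
# budgets or digital infrastructure, is filtered out.
artisanal = {"registered_export_licence": True}

print(admitted(industrial), admitted(artisanal))  # True False
```

Note that the score never measures working conditions or environmental practice directly; it measures the capacity to document them, which is precisely what small producers lack.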
As AI systems reflect the priorities and assumptions of the developers who design them, addressing biases must start at the beginning of the design process.
The Power of Diverse Teams in Ethical AI Development
Extensive research confirms that diverse and inclusive teams aren’t just fairer: they are more likely to identify blind spots, anticipate unintended consequences, and challenge groupthink. As AI systems are used across global materials supply chains, technical decisions made in one region often have environmental or human impacts in another.
Studies by McKinsey show that organisations integrating diverse perspectives throughout decision-making are more adaptive and innovative:
“There is ample evidence that diverse and inclusive companies are likely to make better, bolder decisions - a critical capability in the crisis. For example, diverse teams have been shown to be more likely to radically innovate and anticipate shifts in consumer needs and consumption patterns - helping their companies to gain a competitive edge.”
Preface, 'Diversity Wins: How Inclusion Matters'.
An evidence-based feature in MIT Sloan Management Review shows that the benefits of diversity are greatest when DEI is embedded into core strategy, rather than treated as a separate initiative.
In ethical material-AI projects, where systems may affect communities far from the design table, this foresight is not a luxury but a necessity.
Integrating Ethics with Industry Standards
Combining ISO 27001 (information security) with the new ISO 42001 (AI management) creates dual-track governance: securing infrastructure while auditing for fairness, transparency, and inequitable power structures.
Embedding recurring ethics reviews and requiring stakeholder input at key stages of the AI lifecycle helps turn high-level principles into real practices. Rather than treating ethics as an add-on, this keeps responsible practices integrated throughout. As shown in this research paper on AI ethics implementation, the approach embeds ethical commitments in real-world workflows.
Ethics aren't only built into training data or compliance frameworks, they also emerge through the narratives, visuals, and metaphors that shape how AI is explained and sold. This brought me back to my background in art and communication.
Lessons from Marketing Ethics and Transparency
Recently, I explored another artistic angle that deepened my thinking about ethics in relation to materials. With the assistance of AI, I created a series of satirical campaigns for critical minerals, borrowing heavily from early 20th-century advertising and its use of orientalist aesthetics. Inspired by Edward Said's critique in Orientalism, my intent was to demonstrate how Western marketing has often exoticised the "East," creating narratives that distance consumers from the realities of resource extraction and exploitation.
Through this experiment, I saw how "aesthetic distance" (when visual and narrative style creates a sense of remoteness) can make the harms and disruptions caused by extraction seem less immediate, or even invisible. This same psychological distancing, I realised, is at play in some uses of AI: systems can either reinforce these disconnections, masking real-world impacts, or (if designed ethically) help challenge and reveal them instead.
By exaggerating and leaning into marketing fantasy, the campaigns showed just how easily presentation can disconnect audiences from the consequences of material sourcing. For me, it was a reminder that the ethical challenges of AI don't just lie in code or data; they are also shaped by the stories and images that mediate the relationship between technology and society. Ethical practice extends to how we communicate and frame the narratives around these systems, not just how we design and deploy them.
Scale and Speed: Why Embedded Ethics Matters
Autonomous labs now use AI to screen 2.2 million candidate inorganic crystals in silico (through computer simulation), "equivalent to nearly 800 years of knowledge". Data-centre operators such as Google have installed 100 million lithium-ion cells for backup power since 2015; that's enough to power over 400 UK homes for a full year.
The acceleration continues: North Carolina State University's latest self-driving laboratory achieves 10x greater data collection than previous autonomous systems while reducing chemical waste through smarter decision making. Yet even this sustainable innovation operates faster than human review can assess its broader implications.
"The future of materials discovery is not just about how fast we can go, it's also about how responsibly we get there. Our approach means fewer chemicals, less waste, and faster solutions for society's toughest challenges."
Professor Milad Abolhasani, North Carolina State University. Source: ScienceDaily, July 2025.
When optimisation loops move this fast and at this scale, human review cannot keep up. Ethical frameworks must be designed into objectives and deployment policies from day one.
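One concrete way to design ethics into objectives is to treat community limits as hard constraints rather than soft penalties. The sketch below is deliberately simplified and every number is hypothetical: a brute-force search over pumping rates that maximises output only among rates respecting a fixed water-use cap.

```python
# Toy illustration of building an ethical limit into the objective itself:
# maximise lithium output over candidate pumping rates, but only among
# options that respect a hard water-use cap. All numbers are hypothetical.

WATER_CAP_LITRES_PER_DAY = 5_000_000  # cap agreed with local stakeholders (hypothetical)

def output_tonnes(pump_rate: float) -> float:
    """Hypothetical yield curve: output rises with pumping, with diminishing returns."""
    return 12.0 * pump_rate ** 0.5

def water_use(pump_rate: float) -> float:
    """Hypothetical water draw: scales linearly with pumping rate."""
    return 900_000 * pump_rate

# Unconstrained optimisation would simply pick the highest rate (8).
candidate_rates = [1, 2, 3, 4, 5, 6, 7, 8]
feasible = [r for r in candidate_rates if water_use(r) <= WATER_CAP_LITRES_PER_DAY]
best = max(feasible, key=output_tonnes)

print(f"Chosen rate: {best} (uses {water_use(best):,.0f} L/day)")
```

The point is structural: the cap filters the feasible set before optimisation runs, so no amount of extra yield can trade away the water limit. A penalty term, by contrast, can always be outweighed.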
Conclusion
Five years ago, an abandoned art project about an 'unethical chatbot' planted a seed that has grown. That creative experiment forced me to confront questions I couldn't ignore: What responsibilities do we have? How do our systems affect the most vulnerable?
Working in materials science AI, I see those questions everywhere. In lithium extraction algorithms that optimise water use without consulting Indigenous communities. In cobalt traceability systems that exclude artisanal miners. In materials discovery platforms that accelerate innovation while concentrating power.
My background taught me to see what's hidden in plain sight. In AI, that means asking whose voices are missing, whose interests aren't represented, and whose communities bear the costs of technical progress. The most important insight from this journey is that ethical AI isn't built by accident; it grows from intentional choices about inclusion, accountability, and justice.
Every algorithm embeds values. Every optimisation reflects priorities. Every deployment has political consequences.
As these systems become more powerful, the frameworks we choose today will shape technology for generations. My hope is that by sharing this journey, including its false starts and ongoing questions, we can build AI that truly serves all of humanity, not just those with the power to define its objectives.
Images for this blog post were generated by AI. AI-assisted research, verified by a human.