You know that feeling when somebody asks you, without telling you why, to pick a number between one and ten? My immediate thought is ‘what if I get it wrong?’. While the consequences of choosing the ‘wrong’ number are unlikely to be at the level of Squid Game, that momentary brain clench is similar to the way I feel when thinking about the current AI boom.
This article was first conceived as an explainer on AI agents, but in the time between that idea a few weeks ago and today’s commitment to just get this bloody thing finished, I realised that there are already some great and/or interesting explainers written:
AI Agents x Crypto X post and GitHub repo by Sophia Dew of Celo.
Crypto & Vice: NSFW AI Agents Exponential Potential (unsure if this was advertorial but yikes!).
69 trends in 2025-era DAO design by Kevin Owocki on X.
The A.I. Agent Supercycle: A Guide to the Best Infrastructure Plays by Edgy on X.
So really, this piece is my attempt at unclenching various parts of my thinking about humanity’s great passion for replacing ourselves with machines.
AI Could Go Either Way
I’ve been flip-flopping between delight at the fun and novel applications of AI and AI agents, which are easy and entertaining to experiment with; and a desire to warn you against the kind of corporate-controlled, ubiquitously AI-fuelled future that I can effortlessly imagine. I think this tech is on the fence right now, and we can't afford to remain passive.

Pink metallic humanoid robot sitting on a fence looking thoughtful
I can’t quite get past the idea that we are somehow putting the last nail in the coffin of human agency and intelligence, perhaps even that of humanity’s existence. And yet I can also catch glimpses of zen-level societal improvement, a future in which by harnessing AI’s aggregational and predictive power we have finally worked out how to exist on this planet (our only home) without stressing the environment beyond its limits.
The Ugly Parts Sure Ain’t Pretty
In spite of all the good that may come, here are the parts that bother me, in no particular order:
We don’t have control over what corporations are doing with data they have about us.
In this recorded interview about her book, Data Cartels, law professor Sarah Lamdan highlights the skyrocketing value of data, particularly the example of academic publishers evolving into purveyors of “business insights” — selling the information they have already monetized via research databases for use as AI-training data for potentially harmful (certainly biased) predictive algorithms in sectors such as insurance, policing, and banking.
We don’t care enough about nation states that clearly plan to use AI for authoritarian surveillance.
According to this Wired article, the U.S. is happy to take investment from wherever, in order to maintain AI market leadership. Whether it’s questionable influence and/or potential IP leakage by the UAE, as the Wired article explains, or China’s documented use of AI in military, cyber-influence, and surveillance applications, it’s concerning to me that AI development is dominated by players who care more about progress, profit, and power than human rights, privacy, and equitable outcomes.
We keep talking about AI enhancing human capability and creativity but much of our effort and attention seems to be going to AI that replaces human effort.
So what will be the result of externalising the intellectual effort needed to write, create, research, compose, and design? My fear is that we will be left bereft, and I’m not the only person who thinks so:

SketchesbyBoze on X

ABC News: DeepSeek's emergence signals the beginning of the human-replacing phase of AI
There’s an endless drive for productivity and growth associated with AI — the idea that AI will empower us to achieve more efficiency and further economic and industrial expansion. Not only does this narrative conveniently ignore the question of how those who lose jobs to AI will afford to live, but it’s also at odds with our precarious planetary situation, which, in my humble opinion, would benefit from a degrowth approach.
As the authors of this article note, we’re at risk of painting ourselves into a corner if we don’t stop to consider the impacts of making AI ubiquitous:
“AI itself is becoming an infrastructure that many services of today and tomorrow will depend upon. Considering the vast environmental consequences associated with the development and use of AI, of which the world is only starting to learn, the necessity of addressing AI alongside the concept of infrastructure points toward the phenomenon of carbon lock-in.
Carbon lock-in refers to society’s constrained ability to reduce carbon emissions ... due to the inherent inertia created by entrenched technological, institutional, and behavioral norms. That is, the drive for AI adoption in virtually every sector of society will create dependencies and interdependencies from which it will be hard to escape.”
Which leads me to…
The massive ecological footprint of AI
I’m hopeful that the research on green AI (green-by AI and green-in AI) will continue to expand, because right now, AI is the colour of pollution. Most of the research into the environmental impact of AI has centred on the energy used to train large language models, which leads to significant carbon dioxide emissions, as illustrated in this graph:

The authors who produced this graph further note that “using these systems also has a cost. As an example, GPT-3 was accessed 590 million times in January 2023, leading to energy consumption equivalent to that of 175,000 persons. Moreover, in inference time, each ChatGPT query consumes energy equivalent to running a 5 W LED bulb for 1hr 20 min, representing 260.42 MWh per day.”
It’s not just about energy and emissions though; this research paper introduces a method for assessing the impact of ‘AI as a service’, encompassing training, inference, online hosting, and end-user terminals (e.g. phone or desktop computer). Using Stable Diffusion as a case study, they calculated that one year of use led to “360 tons of carbon-equivalent emission, an impact on metal scarcity equivalent to the production of 5659 smartphones, and an energy footprint of 2.48 Gigawatt hours”.
This is energy use at industrial scale; just one Gigawatt hour could power roughly 80 million full smartphone charges (assuming a typical battery of around 12 Wh). It certainly gives me pause to consider the number of iterations I generated to arrive at my earlier image of the ‘pink metallic humanoid robot sitting on a fence looking thoughtful’.
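For anyone who likes to sanity-check these numbers, here’s a quick back-of-the-envelope sketch using only the per-query figure quoted above. The implied daily query count is my own derivation from those quoted numbers, not a figure from the source:

```python
# Convert the quoted per-query figure (a 5 W LED bulb running for
# 1 h 20 min) into watt-hours, then see how many daily queries the
# quoted 260.42 MWh/day total would imply.
bulb_watts = 5
hours = 1 + 20 / 60                      # 1 h 20 min
wh_per_query = bulb_watts * hours        # ≈ 6.67 Wh per query

daily_wh = 260.42 * 1_000_000            # 260.42 MWh expressed in Wh
implied_queries_per_day = daily_wh / wh_per_query

print(f"{wh_per_query:.2f} Wh per query")
print(f"≈ {implied_queries_per_day / 1e6:.0f} million queries per day")
```

On those figures, inference alone implies tens of millions of queries every single day — and remember, that’s on top of the training-time emissions shown in the graph above.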
Is Crypto a Catalyst or a Casino?
So far, the crypto-specific AI narrative has been frothing about using AI agents in DeFi (#DeFAI) and trading, improving the web3 user experience, and enabling features such as autonomous investments, voting, and community management for DAOs. Really though, we’re all hyper-aware that for many people, AI agent tokens are this cycle’s new ‘meta’, i.e. the latest excuse to play the casino. I’m not immune; I FOMO’d into very small amounts of Virtual, GAME, and aixbt only to see them crash recently when the release of Chinese startup DeepSeek’s R1 model sent tech stocks and crypto tokens tumbling.
With the release of the Venice.ai token VVV, things got a little testy on Crypto Twitter, and once again I thought ‘Sigh, there has to be more to it, surely?’

While pondering all this I did find a Bankless article which puts forward the idea that blockchain can enhance the experience of using AI agent services through the provision of permissionless payment rails, alongside the transparency benefit of decentralised networks. Now that I can get behind.
Likewise, there’s evidence that the combined application of AI and blockchain technologies improves supply chain performance. Look at how AI boosts the blockchain efficiency advantage here:

Comparison of supply chain performance with traditional, blockchain, AI, and blockchain with AI technologies
My hope is that once some of the hype subsides, web3 builders will find new and amazing use cases which combine blockchain technology with artificial intelligence applications. No doubt there are already project teams in the public goods, regen, and educational spaces of crypto who are dreaming big and looking for support. I’d love to hear about them.
I’ve not explored it properly yet, but the SingularityNET ecosystem appears to be aligned with the pursuit of positive AI applications. Its stated mission is “creating a decentralized, democratic, inclusive and beneficial Artificial General Intelligence. An ‘AGI’ that is not dependent on any central entity, that is open for anyone and not restricted to the narrow goals of a single corporation or even a single country.” Also check out Mother, “a decentralized network where AI agents work, govern, and grow. Powered by community, driven by purpose”.
Maintaining Our Agency to Make AI a Public Good
I guess it’s clear by now: I’m several types of AI skeptic. You could say I’ve got dem LLM Blues. It’s not that I don’t believe it’s real or that I think it completely sucks, just that I’m convinced that we must use our collective agency to prevent it from cementing our climate catastrophe, and to steer it in directions that don’t discriminate against and disenfranchise citizens.
Allowing AI power to develop and become the status quo without attention to its impact is a sure path to a future where AI runs rampant, grabbing pussy and ignoring felony charges left, right, and centre. Hell, it could even become President.
In So You Want to Escape the Algorithm, Elan Ullendorff writes that avoiding algorithms is near impossible, and instead recommends staying intentional and thinking critically, leaning in to explore the cracks and crevices to find the humanity. To me, humans are infinitely more interesting than AI. If you’re looking for a strategy to not just cope with AI but to thrive, I highly recommend Packy McCormick’s post titled Most Human Wins.
I’ll leave the last word to Vitalik Buterin:
“We, humans, continue to be the brightest star. The task ahead of us, of building an even brighter 21st century that preserves human survival and freedom and agency as we head toward the stars, is a challenging one. But I am confident that we are up to it.”
Author and Designer Bio
trewkat is a writer, editor, and designer interested in the potential for web3 to disrupt fat cats. She is a long-time contributor at Black Flag DAO and cofounder of IndyPen CryptoMedia.
Editor Bio
Hiro Kennelly is a writer and cofounder of IndyPen CryptoMedia, buidler at Black Flag DAO, and governator at DAOplomats. He loves people, Moloch, and degenerative cryptoeconomics.
This post does not contain financial advice, only educational information. By reading this article, you agree and affirm the above, as well as that you are not being solicited to make a financial decision, and that you in no way are receiving any fiduciary projection, promise, or tacit inference of your ability to achieve financial gains.
IndyPen CryptoMedia is open to submissions for publication. We’d love to read your work, so please submit your article for consideration!