At Davos, Crypto Pushes Case for Decentralized AI
With Big Tech set to dominate AI, decentralizers made the case for a blockchain governance layer for the next era of the internet.
The list of companies leading the arms race in generative artificial intelligence tells you all you need to know about the risks posed by a concentration of power in this technology – and about how blockchain’s decentralized approach to data management can help mitigate them.
The five most prominent members of the corporate establishment now driving AI are familiar names: Microsoft, Alphabet, Amazon, Apple and Meta. They’re the same internet platforms that have dominated Web2 for the last two decades. Between them, these five players are investing billions in the technology, both through giant stakes in startups such as OpenAI and Anthropic and through their own internal projects.
Not coincidentally, those companies occupy five of the top seven positions in overall corporate market capitalization rankings. Their combined market capitalization is just shy of $10 trillion. Add in sixth-placed Nvidia, whose graphics cards those same five are aggressively buying up to build the computational capacity to develop generative AI’s large language models (LLMs), and you get to more than a quarter of the entire S&P 500 market cap.
It’s fitting that the only company of comparable size worldwide is Saudi Aramco, Saudi Arabia’s state-owned oil company, which ranks third. After all, the element that explains the internet titans’ dominance – data – is often described as the “new oil.”
The pole positions these companies command stem from the massive amounts of digital data they hold about us, the human beings on whose language choices and behavioral patterns the LLMs are trained. The Big Five’s search engines, social media platforms, browsers, operating systems and cloud computing services have extracted zettabyte upon zettabyte of such data about our online activity and the social relations it reveals. We were the quarries from which they mined this new digital commodity.
Incentivized to do so by the internet economy’s prevailing surveillance capitalism business model, these companies then used that commodity to create new machines (algorithms) with which to target our adrenal glands and, with consistent dopamine hits, surreptitiously direct us to take actions in their business interests. Over time, they iteratively fine-tuned a set of human manipulation tools to keep us endlessly engaged with their platforms in ways that kept advertisers, app developers and corporate IT departments paying for their services. (Six years ago, Facebook’s first president, Sean Parker, let it slip that this was a deliberate plan aimed at “exploiting a vulnerability in human psychology.”)
Those market cap numbers show that this model spectacularly served the interests of the platforms’ shareholders. But there is now incontrovertible evidence that it was grossly misaligned with society at large.
With adolescent suicides having risen by around 50% since 2008, the U.S. Surgeon General has warned about the threat to young people’s mental wellbeing from exposure to online bullying and other forms of toxic behavior. Meanwhile, with just about any contentious issue now locked in volleys of abuse between warring interest groups, we are finding it hard to ascertain facts and, by extension, to resolve urgent issues such as climate change and the Gaza conflict. More broadly, as Frank McCourt and I argue in our forthcoming book, Our Biggest Fight, the internet economy as currently structured is responsible for the wholesale decline in the health of our democracy.
Why on earth would we port this same destructive, oligopolistic model into the AI age, when data-driven algorithms will have even greater sway over our lives? Why allow the centralized corporate owners of the AI infrastructure absolute control over all vital information that pertains to our essence as human beings?
Of course, the platforms will fight tooth and nail to defend what they will describe as their right to exploit their data. But we’ve reached a point where we should recognize it as our data. It is too dangerous to have this human-sensitive information monopolized and secretly manipulated by companies that have already shown a capacity to harm us.
How we would get to a model where data and content are controlled at the edges of the network rather than at the center is a topic for a different article (perhaps one I’ll write closer to the book’s publication). Just know that changes in data management models are coming, one way or another. With The New York Times suing Microsoft-backed OpenAI over the ingestion of the newspaper’s articles into its model, one can expect many institutions that control digital content to start withholding new material from the AI companies.
That opens a path toward AI models running on a more decentralized system in which training data is used only with the consent of its owners. For that we’ll need the kind of decentralized tracking that blockchains can enable, both to assure consenting owners that their data and content are being used as described and to ensure that vital information isn’t subject to AI-driven “deepfake” tricks. We need a system of verification in which people can trust a censorship-resistant, open-source protocol rather than Big Tech’s promises that it will “do the right thing.”
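To make that idea a little more concrete, here is a minimal, illustrative sketch in Python of what consent-aware provenance tracking could look like. It is not any particular project’s protocol: the in-memory dictionary merely stands in for the append-only ledger a blockchain or hashgraph would provide, and every name and field in it is hypothetical.

```python
# Illustrative sketch only: an in-memory "registry" stands in for the
# append-only ledger a blockchain or hashgraph would provide in practice.
# All names and fields here are hypothetical, not drawn from any real protocol.
import hashlib
import time
from typing import Optional

REGISTRY = {}  # maps content fingerprint -> provenance record (ledger stand-in)

def fingerprint(content: bytes) -> str:
    """Derive a stable content fingerprint (SHA-256 hash)."""
    return hashlib.sha256(content).hexdigest()

def register(content: bytes, owner: str, consent_for_training: bool) -> dict:
    """Record who published the content and whether they consent to AI training."""
    record = {
        "owner": owner,
        "consent_for_training": consent_for_training,
        "timestamp": time.time(),
    }
    REGISTRY[fingerprint(content)] = record
    return record

def check_before_training(content: bytes) -> bool:
    """A model builder checks the registry before ingesting the content."""
    record = REGISTRY.get(fingerprint(content))
    return bool(record and record["consent_for_training"])

def verify_provenance(content: bytes) -> Optional[dict]:
    """A reader checks whether the content matches a registered original."""
    return REGISTRY.get(fingerprint(content))

# Example: a publisher registers an article and withholds training consent.
article = b"Original reporting, registered by its publisher."
register(article, owner="example-news-org", consent_for_training=False)
print(check_before_training(article))            # False: do not ingest
print(verify_provenance(article) is not None)    # True: provenance is verifiable
print(verify_provenance(b"tampered copy"))       # None: no matching record
```

The point of anchoring such records to a public, open-source ledger rather than a company database is that no single platform can quietly rewrite who consented to what.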
It’s no wonder, then, that one of the most talked-about topics among the crypto folks at the World Economic Forum in Davos last week – most of them agitating outside the WEF Congress’s security walls rather than within them – was the intersection of AI and blockchain. They were energized by new developments such as the Hedera Hashgraph-proofed data validation system unveiled by Jonathan Dotan of the Starling Lab and the decentralized compute project known as MorpheusAI, led by Erik Voorhees and David Johnston.
Those discussions took place mostly at a remove from the rest of the AI programming at Davos, where many global companies touted solutions to save humanity from the machines. (Tata Consultancy Services, for instance, erected a huge sign declaring that “The Future is AI. The Future is Humanity.”) That’s a pity, because it is now a matter of great urgency that the mainstream listen to the decentralizers in the blockchain community.
Michael J. Casey
Michael J. Casey is Chairman of The Decentralized AI Society, former Chief Content Officer at CoinDesk and co-author of Our Biggest Fight: Reclaiming Liberty, Humanity, and Dignity in the Digital Age. Previously, Casey was the CEO of Streambed Media, a company he cofounded to develop provenance data for digital content. He was also a senior advisor at MIT Media Lab's Digital Currency Initiative and a senior lecturer at MIT Sloan School of Management. Prior to joining MIT, Casey spent 18 years at The Wall Street Journal, where his last position was as a senior columnist covering global economic affairs. Casey has authored five books, including "The Age of Cryptocurrency: How Bitcoin and Digital Money are Challenging the Global Economic Order" and "The Truth Machine: The Blockchain and the Future of Everything," both co-authored with Paul Vigna. Upon joining CoinDesk full time, Casey resigned from a variety of paid advisory positions. He maintains unpaid posts as an advisor to not-for-profit organizations, including MIT Media Lab's Digital Currency Initiative and The Deep Trust Alliance. He is a shareholder and non-executive chairman of Streambed Media. Casey owns bitcoin.