dot AI bubble....
“Keeping perspective” is Michael Pascoe’s mantra. He writes that AI mania has not just warped perspective, it risks blowing up reality.

Over more decades than I care to count of market watching and reporting, I haven’t seen a time when there’s been more widespread conviction that we’re experiencing a dangerous bubble that’s sure to pop, yet the money keeps pouring in to inflate it.

The lead-up to the “Crash of ’87”, now viewed in the rearview mirror as a minor hiccup, was relatively muted. Not even the “dot bomb” bubble at the turn of the century, when all a company had to do to get a share price boost was to add “e” or “dotcom” to its name, was as widely perceived as over-cooked as the present outlook.

“This time it’s different”

This time round there is vastly more money chasing itself, with a circular investment boom at its core built on promises of revolution and, inevitably, that “this time it’s different”. This time round there are trillions of dollars being splurged on a heady mix of momentum riding, FOMO (fear of missing out), and massive bets that the first movers will win all.

There are the market telltales of sky-high valuations for inexperienced startups merely intending to build data centres or AI-somethings, but they are sideshows, mere flotsam that will be blown away when the reckoning occurs, as the shells and frauds were when the dot bomb burst.

As a general rule, things tend to turn out to be not as bad as you fear or as good as you hope. This time round, the hope is about the level of bad. The serious game is the impact both the reckoning and the AI promises will have on the real economy, on employment and wealth. That’s where the potential is for a GFC-scale event, not a piddling ’87 or dot bomb.

And this time round, still carrying the cost of all their COVID efforts, governments and central banks will be less able to ameliorate the pain. We’re already reaping the result of a global easing cycle and global government deficit spending fuelling asset inflation, with its subsequent wealth effect. After what was, with the benefit of hindsight, overcompensation during COVID, the ammo isn’t there to fight another shock.

Two quite different articles this month have highlighted the impossibility of the nirvana being promised by the AI investment promoters. Former colleague Alan Kohler, writing for the ABC, was the bleakest, seeing a future where a GFC-size bust is the least-worst option.

Returns not there

The other is a note to clients by independent economist Gerard Minack showing the AI spend simply can’t generate the returns for investors to justify it. I’ll come back to that. Kohler first, adding the AI and crypto bubbles together for frightening numbers:

Somewhere between $3 trillion and $6 trillion has been invested in building AI infrastructure and software, and that has been responsible for almost all US economic growth over the past year. The top 10 American AI companies have provided most of the US stock market’s gains over the past two years and are now valued at $35 trillion, almost half the total market. Meanwhile there are 20,000 cryptocurrencies worth $5.8 trillion, of which Bitcoin represents more than half. The total cash in the AI and crypto bets is more than a quarter of global GDP; it’s probably the greatest technology investment boom/bubble in history.

No probably about it, in my opinion.
It would not take a total crash to send shocks through the financial and real economies.

Jobs armageddon the quid pro quo

But Kohler’s Doomsday outcome is that it would be worse if the investment ends up being justified by profits, as those profits would be created by massive long-term unemployment for the many while the very few at the top continue on their present path of becoming even more unimaginably rich.

Gerard Minack’s note has a smaller focus and thus a more concrete outcome: there’s a lot of capital being burned. Minack limits his numbers to the “AI8” listed entities: Alphabet, Oracle, Microsoft, Amazon, Meta, Broadcom, Nvidia and Palantir. In this bubble phase, investors keep rewarding companies that increase planned AI-related investment spending. But: “The larger the investment spend, the greater the revenue that will need to be generated to ensure an adequate return on that investment. In my view that revenue hurdle is already implausibly high. Whatever the technical wonders that AI will generate, the investment returns will disappoint. That will inevitably lead to significant market losses.”

Greatly simplifying Minack’s analysis, the AI8 will conservatively have investment stock of more than US$1 trillion by the end of next year. That’s allowing for 20 per cent depreciation on an investment spend heading above US$2 trillion in 2027. (And a reminder that this is ignoring the investment spending of unlisted companies such as OpenAI.)

$1T for a 10% return

So how much revenue will their AI businesses need to generate to get a reasonable return on this stock of invested capital? I’ll skip the details of Minack’s figuring, but AI would need revenue of some US$925 billion a year to achieve a modest 10 per cent return on invested capital. And that return compares with the hyper-scalers’ current 25 per cent ROIC. By comparison, Minack quotes Praetorian Capital’s Harris Kupperman’s observation that the incredibly successful Microsoft Office 365 subscription services had revenue of US$94 billion last year. “In other words, to achieve an average ROIC, the AI industry will need to support 7-10 firms with businesses as widely deployed, and widely subscribed to, as Office 365.”
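To see roughly how that hurdle arises, here is a back-of-envelope sketch of the arithmetic. It is my own reconstruction, not Minack’s actual workings (which are skipped above), and the net margin of roughly 11 per cent is an assumption reverse-engineered so the identity reproduces his US$925 billion figure:

\[
\text{required profit} = \text{target ROIC} \times \text{invested capital} = 0.10 \times US\$1\ \text{trillion} = US\$100\ \text{billion a year}
\]
\[
\text{required revenue} = \frac{\text{required profit}}{\text{assumed net margin}} \approx \frac{US\$100\ \text{billion}}{0.108} \approx US\$925\ \text{billion a year}
\]

On those numbers, US$925 billion divided by Office 365’s US$94 billion comes out near ten, which is where the “7-10 firms” benchmark lands.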
And then it gets harder. If the hyper-scalers’ current 50 per cent gross margin falls closer to the S&P 500 average, they would require additional revenue of US$1.2-1.6 trillion. As Minack concludes, “good luck with that”.

A further complication is that AI will cannibalise much of the hyper-scalers’ existing businesses. I would further speculate about what competition between that many players would do to margins. There’s also the well-reported phenomenon of much of the AI splurge being circular – the major players are hanging out and taking in a lot of each other’s washing.

Show me the money

It’s all good fun until investors reach the imminent “show me the money” stage and the music stops. That’s when the big boys take a hit and the fringe players, the bubble startups, lose their shirts.

As for the concurrent crypto-bubble, RBA Governor Bullock last week pointed to the challenge for the financial system’s security if quantum computing ever effectively works. “If you believe what they say on the tin of quantum computing, what takes 200 years to decrypt now, to break, will take a matter of minutes. So it is a big threat,” she said. Decrypted crypto is no crypto at all.

And the Toddler King

Then there is the little matter of the world’s biggest economy being run by a febrile toddler king and a mob of self-enriching accomplices.

It all adds up to the biggest mystery: how markets are so willingly charging higher to yet more records regardless of risk, with the biggest gains at the riskiest end. On one hand there’s the view that the reckoning is always a little further off. On the other, the second law of “old bond dog” Anthony Peters comes to mind: “Nobody gets fired for being long a falling market, but woe betide anyone short a rising market.”

Good luck with that, too.

https://michaelwest.com.au/dangerous-bubble-sure-to-pop-wall-street-has-ai-crash-in-the-wings/
 
PICTURE AT TOP BY GUS LEONISKY.
academic AI....
How AI exposes the moral hypocrisy of academic publishing
Nicholas Agar
Knowledge production in the humanities is undergoing a step change, a sudden transformation driven, in part, by AI technologies.
Many things in the humanities won’t change, simply because there are constants in the ways humans agree or disagree, fall in love or into hate. So long as there are humans in 2075 there will be human philosophers pondering humanity’s problems. The insights of today’s philosophical geniuses will presumably be as interesting to the philosophers of that time as are the insights of Ludwig Wittgenstein to us today.
But step changes in knowledge production place into starker relief some of the bad practices that we have fallen into. Just as Warren Buffett observed about economic downturns, “only when the tide goes out do you discover who’s been swimming naked”, so too abrupt changes in technology and student expectations expose the moral compromises of academic humanists. We have long been swimming naked, clinging to outdated practices that no longer serve students, society or truth.
The AI cheating problem

Prompting ChatGPT to “critically discuss Plato’s theory of forms” isn’t a way to do philosophy. But that’s precisely the path many students now take, motivated by high tuition fees and high-stakes assessments.
There is a race between AI writing tools and AI detection tools, one that the detectors are destined to lose. Companies like OpenAI, which created ChatGPT, don’t reveal their secrets to firms like Turnitin LLC, a plagiarism-detection business. The result? The detection tech will always be playing catch-up.
The numbers tell the story of how far behind they will lag. Turnitin was acquired by Advance Publications for US$1.75 billion in 2019. OpenAI now has a US$500 billion valuation. The first mover wins, and OpenAI has more money to spend training AIs to produce human-like speech than Turnitin can spend on detecting it.
Terminators or Cylons?

Does the difficulty of detecting AI cheating mean that professors should give up? Perhaps it casts the defenders of human writing in the role of Sarah Connor in the Terminator franchise. The odds are clearly against her. But Hollywood produces many movies in which she heroically beats the odds and the machines.
The problem is that our movie analogy is ill chosen. We don’t face machines like the hulking T-800 cyborg. A better representation is the Cylon from the television series Battlestar Galactica. In the 2004 version, machines that perfectly pass for humans engineer our downfall by infiltrating us. What befalls the humans in that story is also happening to the humanities in real life: even as they proclaim that they are fighting AI, humanities scholars are abetting its infiltration.
The vulnerability of the humanities is more ideological than technological. It comes in the form of the teaching-research nexus, a prized feature of the Humboldtian university invented in Prussia in the early nineteenth century. Ernest Boyer, former president of the Carnegie Foundation for the Advancement of Teaching, expressed it well when he wrote:
The most inspired teaching will generally take place when faculty are pursuing their own intellectual work, and students, rather than being passive observers, are partners in the scholarly enterprise.
Our students become our apprentices. In the fullness of time, they replace us. The glitch in this plan, which still works for the sciences, becomes apparent in the overproduction of humanities PhDs for whom there are no jobs.
Paradoxically, much of the money governments spend to sustain the humanities amplifies their vulnerability. The money has attracted academic publishing businesses. Profits passed on to shareholders become debits for governments and taxpayers.
The hypocrisy of punishing AI cheats

Consider the contract I recently signed with humanities publisher Taylor & Francis. It granted them the right to distribute my work “in printed, electronic or other medium now known or later invented, and in turn to authorise others … to do the same”. We can speculate about what this might mean.
Informa PLC is the parent company of Taylor & Francis. Its 2025 financial report offers rare transparency about a quiet transformation underway in academic publishing. Informa is more open to its investors than it is to humanities scholars. The report reveals that Taylor & Francis generated over US$75 million in 2024 from data access licensing, explicitly naming AI companies among customers gaining legal entry to vast troves of scholarly content. With nearly 9,000 new titles added annually and a vast back catalogue of specialist works, Informa is positioning this licensing as a “repeatable income stream” and a key part of its growth strategy.
What this means for humanists is stark. The very articles and books we painstakingly produce are being fed, legally and lucratively, into AI systems that will soon replicate, and perhaps replace, our intellectual labour.
Signing up to be my apprentice by inviting me to supervise your PhD in philosophy is a bit like apprenticing with a master weaver when a factory with power looms has just opened in your town. Yet most authors remain unaware that their work is fuelling the next generation of AI tools, often without any additional consent or compensation.
This is speculation about possible motivations of Informa PLC. It would not suffice for a class action lawsuit mounted by sacked humanities academics. If pressed, Big Oil’s lawyers can vehemently assert their passion for the environment. That’s certainly how Big Academic Publishing’s lawyers would advise them to reply to questions about how they might be contributing to the failure of humanities faculties.
One hint about Informa’s intentions can be found in a linguistic pivot from the 2024 to the 2025 report. In 2024 there was talk of “flexible Pay-to-Publish Open Research platforms”. That language is absent from the 2025 report. Now that governments are less interested in paying for humanities academics to publish, it is a reasonable inference that Informa is looking to replace lost revenue with money from training AIs. Scholars fret about the sloppy academic referencing of AI text. An AI with full access to the Taylor & Francis back catalogue can almost certainly improve on the referencing of distracted humanists anxious about their jobs.
Herein lies the hypocrisy. We punish students for using AI, even as we gift our own research to a business that directly feeds it into the very models that we caution students against using — all of this without compensation, consent or even awareness. If anyone’s cheating, it’s not the students. The challenge for the humanities isn’t to either abet or beat AI detection tools. It’s to reimagine a scholarly ecosystem with AI where truth-seeking is collaborative, transparent and fair. That starts with confronting the uncomfortable truths not just about our students, but about ourselves.
Nicholas Agar is Professor of Ethics at the University of Waikato in Aotearoa New Zealand. He is the author of How to be Human in the Digital Economy and Dialogues on Human Enhancement, and co-author (with Stuart Whatley and Dan Weijers) of How to Think about Progress: A Skeptic’s Guide to Technology.
https://www.abc.net.au/religion/how-ai-exposes-the-moral-hypocrisy-of-academic-publishing/105937278
READ FROM TOP.
YOURDEMOCRACY.NET RECORDS HISTORY AS IT SHOULD BE — NOT AS THE WESTERN MEDIA WRONGLY REPORTS IT — SINCE 2005.
Gus Leonisky
POLITICAL CARTOONIST SINCE 1951.