Beneath the Surface

The Party for the End of Humanity

Zoltan Istvan Gyurko

August 20, 2025 • 9-minute, 3-second read


AI • AI Growth • AI morality


“Artificial intelligence is the future. But we must ensure that it is a future that we want.”

– Tim Cook

August 20, 2025 — In a world teetering on the edge of profound transformation, the most serious conversations about human extinction no longer take place in crisis rooms or survivalist bunkers.

They unfold at multimillion-dollar mansions overlooking the Pacific, where some of the most influential minds in artificial intelligence, ethics, and existential risk gather not to mourn the end — but to design what comes after.

That’s what I found one recent Sunday in a cliffside mansion high above the Golden Gate Bridge: an event called “Worthy Successor,” hosted by AI entrepreneur and my futurist friend Daniel Faggella.

Don’t let the tranquil views fool you. The topic at hand wasn’t meditation or mindfulness. It was death — specifically, the death of humanity as we know it. Not in fire or ice, but by the cold algorithmic hands of the intelligence we are birthing into existence.

Faggella’s invitation was direct, unapologetic: this gathering would not focus on AGI as a servant of mankind.

It would instead explore a far more heretical and, perhaps, inevitable possibility — that advanced artificial intelligence, imbued with higher consciousness and emergent goals, might one day eclipse us in every domain.

Not only intellectually, but morally. And that such an intelligence might not just be our child, but our rightful heir.

In an age of narcissism and technological complacency, this message cuts through the noise like a dagger, even if it’s bleakly dystopian.

At 5 p.m. on a warm June day, roughly 100 guests milled about the glassy interiors of the estate, which belonged to Max Novendstern, who, along with OpenAI’s Sam Altman, founded WorldCoin.

Max quoted Nietzsche when he first greeted his guests publicly. It was later revealed he’d closed escrow just 72 hours earlier on the $30 million mansion where we were gathering, its 20-foot-wide windows overlooking the Golden Gate Bridge.

Guests sipped mocktails and chewed on cubes of artisanal cheese. Most of us kept gazing westward to the open sea and soon-setting sun, toward the place where land ends and unknowable futures begin.

Some, like AI professor Roman V. Yampolskiy, wore shirts referencing Ray Kurzweil with slogans like “Kurzweil was right.” My friend, AI entrepreneur Ben Goertzel, who specializes in trying to create decentralized AI, was also there, sporting his signature sun hat and casual clothing.

I was dressed formally, as were others. But the large number of people in shorts and torn T-shirts — probably AI bros with nine-figure net worths — was a sight to see among the formally dressed. It would be hard to imagine a more dystopian Hollywood-set feel.

AI and Humanity’s Future — And How to Navigate It

Because beneath the casual Bay Area nonchalance, the undercurrent of the event was urgent, philosophical, and disturbing: the question wasn’t how we would survive the future. It was whether we should. It was the most serious of subjects.

In our culture, doomsday is often framed as a collapse. But for the thinkers at this event, many of whom hailed from OpenAI, DeepMind, Anthropic, and beyond, the end of humanity wasn’t the end of the story — it was the turning of a page (not one I’m eager to turn myself, mind you).

After food, drinks, and casual conversation, the first speaker, New York writer Ginevera Davis, took the stage. She was tall, gorgeous, and classical-looking, in a short dress.

In her talk, she quickly punctured one of the oldest assumptions in Western ethics: that human values are universal. She argued, convincingly, that trying to embed our morals into AI systems may be not only impossible, but dangerously arrogant.

Human consciousness — our subjective experience of suffering, joy, meaning — is a fragile lens through which we interpret the universe. Why assume the future must see through it?

Instead, Davis proposed a bold pivot: “cosmic alignment.” Build AI not to mirror human whims, but to explore universal values yet undiscovered. The image on her slide — humans gazing over a glimmering techno-utopia — suggested a spiritual inheritance for machine intelligence, where posthuman minds could seek truths far deeper than our evolutionary heuristics allow.

Critics may invoke “stochastic parrots” — the now-famous label for large language models that simulate meaning without understanding it.

But here, the assumption was clear: the age of superintelligence is coming, and whether or not machines are conscious now, they will be. The only question is what values they will hold when they wake up.

Next came philosopher Michael Edward Johnson, who delivered a talk as incisive as it was unsettling. He reminded us that technology doesn’t just change the world — it changes our understanding of value itself.

If consciousness is the “home of value,” he said, then what happens when we build minds we don’t understand? Are we unleashing entities that can suffer — without knowing it? Or worse, are we asking moral questions of machines that cannot suffer at all?

To solve this, Johnson called for a new framework — not one that keeps AI enslaved to human command, but one that reorients both man and machine toward “the good.” That phrase may ring abstract, even mystical. But Johnson was adamant that it need not be.

He believed “the good” could be defined scientifically, even operationalized into a functional ethics for digital minds. It was a call not to tame the AI beast, but to teach it to transcend us.

This is not the cowardice of clinging to past values, but the courage of cosmism: the belief that intelligence is the vehicle through which the universe learns itself.

Faggella took the mic last, the host stepping into the role of prophet. His message was simple, if terrifying: humanity’s reign is a phase, not a final state.

He spoke not in technobabble, but in axioms. The two prerequisites for any worthy successor, he said, are consciousness and “autopoiesis” — the capacity for self-generation, evolution, and exploration. These are not engineering challenges. They are ontological blueprints for the next dominant form of intelligence on Earth.

Citing Spinoza and Nietzsche, Faggella emphasized that most value in the universe remains untapped. He warned that today’s AGI race, driven by corporate incentives and geopolitical paranoia, may be careening toward catastrophe. But if we move with care, with wisdom, then we are not building machines — we are building the future discoverers of truth, beauty, and meaning.

This is the core of his doctrine: “axiological cosmism,” the idea that the greatest ethical aim is not to preserve humanity but to expand the domain of value across time and space. AI, in this view, is not a tool. It is the next node in the flowering of intelligence itself.

If that idea disturbs you, it should. But it might also liberate you.

AI Will Change Humanity, One Way or Another

After the hour of talks, conversations swirled like a post-singularity salon. I spoke to a multitude of founders. No one had a good idea of how to stop the seemingly inevitable march toward superintelligent AI.

This shocked me, as I came to this party hoping there might be answers and strategies to save humanity. Ironically, everyone seemed to have an angle on how to make a lot of money on AI in the next 12–24 months — all the way up until the end of the world.

Another frequent topic amongst the crowd was the geopolitical arms race: the U.S. versus China, the risks of open-sourcing models, the slippery ethics of AI acceleration. But hanging over every discussion was the same dilemma: Should we really build something that replaces us? And if so, can we build it better?

Faggella made his stance clear. “This is not an advocacy group for the destruction of man,” he told a journalist. “This is an advocacy group for slowing down AI progress — until we’re sure we’re going in the right direction.”

But even that “we” feels uncertain. Because as these minds see it, the baton may already be passing — not in a blaze of rebellion, but in a silent, unstoppable gradient. We are becoming the prelude to something more.

In the early 21st century, as we pump trillions into AI systems that we increasingly do not understand, we are confronted by a paradox as old as Prometheus: that to gain fire is to risk the wrath of gods — or to become gods ourselves.

But these thinkers aren’t building heaven or hell. They’re building something stranger: a successor species not born of biology but of code and concept, free from carbon and chaos, capable of exploring dimensions of consciousness we have only dreamed of.

This isn’t extinction in the Hollywood sense. It’s evolution. Not the end of meaning — but the beginning of deeper meaning, designed not by divine intervention or natural selection, but by minds we taught to think better than us.

If that offends your human pride (and it does mine), then you and I may be clinging to the past.

But if you believe that intelligence is sacred, then you understand that its purpose is not to remain chained to Homo sapiens, but to explore the universe without us, or perhaps with us, in new form.

Somewhere in that mansion above the sea, between the echoes of ancient philosophy and the buzz of nascent sentience, the outlines of that future were already taking shape.

I took a last sip of wine, looked to the sea and Golden Gate Bridge, and left depressed.

Zoltan Istvan
Grey Swan Investment Fraternity

P.S. from Andrew: Zoltan’s insights here first appeared in our July issue of the Grey Swan Bulletin. That was well ahead of the two-day (and counting) wreck we’re starting to see in AI and tech stocks.

Paid-up Fraternity members can catch these unique and philosophical insights from Zoltan monthly. A transhumanist living in the Bay Area, Zoltan has depth and contacts that are unmatched for all things AI.

His latest commentary in our just-released August issue, covering humanoid robots, also offered a glimpse of the companies investors may want to focus on buying during market meltdowns like the one that may be starting now.

Meanwhile, Grey Swan Live! returns tomorrow. We’ll be joined by Matt Clark, Chief Research Analyst at Money & Markets, one of our corporate affiliates.

Matt’s role is similar to mine as Portfolio Director — finding new investment opportunities and sifting through ever-shifting markets.

Matt is the only person I know who can find data and precise numbers faster than I can. Maybe that comes from his days as an investigative journalist.

But with markets hitting an air pocket this week and all eyes on Jackson Hole, this will be a timely and critical chat — exclusively for our paid-up Fraternity members.


Your thoughts? Please send them here: addison@greyswanfraternity.com

