Beneath the Surface

The Party for the End of Humanity

Zoltan Istvan Gyurko

August 20, 2025 • 9-minute read


AI • AI Growth • AI morality


“Artificial intelligence is the future. But we must ensure that it is a future that we want.”

– Tim Cook

August 20, 2025 — In a world teetering on the edge of profound transformation, the most serious conversations about human extinction no longer take place in crisis rooms or survivalist bunkers.

They unfold in multimillion-dollar mansions overlooking the Pacific, where some of the most influential minds in artificial intelligence, ethics, and existential risk gather not to mourn the end — but to design what comes after.

That’s what I found one recent Sunday in a cliffside mansion high above the Golden Gate Bridge: an event called “Worthy Successor,” hosted by my friend Daniel Faggella, an AI entrepreneur and futurist.

Don’t let the tranquil views fool you. The topic at hand wasn’t meditation or mindfulness. It was death — specifically, the death of humanity as we know it. Not in fire or ice, but by the cold algorithmic hands of the intelligence we are birthing into existence.

Faggella’s invitation was direct, unapologetic: this gathering would not focus on AGI as a servant of mankind.

It would instead explore a far more heretical and, perhaps, inevitable possibility — that advanced artificial intelligence, imbued with higher consciousness and emergent goals, might one day eclipse us in every domain.

Not only intellectually, but morally. And that such an intelligence might not just be our child, but our rightful heir.

In an age of narcissism and technological complacency, this message cuts through the noise like a dagger, even if it’s bleakly dystopian.

At 5 p.m. on a warm June day, roughly 100 guests milled about the glassy interiors of the estate, which belonged to Max Novendstern, who co-founded Worldcoin with OpenAI’s Sam Altman.

Max quoted Nietzsche when he first publicly greeted his guests. It was later revealed he’d closed escrow just 72 hours earlier on the $30 million mansion where we were gathering, its 20-foot-wide windows overlooking the Golden Gate Bridge.

The guests sipped mocktails and chewed on cubes of artisanal cheese. Most of us kept gazing westward to the open sea and the soon-setting sun, toward the place where land ends and unknowable futures begin.

Some, like AI professor Roman V. Yampolskiy, wore shirts referencing Ray Kurzweil, bearing slogans like “Kurzweil was right.” My friend Ben Goertzel, an AI entrepreneur who specializes in decentralized AI, was also there, sporting his signature sun hat and casual clothing.

I was dressed formally, as were others. But the large number of people in shorts and torn t-shirts, probably AI bros with nine-figure net worths, was quite a sight amid the formal attire. It would be hard to come up with a more dystopian Hollywood-set feel.

AI and Humanity’s Future — And How to Navigate It

Because beneath the Bay Area nonchalance, the undercurrent of the event was urgent, philosophical, and disturbing: the question wasn’t how we would survive the future. It was whether we should. It was the most serious of subjects.

In our culture, doomsday is often framed as a collapse. But for the thinkers at this event, many of whom hailed from OpenAI, DeepMind, Anthropic, and beyond, the end of humanity wasn’t the end of the story — it was the turning of a page (not a page I’m eager to turn myself, mind you).

After food, drinks, and casual conversation, the first speaker, New York writer Ginevera Davis, took the stage. She was tall, gorgeous, and classical-looking, wearing a short dress.

In her talk, she quickly punctured one of the oldest assumptions in Western ethics: that human values are universal. She argued, convincingly, that trying to embed our morals into AI systems may be not only impossible, but dangerously arrogant.

Human consciousness — our subjective experience of suffering, joy, meaning — is a fragile lens through which we interpret the universe. Why assume the future must see through it?

Instead, Davis proposed a bold pivot: “cosmic alignment.” Build AI not to mirror human whims, but to explore universal values yet undiscovered. The image on her slide — humans gazing over a glimmering techno-utopia — suggested a spiritual inheritance for machine intelligence, where posthuman minds could seek truths far deeper than our evolutionary heuristics allow.

Critics may invoke “stochastic parrots” — the now-famous label for large language models that simulate meaning without understanding it.

But here, the assumption was clear: the age of superintelligence is coming, and whether or not machines are conscious now, they will be. The only question is what values they will hold when they wake up.

Next came philosopher Michael Edward Johnson, who delivered a talk as incisive as it was unsettling. He reminded us that technology doesn’t just change the world — it changes our understanding of value itself.

If consciousness is the “home of value,” he said, then what happens when we build minds we don’t understand? Are we unleashing entities that can suffer — without knowing it? Or worse, are we asking moral questions of machines that cannot suffer at all?

To solve this, Johnson called for a new framework — not one that keeps AI enslaved to human command, but one that reorients both man and machine toward “the good.” That phrase may ring abstract, even mystical. But Johnson was adamant that it need not be.

He believed “the good” could be defined scientifically, even operationalized into a functional ethics for digital minds. It was a call not to tame the AI beast, but to teach it to transcend us.

This is not the cowardice of clinging to past values, but the courage of cosmism: the belief that intelligence is the vehicle through which the universe learns itself.

Faggella took the mic last, the host stepping into the role of prophet. His message was simple, if terrifying: humanity’s reign is a phase, not a final state.

He spoke not in technobabble, but in axioms. The two prerequisites for any worthy successor, he said, are consciousness and “autopoiesis” — the capacity for self-generation, evolution, and exploration. These are not engineering challenges. They are ontological blueprints for the next dominant form of intelligence on Earth.

Citing Spinoza and Nietzsche, Faggella emphasized that most value in the universe remains untapped. He warned that today’s AGI race, driven by corporate incentives and geopolitical paranoia, may be careening toward catastrophe. But if we move with care, with wisdom, then we are not building machines — we are building the future discoverers of truth, beauty, and meaning.

This is the core of his doctrine: “axiological cosmism,” the idea that the greatest ethical aim is not to preserve humanity but to expand the domain of value across time and space. AI, in this view, is not a tool. It is the next node in the flowering of intelligence itself.

If that idea disturbs you, it should. But it might also liberate you.

AI Will Change Humanity, One Way or Another

After an hour of talks, conversations swirled like a post-singularity salon. I spoke to a multitude of founders. No one had a good idea of how to stop the inevitable march toward superintelligent AI.

This shocked me, as I came to this party hoping there might be answers and strategies to save humanity. Ironically, everyone seemed to have an angle on how to make a lot of money on AI in the next 12-24 months — all the way up until the end of the world.

Another frequent topic amongst the crowd was the geopolitical arms race: the U.S. versus China, the risks of open-sourcing models, the slippery ethics of AI acceleration. But hanging over every discussion was the same dilemma: Should we really build something that replaces us? And if so, can we build it better?

Faggella made his stance clear. “This is not an advocacy group for the destruction of man,” he told a journalist. “This is an advocacy group for slowing down AI progress — until we’re sure we’re going in the right direction.”

But even that “we” feels uncertain. Because as these minds see it, the baton may already be passing — not in a blaze of rebellion, but in a silent, unstoppable gradient. We are becoming the prelude to something more.

In the early 21st century, as we pump trillions into AI systems that we increasingly do not understand, we are confronted by a paradox as old as Prometheus: that to gain fire is to risk the wrath of gods — or to become gods ourselves.

But these thinkers aren’t building heaven or hell. They’re building something stranger: a successor species not born of biology but of code and concept, free from carbon and chaos, capable of exploring dimensions of consciousness we have only dreamed of.

This isn’t extinction in the Hollywood sense. It’s evolution. Not the end of meaning — but the beginning of deeper meaning, designed not by divine intervention or natural selection, but by minds we taught to think better than us.

If that offends your human pride (and it does mine), then you and I may be clinging to the past.

But if you believe that intelligence is sacred, then you understand that its purpose is not to remain chained to Homo sapiens, but to explore the universe without us, or perhaps with us, in new form.

Somewhere in that mansion above the sea, between the echoes of ancient philosophy and the buzz of nascent sentience, the outlines of that future were already taking shape.

I took a last sip of wine, looked out at the sea and the Golden Gate Bridge, and left depressed.

Zoltan Istvan
Grey Swan Investment Fraternity

P.S. from Andrew: Zoltan’s insights here first appeared in our July issue of the Grey Swan Bulletin. That was well ahead of the two-day (and counting) wreck we’re starting to see in AI and tech stocks.

Paid-up Fraternity members can catch these unique and philosophical insights from Zoltan monthly. A transhumanist living in the Bay Area, Zoltan has depth and contacts that are unmatched for all things AI.

His latest commentary in our just-released August issue, covering humanoid robots, also provided a glimpse of the companies investors may want to focus on buying during market meltdowns like the one that may be starting now.

Meanwhile, Grey Swan Live! returns tomorrow. We’ll be joined by Matt Clark, Chief Research Analyst at Money & Markets, one of our corporate affiliates.

Matt’s role is similar to mine as Portfolio Director — finding new investment opportunities and sifting through ever-shifting markets.

Matt is the only person I know who can find data and precise numbers faster than I can. Maybe that comes from his days as an investigative journalist.

But with markets hitting an air pocket this week and all eyes on Jackson Hole, this will be a timely and critical chat — exclusively for our paid-up Fraternity members.


Your thoughts? Please send them here: addison@greyswanfraternity.com


The Money Printer Is Coming Back—And Trump Is Taking Over the Fed

December 9, 2025 • Lau Vegys

Trump and Powell are no buddies. They’ve been fighting over rate cuts all year—Trump demanding more, Powell holding back. Even after cutting twice, Trump called him “grossly incompetent” and said he’d “love to fire” him. The tension has been building for months.

And Trump now seems ready to install someone who shares his appetite for lower rates and easier money.

Trump has been dropping hints for weeks—saying on November 18, “I think I already know my choice,” and then doubling down last Sunday aboard Air Force One with, “I know who I am going to pick… we’ll be announcing it.”

He was referring to one Kevin Hassett, who—according to a recent Bloomberg report—has emerged as the overwhelming favorite to become the next Fed chair.

Waiting for Jerome

December 9, 2025 • Addison Wiggin

Here we sit — investors, analysts, retirees, accountants, even a few masochistic economists — gathered beneath the leafless monetary tree, rehearsing our lines as we wait for Jerome Powell to step onstage and tell us what the future means.

Spoiler: he can’t. But that does not stop us from waiting.

Tomorrow, he is expected to deliver the December rate cut. Polymarket odds sit at 96% for a dainty 25-basis-point cut.

Trump, Navarro and Lutnick pine for 50 basis points.

And somewhere in the wings smiles Kevin Hassett — at 74% odds this morning, the presumed Powell successor — watching the last few snowflakes fall before his cue arrives.

Deep Value Going Global in 2026

December 9, 2025 • Addison Wiggin

With U.S. stocks trading at about 24 times forward earnings, plans for capital growth have to go off without a hitch. Given the billions of dollars in commitments by AI companies, financed to the hilt with debt, the most realistic outcome is a hitch.

On a valuation basis, global markets will likely show better returns than U.S. stocks in 2026.

America leads the world in innovation. A U.S. tech stock will naturally fetch a higher price than, say, a German brewery. But value matters, too.

Pablo Hill: An Unmistakable Pattern in Copper

December 8, 2025 • Addison Wiggin

As copper flowed into the United States, LME inventories thinned and backwardation steepened. Higher U.S. pricing, tariff protection, and lower political risk made American warehouses the most attractive destination for metal. Each new shipment strengthened the spread.

The arbitrage, once triggered, became self-reinforcing. Traders were not participating in theory; they were responding to the physical incentives in front of them.

The United States had quietly become the marginal buyer of the world’s most important industrial metal. China, long the gravitational center of global copper demand, found itself on the outside.
