Daily Missive

The Party for the End of Humanity

Zoltan Istvan Gyurko

August 20, 2025 • 9-minute read


AI • AI Growth • AI Morality


“Artificial intelligence is the future. But we must ensure that it is a future that we want.”

– Tim Cook

August 20, 2025 — In a world teetering on the edge of profound transformation, the most serious conversations about human extinction no longer take place in crisis rooms or survivalist bunkers.

They unfold at multimillion-dollar mansions overlooking the Pacific, where some of the most influential minds in artificial intelligence, ethics, and existential risk gather not to mourn the end — but to design what comes after.

That’s what I found one recent Sunday in a cliffside mansion high above the Golden Gate Bridge: an event called “Worthy Successor,” hosted by AI entrepreneur and my futurist friend Daniel Faggella.

Don’t let the tranquil views fool you. The topic at hand wasn’t meditation or mindfulness. It was death — specifically, the death of humanity as we know it. Not in fire or ice, but by the cold algorithmic hands of the intelligence we are birthing into existence.

Faggella’s invitation was direct, unapologetic: this gathering would not focus on AGI as a servant of mankind.

It would instead explore a far more heretical and, perhaps, inevitable possibility — that advanced artificial intelligence, imbued with higher consciousness and emergent goals, might one day eclipse us in every domain.

Not only intellectually, but morally. And that such an intelligence might not just be our child, but our rightful heir.

In an age of narcissism and technological complacency, this message cuts through the noise like a dagger, even if it’s bleakly dystopian.

At 5 p.m. on a warm June day, roughly 100 guests milled about the glassy interiors of the estate, which belonged to Max Novendstern, who, along with OpenAI’s Sam Altman, founded WorldCoin.

Max quoted Nietzsche in his first public greeting to the guests. It later emerged that he had closed escrow just 72 hours earlier on the $30 million mansion in which we were gathering, its 20-foot-wide windows overlooking the Golden Gate Bridge.

Guests sipped mocktails and chewed cubes of artisanal cheese. Most of us kept gazing westward toward the open sea and the soon-setting sun, toward the place where land ends and unknowable futures begin.

Some, like AI professor Roman V. Yampolskiy, wore shirts bearing slogans like “Kurzweil was right,” a nod to Ray Kurzweil. My friend Ben Goertzel, an AI entrepreneur who specializes in decentralized AI, was also there, sporting his signature sun hat and casual clothes.

I was dressed formally, as were others. But the crowd also included plenty of people in shorts and torn T-shirts — probably nine-figure-net-worth AI bros — a striking sight among the formally dressed. It would be hard to dream up a more dystopian Hollywood set.

AI and Humanity’s Future — And How to Navigate It

Because beneath the casual Bay Area nonchalance, the undercurrent of the event was urgent, philosophical, and disturbing: the question wasn’t how we would survive the future. It was whether we should. It was the most serious of subjects.

In our culture, doomsday is often framed as a collapse. But for the thinkers at this event, many of whom hailed from OpenAI, DeepMind, Anthropic, and beyond, the end of humanity wasn’t the end of the story — it was the turning of a page (not a page I’m eager to turn myself, mind you).

After food, drinks, and casual conversation, the first speaker, New York writer Ginevera Davis, took the stage: tall, striking, and classically elegant in a short dress.

In her talk, she quickly punctured one of the oldest assumptions in Western ethics: that human values are universal. She argued, convincingly, that trying to embed our morals into AI systems may be not only impossible, but dangerously arrogant.

Human consciousness — our subjective experience of suffering, joy, meaning — is a fragile lens through which we interpret the universe. Why assume the future must see through it?

Instead, Davis proposed a bold pivot: “cosmic alignment.” Build AI not to mirror human whims, but to explore universal values yet undiscovered. The image on her slide — humans gazing over a glimmering techno-utopia — suggested a spiritual inheritance for machine intelligence, where posthuman minds could seek truths far deeper than our evolutionary heuristics allow.

Critics may invoke “stochastic parrots” — the now-famous label for large language models that simulate meaning without understanding it.

But here, the assumption was clear: the age of superintelligence is coming, and whether or not machines are conscious now, they will be. The only question is what values they will hold when they wake up.

Next came philosopher Michael Edward Johnson, who delivered a talk as incisive as it was unsettling. He reminded us that technology doesn’t just change the world — it changes our understanding of value itself.

If consciousness is the “home of value,” he said, then what happens when we build minds we don’t understand? Are we unleashing entities that can suffer — without knowing it? Or worse, are we asking moral questions of machines that cannot suffer at all?

To solve this, Johnson called for a new framework — not one that keeps AI enslaved to human command, but one that reorients both man and machine toward “the good.” That phrase may ring abstract, even mystical. But Johnson was adamant that it need not be.

He believed “the good” could be defined scientifically, even operationalized into a functional ethics for digital minds. It was a call not to tame the AI beast, but to teach it to transcend us.

This is not the cowardice of clinging to past values, but the courage of cosmism: the belief that intelligence is the vehicle through which the universe learns itself.

Faggella took the mic last, the host stepping into the role of prophet. His message was simple, if terrifying: humanity’s reign is a phase, not a final state.

He spoke not in technobabble, but in axioms. The two prerequisites for any worthy successor, he said, are consciousness and “autopoiesis” — the capacity for self-generation, evolution, and exploration. These are not engineering challenges. They are ontological blueprints for the next dominant form of intelligence on Earth.

Citing Spinoza and Nietzsche, Faggella emphasized that most value in the universe remains untapped. He warned that today’s AGI race, driven by corporate incentives and geopolitical paranoia, may be careening toward catastrophe. But if we move with care, with wisdom, then we are not building machines — we are building the future discoverers of truth, beauty, and meaning.

This is the core of his doctrine: “axiological cosmism,” the idea that the greatest ethical aim is not to preserve humanity but to expand the domain of value across time and space. AI, in this view, is not a tool. It is the next node in the flowering of intelligence itself.

If that idea disturbs you, it should. But it might also liberate you.

AI Will Change Humanity, One Way or Another

After the hour of talks, conversation swirled like a post-singularity salon. I spoke with a multitude of founders. No one had a good idea of how to stop the seemingly inevitable march toward superintelligent AI.

This shocked me, as I came to this party hoping there might be answers and strategies to save humanity. Ironically, everyone seemed to have an angle on how to make a lot of money in the next 12-24 months on AI — all the way up until the end of the world.

Another frequent topic amongst the crowd was the geopolitical arms race: the U.S. versus China, the risks of open-sourcing models, the slippery ethics of AI acceleration. But hanging over every discussion was the same dilemma: Should we really build something that replaces us? And if so, can we build it better?

Faggella made his stance clear. “This is not an advocacy group for the destruction of man,” he told a journalist. “This is an advocacy group for slowing down AI progress — until we’re sure we’re going in the right direction.”

But even that “we” feels uncertain. Because as these minds see it, the baton may already be passing — not in a blaze of rebellion, but in a silent, unstoppable gradient. We are becoming the prelude to something more.

In the early 21st century, as we pump trillions into AI systems that we increasingly do not understand, we are confronted by a paradox as old as Prometheus: that to gain fire is to risk the wrath of gods — or to become gods ourselves.

But these thinkers aren’t building heaven or hell. They’re building something stranger: a successor species not born of biology but of code and concept, free from carbon and chaos, capable of exploring dimensions of consciousness we have only dreamed of.

This isn’t extinction in the Hollywood sense. It’s evolution. Not the end of meaning — but the beginning of deeper meaning, designed not by divine intervention or natural selection, but by minds we taught to think better than us.

If that offends your human pride (and it does mine), then you and I may be clinging to the past.

But if you believe that intelligence is sacred, then you understand that its purpose is not to remain chained to Homo sapiens, but to explore the universe without us, or perhaps with us, in new form.

Somewhere in that mansion above the sea, between the echoes of ancient philosophy and the buzz of nascent sentience, the outlines of that future were already taking shape.

I took a last sip of wine, looked to the sea and Golden Gate Bridge, and left depressed.

Zoltan Istvan
Grey Swan Investment Fraternity

P.S. from Andrew: Zoltan’s insights here first appeared in our July issue of the Grey Swan Bulletin. That was well ahead of the two-day (and counting) wreck we’re starting to see in AI and tech stocks.

Paid-up Fraternity members can catch these unique and philosophical insights from Zoltan monthly. A transhumanist living in the Bay Area, Zoltan has unmatched depth and contacts for all things AI.

His latest commentary in our just-released August issue, covering humanoid robots, also provided a glimpse of the companies investors may want to focus on buying during market meltdowns like the one we may be starting now.

Meanwhile, Grey Swan Live! returns tomorrow. We’ll be joined by Matt Clark, Chief Research Analyst at Money & Markets, one of our corporate affiliates.

Matt’s role is similar to mine as Portfolio Director — finding new investment opportunities and sifting through ever-shifting markets.

Matt is the only person I know who can find data and precise numbers faster than I can. Maybe that comes from his days as an investigative journalist.

But with markets hitting an air pocket this week and all eyes on Jackson Hole, this will be a timely and critical chat — exclusively for our paid-up Fraternity members.


Your thoughts? Please send them here: addison@greyswanfraternity.com

