Meta’s metaverse ambitions are now in full retreat. Yesterday, Meta announced that it would be shuttering Horizon Worlds, its flagship VR app and the “culmination of [everything Meta had] learned so far about virtual spaces and communities.”

This comes on the heels of another 1500 layoffs within Meta’s Reality Labs division – my old stomping grounds – and the shuttering of several recently acquired gaming studios. The company also ended its nine-month effort to get third parties to adopt its mixed reality operating system, stopped support for Supernatural (the VR fitness app the FTC sued to block Meta from acquiring), and even killed off “Horizon Workrooms,” the much-maligned virtual productivity app that the company promoted on the national morning news.

It’s fair to say the company has fully pivoted to AI glasses and what it’s calling superintelligence. The company’s former “Head of Metaverse” could never figure out how to get anyone at Meta to use his products, let alone the general public, so he transitioned to an AI post. But hey, the avatars got legs before the “legendary misadventure” came to an end.

I haven’t used my Quest headset in ages, but it took seeing products I worked on get dismantled to finally feel a sense of closure about my time at Meta. Like Meta, I went all-in on immersive tech as the future, only to watch large language models take all the oxygen out of policy debates about eye tracking and spatial computing.

Spatial computing expert Avi Bar-Zeev was a constant and constructive critic during my days as a metaverse booster, and his high-level list of reasons why Meta’s metaverse flamed out motivated me to gather my own thoughts. Bar-Zeev distills his critique into a lack of trust in Facebook. I understand this framing – this is the company that effectively mainstreamed “virtual groping” as a term of art. But I don’t think the public’s low regard for Meta fully explains this outcome.

Meta remains a tremendously profitable company. The Ray-Ban joint venture has produced successive iterations of products with genuine product-market fit. The Quest platform was (and arguably still is) viable, even if it never approached the billion-user scale that leadership demanded. But Meta’s effort to own the next computing platform, ultimately in service of personalized advertising, remained constrained by the Google and Apple duopoly. The trust deficit mattered, but it was downstream of deeper structural problems.

“Social” Was Never the Superpower

When I was at Meta, the internal mantra was that “social is our superpower.” I don’t think that’s correct. I haven’t gotten much traction with this argument, but I maintain that what we call “social media” is hardly social at all. What the media and our lawmakers breathlessly refer to as social media are just content feeds. And influencers and user-generated content are quickly being replaced by AI slop.

Gaming, however, is genuinely social, though in ways Meta never understood. That’s why the effort to bootstrap “Facebook Friends” into the Oculus ecosystem was so misguided. Rather than boost existing social platforms with real communities – like Roblox, Rec Room, or VRChat – Meta insisted on plowing ahead with Horizon Worlds, a product that became a punchline before it found an audience. The company had a viable hardware platform and chose to force an inferior social layer onto it instead of meeting users where they already were.

The Shiny Object Problem

Before I joined Reality Labs in 2021, I cracked wise on a panel about daring to mention the metaverse (I called it the “m-word”) so count me as someone stunned that the company rebranded after a term coined in a dystopian novel. I understand why corporate leadership wanted distance from Facebook’s brand baggage, but the rebrand was symptomatic of a deeper pattern: an institutional compulsion to chase the next big thing.

Facebook begat Instagram, which begat a parade of wearable devices. During my time, I watched entire teams eagerly pivot to finding ways to integrate NFTs into Instagram. When an external advocate asked point-blank, “Why?”, the only answer was a halting pitch about letting people monetize digital creativity.

As excited as I was about the potential of augmented reality, I was too close to the work to see how quickly AI would become the next fixation. Product teams raced to rebrand every project as warranting special “AI reviews,” because Meta is perpetually structured to incentivize people to chase wherever the new energy is – leaving constant re-orgs and turf battles in their wake. These land-grabs came at the cost of maintaining existing products. “Deprecation” became a dark joke on our team, but the number of functional, sometimes good products that Meta would discard in pursuit of the latest growth opportunity was staggering.

The Right Technology, the Wrong Use Cases

I joined Meta for a lot of reasons: the compensation, the opportunity to bet on becoming an expert in an emerging technology with massive potential. And while the metaverse dream didn’t pan out as I’d hoped, I thought then (and still believe) that AR and VR are genuinely impressive technologies.

Reality Labs was appealing precisely because it wasn’t social media. It was hardware that could drive unique immersive and augmented experiences. Unfortunately for Meta, it wasn’t at the forefront of social VR, and aside from Pokémon Go, social AR experiences haven’t been successes at scale. Perhaps as a result, Meta could never embrace the use cases that might have been real killer apps.

The most powerful applications were always right in front of us: universal real-time translation, live contextual instruction (imagine AR-guided cooking), and AR mapping and navigation. It’s not a coincidence that Google has leaned heavily into these sorts of use cases. There is real potential for AR overlays to pull people out of dangerous situations – pedestrians walking into traffic, faces buried in their phones. But Meta kept marketing this technology as a way to reshape the classroom – as if flooding schools with VR helmets made sense – or to enable doctors to perform virtual surgery. The gap between the technology’s actual strengths and Meta’s chosen narratives was enormous.

Resistance to Restrictions, Rejection of Regulation

This is where Bar-Zeev’s trust argument resonates most. Meta routinely recoiled from making commitments not to do certain things. Internally, we were told our job was to “make the world ready for the metaverse.” But that was never going to happen if the company wouldn’t commit to basic trust-building measures.

Pledging not to use eye-tracking data to infer users’ states of impairment? Acknowledging that a tiny LED light on a recording-capable device is not a meaningful privacy protection? Working constructively with platform-level privacy features rather than treating Apple’s privacy APIs as an existential threat? Nonstarters. And those were just examples from my tenure. More recently, reports suggest Meta’s approach to relaunching facial recognition is to take advantage of “an environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.”

Reality Labs invested significant time and money establishing an external advisory council, and I was proud of the work we did to fund independent research and bring those findings inside. But over and over again, there was no convincing anyone with authority that products could be unsafe – or even unseemly. This was before Frances Haugen and the wave of exposés that revealed the depth of the company’s institutional aversion to accountability. But the warning signs were there for anyone willing to see them.

The culture I experienced was one that systematically avoided accountability, producing inconsistent and incoherent positions on law, regulation, and product ethics. When I pushed back on a deployment of deepfake technology, the product team insisted the effect was “not realistic” despite their own product spec declaring the goal of besting competitors with “more realistic” implementations. And on, and on, and on.

How do you make the world ready for the metaverse when your corporate vision for the thing is fundamentally antisocial? It’s hard to imagine a company less suited than Meta to serve as the face of the metaverse. Neal Stephenson’s dystopian fiction may yet come to pass, with immersive, persistent, and deeply surveilled digital worlds overlapping with physical reality, but I doubt anyone will be calling this the “metaverse” if it does.

The term itself may be beyond salvaging. But the lessons here – about trust, compelling use cases, institutional incentives, and the limits of brute-force platform ambition – should carry forward as AI becomes the next shiny object in tech. Meanwhile, Meta’s leadership talks about the potential of generative AI friends to replace real human connection, which suggests social really isn’t the company’s strong suit anymore.
