2025 will be the year that big tech companies shift from selling us more and more powerful tools to selling us more and more powerful abilities. The difference between a tool and an ability is subtle but profound. We use tools as external artifacts that help us overcome our organic limitations. From cars and planes to phones and computers, tools dramatically expand what we can accomplish as individuals, as large teams, and as vast civilizations.
Abilities are different. We experience them in the first person as self-embodied capabilities that feel internal and immediately accessible to our conscious minds. For example, language and mathematics are human-created technologies that we load into our brains and carry with us throughout our lives, expanding our abilities to think, create, and collaborate. They are superpowers that feel so ingrained in our existence that we rarely think of them as technologies at all. Fortunately, we don't need to buy a service plan to use them.
The next wave of superpowers, however, will not be free. But just like our abilities to reason verbally and numerically, we will experience these powers as self-embodied abilities that we carry with us throughout our lives. I refer to this new technological discipline as augmented mentality, and it will emerge from the convergence of AI, conversational computing, and augmented reality. In 2025, an arms race will begin among the largest companies in the world to sell us superhuman abilities.
These new superpowers will be unleashed by context-aware AI agents loaded onto body-worn devices (like AI glasses) that travel with us throughout our lives, seeing what we see, hearing what we hear, experiencing what we experience, and providing us with enhanced abilities to perceive and interpret our world. In fact, by 2030, I predict that a majority of us will live our lives aided by context-aware AI agents that bring digital superpowers into our normal daily experiences.
How will our superhuman future unfold?
First, we will whisper to these intelligent devices, and they will whisper back, acting like an alter ego that provides us with context-aware recommendations, knowledge, guidance, and advice, along with spatial reminders, directional cues, haptic nudges, and other verbal and perceptual content that will coach us through our days and teach us about our world.
Consider this simple scenario: You're walking downtown and notice a store across the street. You wonder what time it opens, so you grab your phone and type (or speak) the name of the store. You quickly find the opening hours on its website, and perhaps review some other information about the store as well. This is the basic tool-based computing model prevalent today.
Now, let's look at how big tech will shift us to an ability-based computing model.
Stage 1: You wear AI-powered glasses that can see what you see, hear what you hear, and process your surroundings through a multimodal large language model (LLM). Now when you see that store across the street, you simply whisper to yourself, "I wonder when it opens," and a voice instantly rings back in your ears: "10:30 a.m."
I know this seems like only a slight shift from asking your phone to look up the name of a store, but it will feel profound. The reason is that the context-aware AI agent will share your reality. It's not just tracking your location like GPS; it sees, hears, and pays attention to whatever you're paying attention to. This will make it feel far less like a tool and far more like an internal ability tied to your first-person reality.
And when the AI-powered alter ego in our ears asks us a question, we will often answer simply by nodding our heads in affirmation (detected by sensors in the glasses) or shaking our heads in rejection. It will feel so natural and seamless that we might not even consciously realize we've responded.
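To make the shape of that interaction loop concrete, here is a minimal sketch in Python of how such an agent could be wired together. Everything in it is illustrative: the device-facing helpers (capture_observation, speak, read_head_gesture) and the query_multimodal_model call are hypothetical stand-ins for a glasses SDK and a multimodal model, not any vendor's actual API.

```python
# Illustrative sketch of a context-aware agent loop for AI glasses.
# All device and model interfaces below are hypothetical stand-ins,
# not real SDK calls.

from dataclasses import dataclass


@dataclass
class Observation:
    image: bytes   # current camera frame from the glasses
    audio: bytes   # whispered query picked up by the microphones


def capture_observation() -> Observation:
    """Stand-in for the glasses' camera and microphone capture APIs."""
    return Observation(image=b"<jpeg frame>", audio=b"<whisper clip>")


def query_multimodal_model(obs: Observation) -> str:
    """Stand-in for a multimodal LLM call that grounds the whispered
    question ("I wonder when it opens?") in what the wearer is seeing."""
    return "That store opens at 10:30 a.m."


def speak(text: str) -> None:
    """Stand-in for the glasses' private audio output."""
    print(f"[earpiece] {text}")


def read_head_gesture() -> str:
    """Stand-in for IMU-based gesture detection: 'nod', 'shake', or 'none'."""
    return "nod"


def agent_step() -> None:
    obs = capture_observation()           # share the wearer's first-person view
    answer = query_multimodal_model(obs)  # interpret the scene plus the whisper
    speak(answer)                         # whisper the answer back
    if read_head_gesture() == "nod":      # silent confirmation, no speech needed
        speak("Want directions to the entrance?")


if __name__ == "__main__":
    agent_step()
```

The point of the sketch is simply the loop the article describes, perception, grounded reasoning, and a whispered reply confirmed by a nod, rather than any particular implementation.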
Stage 2: By 2030, we will no longer need to whisper to the AI agents that travel with us through our lives. Instead, we will be able to simply mouth the words, and the AI will know what we are saying by reading our lips and detecting activation signals from our muscles. I am confident that "mouthing" will be widely adopted because it is more private, more resilient in noisy spaces, and, most important, it will feel more personal, internal, and self-embodied.
Stage 3: By 2035, you may not even need to mouth the words. That's because the AI will learn to interpret the signals in our muscles with such precision and subtlety that we will simply need to think about mouthing words to convey our intent. We will be able to focus our attention on any item or activity in our world, think something, and useful information will ring back from our AI glasses like an all-knowing voice in our heads.
Of course, these abilities will go far beyond just wondering about the things around you. That's because the onboard AI that shares your first-person reality will learn to anticipate the information you want before you even ask for it. For example, when a coworker approaches you down the hall and you can't quite remember his name, the AI will sense your hesitation, and a voice will ring out: "Greg from engineering."
Or when you pick up a can of soup in a store and wonder about the carbs, or whether it's cheaper at Walmart, the answers will ring in your ears or appear visually. It will even give you the superhuman ability to read the emotions on other people's faces, predict their moods, goals, or intentions, and coach you during real-time conversations to make you more compelling, appealing, or persuasive (see this fun video example).
I know some people will be skeptical about the level of adoption I predict above and the rapid time frame, but I don't make these claims lightly. I've spent much of my career working on technologies that augment and expand human abilities, and I can say without a doubt that the mobile computing market is about to move in this direction in a very big way.
Over the past 12 months, two of the world's most influential and innovative companies, Meta and Google, have revealed their intentions to give us self-embodied superpowers. Meta took its first big step by adding context-aware AI to its Ray-Ban glasses and by showing off its Orion mixed-reality prototype, which adds impressive visual capabilities. Meta is now very well positioned to capitalize on its significant investments in AI and extended reality (XR) and become a major player in the mobile computing market, and it will likely do so by selling us superpowers we can't resist.
Not to be outdone, Google recently announced Android XR, a new AI-powered operating system for augmenting our world with seamless, context-aware content. It also announced a partnership with Samsung to bring new glasses and headsets to market. With a market share of over 70% for mobile operating systems and an increasingly strong AI presence with Gemini, I believe Google is well positioned to become the leading provider of technology-enabled superhuman abilities within the next few years.
Of course, we need to consider the risks
In the famous words of the 1962 Spider-Man comic, "With great power comes great responsibility." This is wise advice that literally applies to superpowers. The difference here is that the greatest responsibility will not fall on the consumers who buy these technological powers, but on the companies that provide them and the regulators that oversee them.
After all, when wearing AI-powered augmented reality (AR) glasses, each of us could find ourselves in a new reality where technologies controlled by third parties can selectively alter what we see and hear, while AI-powered voices whisper advice, information, and guidance in our ears. While the intentions are positive, even magical, the potential for abuse is just as profound.
To avoid dystopian outcomes, my primary recommendation to both consumers and manufacturers is to adopt a subscription business model. If the arms race to sell superpowers is driven by which company can provide the most amazing new abilities for a reasonable monthly fee, we will all benefit. If, instead, the business model becomes a competition to monetize superpowers by delivering the most effective targeted influence into our eyes and ears throughout our daily lives, consumers could easily be manipulated with a precision and pervasiveness we have never faced before.
Ultimately, these superpowers won't feel optional. After all, not having them could put us at a cognitive disadvantage. It is now up to industry and regulators to ensure that we roll out these new abilities in a way that is not intrusive, manipulative, or dangerous. I'm confident this can be a magical new direction for computing, but it requires careful planning and oversight.
Louis Rosenberg founded Immersion Corp, Outland Research, and Unanimous AI, and is the author of Our Next Reality.