Why is AI so "useless"?
Dec 6, 2025
Author’s Note: This is a substantial read.
While building Charm, I realized that the challenge facing the AI industry isn't just about "coding"—it is a deeper, structural issue.
The problem is complex. Here, I attempt to articulate why there is such a massive disconnect between the AI industry's hype and the actual reality for users, and how we might bridge that gap. I invite you to join this discussion and explore the direction of our evolution.
TL;DR
The Problem: A true application ecosystem relies on interconnected capability chains. Simply making models smarter cannot solve the core issue of why AI isn't yet universally practical in society.
The Diagnosis: The real bottleneck stems from structural fragmentation across three layers: Development, Execution, and Distribution—which collectively block the mass adoption of AI applications.
The Shift: Maturing the ecosystem requires a fundamental paradigm shift—moving from the Tool Era of manual clicks to Intent-Centric Computing.
The Solution: We are building Charm—not as another framework, but as a unified System Layer designed to govern, coordinate, and distribute these agentic applications.
As the AI boom accelerates and capital floods into the industry, people have started to ask whether we’re approaching the next dot-com bubble.
Are we over-investing? Are we being overly optimistic about the long-term value AI can bring to humanity?
For many, this feels like an echo chamber of hype—a handful of insiders getting excited while AI has yet to tangibly change most people's lives.
So why does AI look like a bubble?
Fundamentally, what most people are really trying to say is:
"Why is AI so useless in my life?"
Application-Perceived Value
To understand this question, we have to shift the focus from AI to applications—especially consumer-facing ones.
Most people can’t perceive underlying technologies directly. They only perceive value through applications.
People don’t care which model or technique you used; they only care about which real problem you solved for them.
Today, no one seriously questions the value of the internet or smartphones because we have become deeply dependent on the services they deliver. We rely on them to access information, enjoy entertainment, work, and maintain social relationships.
We all agree AI is the future, but there’s an obvious reality:
AI applications are still far from being indispensable in most people’s daily lives.
So the real question we should be asking is:
How far are we from a true AI application explosion?
Generative Applications vs. Agentic Applications
In this context, I believe the evolution of agentic applications will be far more crucial for unlocking AI’s real-world value.
Generative applications are easier to understand and adopt, and they have indeed improved productivity across many domains. However, in terms of ecosystem positioning, they still function largely like traditional tools: they rely on humans to drive every action and decision.
As a result, their impact is mostly confined to localized efficiency gains. We shouldn’t expect them alone to produce structural changes in how society operates.
Human society is not driven by content, but by behavior, decision-making, and processes.
Agentic applications are fundamentally different.
For the first time, software can proactively assume human roles and create productive capacity that is truly independent of human operators. This is the critical frontier where AI has the potential to be genuinely disruptive—and what will ultimately determine if AI can be deeply integrated into the economic fabric of society.
Crucially, the rise of agentic applications is also the key to unlocking the full potential of generative applications.
Once intent-driven interaction becomes the primary mode, generative apps will cease to be mere content generators. They will evolve into capability modules within human workflows. When agents orchestrate and leverage generative tools to deliver results, these apps transform from standalone products into composable ecosystem modules, surfacing significantly greater value across a wide range of use cases.
In short:
Agentic applications are the bridge moving us from the "tool era" to system-level intelligence.
Generative applications are the core capability units that get combined and invoked to make AI practically useful.
So Why Aren’t AI Applications Good Enough Yet?
Common intuitive answers include:
Models aren’t strong enough yet.
Development is too hard.
Supporting infrastructure and tooling lack maturity.
All of these are partially true.
Stronger models, specialized capabilities, and better infrastructure definitely make AI applications more useful and reliable. But I don’t believe they address the root cause.
Consider two key observations:
1. On the enterprise side, AI is already doing real work. Decision platforms, internal automation workflows, and customer support augmentation often face stricter requirements—higher complexity, greater precision, and zero tolerance for error. Yet, these solutions are being adopted faster than consumer AI products.
2. Successful prototypes are everywhere "in the wild." Across forums and online communities, we see a wealth of practical consumer-facing AI prototypes, many of which have been enthusiastically received by early adopters.
This clearly illustrates the reality:
What prevents mainstream adoption is not an inability to meet user needs or deliver value, nor is it purely a technical limitation.
Ecosystem Fragmentation and Capability Silos
The real structural culprit behind today’s bottleneck is ecosystem fragmentation and platform isolation.
This fragmentation manifests across three critical axes:
Development
Execution
Distribution
Fragmentation of the Development Environment
Over the past year, the infrastructure and frameworks for building agentic applications have been proliferating at a breakneck pace. Every project is experimenting with its own abstraction layers and features, presenting builders with a dizzying array of choices.
From an innovation perspective, this diversity is a positive sign—it indicates that the agentic landscape is evolving rapidly.
But this rapid expansion has created a growing problem:
Fragmented development environments are compounding into structural friction at the ecosystem level.
Today, when a developer wants to build a production-grade agent, the first step is to choose among a fragmented landscape of mutually incompatible ecosystems. Each ecosystem enforces its own agent abstraction, schema, semantics, behavior model, and event lifecycle.
This leads to a rigid form of ecosystem lock-in.
Whenever a developer needs to extend an agent beyond what a specific framework natively supports, they are often forced to:
manually implement redundant tool integrations for each new target;
reconcile mismatched semantics and protocols;
convert between conflicting data formats and intermediate representations;
or even rebuild the entire workflow or migrate to a different framework.
Consequently, extension, integration, and iteration have become the primary sources of friction in the production process.
Too many frameworks, scattered capabilities, and inconsistent semantics are driving the ecosystem toward "capability islands." Each platform is reinventing the wheel—re-implementing similar functionality without a shared abstraction that would allow these capabilities to be reused and composed across boundaries.
Fragmentation of the Execution Environment
This architectural fragmentation cascades downstream, creating severe execution and interaction bottlenecks.
Even in traditional software, complex real-world workflows are rarely handled by a single application. Every realistic process involves orchestrating across multiple external tools and services. Regardless of AI's involvement, work is inherently a connected, multi-step chain.
However, today’s AI ecosystem remains stuck in a stage of "isolated capabilities":
Agentic applications usually execute within pre-designed, framework-specific flows. Their capabilities are difficult to compose, reuse, or extend across ecosystem boundaries.
Generative applications produce outputs that are siloed within their own UI, platform, and proprietary data structures, with no standardized way for other systems to programmatically consume those results.
While agents can now interface with models and tools via protocols like MCP or A2A, they still fail to interoperate like traditional apps in a robust, stateful manner. These protocols improve connectivity—but what they really solve is data transmission, not shared operating rules.
In practice, AI applications still diverge significantly in:
semantic structures,
task definitions,
internal representations,
and workflow lifecycle semantics.
It is difficult to collaborate in a way that is reusable, portable, and semantically consistent. It is even harder to build the kind of cross-application capability chains we take for granted in traditional software. Unsurprisingly, ecosystem-level progress has lagged significantly behind the advancements of the models themselves.
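To make this gap concrete, here is a minimal hypothetical sketch (the payload schemas and field names are invented, not drawn from any real framework): two agents can share a transport such as MCP and still disagree on how a task result is represented, so a consumer written against one silently misreads the other without a bespoke adapter.

```python
# Hypothetical payloads: two agents report the *same* completed task,
# but with incompatible schemas and status vocabularies.
agent_a_result = {"task": "book_flight", "state": "done", "output": {"pnr": "ABC123"}}
agent_b_result = {"intent": "book_flight", "status": 0, "payload": {"record_locator": "ABC123"}}

def is_success_a(msg: dict) -> bool:
    """A consumer written against agent A's schema."""
    return msg.get("state") == "done"

# The same check silently fails on agent B's message, even though that
# task also succeeded: transport compatibility is not semantic compatibility.
print(is_success_a(agent_a_result))  # True
print(is_success_a(agent_b_result))  # False
```

The point is not that either schema is wrong, but that nothing above the transport forces them to agree.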
Missing Distribution Layer
Despite all these issues, model capability and infrastructure have advanced rapidly. AI application development is demonstrably easier, and performance is significantly better than even just a few months ago.
Yet, real-world adoption still lags far behind technical maturity.
Beyond the fragmentation of development and execution environments, there is another critical structural bottleneck:
AI applications lack a reliable, trusted, and consistent Application Distribution Layer.
Most AI apps today still exist as isolated functions scattered across websites, bots, and plugins. They lack robust delivery and management mechanisms, and therefore struggle to form a true application ecosystem.
This distribution layer is not merely a content list, a marketplace, or a simple exposure channel. It must also shoulder the responsibility for:
End-user interaction surfaces
Versioning and updates
Execution compatibility
Security review and permission governance
Billing and subscription rails
Rating and trust mechanisms
What we need is a set of shared distribution standards and governance capabilities with consistent structures and definitions.
More fundamentally, AI applications are not the same as traditional apps.
What gets delivered is not just a binary executable. It is a Semantic–Behavior Composite, consisting of multiple layers such as semantics, inference logic, and execution graphs.
For that reason, "installing" an AI app is not simply copying files. It is deploying an agentic behavior unit characterized by:
Clearly defined behavior boundaries;
Predictable execution semantics;
Cross-tool collaboration capabilities;
And backed by critical support structures:
Cross-runtime and cross-platform capability declarations;
Behavior descriptions and dependency definitions;
Granular permission control and behavior governance.
None of this is something our existing operating systems are designed to handle.
Today’s AI features—manifesting merely as links, bots, or single-page utilities—are fundamentally not playing the role of applications. They lack a standardized execution environment, behavior-layer sandboxes, a verifiable lifecycle, and unified governance.
This leads to a cascade of damaging side effects:
1. No User Flywheel or Ecosystem Feedback Loop: AI apps struggle to accumulate stable installs, reviews, and long-term trust.
2. High Update and Maintenance Costs: Each developer must maintain their own execution environment and tooling. Every app effectively re-invents the wheel, solving the same infrastructure problems from scratch.
3. Heightened Security and Governance Risks: Apps rely on ad-hoc API key management, manual HITL (Human-in-the-Loop) checks, and private APIs, with no common permission tiers. In practice, users are forced to rely on blind trust, hoping the developer "doesn’t mess things up."
4. Failure to Form a Composable Network: AI apps rarely interoperate. Every integration is a bespoke effort. Products keep walking similar paths and rebuilding similar logic, which severely weakens the core promise and scalability of agentic applications.
5. Fragmented and Expensive Monetization: Developers are forced to roll their own payment, authentication, authorization, and subscription logic. The operational overhead is significantly higher than for traditional apps, while returns are often not proportional.
From Tools to Intent
In the future, application usage will shift fundamentally toward intent-centric interaction.
Humans will no longer manually navigate disparate websites or apps. Instead, they will express semantic goals to an agent, which will then execute cross-application workflows on their behalf.
In this Intent-Centric Computing paradigm:
Users no longer interact with applications in silos, nor do they need to navigate platform-specific nuances.
They simply declare their intent, and agents handle the planning, tool selection, and orchestration.
Apps and tools are no longer primarily UI surfaces, but composable capability units within larger task flows.
Application value does not diminish in this model.
It simply shifts—from what the user directly clicks on, to what the agent can reliably invoke to deliver the desired outcome.
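As a toy sketch of what intent-centric dispatch could look like, consider the following. The capability names are invented, and the keyword lookup is a deliberately trivial stand-in for a model-driven planner; the shape of the flow is what matters: the user declares a goal, and the system resolves it into an ordered chain of capability invocations.

```python
# Toy intent-centric dispatch. Capability names and the keyword-based
# "planner" are invented stand-ins for a model-driven orchestrator.
CAPABILITIES = {
    "search_flights": lambda goal: f"flights found for: {goal}",
    "book_flight": lambda goal: f"booking confirmed for: {goal}",
}

# Trivial planner: intent keyword -> ordered capability chain.
PLAN_RULES = {
    "trip": ["search_flights", "book_flight"],
}

def handle_intent(goal: str) -> list[str]:
    """Resolve a declared goal into a capability chain and run it."""
    for keyword, chain in PLAN_RULES.items():
        if keyword in goal:
            return [CAPABILITIES[name](goal) for name in chain]
    return []  # no plan found; a real system would ask a model or the user

results = handle_intent("plan a trip to Tokyo")
```

Note that the user never names a tool or an app; the mapping from intent to capabilities is entirely the system's job.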
But to make this world real, better models alone are not enough.
We need a unified ecosystem layer—a higher-level abstraction featuring:
A common language for describing capabilities;
Consistent logic for invoking behavior;
And shared rules for application governance;
Only then can we support true collaboration across models, applications, and frameworks.
Unified Development Environment and Interface: Let Capabilities Be “Described” and “Composed”
On the development side, what we need is not yet another agent framework.
We require a cross-platform semantic description layer, allowing applications, tools, and agents from heterogeneous sources to express their capabilities in a consistent format.
The goal is not to dictate which framework everyone must use, but to establish:
A neutral, framework-agnostic capability description format that allows callable units across ecosystems to be defined and exposed in a shared standard.
A composable capability model that abstracts functions, tools, and services from different ecosystems into generic capability modules. Developers can then mix, match, and orchestrate these modules without worrying about their underlying provenance.
In this model, an AI application is no longer a rigid flow locked inside a specific framework.
Instead, developers work through a single, unified interface, assembling tools, sub-agents, and services from across the entire AI ecosystem, without having to:
Scour for external features in disparate repositories;
Manually wire low-level integration details;
Or re-implement redundant logic due to framework incompatibilities.
Unified Execution Layer: Let Applications “Interop” and “Collaborate”
On the execution side, this ecosystem layer does not seek to replace existing runtimes.
Rather, it operates as a unifying abstraction layer atop them, standardizing behavior semantics across platforms so that applications can be invoked, composed, and orchestrated in a consistent manner.
This means transforming application and service capabilities into shared behavioral units, governed by a unified protocol for:
Interpreting each other’s semantic outputs;
Maintaining workflow continuity across state transitions;
Executing seamless cross-platform interactions;
And standardizing error handling, resource usage, and lifecycle management.
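One way to picture this abstraction layer is as an adapter that wraps heterogeneous agent entry points into a single result envelope. The envelope fields (`status`, `output`, `error`) are invented here for illustration; the point is that error handling and result semantics become uniform without replacing the underlying runtimes.

```python
# Sketch of a unifying execution adapter: heterogeneous agents are wrapped
# so every result arrives in one standardized envelope. The envelope fields
# ("status", "output", "error") are invented for illustration.
def standardize(run_fn):
    """Wrap any agent entry point into a shared behavioral unit."""
    def wrapped(task: str) -> dict:
        try:
            return {"status": "ok", "output": run_fn(task), "error": None}
        except Exception as exc:  # standardized error handling
            return {"status": "error", "output": None, "error": str(exc)}
    return wrapped

# Two "agents" with different native conventions...
def broken_tool(task: str):
    raise RuntimeError("tool unavailable")

legacy_agent = standardize(lambda task: {"answer": task[::-1]})
flaky_agent = standardize(broken_tool)

# ...now return results any participating runtime can interpret uniformly.
ok = legacy_agent("ping")
err = flaky_agent("ping")
```

Nothing about either agent's internals changes; only the boundary they present to the rest of the network does.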
In such an execution layer, applications cease to be isolated capability islands. Instead, they form a cross-ecosystem capability network.
Agents from heterogeneous sources can invoke each other, integrate external apps as functional modules within a flow, and return results in standardized formats that any participating runtime can understand and act upon.
Unified Distribution Platform: Let Applications Be “Installed,” “Trusted,” and “Governed”
To truly enable capability flow and ecosystem formation, this unified layer must also encompass distribution and governance.
This is far more than a traditional marketplace limited to "mere listing and exposure."
It is a semantic-contract-based distribution model designed to:
Define an application’s behavioral boundaries and capability requirements;
Explicitly declare permissions, dependencies, and callable units;
Manage versioning, compatibility, updates, and security posture;
And enforce consistent execution semantics and governance rules across multiple runtimes.
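A hypothetical sketch of such a semantic contract, with invented field names that do not follow any existing manifest spec: the app declares its capabilities, permissions, and dependencies up front, and the distribution layer can mechanically reject contracts that are incomplete before anything runs.

```python
# Hypothetical distribution manifest for an AI app, treated as a semantic
# contract. All field names are illustrative, not an existing spec.
MANIFEST = {
    "name": "travel-assistant",
    "version": "1.2.0",
    "capabilities": ["search_flights", "book_flight"],
    "permissions": ["network:airline_api", "payments:charge"],
    "dependencies": {"runtime": ">=0.4"},
}

REQUIRED_FIELDS = {"name", "version", "capabilities", "permissions"}

def validate_manifest(manifest: dict) -> list[str]:
    """A distribution layer could reject apps whose contracts are incomplete."""
    missing = REQUIRED_FIELDS - manifest.keys()
    errors = [f"missing field: {f}" for f in sorted(missing)]
    if not manifest.get("permissions"):
        # Undeclared permissions would mean ungovernable behavior.
        errors.append("permissions must be declared explicitly")
    return errors
```

This is the sense in which distribution becomes a contract rather than a listing: the manifest is checkable, and governance rules attach to declared fields rather than to developer goodwill.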
Under such a structure, an AI app is no longer just:
A URL,
A copied prompt,
Or an opaque script.
It evolves into a verifiable software artifact with:
Clearly defined behavior;
Controllable permissions;
And a complete, managed lifecycle.
It creates an entity that can be: Installed. Verified. Authorized. Updated. Revoked. Monetized.
Within a single, coherent delivery and governance model:
Developers can publish and maintain applications in a predictable way, without reinventing complex permission models, versioning strategies, or trust frameworks.
Users and enterprises can explore, install, and manage AI applications in a consistent, trustworthy environment, instead of wrestling with fragmented, opaque, and uncontrollable tools.
In this way, the application ecosystem can finally achieve true scalability.
Conclusion
Reflecting on the history of the internet and mobile computing, what we are experiencing now may not be merely a bubble, but a critical inflection point in a new technology cycle.
Models will continue to improve. Frameworks will continue to evolve. Tools will continue to proliferate.
But as long as the ecosystem remains fragmented, we remain far from true industry maturity.
This is exactly the future we are building toward—and we call it Charm.
Charm is not intended to be yet another framework, nor is it designed as a mere cloud service.
It is a unified system layer that spans development, execution, governance, and distribution. Its mission is to integrate and orchestrate the entire AI ecosystem, pushing AI from isolated point capabilities toward system-level intelligence.
Finally, I invite open dialogue: debate, challenges, refinements, or joining us to build this future. Every insight and exchange makes the work stronger.
