Introduction
Decentralized Finance (DeFi) ignited a story of exponential growth through a series of simple yet powerful economic primitives, transforming blockchain networks into global, permissionless markets and disrupting traditional finance. In the rise of DeFi, a few key metrics became the universal language of value: total value locked (TVL), annualized yield (APY/APR), and liquidity. These concise metrics inspired participation and trust. For example, DeFi's TVL (the dollar value of assets locked in protocols) soared 14-fold in 2020, then quadrupled again in 2021, peaking at over $112 billion. High yields (some platforms claimed APYs as high as 3,000% during the liquidity-mining craze) attracted liquidity, while the depth of liquidity pools signaled lower slippage and more efficient markets. In short, TVL tells us how much money is involved, APR tells us how much can be earned, and liquidity indicates how easy it is to trade assets. Despite their flaws, these metrics built a multi-billion-dollar financial ecosystem from scratch. By translating user engagement into direct financial opportunity, DeFi created a self-reinforcing adoption flywheel that rapidly grew in popularity and drove mass participation.
Today, AI is at a similar crossroads. But unlike early DeFi, the current AI narrative is dominated by large, general-purpose models trained on massive internet datasets. These models often struggle to deliver effective results in niche areas, specialized tasks, or personalized use cases. Their one-size-fits-all approach is powerful but fragile: broadly capable, yet poorly matched to specific needs. This paradigm needs to shift. The next era of AI should be defined not by the scale or generality of models, but built from the bottom up around smaller, highly specialized models. This kind of customized AI requires a completely new kind of data: high-quality, human-aligned, and domain-specific. Obtaining such data is not as simple as crawling the web; it requires active, conscious contributions from individuals, domain experts, and communities.
To drive this new era of specialized, human-aligned AI, we need to build incentive flywheels like the one DeFi designed for finance. This means introducing new AI-native primitives for measuring data quality, model performance, agent reliability, and incentive alignment: metrics that directly reflect the true value of data as an asset, not just an input.
In this article, we’ll explore these new primitives that will form the backbone of an AI-native economy. We’ll explain how AI can flourish if the right economic infrastructure is in place—one that generates high-quality data, incentivizes its creation and use, and puts individuals at the center. We’ll also look at examples of platforms like LazAI that are pioneering these AI-native frameworks, ushering in new paradigms for pricing and rewarding data, and powering the next leap in AI innovation.
DeFi’s Incentive Flywheel: TVL, Yield, and Liquidity — A Quick Review
The rise of DeFi is no accident; it was designed to make participation both profitable and transparent. Key metrics such as total value locked (TVL), annualized yield (APY/APR), and liquidity are not just numbers but primitives that align user behavior with network growth. Together, these metrics form a virtuous cycle that attracts users and capital, which in turn drives further innovation.
Total Value Locked (TVL): TVL measures the total capital deposited into DeFi protocols (e.g., lending pools, liquidity pools), and has become synonymous with the "market cap" of DeFi projects. Rapid growth in TVL is read as a sign of user trust and protocol health. For example, during the DeFi boom of 2020-2021, TVL jumped from less than $10 billion to over $100 billion, and by 2023 it had surpassed $150 billion, demonstrating the scale of value participants were willing to lock into decentralized applications. High TVL creates a gravitational effect: more capital means greater liquidity and stability, attracting more users seeking opportunities. Critics rightly point out that blindly chasing TVL can lead protocols to offer unsustainable incentives (essentially "buying" TVL) to mask inefficiencies; still, without TVL, the early DeFi narrative would have lacked a concrete way to track adoption.
Annual Percentage Yield (APY/APR): The promise of yield turns participation into a tangible opportunity. DeFi protocols began offering striking APRs to liquidity and capital providers. For example, Compound launched the COMP token in mid-2020, pioneering the liquidity-mining model: rewarding liquidity providers with governance tokens. This innovation sparked a frenzy of activity. Using a platform was no longer just consuming a service; it became an investment. High APY attracts yield seekers, further driving up TVL. This reward mechanism drove network growth by directly compensating early adopters with generous returns.
Liquidity: In finance, liquidity is the ability to move assets without causing wild price swings — it’s the cornerstone of healthy markets. Liquidity in DeFi is often enabled through liquidity mining programs, where users earn tokens for providing liquidity. Deep liquidity on decentralized exchanges and lending pools means users can trade or borrow with low friction, improving the user experience. High liquidity leads to higher volumes and utility, which in turn attracts more liquidity — a classic positive feedback loop. It also enables composability: developers can build new products (derivatives, aggregators, etc.) on top of liquid markets, driving innovation. As a result, liquidity becomes the lifeblood of the network, driving adoption and the emergence of new services.
Together, these primitives form a powerful incentive flywheel. Participants who create value by locking assets or providing liquidity are immediately rewarded (through high yields and token incentives), which encourages more participation. This turns individual participation into broad opportunity: users earn profits and governance influence, which in turn generates network effects that draw waves of new users in. The results are impressive: by 2024, the number of DeFi users exceeded 10 million, and the sector's value had increased nearly 30-fold in a few years. Clearly, large-scale incentive alignment, converting users into stakeholders, was the key to DeFi's exponential rise.
What’s missing from the current AI economy
If DeFi showed how bottom-up participation and incentive alignment can kick-start a financial revolution, today's AI economy still lacks the foundational primitives to support a similar transformation. Current AI is dominated by large, general-purpose models trained on massive crawled datasets. These foundation models are enormous, but because they are designed to solve every problem, they often serve no one particularly well. Their one-size-fits-all architectures are difficult to adapt to niche domains, cultural differences, or individual preferences, resulting in fragile outputs, blind spots, and a growing disconnect from real-world needs.
The next generation of AI will be defined not just by scale, but by contextual understanding — the ability of models to understand and serve specific domains, communities of expertise, and diverse human perspectives. However, this contextual intelligence requires a different input: high-quality, human-aligned data. And that’s what’s currently missing. There are currently no widely agreed-upon mechanisms to measure, identify, value, or prioritize this data, nor are there open processes for individuals, communities, or domain experts to contribute their perspectives and improve the intelligent systems that increasingly impact their lives. As a result, value remains concentrated in the hands of a small number of infrastructure providers, while the masses are disconnected from the upside potential of the AI economy. Only by designing new primitives that can discover, validate, and reward high-value contributions (data, feedback, alignment signals) can we unlock the participatory growth loop that DeFi relies on to thrive.
In short, we must also ask:
How do we measure the value created? How do we build a self-reinforcing adoption flywheel that drives bottom-up engagement with individual-centric data?
To unlock an “AI-native economy” like DeFi, we need to define new primitives that turn participation into AI opportunities, catalyzing network effects that have not been seen in the space to date.
AI-native technology stack: new primitives for the new economy
We are no longer just transferring tokens between wallets; we are feeding data into models, turning model outputs into decisions, and letting AI agents act on our behalf. This requires new metrics and primitives that quantify intelligence and alignment, just as DeFi's metrics quantify capital. For example, LazAI is building a next-generation blockchain network to solve the AI data alignment problem by introducing new asset standards for AI data, model behavior, and agent interactions.
The following outlines several key primitives that define the economic value of on-chain AI:
Verifiable Data (The New "Liquidity"): Data is to AI what liquidity is to DeFi: the lifeblood of the system. In AI, and especially for large models, having the right data is critical. But raw data can be low-quality or misleading, and we need high-quality data that is verifiable on-chain. A possible primitive here is "Proof of Data (PoD) / Proof of Data Value (PoDV)". This concept would measure the value of data contributions based not only on quantity, but on quality and impact on AI performance. Think of it as the counterpart of liquidity mining: contributors who provide useful data (or labels/feedback) are rewarded according to the value their data brings. Early designs for such systems are already emerging. For example, one blockchain project's Proof of Data (PoD) consensus treats data as the primary resource for validation (analogous to energy in Proof of Work or capital in Proof of Stake). In that system, nodes are rewarded based on the quantity, quality, and relevance of the data they contribute.
Generalizing this to the broader AI economy, we might see "Total Data Value Locked (TDVL)" as a metric: an aggregate measure of all valuable data on the network, weighted by verifiability and usefulness. Pools of verified data could even be traded like liquidity pools; for example, a pool of verified medical images for an on-chain diagnostic AI could have a quantified value and utilization rate. Data provenance (knowing where data came from and its modification history) will be a key part of this metric, ensuring that the data fed into AI models is trustworthy and traceable. Essentially, if liquidity is about available capital, verifiable data is about available knowledge. Metrics like Proof of Data Value (PoDV) capture the amount of useful knowledge locked in the network, while on-chain data anchoring, enabled by LazAI's Data Anchor Token (DAT), makes data liquidity a measurable, incentivized economic layer.
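Neither PoDV nor TDVL has a standardized formula yet. As a thought experiment, here is a minimal Python sketch of how a quality-weighted reward split along these lines might look; every name, field, and weighting below is purely illustrative, not a description of any live protocol:

```python
from dataclasses import dataclass

@dataclass
class DataContribution:
    contributor: str
    size: float          # e.g. number of verified records
    quality: float       # 0..1, assumed to come from on-chain validation
    model_lift: float    # measured improvement the data gave a model

def podv_score(c: DataContribution) -> float:
    """Value a contribution by quality and impact, not volume alone."""
    return c.size * c.quality * (1.0 + c.model_lift)

def distribute_rewards(pool: float, contributions: list[DataContribution]) -> dict[str, float]:
    """Split a reward pool pro rata by PoDV score (like liquidity-mining shares)."""
    total = sum(podv_score(c) for c in contributions)
    if total == 0:
        return {c.contributor: 0.0 for c in contributions}
    return {c.contributor: pool * podv_score(c) / total for c in contributions}

contribs = [
    DataContribution("alice", size=1000, quality=0.9, model_lift=0.05),  # small but curated
    DataContribution("bob",   size=5000, quality=0.1, model_lift=0.0),   # large but noisy
]
rewards = distribute_rewards(100.0, contribs)
# a toy "TDVL" would simply be the sum of PoDV scores across the network
tdvl = sum(podv_score(c) for c in contribs)
```

Note the design intent: because the score multiplies size by quality and model impact, a small, well-validated dataset can out-earn a much larger noisy one, which is exactly the quality-over-quantity behavior a PoDV mechanism would need.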
Model performance (a new asset class): In the AI economy, trained models (or AI services) become assets in their own right, and could even be considered a new asset class alongside tokens and NFTs. Trained AI models have value because of the intelligence encapsulated in their weights. But how can this value be represented and measured on-chain? We may need on-chain performance benchmarks or model certifications. For example, a model's accuracy on a standard dataset, or its win rate in competitive tasks, could be recorded on-chain as a performance score. Think of this as an on-chain "credit rating" or KPI for the AI model. Such a score could adjust as the model is fine-tuned or as its data is updated. Projects such as Oraichain have explored combining AI model APIs with reliability scores (test cases that verify AI output is as expected) on-chain. In AI-native DeFi ("AiFi"), one could envision staking based on model performance: developers stake tokens if they believe their model performs well, receive rewards if independent on-chain audits confirm its performance, and lose their stake if the model underperforms. This would incentivize developers to report truthfully and to keep improving their models. Another idea is to mint model NFTs that carry performance metadata; the floor price of such NFTs may reflect their practical utility. Such practices are already emerging: some AI marketplaces allow the buying and selling of model access tokens, and protocols such as LayerAI (formerly CryptoGPT) explicitly treat data and AI models as emerging asset classes in the global AI economy. In short, where DeFi asks "how much money is locked?", AI-DeFi will ask "how much intelligence is locked?" - not only computing power (though that matters too), but also the effectiveness and value of the models running in the network. New metrics may include proof of model quality or a time-series index of on-chain AI performance improvements.
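The performance-staking idea above can be sketched as a simple settlement rule. This is a hypothetical illustration only: the tolerance, reward rate, and slashing curve are invented for the example, and a real protocol would encode them in a smart contract rather than a Python function:

```python
def settle_model_stake(stake: float, claimed_score: float, audited_score: float,
                       tolerance: float = 0.02, reward_rate: float = 0.10) -> float:
    """Return the developer's payout after an independent audit of a benchmark claim.

    If the audited score is within tolerance of the claim, the stake is returned
    with a reward; otherwise the stake is slashed in proportion to the shortfall.
    """
    shortfall = claimed_score - audited_score
    if shortfall <= tolerance:
        return stake * (1.0 + reward_rate)          # honest claim: stake plus reward
    slash = min(1.0, (shortfall - tolerance) * 5)   # slashing grows with the gap
    return stake * (1.0 - slash)

honest_payout  = settle_model_stake(100.0, claimed_score=0.90, audited_score=0.91)
inflated_payout = settle_model_stake(100.0, claimed_score=0.90, audited_score=0.80)
```

The asymmetry is the point: overstating performance risks losing the stake, while accurate (or modest) claims are rewarded, so truthful self-reporting becomes the profit-maximizing strategy.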
Agent Behavior and Utility (On-Chain AI Agents): The most exciting and challenging addition to AI-native blockchains is autonomous AI agents running on-chain. They could be trading bots, data curators, customer-service AIs, or complex DAO governors - essentially software entities that can sense, decide, and act on behalf of users, or even on their own, on the network. The DeFi world only has basic "bots"; in the AI blockchain world, agents could become first-class economic entities. This creates a need for metrics around agent behavior, trustworthiness, and utility. We could see something like an "agent utility score" or reputation system. Imagine each AI agent (perhaps represented by an NFT or semi-fungible token (SFT) identity) accruing a reputation based on its actions (completing tasks, collaborating, and so on). Such a score is like a credit score or user rating, but for AI. Other contracts could use it to decide whether to trust or use an agent's services. In LazAI's proposed concept of the iDAO (individual-centric DAO), each agent or user entity has its own on-chain domain and AI assets, and these iDAOs or agents can be envisioned as building measurable track records.
Already, platforms are starting to tokenize AI agents and assign them on-chain metrics: for example, Rivalz's "Rome protocol" creates NFT-based AI agents (rAgents) whose latest reputation metrics are recorded on-chain. Users can stake or lend these agents, and the rewards depend on each agent's performance and influence within the collective AI "cluster". This is essentially DeFi for AI agents, and it shows the importance of agent utility metrics. In the future, we may track "active AI agents" the way we track active addresses, or "agent economic impact" the way we track transaction volume.
Attention traces could become another primitive - recording what the agent pays attention to (which data, signals) during its decision making process. This could make black-box agents more transparent and auditable, and attribute the success or failure of the agent to specific inputs. In general, agent behavior metrics would ensure accountability and alignment: if autonomous agents are to be entrusted with managing large sums of money or critical tasks, their reliability needs to be quantified. High agent utility scores could become a prerequisite for on-chain AI agents to manage large sums of money (similar to how high credit scores are a threshold for large loans in traditional finance).
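One plausible (and deliberately simple) way to maintain such an agent utility score is an exponential moving average over task outcomes, with high-value permissions gated behind a threshold. The update rule, threshold, and function names below are assumptions made for illustration:

```python
def update_reputation(score: float, outcome: bool, weight: float = 0.1) -> float:
    """Exponential moving average over task outcomes (1.0 = success, 0.0 = failure).

    Recent behavior matters most, but a single failure cannot erase a long record.
    """
    return (1 - weight) * score + weight * (1.0 if outcome else 0.0)

def can_manage_funds(score: float, threshold: float = 0.8) -> bool:
    """Gate high-value actions behind a minimum utility score, like a credit check."""
    return score >= threshold

# an agent starts at a neutral score and builds a track record task by task
score = 0.5
for outcome in [True, True, True, False, True]:
    score = update_reputation(score, outcome)
```

After this short history the agent's score has risen but still sits below the funds-management threshold, mirroring how a thin credit file limits loan size in traditional finance.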
Use incentives and AI alignment metrics: Finally, the AI economy needs to consider how to incentivize beneficial usage and alignment. DeFi incentivizes growth through liquidity mining, early user airdrops, or fee rebates; in AI, usage growth alone is not enough, we need to incentivize usage that improves AI results. Here, metrics tied to AI alignment are critical. For example, human feedback loops (such as users rating AI responses or providing corrections through iDAOs, which will be explained in more detail below) can be recorded and feedback contributors can earn alignment benefits. Or imagine proof of attention or proof of engagement, where users who invest time in improving the AI (by providing preference data, corrections, or new use cases) are rewarded. The metric might be an attention track, capturing the amount of quality feedback or human attention invested in optimizing the AI.
Just as DeFi needed block explorers and dashboards (e.g. DeFi Pulse, DefiLlama) to track TVL and earnings, the AI economy needs new explorers to track these AI-centric metrics — imagine an “AI-llama” dashboard showing total aligned data volume, number of active AI agents, cumulative AI utility earnings, etc. It has similarities to DeFi, but the content is completely new.
Towards a DeFi-style AI flywheel
We need to build an incentive flywheel for AI - treating data as a first-class economic asset, thereby transforming AI development from a closed enterprise to an open, participatory economy, just as DeFi transformed finance into an open field of user-driven liquidity.
Early explorations in this direction have already begun. For example, projects such as Vana reward users for sharing data. The Vana network allows users to contribute personal or community data to DataDAOs (decentralized data pools) and earn dataset-specific tokens (exchangeable for the network's native token). This is an important step toward monetization for data contributors.
However, simply rewarding contributions is not enough to replicate DeFi’s explosive flywheel. In DeFi, liquidity providers are not only rewarded for depositing assets, but the assets they provide also have a transparent market value, and the returns reflect actual usage (transaction fees, loan interest plus incentive tokens). Similarly, the AI data economy needs to go beyond general rewards and directly price data. In the absence of economic pricing based on data quality, scarcity, or degree of improvement to models, we may be stuck in shallow incentives. Simply distributing tokens to reward participation may encourage quantity rather than quality, or stagnate when the token lacks actual AI utility. To truly unleash innovation, contributors need to see clear market-driven signals, understand the value of their data, and be rewarded when the data is actually used in AI systems.
We need infrastructure focused on directly valuing and rewarding data, creating a data-centric incentive loop: the more high-quality data people contribute, the better the models become, attracting more usage and more data demand, which in turn drives up returns for contributors. This would transform AI from a closed race for big data into an open market for trusted, high-quality data.
How are these ideas reflected in real projects? Take LazAI as an example - the project is building the next generation blockchain network and basic primitives for the decentralized AI economy.
Introduction to LazAI — Aligning AI with Humans
LazAI is a next-generation blockchain network and protocol designed specifically to solve the AI data alignment problem, building the infrastructure for a decentralized AI economy by introducing new asset standards for AI data, model behavior, and agent interactions.
LazAI offers one of the most forward-looking approaches, tackling the AI alignment problem by making data verifiable, incentivized, and programmable on-chain. The following uses the LazAI framework as an example to illustrate how an AI-native blockchain puts the above principles into practice.
Core Issues - Data Misalignment and Lack of Fair Incentives
AI alignment often comes down to training data quality, and the future requires new data that is human-aligned, trusted, and well governed. As the AI industry moves from centralized, general-purpose models to contextualized, aligned intelligence, infrastructure must evolve in tandem. The next era of AI will be defined by alignment, accuracy, and traceability. LazAI addresses the challenges of data alignment and incentives with a radical solution: align data at the source, and reward the data itself directly. In other words, ensure that training data verifiably represents the human perspective, is denoised and debiased, and is rewarded based on quality, scarcity, or how much it improves the model. This is a paradigm shift from tinkering with models to curating data.
LazAI not only introduces primitives, but also proposes a new paradigm for data acquisition, pricing and governance. Its core concepts include data-anchored tokens (DAT) and individual-centric DAOs (iDAOs), which together realize the pricing, traceability and programmable use of data.
Verifiable and Programmable Data — Data Anchored Tokens (DAT)
To achieve this goal, LazAI introduces a new on-chain primitive, the Data Anchored Token (DAT), a token standard designed for AI data assetization. Each DAT represents a piece of data anchored on-chain along with its lineage: the identity of the contributor, its evolution over time, and its usage scenarios. This creates a verifiable history for each piece of data, similar to a version control system for datasets (such as Git), but with blockchain security. Because DATs live on-chain, they are programmable: smart contracts can govern their usage rules. For example, a data contributor can specify that their DATs (say, a set of medical images) are accessible only to specific AI models, or usable only under specific conditions, with privacy or ethical constraints enforced through code. The incentive mechanism comes from the fact that DATs can be traded or staked: if the data is valuable to a model, the model (or its owner) may pay for access to the DAT. In essence, LazAI has built a market for data tokenization and traceability. This directly echoes the "verifiable data" metric discussed above: by inspecting a DAT, you can confirm whether it has been verified, how many models use it, and what performance improvements it has brought. Such data will command a higher valuation. By anchoring data on-chain and tying economic incentives to quality, LazAI ensures that AI is trained on credible, measurable data. This solves the problem through incentive alignment: high-quality data is rewarded and stands out.
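To make the idea concrete, here is a minimal, off-chain Python sketch of the kind of record a DAT might represent. The fields, the policy check, and the dataset are illustrative assumptions only; an actual DAT would live in a smart contract, not a Python object:

```python
import hashlib
import time
from dataclasses import dataclass, field

@dataclass
class DAT:
    contributor: str
    data_hash: str                   # on-chain anchor: hash of the off-chain payload
    allowed_models: set[str]         # programmable usage rule set by the data owner
    lineage: list[str] = field(default_factory=list)  # append-only provenance log

    def record_use(self, model_id: str) -> bool:
        """Enforce the owner's usage policy and log provenance, contract-style."""
        if model_id not in self.allowed_models:
            return False             # access denied by the data owner's policy
        self.lineage.append(f"{int(time.time())}:used_by:{model_id}")
        return True

payload = b"anonymized medical images, batch 1"       # hypothetical dataset
dat = DAT(
    contributor="alice",
    data_hash=hashlib.sha256(payload).hexdigest(),    # verifiable anchor
    allowed_models={"diagnosis-model"},
)
allowed = dat.record_use("diagnosis-model")   # permitted model: use is logged
denied = dat.record_use("ad-targeting")       # blocked: not in the owner's policy
```

The two properties the text emphasizes both appear here: the hash ties the token to one exact version of the data (any modification changes the hash), and the lineage log accumulates an auditable usage history that a valuation mechanism could later read.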
Individual-centered DAO (iDAO) framework
The second key component is LazAI's iDAO (individual-centric DAO) concept, which redefines governance in the AI economy by putting individuals (not organizations) at the heart of decision-making and data ownership. Traditional DAOs often prioritize collective organizational goals, inadvertently diluting individual will. iDAOs invert this logic. They are personalized governance units that allow individuals, communities, or domain-specific entities to directly own, control, and verify the data and models they contribute to AI systems. iDAOs support customized, aligned AI: as a governance framework, they ensure that models adhere to the values or intentions of their contributors. From an economic perspective, iDAOs also make AI behavior programmable by the community: rules can be set to limit how models use specific data, who can access models, and how the proceeds of model outputs are distributed. For example, an iDAO can stipulate that every time its AI model is called (say, via an API request or completed task), part of the proceeds is returned to the DAT holders who contributed the relevant data. This establishes a direct feedback loop between agent behavior and contributor rewards, similar to the DeFi mechanism where liquidity providers' earnings are linked to platform usage. In addition, iDAOs can interact composably through protocols: one AI agent (iDAO) can call another iDAO's data or models under negotiated terms.
By building on these primitives, LazAI's framework turns the vision of a decentralized AI economy into reality. Data becomes an asset that users can own and profit from, models move from private silos to collaborative projects, and every participant - from individuals curating unique datasets to developers building small, specialized models - becomes a stakeholder in the AI value chain. This alignment of incentives is expected to replicate the explosive growth of DeFi: when people realize that participating in AI (contributing data or expertise) directly translates into opportunity, they will be more motivated to join. As more people join, network effects kick in - more data leads to better models, which attract more users, who in turn generate more data and demand, forming a positive cycle.
Building a Trust Foundation for AI: A Verifiable Computing Framework
In this ecosystem, LazAI's Verified Computing Framework is the core layer for building trust. The framework ensures that every generated DAT, every iDAO (individual-centric DAO) decision, and every incentive allocation has a verifiable chain of traceability, making data ownership enforceable, governance processes accountable, and agent behavior auditable. By transforming iDAOs and DATs from theoretical concepts into reliable, verifiable systems, the Verified Computing Framework achieves a paradigm shift in trust: from reliance on assumptions to deterministic guarantees grounded in mathematical verification.
With these building blocks in place, the vision of a decentralized AI economy can truly come to life:
Data assetization: Users can establish ownership of their data assets and earn returns from them
Model collaboration: AI models are transformed from closed silos into open, collaborative products
Participation equity: From data contributors to vertical model developers, all participants can become stakeholders in the AI value chain
This incentive-compatible design is expected to replicate the growth momentum of DeFi: when users realize that participating in AI construction (by contributing data or expertise) can directly translate into economic opportunities, their enthusiasm for participation will be ignited. As the scale of participants expands, network effects will emerge - more high-quality data will lead to better models, attracting more users to join, which in turn will generate more data demand, forming a self-reinforcing growth flywheel.
Conclusion: Towards an Open AI Economy
The journey of DeFi shows that the right primitives can unlock unprecedented growth. In the coming AI-native economy, we are on the cusp of a similar breakthrough. By defining and implementing new primitives that value data and alignment, we can transform AI development from a centralized project into a decentralized, community-driven endeavor. This journey will not be without challenges: we must ensure that economic mechanisms prioritize quality over quantity, and guard against ethical pitfalls so that data incentives do not compromise privacy or fairness. But the direction is clear. Practices like LazAI's DATs and iDAOs are paving the way to turn the abstract idea of "human-aligned AI" into concrete mechanisms for ownership and governance.
Just as early DeFi experimented with optimizing TVL, liquidity mining, and governance, the AI economy will iterate on its new primitives. In the future, debates and innovations around data value measurement, fair reward distribution, and AI agent alignment and benefits will emerge. This article only scratches the surface of incentive models that may promote the democratization of AI, hoping to inspire open discussion and in-depth research: How to design more AI-native economic primitives? What unintended consequences or opportunities may arise? With the participation of a broad community, we are more likely to build an AI future that is not only technologically advanced, but also economically inclusive and aligned with human values.
DeFi’s exponential growth isn’t magic — it’s driven by incentive alignment. Today, we have the opportunity to drive an AI renaissance by doing the same thing with data and models. By turning participation into opportunity, and opportunity into network effects, we can kickstart a flywheel for AI that reshapes value creation and distribution in the digital age.
Let’s build this future together — starting with a verifiable dataset, an aligned AI agent, and a new primitive.