In-depth analysis of the AI Layer 1 track: six major projects building out decentralized AI infrastructure
AI Layer 1 Research Report: Finding Fertile Ground for On-chain DeAI
Overview
Background
In recent years, leading technology companies such as OpenAI, Anthropic, Google, and Meta have been driving the rapid development of large language models (LLMs). LLMs have shown unprecedented capabilities across various industries, greatly expanding the realm of human imagination, and even demonstrating the potential to replace human labor in certain scenarios. However, the core of these technologies is firmly held by a few centralized tech giants. With substantial capital and control over expensive computational resources, these companies have built barriers that are difficult to surmount, making it hard for the vast majority of developers and innovation teams to compete.
In the early stage of AI's rapid evolution, public opinion tends to focus on the breakthroughs and conveniences the technology brings, while core issues such as privacy protection, transparency, and security receive comparatively little attention. In the long term, these issues will profoundly affect the healthy development of the AI industry and its social acceptance. If they are not properly addressed, the debate over whether AI will be "good" or "evil" will grow increasingly prominent, and centralized giants, driven by profit motives, often lack sufficient motivation to proactively confront these challenges.
Blockchain technology, with its decentralized, transparent, and censorship-resistant characteristics, provides new possibilities for the sustainable development of the AI industry. Currently, many "Web3 AI" applications have emerged on several mainstream blockchains. However, a deeper analysis reveals that these projects still face numerous issues: on one hand, the degree of decentralization is limited, and key links and infrastructure still rely on centralized cloud services, making it difficult to support a truly open ecosystem; on the other hand, compared to AI products in the Web2 world, on-chain AI still shows limitations in model capabilities, data utilization, and application scenarios, and the depth and breadth of innovation need to be improved.
To truly realize the vision of decentralized AI, enabling the blockchain to securely, efficiently, and democratically support large-scale AI applications while competing in performance with centralized solutions, we need to design a Layer 1 blockchain specifically tailored for AI. This will provide a solid foundation for open innovation in AI, democratic governance, and data security, promoting the prosperous development of a decentralized AI ecosystem.
Core Features of AI Layer 1
AI Layer 1, as a blockchain specifically tailored for AI applications, has its underlying architecture and performance design closely centered around the needs of AI tasks, aiming to efficiently support the sustainable development and prosperity of the on-chain AI ecosystem. Specifically, AI Layer 1 should possess the following core capabilities:
Efficient Incentives and Decentralized Consensus Mechanism The core of AI Layer 1 lies in building an open network for sharing resources such as computing power and storage. Unlike traditional blockchain nodes that primarily focus on ledger bookkeeping, the nodes in AI Layer 1 need to undertake more complex tasks. They are required not only to provide computing power and complete AI model training and inference, but also to contribute diverse resources such as storage, data, and bandwidth, thereby breaking the monopoly of centralized giants in AI infrastructure. This places higher demands on the underlying consensus and incentive mechanisms: AI Layer 1 must be able to accurately assess, incentivize, and verify the actual contributions of nodes in AI inference, training, and other tasks, ensuring network security and efficient resource allocation. Only in this way can the stability and prosperity of the network be guaranteed while effectively reducing the overall computing power costs.
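The incentive logic described above can be sketched as a contribution-weighted reward split. Everything here is illustrative: the resource categories, weights, and reward formula are assumptions made for exposition, not any project's actual mechanism.

```python
# Illustrative sketch: splitting an epoch's rewards in proportion to verified
# node contributions across several resource types. Weights are hypothetical.
from dataclasses import dataclass

@dataclass
class NodeContribution:
    node_id: str
    compute_units: float   # verified inference/training work
    storage_gb: float      # storage contributed
    bandwidth_gb: float    # bandwidth contributed

# Hypothetical weights reflecting how a protocol might value each resource.
WEIGHTS = {"compute_units": 1.0, "storage_gb": 0.1, "bandwidth_gb": 0.05}

def score(c: NodeContribution) -> float:
    """Collapse a node's multi-resource contribution into one scalar."""
    return (WEIGHTS["compute_units"] * c.compute_units
            + WEIGHTS["storage_gb"] * c.storage_gb
            + WEIGHTS["bandwidth_gb"] * c.bandwidth_gb)

def distribute_rewards(contribs: list, epoch_reward: float) -> dict:
    """Pay out the epoch reward pro rata to each node's score."""
    total = sum(score(c) for c in contribs)
    if total == 0:
        return {c.node_id: 0.0 for c in contribs}
    return {c.node_id: epoch_reward * score(c) / total for c in contribs}
```

A real protocol would additionally verify each reported contribution (for example, with proofs of completed work) before scoring it, which is exactly where the consensus design discussed above comes in.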
Excellent High Performance and Heterogeneous Task Support AI tasks, especially LLM training and inference, place extremely high demands on computing performance and parallel processing. Moreover, the on-chain AI ecosystem often needs to support diverse, heterogeneous task types, spanning different model structures, data processing, inference, storage, and other scenarios. AI Layer 1 must deeply optimize its underlying architecture for high throughput, low latency, and elastic parallelism, and build in native support for heterogeneous computing resources, ensuring that all kinds of AI tasks run efficiently and the network scales smoothly from "single-type tasks" to "complex, diverse ecosystems."
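As a toy illustration of heterogeneous task support, the sketch below routes a task to the first node whose capabilities cover the task's requirements. The node list and capability names are hypothetical; a real scheduler would also weigh load, latency, and price.

```python
# Toy dispatcher matching heterogeneous AI tasks to capable nodes.
# Node IDs and capability tags are illustrative placeholders.
from typing import Optional

NODES = [
    {"id": "gpu-node-1", "caps": {"gpu", "inference", "training"}},
    {"id": "storage-node-1", "caps": {"storage"}},
    {"id": "cpu-node-1", "caps": {"inference"}},
]

def dispatch(task_caps: set) -> Optional[str]:
    """Return the first node whose capability set covers the task's needs."""
    for node in NODES:
        if task_caps <= node["caps"]:  # subset test: node can serve the task
            return node["id"]
    return None  # no eligible node; a real scheduler would queue or scale out
```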
Verifiability and Trustworthy Output Assurance AI Layer 1 must not only guard against malicious model behavior and data-tampering risks, but also ensure the verifiability and alignment of AI outputs at the level of its underlying mechanisms. By integrating cutting-edge technologies such as Trusted Execution Environments (TEE), Zero-Knowledge Proofs (ZK), and Multi-Party Computation (MPC), the platform enables each model inference, training run, and data-processing step to be independently verified, ensuring the fairness and transparency of the AI system. This verifiability also helps users understand the logic and basis of AI outputs, ensuring that "what you get is what you asked for" and strengthening user trust in and satisfaction with AI products.
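A minimal sketch of output verifiability, assuming deterministic inference: the serving node publishes a hash commitment binding the model, input, and output, and any verifier can re-execute the inference and check the claim. This stands in for the far heavier machinery (TEE attestation, ZK proofs, MPC) named above; all names here are illustrative.

```python
# Hash-commitment sketch of verifiable inference (assumes determinism).
import hashlib
import json

def commit(model_id: str, inputs: str, output: str) -> str:
    """Node-side: publish a commitment binding model, input, and output."""
    payload = json.dumps({"model": model_id, "in": inputs, "out": output},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def verify(model_id: str, inputs: str, claimed_output: str,
           commitment: str, rerun_fn) -> bool:
    """Verifier-side: re-execute inference and check output and commitment."""
    recomputed = rerun_fn(model_id, inputs)
    return (recomputed == claimed_output
            and commit(model_id, inputs, recomputed) == commitment)
```

Re-execution by every verifier is the naive, expensive baseline; TEE and ZK approaches exist precisely to avoid it, but the commitment structure they certify is the same idea.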
Data Privacy Protection AI applications often involve sensitive user data, and in fields such as finance, healthcare, and social networking, data privacy protection is particularly critical. AI Layer 1 should adopt encryption-based data processing technologies, privacy computing protocols, and data permission management methods while ensuring verifiability to guarantee the security of data throughout the entire process of inference, training, and storage, effectively preventing data leakage and misuse, and eliminating users' concerns about data security.
Powerful Ecological Carrying and Development Support Capability As an AI-native Layer 1 infrastructure, the platform not only needs to have technical leadership but also must provide comprehensive development tools, integrated SDKs, operational support, and incentive mechanisms for ecological participants such as developers, node operators, and AI service providers. By continuously optimizing platform usability and developer experience, it promotes the landing of rich and diverse AI-native applications, achieving sustainable prosperity of the decentralized AI ecosystem.
Based on the above background and expectations, this article will provide a detailed introduction to six representative AI Layer 1 projects, including Sentient, Sahara AI, Ritual, Gensyn, Bittensor, and 0G. It will systematically sort out the latest developments in the field, analyze the current status of project development, and discuss future trends.
Sentient: Building a Loyal Open-Source Decentralized AI Model
Project Overview
Sentient is an open-source protocol platform building an AI Layer 1 blockchain (the initial phase runs as a Layer 2, with a later migration to Layer 1). By combining an AI pipeline with blockchain technology, it aims to construct a decentralized artificial intelligence economy. Its core goal is to use the "OML" framework to address model ownership, invocation tracking, and value distribution in the centralized LLM market, giving AI models on-chain ownership structures, invocation transparency, and value sharing. Sentient's vision is to enable anyone to build, collaborate on, own, and monetize AI products, thereby fostering a fair and open AI Agent network ecosystem.
The Sentient Foundation team brings together top academic experts, blockchain entrepreneurs, and engineers from around the world, dedicated to building a community-driven, open-source, and verifiable AGI platform. Core members include Princeton University professor Pramod Viswanath and Indian Institute of Science professor Himanshu Tyagi, who are responsible for AI safety and privacy protection, respectively, while Polygon co-founder Sandeep Nailwal leads the blockchain strategy and ecosystem layout. Team members come from renowned companies such as Meta, Coinbase, Polygon, as well as top universities like Princeton University and the Indian Institutes of Technology, covering fields like AI/ML, NLP, and computer vision, working together to drive the project forward.
As a second venture of Polygon co-founder Sandeep Nailwal, Sentient launched with a halo from the start, backed by rich resources, connections, and market recognition that provide strong support for the project's development. In mid-2024, Sentient completed an $85 million seed round led by Founders Fund, Pantera, and Framework Ventures, with participation from dozens of well-known VCs including Delphi, Hashkey, and Spartan.
![Biteye and PANews Jointly Release AI Layer1 Research Report: Searching for On-chain DeAI Fertile Ground](https://img-cdn.gateio.im/webp-social/moments-f4a64f13105f67371db1a93a52948756.webp)
Design Architecture and Application Layer
Infrastructure Layer
Core Architecture
The core architecture of Sentient consists of two parts: the AI Pipeline and the on-chain system.
The AI pipeline is the foundation for developing and training "Loyal AI" artifacts, which includes two core processes:
The blockchain system provides transparency and decentralized control for the protocol, ensuring the ownership of AI artifacts, usage tracking, revenue distribution, and fair governance. The specific architecture is divided into four layers:
OML Model Framework
The OML framework is built on three core concepts proposed by Sentient: Open, Monetizable, and Loyal. It aims to provide clear ownership protection and economic incentive mechanisms for open-source AI models. By combining on-chain technology with AI-native cryptography, it has the following characteristics:
AI-native Cryptography
AI-native cryptography leverages the continuity, low-dimensional manifold structure, and differentiable properties of AI models to develop a "verifiable but non-removable" lightweight security mechanism. This approach enables "behavior-based authorized calls + ownership verification" without the cost of re-encryption.
Model Rights Confirmation and Secure Execution Framework
Sentient currently adopts Melange mixed security, combining fingerprint-based rights confirmation, TEE execution, and on-chain contract profit sharing. The fingerprint method is implemented on OML 1.0 and embodies the "Optimistic Security" concept: compliance is assumed by default, and violations can be detected and punished after the fact.
The fingerprint mechanism is a key implementation of OML. It generates unique signatures during the training phase by embedding specific "question-answer" pairs. Through these signatures, model owners can verify ownership and prevent unauthorized duplication and commercialization. This mechanism not only protects the rights of model developers but also provides a traceable on-chain record of model usage.
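The fingerprint idea can be sketched as follows. The question-answer pairs and the match threshold below are hypothetical placeholders: in OML the fingerprints are embedded into the model during training, and this sketch shows only the verification step, where an owner probes a suspect model with secret queries.

```python
# Illustrative fingerprint verification in the spirit of OML 1.0's
# question-answer fingerprints. Pairs and threshold are hypothetical.
SECRET_FINGERPRINTS = {
    "fp-query-7f3a": "fp-response-c91d",
    "fp-query-22b8": "fp-response-04e6",
}

def verify_ownership(model_fn, threshold: float = 0.8) -> bool:
    """Query the suspect model with secret prompts; a high match rate
    indicates it derives from the fingerprinted original."""
    matches = sum(1 for q, a in SECRET_FINGERPRINTS.items()
                  if model_fn(q) == a)
    return matches / len(SECRET_FINGERPRINTS) >= threshold
```

Because the pairs are secret and statistically unlikely to arise by chance, a high match rate is strong evidence of unauthorized duplication, which is what makes the after-the-fact "optimistic" enforcement model workable.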
In addition, Sentient has launched the Enclaves TEE computing framework, using trusted execution environments (such as AWS Nitro Enclaves) to ensure that a model only responds to authorized requests, preventing unauthorized access and use. Although TEEs rely on hardware and carry certain security risks, their high performance and real-time responsiveness make them a core technology for current model deployment.
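The "only responds to authorized requests" property can be illustrated with a simple keyed-authorization gate. This is a sketch, not the Enclaves framework's actual interface: real TEEs establish trust via remote attestation and hardware-sealed keys rather than the hard-coded secret used here.

```python
# Sketch of gating model access behind request authorization, as a
# TEE-hosted service might. The key and scheme are illustrative only.
import hashlib
import hmac

ENCLAVE_KEY = b"sealed-inside-the-enclave"  # hypothetical sealed key

def sign_request(key: bytes, request: bytes) -> str:
    """Authorized caller: sign the request with the shared key."""
    return hmac.new(key, request, hashlib.sha256).hexdigest()

def handle_request(request: bytes, signature: str, model_fn):
    """Enclave side: serve the model only if the signature checks out."""
    expected = sign_request(ENCLAVE_KEY, request)
    if not hmac.compare_digest(expected, signature):
        return None  # reject unauthorized calls
    return model_fn(request)
```

The constant-time comparison (`hmac.compare_digest`) matters even in a sketch: naive string comparison leaks timing information an attacker could exploit.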