Revolution or Scam? An In-Depth Analysis of DeSci's Current State
Original Article Title: DeSci: A Revolution? Or Just a Dream?
Original Article Author: 100y eth, Crypto Writer
Original Article Translation: zhouzhou, BlockBeats
Editor's Note: This article explores the business models of the DeSci field and the shortcomings of the scientific peer review system. The author points out that the current review process is inefficient, opaque, and lacks effective incentives, which undermines the fairness of academic work. Furthermore, the prevalent "publish or perish" culture in academia pushes researchers to chase trending topics while overlooking the value of failed experiments. The rise of DeSci not only offers possible solutions to these issues but also brings the idea of decentralization deeper into scientific research.
The following is the original content (slightly reorganized for readability):
The problems in academia are evident, but DeSci is not a cure-all. I recently completed a Ph.D. in Chemical Engineering, and during my studies I published four first-author papers, including in Nature family journals and the Journal of the American Chemical Society (JACS).
My academic experience is limited to the graduate level and I have never worked as an independent researcher, so my perspective may be incomplete. Even so, over a nearly six-year academic career I have felt many of the structural problems of the academic system firsthand.
In this context, DeSci (Decentralized Science) attempts to use blockchain technology to challenge the centralized structure of the scientific field, a concept that is undoubtedly fascinating. Recently, the topic of DeSci has swept through the crypto market, with many believing it can completely transform the research model in science.
I also hope to see such a transformation. However, I believe the possibility of DeSci completely overturning the traditional academic world is not high. From my perspective, the more realistic scenario is that DeSci serves as a complementary means in some aspects, alleviating specific issues in the traditional academic system.
Therefore, with the recent boom of DeSci, I would like to take this opportunity to, based on my limited academic experience, explore the structural issues in the traditional academic system, assess whether blockchain technology can truly address these issues, and analyze the impact DeSci may have on academia.
1. The Sudden Boom of DeSci
1.1 DeSci: From a Niche Concept to a Growing Movement
The longstanding structural issues in academia have been widely discussed for years, for example in Vox's "The 7 biggest problems facing science, according to 270 scientists" and "The war to free science." Over the years, people have kept trying to solve these problems, some of which will be mentioned later in this article.
As a concept, DeSci seeks to address these challenges by introducing blockchain technology into scientific research, but it did not start gaining attention until around 2020. At that time, Coinbase CEO Brian Armstrong introduced the concept of DeSci to the crypto community through ResearchHub, hoping to realign the incentive mechanism of scientific research through ResearchCoin (RSC).
However, due to the speculative nature of the crypto market, DeSci failed to attract widespread user participation, with only a small portion of the community supporting this vision for a long time—until the emergence of pump.science.
1.2 The Butterfly Effect of pump.science

pump.science is a DeSci project in the Solana ecosystem, built by the well-known DeSci platform Molecule. It is both a research funding platform and a venue that uses Wormbot devices to live-stream long-term experiments. Users can propose compounds they believe may extend lifespan or purchase the tokens tied to those proposals.
Once the token market cap surpasses a specific threshold, the project team uses Wormbot devices to conduct experiments to validate whether the compound can indeed extend the lifespan of the experimental subjects. If the experiment is successful, token holders receive equity in that compound.
However, some community members have criticized this model, arguing that these experiments lack sufficient scientific rigor and may not truly foster life-extending drugs. Gwart's satirical comment represents a skeptical view of DeSci, questioning the arguments put forth by its supporters.

pump.science adopts a bonding curve mechanism similar to Molecule's, in which the token price rises as more tokens are purchased.
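As a rough sketch of how such a curve behaves (the linear shape and every parameter below are assumptions for illustration, not pump.science's or Molecule's actual formula), a minimal version in Python looks like this:

```python
# Minimal bonding-curve sketch (illustrative only; the linear shape and the
# parameters are assumptions, not any project's real curve).

def spot_price(supply_sold: float, base_price: float = 0.001, slope: float = 1e-9) -> float:
    """Price of the next token when `supply_sold` tokens are already in circulation."""
    return base_price + slope * supply_sold

def buy_cost(supply_sold: float, amount: float, base_price: float = 0.001, slope: float = 1e-9) -> float:
    """Cost of buying `amount` tokens, i.e. the area under the linear price curve."""
    return base_price * amount + slope * (amount * supply_sold + amount**2 / 2)

if __name__ == "__main__":
    # An early buyer and a late buyer purchase the same 1M tokens.
    early = buy_cost(supply_sold=0, amount=1_000_000)
    late = buy_cost(supply_sold=50_000_000, amount=1_000_000)
    print(f"early buyer pays {early:.2f}, late buyer pays {late:.2f}")
```

The point of the mechanism is visible in the output: the same purchase costs far more once a large supply has already been sold, which is what rewards early buyers and fuels speculative momentum.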
The launch of tokens such as RIF (representing rifampicin) and URO (representing urolithin A) coincided with the meme token frenzy in the crypto market, driving their prices higher. This unexpected rally brought DeSci into the spotlight.
Ironically, what fueled the DeSci craze was not its scientific essence, but the speculative price surge of its tokens.

In the rapidly changing crypto market, DeSci had long been a niche area. In November 2024, however, it became one of the hottest narratives. Not only did the tokens launched on pump.science skyrocket, but Binance also announced an investment in Bio Protocol, a DeSci funding protocol. Existing DeSci tokens saw significant price surges as well, marking a pivotal moment for the field.
2. Flaws in Traditional Science
It is no exaggeration to say that the academic world is facing numerous systemic and severe issues. Over the years in academia, I have often questioned: how does such a flawed system manage to sustain itself? Before delving into the potential of DeSci, let's first take a look at the shortcomings of the traditional academic system.
2.1.1 Evolution of Research Funding
Prior to the 19th century, scientists obtained research funding and made a living in a vastly different manner than today:
Sponsorship: European monarchs and nobles would sponsor researchers to enhance their own prestige and drive scientific progress. For example, Galileo received sponsorship from the Medici family, enabling him to continue developing the telescope and conducting astronomical research. Religious institutions also played a significant role in scientific development; during the Middle Ages, the church and clergy sponsored research in astronomy, mathematics, and medicine.
Self-funding: Many scientists relied on income from other professions to support their research. They may have been university professors, teachers, writers, or engineers, using these roles to financially back their research.
By the end of the 19th century and into the early 20th century, governments and corporations began establishing centralized research funding systems. During World Wars I and II, various governments set up research institutions and heavily invested in defense research to gain a competitive edge in warfare.
- In the United States, organizations such as the National Advisory Committee for Aeronautics (NACA) and the National Research Council (NRC) were established during World War I.
- In Germany, the Emergency Association of German Science (Notgemeinschaft der Deutschen Wissenschaft), founded in 1920, was the predecessor to today's German Research Foundation (DFG).
- Concurrently, corporate research institutions like Bell Labs and GE Research Lab were established, signaling the active participation of enterprises in research funding.
This government-corporate-led research funding model gradually became mainstream and continues to this day. Governments and corporations worldwide contribute substantial budgets to support global research. For instance, in just 2023, the U.S. federal government's R&D expenditure amounted to a staggering $190 billion, a 13% increase from 2022.

In the United States, federal R&D funding is allocated and distributed through multiple agencies. For example:
· National Institutes of Health (NIH): the largest funder of biomedical research;
· Department of Defense (DoD): focuses on research in the defense sector;
· National Science Foundation (NSF): supports scientific and engineering research in various disciplines;
· Department of Energy (DOE): responsible for research in renewable energy and nuclear physics;
· NASA: funds space and aeronautics research.
2.1.2 Centralized Funding System Distorts Scientific Research
Today, university professors can hardly conduct research without external support, so they are forced to rely on government or corporate funding, and this centralized funding model has brought many problems to the academic community.
First is the inefficiency of the funding acquisition process. Although the specific processes vary among countries and institutions, there is generally a widespread issue of being lengthy, opaque, and inefficient.
Research teams need to submit large numbers of application documents and reports and undergo strict reviews by the government or corporations. For renowned labs, a grant may reach several million or even tens of millions of dollars, allowing them to avoid frequent applications for a longer period. However, this is not the case for most.
For most labs, a single grant is usually only tens of thousands of dollars, meaning they need to repeatedly apply, fill out large amounts of documents, and undergo continuous reviews. Many graduate students and scholars have to spend a significant amount of time on funding applications and corporate projects rather than focusing on research.
What's worse is that many projects funded by corporations have little relevance to graduate student research topics, further highlighting the inefficiency and irrationality of this system.

Investing a significant amount of time in funding applications may have a payoff, but obtaining funding is not easy. According to data from the National Science Foundation (NSF), the funding approval rates for 2023 and 2024 were 29% and 26%, respectively, with a median annual funding amount of only $150,000. Similarly, the success rate for funding from the National Institutes of Health (NIH) is usually between 15% and 30%.
Single-source funding often cannot meet research needs, and many scholars have to apply multiple times to sustain their research.
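As a back-of-the-envelope illustration (assuming, purely for the arithmetic, that each submission succeeds independently at the published rates):

```python
# Rough arithmetic: if each application succeeds independently with probability p,
# the expected number of applications until one is funded is 1/p (geometric distribution).
for p in (0.29, 0.26, 0.15):
    print(f"success rate {p:.0%} -> about {1 / p:.1f} applications on average")
```

At a 26% success rate, that is roughly four full application packages per funded grant, each with its own paperwork and review cycle.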
The challenge of funding applications lies not only in low success rates but also in the crucial role of networking. Professors often collaborate with peers in their applications rather than applying independently to increase the chances of approval. Additionally, to secure corporate funding, professors frequently engage in private lobbying efforts with the funders.
This reliance on networks, coupled with the opacity of the fund allocation process, makes it harder for early-career researchers to enter this system.
Another significant issue is the lack of incentives for long-term research. Funding exceeding 5 years is extremely rare. According to NSF data, the majority of grants have durations of 1 to 5 years, similar to the funding patterns of other government agencies. Corporate R&D project funding typically lasts only 1 to 3 years.
Political factors also profoundly influence government research funding. For example, during the Trump administration, there was a significant increase in R&D investment in the defense sector, whereas under Democratic administrations, funding tended to favor environmental research. Due to the fluctuation of government funding with policy changes, long-term research projects struggle to receive stable support.
Corporate funding faces similar challenges. In 2022, the average tenure of a CEO in the S&P 500 companies was 4.8 years, with other executives seeing similar tenure lengths. They often need to rapidly adjust their strategies based on industry and technological shifts, resulting in few long-term ongoing research projects funded by corporations.
Under the pressure of a centralized funding system, researchers are forced to select projects that can deliver visible results in a short period to ensure continued funding support. This has led to a short-term orientation in academia, with only a few institutions or teams willing to undertake research projects lasting over 5 years.
Furthermore, researchers tend to focus more on incremental improvements to increase their publication output rather than pursuing truly innovative breakthroughs.
Scientific research can be categorized as incremental or groundbreaking, with the former being small improvements on existing work and the latter breaking entirely new ground.
Due to the constraints of the funding model, incremental research is often easier to fund, while disruptive innovation struggles to survive.
The high degree of specialization in modern science indeed makes major breakthroughs more challenging, but the centralized funding model exacerbates this issue, further stifling the potential for disruptive innovation.

Some researchers may even manipulate data or exaggerate research findings. The current funding mechanism demands results within a limited time frame, which encourages academic misconduct. As a graduate student, I often heard about cases of data manipulation by students in other labs, and Nature has pointed out that the number of retracted papers in conferences and journals is rising sharply.
2.1.3 Do Not Misunderstand: Centralized Funding Is Inevitable
It needs to be clarified that centralized funding itself is not the issue. While this model has brought about many negative effects, it is still crucial to modern science. Compared to the past, today's scientific research is more complex and expensive. A graduate student's project may easily require thousands to hundreds of thousands of dollars, while research in fields such as defense, aerospace, and basic physics requires an exponential increase in resources.
Therefore, centralized funding is indispensable, but the issues within it must also be addressed.
2.2.1 Journal Industry Overview
In the crypto industry, entities such as Tether, Circle (issuer of stablecoins), Binance, and Coinbase (centralized exchanges) dominate the market. In academia, journals are the most powerful entities, including giants like Elsevier, Springer Nature, Wiley, the American Chemical Society (ACS), IEEE, and others.
· Elsevier had revenue of roughly $3.67 billion in 2022, with profit of about $2.55 billion and a profit margin of nearly 70%.
· In comparison, Nvidia's profit margin in 2024 was around 55%-57%.
· In just the first nine months of 2024, Springer Nature had already recorded revenue of about $1.44 billion, highlighting the immense scale of the academic publishing industry.
The core profit models of academic journals include:
· Subscription Fees: Access to journal articles usually requires a subscription or a one-time fee to access specific articles.
· Article Processing Charges (APC): Authors can pay fees to make their papers open-access; otherwise, most papers are placed behind paywalls.
· Copyright and Reprint Fees: In most cases, once a paper is published, the copyright belongs to the publisher, and journals monetize through educational or commercial licensing.
2.2.2 Journals: The Epicenter of Misaligned Incentives in Academia
So, why are journals considered the "top predators" of the academic world? Isn't their business model just the same as the general publishing industry's logic? The answer is no.
The journals' business model is heavily skewed toward the publisher rather than the author or the reader.
In traditional publishing or on online platforms, authors usually share revenue with the platform and the content reaches as wide an audience as possible. The core operation of academic journals, by contrast, is arranged entirely around the publisher's interests:
· Scholars provide research results for free, but journals profit through subscription fees, page charges, copyright transfer, etc.
· Even if authors pay a high APC, journals still control the distribution channels, influencing the reach of the paper.
· Readers (including universities, research institutions) often need to pay high subscription fees to access cutting-edge research results.
Journals play a key role in academic communication, but their profit model entirely favors the publisher, rather than the author or the academic community itself.

To read papers from a specific journal, readers must pay a subscription fee or purchase single articles. However, if researchers want their papers to be open access, they need to pay high processing fees to the journal and receive no profit-sharing.
It doesn't stop there—researchers not only have no right to share the revenue brought by the journal, but in most cases, the copyright of their papers is directly transferred to the journal. This means that journals can not only price freely but also repeatedly monetize this research content for business purposes.
This system is highly exploitative and entirely unfair to researchers.
In terms of the profit model, journals not only monopolize the channels for disseminating academic achievements but also squeeze researchers through high fees. The scale and depth of exploitation in their business model are astonishing.
For example, Nature Communications, one of the best-known open-access journals in the natural sciences, charges authors an article processing charge (APC) of $6,790 per published paper. This fee alone is a significant portion of many researchers' annual budgets, yet the journal still shares nothing with the researchers.
The essence of academic journals has long evolved from promoting knowledge dissemination to systematic exploitation of researchers.

The subscription fees for academic journals are equally staggering. While subscription prices vary by journal discipline and type, the average annual institution subscription fee for journals under the American Chemical Society (ACS) can be as high as $4,908 per journal. If an institution subscribes to all ACS journals, the cost would skyrocket to $170,000.
Springer Nature's subscription fees are even higher, with an average annual fee of around $10,000 per journal, and the full journal package subscription fee reaching $630,000.
Since most research institutions subscribe to multiple journals, this means that researchers and institutions face a huge subscription expense.
A more serious issue is that researchers are almost compelled to publish papers in these journals to establish their academic credentials. The financial flow of the journal industry mainly relies on government and corporate research funding, creating a highly exploitative cycle:
· Researchers must continually publish papers to accumulate academic achievements, apply for research funding, and advance their careers.
· The funding for research mainly comes from government or corporate research grants.
· The article processing charges (APC) for publication are also paid by this funding.
· Institutions, to allow researchers to read these papers, also need to pay high subscription fees, which also come from government or corporate funding.
Since these costs are mostly borne by research funding rather than individual researchers' contributions, researchers are less sensitive to these high expenses, giving journals unlimited room to increase prices. Ultimately, academic journals have formed a distorted profit model: charging authors high publication fees, charging readers and institutions exorbitant subscription fees, and monopolizing the copyright of papers. This system not only greatly exploits researchers but also hinders the free dissemination of knowledge, turning academic research into a thoroughly commercial business.
2.2.3 Inefficiency and Lack of Transparency in the Journal Peer Review Process
The issues with journals lie not only in their revenue structure but also in the inefficiency and opacity of the publishing process. Over my six-year academic career and four published papers, I repeatedly ran into an inefficient submission process, a lack of transparency, and a peer review system with a sizable element of luck.
The standard peer review process for journals typically includes the following steps:
1. Researchers compile their research findings into a manuscript and submit it to the target journal.
2. The journal editor assesses whether the manuscript fits the journal's scope and basic standards. If it does, the editor will assign two to three peer reviewers to conduct a review.
3. Peer reviewers evaluate the manuscript, provide feedback, raise issues and questions, and then make one of four recommendations:
· Accept: No modifications needed, accept as is.
· Minor Revision: Accept with minor changes required.
· Major Revision: Accept but significant changes needed.
· Reject: Manuscript is not accepted.
4. The researcher revises the paper based on the review comments, and the editor makes the final decision.
While this process may seem straightforward, it is rife with inefficiencies, inconsistencies, and overreliance on subjective judgment, which can undermine the overall quality and fairness of the system.
The inefficiency of the review process is a significant issue. For example, in the natural sciences and engineering, the time required to submit a paper and go through the review process is roughly as follows:
· Time to receive editorial rejection: 1 week to 2 months.
· Time to receive peer review comments: 3 weeks to 4 months.
· Time for final decision: 3 months to 1 year.
When delays occur in the review process due to journal or reviewer circumstances, or multiple review rounds are needed, publishing a paper can take over a year. For example, in a paper I submitted, the editor sent the manuscript to three reviewers, but one did not respond, necessitating the search for another reviewer, extending the review period by four months.
Furthermore, if the paper is rejected after this lengthy period, the entire process must start over with submission to another journal, which means more waiting and roughly double the time lost. This slow publication process is highly disadvantageous for researchers, because in the meantime similar work from other teams may already have been published. I have seen this happen many times; since novelty is a key criterion for a paper, such delays can have serious consequences.
Another issue is the shortage of peer reviewers. As mentioned earlier, submitted papers are typically evaluated by two to three peer reviewers. Whether a paper is accepted often depends on the opinions of these reviewers. While reviewers are usually experts in the relevant field and consensus on the paper's quality is usually reached through them, the review results still carry an element of luck.
I have experienced this myself: I once submitted a paper to a prestigious journal A, received two "major revision" comments and one "minor revision" comment, and was ultimately rejected. I then submitted the same paper to a less prominent journal B, which also rejected it, this time with one "reject" and one "major revision" in the review comments. Interestingly, despite journal B's lower prominence, the outcome was even worse.
This highlights a key issue: the evaluation of a paper relies on a small number of experts, and the selection of reviewers is entirely determined by the journal editor. This means that whether a paper is accepted actually carries a certain element of luck. For example, if three reviewers are lenient, the paper may be accepted; but if assigned to three stricter reviewers, the paper may be rejected.
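To make this "luck of the draw" concrete, here is a small simulation; the acceptance behavior and the mix of lenient versus strict reviewers are invented for illustration, not drawn from any journal's data:

```python
import random

def review_outcome(n_reviewers: int = 3, share_lenient: float = 0.5) -> bool:
    """A paper is 'accepted' here only if no randomly drawn reviewer rejects it.
    Lenient reviewers reject 20% of the time, strict ones 60% (invented numbers)."""
    for _ in range(n_reviewers):
        reject_prob = 0.2 if random.random() < share_lenient else 0.6
        if random.random() < reject_prob:
            return False
    return True

if __name__ == "__main__":
    random.seed(0)
    trials = 100_000
    accepted = sum(review_outcome() for _ in range(trials))
    print(f"Same paper, different reviewer draws: accepted in {accepted / trials:.0%} of trials")
```

Even with the paper's quality held fixed, the outcome swings purely on which three reviewers the editor happens to pick.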
The problems with journal peer review go beyond inefficiency and opacity: the number of reviewers is small, reviewers lack incentives, and the process is prone to bias.
Firstly, significantly increasing the number of reviewers to obtain a fairer evaluation is impractical. For a journal, more reviewers means more communication overhead; so while a larger panel might produce a more just review, from the journal's perspective it is simply not cost-effective.
Secondly, the lack of incentives in the peer review process results in highly variable review quality. Some reviewers fully grasp the paper and provide insightful comments and questions. Others, however, do not read the paper carefully, ask for information already included in it, or give irrelevant criticisms, leading to major revisions or outright rejection. Unfortunately, this is quite common and can leave researchers feeling betrayed, as if their efforts were in vain.
The third issue is the opacity of the peer review process. Peer review is usually conducted anonymously to ensure fairness, with journal editors responsible for selecting reviewers. However, reviewers can sometimes identify the authors of the paper under review. This opens the door to bias: reviewers may favor papers from friends or collaborators, or deliberately give harsh reviews to papers from competing teams. This happens more often than one might imagine and can sometimes decisively affect a paper's outcome.
2.2.4 The Illusion of Impact Factor
Lastly, I want to discuss the issue of the evaluation of citation counts and impact factors.
How do we evaluate a researcher's career and academic level? Each researcher has their own unique strengths: some excel in experimental design, some are adept at identifying research directions, and others can delve deep into overlooked details. However, qualitatively assessing each researcher's strengths is nearly impossible. Therefore, the academic community relies on quantitative metrics, namely using a single number to evaluate researchers, especially citations and the H-index.
Researchers with a high H-index and paper citation count are often considered more accomplished. The H-index is an indicator used to measure a researcher's productivity and impact. For example, an H-index of 10 means that the researcher has at least 10 papers, each of which has been cited at least 10 times. Ultimately, citation count remains the most important metric of assessment.
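For concreteness, the h-index can be computed from a citation list in a few lines; this is just the standard definition, not tied to any particular database:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that the researcher has h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: 12 papers; the 10 most-cited ones each have >= 10 citations, so h = 10.
print(h_index([120, 54, 33, 28, 21, 17, 15, 12, 11, 10, 4, 1]))  # -> 10
```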
So, how can researchers increase their citation count? While producing high-quality papers is the fundamental solution, choosing the right research topics is equally crucial. The more popular the research field and the more researchers there are, the more likely a paper's citation count will increase.
2.2.5 Publish or Perish
Success stems from failure. Progress in any field requires failure as a stepping stone. However, in the modern scientific world, almost all papers only report successful results, while the countless failures leading to these successes are ignored and abandoned. In the fiercely competitive academic world, researchers have little incentive to report failed experiments because these failures do not benefit their careers and are often seen as a waste of time.
This also reflects the "publish or perish" phenomenon in academia. To gain academic recognition and continue funding, researchers often need to publish a large number of papers. However, successful papers tend to only present results while overlooking the failures in the research, preventing many research processes from being fully presented. As a result, the academic field exhibits low tolerance for failure, instead treating outcomes as the sole measure of a researcher's value, leading to the homogenization of academic evaluation.
This is also closely related to the relationship between impact factor and citation count. The academic world's evaluation of papers often overly relies on citation count and impact factor, overlooking the failures or challenges that may have been encountered during the research process. Yet, these undisclosed failures are actually an indispensable part of academic progress.
2.3 Systemic Challenges
In the field of computer software, open-source projects have transformed software development by publicly sharing code and encouraging global contributions, fostering collaboration among developers, thus allowing software to be improved. However, the trajectory of scientific development has moved in the opposite direction.

In the early days of science, such as the 17th century, scientists prioritized sharing knowledge, advocated for natural philosophy, and displayed an open and collaborative attitude, distancing themselves from rigid authority. For example, despite their competitive relationship, Isaac Newton and Robert Hooke still shared and critiqued each other's work through letters, collectively advancing knowledge.
In contrast, modern science has become more closed off. Researchers, driven by competition to secure funding and publish in high-impact journals, often keep unpublished research confidential and strongly discourage external sharing. As a result, research labs within the same field naturally see each other as competitors, with limited avenues to learn about each other's work.
Since much research builds incrementally on previously published work, competing labs are likely to be working on similar research topics. Without sharing the research process, multiple labs end up conducting parallel studies on the same subject. This creates a high-stakes, low-efficiency environment where the lab that publishes results first reaps all the credit. Researchers often find that as they near completion, similar research has already been published, rendering much of their efforts futile.
In the worst cases, even within the same lab, students may withhold experimental materials or research findings from each other, fostering internal competition rather than collaboration. As open-source culture has become foundational in computer science, the modern scientific community must embrace a more open and collaborative culture to benefit a broader audience.
3. How to Fix Traditional Science?
Researchers are acutely aware of these issues, but the issues are rooted in structural problems that individuals can hardly solve on their own. Nevertheless, over the years there have been many attempts to tackle them.
3.1.1 Fixing Centralized Funding
Fast Grants: During the COVID-19 pandemic, Stripe CEO Patrick Collison, seeing the inefficiency of traditional grant processes, launched the Fast Grants initiative, which raised $50 million to support hundreds of projects. Funding decisions were made within 14 days, with grants ranging from $10,000 to $500,000, a significant sum for researchers.
Renaissance Philanthropy: Founded by Tom Kalil, a former technology policy advisor to Presidents Clinton and Obama. This nonprofit consultancy organization connects donors with high-impact scientific and technological projects and, with the support of Eric and Wendy Schmidt, operates similarly to the patronage system that once thrived among European scientists.
HHMI: The Howard Hughes Medical Institute employs a unique funding model that supports individual researchers rather than specific projects. By providing long-term funding, it alleviates the pressure for short-term results, allowing researchers to focus on sustained research work.
experiment.com: This is an online crowdfunding platform that allows researchers to introduce their work to the public and raise the necessary funds from individual donors.
3.1.2 Fixing Academic Journals
PLOS ONE: PLOS ONE is an open-access scientific journal that anyone can freely read, download, and share articles from. It evaluates papers based on scientific validity rather than impact and is known for publishing negative, inconclusive, or invalid results. Its streamlined publishing process helps researchers quickly disseminate research findings. However, PLOS ONE charges researchers article processing fees ranging from $1000 to $5000.
arXiv, bioRxiv, medRxiv, PsyArXiv, SocArXiv: These are preprint servers that allow researchers to share their paper drafts before formal publication. They enable the rapid dissemination of research findings, claim precedence on specific topics, provide community feedback and collaboration opportunities, and offer readers free access to papers.
Sci-Hub: Founded by Kazakhstani programmer Alexandra Elbakyan, Sci-Hub provides free access to paywalled papers. Despite being illegal in most jurisdictions and facing lawsuits from publishers like Elsevier, it has been praised for advancing free access to academic content while also criticized for legal violations.
3.1.3 Fixing Collaboration
ResearchGate: This is a professional social platform where researchers can share papers, ask and answer questions, and find collaborators.
CERN: CERN is a non-profit organization involved in particle physics research, conducting large-scale experiments that are challenging for individual labs to undertake. It brings together researchers from multiple countries, funded based on each participating country's GDP contribution.
3.2 DeSci, the New Wave
While the above efforts have made some progress in addressing the challenges modern science faces, they have not created enough impact to fundamentally transform the field. In recent years, with the rise of blockchain technology, a concept called Decentralized Science (DeSci) has garnered attention as a potential solution to these structural issues. So, what exactly is DeSci? Can it truly revolutionize the modern scientific ecosystem?
4. Enter DeSci
4.1 DeSci Overview
DeSci, short for Decentralized Science, refers to making scientific knowledge a public good by improving funding, research, peer review, and research output sharing. It aims to create a more efficient, fair, transparent, and inclusive system. Blockchain technology plays a central role in achieving these goals through the following features:
Transparency: Except for privacy networks, blockchain networks are inherently transparent, allowing anyone to view transactions. This feature enhances the transparency of project funding and peer review processes.
Ownership: Blockchain assets are protected by private keys, making ownership claims easy. This feature enables researchers to monetize their data or assert intellectual property (IP) rights to leverage funded research outcomes.
Incentive Mechanism: The incentive mechanism is core to blockchain networks. To encourage collaboration and active participation, token incentives can be used to reward individuals participating in various research processes.
Smart Contracts: Smart contracts deployed on a neutral network execute operations based on their code definitions. They can be used to establish and automate interaction logic among participants, with transparency.
4.2 Potential Applications of DeSci
As the name suggests, DeSci can be applied to various aspects of scientific research. ResearchHub categorizes the potential applications of DeSci into the following five areas:
Research DAOs: These are decentralized autonomous organizations focusing on specific research topics. Through blockchain technology, they can transparently manage research planning, funding, governance voting, and project management.
Publication: Blockchain can decentralize and transform the publication process entirely. Research papers, data, and code can be permanently recorded on the blockchain, ensuring their credibility, providing free access, and improving processes such as peer review through token incentives.
Funding and Intellectual Property: Researchers can easily obtain global audience funding support through blockchain networks. Additionally, by tokenizing research projects, token holders can participate in project direction decisions or share future intellectual property income.
Data: Blockchain can achieve secure, transparent storage, management, and sharing of research data.
Infrastructure: This includes governance tools, storage solutions, community platforms, and identity systems that can be seamlessly integrated into DeSci projects.
The best way to understand DeSci is to explore projects within its ecosystem and see how they address structural issues in modern science. Next, let's take a closer look at some prominent projects in the DeSci ecosystem.
5. DeSci Ecosystem

5.1 Why the Ethereum Ecosystem is the Ideal Choice for DeSci
Unlike DeFi, gaming, or AI applications, DeSci projects primarily focus on the Ethereum ecosystem. This trend can be attributed to the following reasons:
· Credible Neutrality: Ethereum is the most credibly neutral of the smart contract platforms. Given that DeSci involves significant fund flows (e.g., research funding), values such as decentralization, fairness, censorship resistance, and trust are crucial, which makes Ethereum the natural network for building DeSci projects.
· Network Effects: Ethereum has the largest user base and the deepest liquidity among smart contract networks. DeSci is a relatively niche field, so spreading projects across multiple networks risks fragmenting liquidity and the ecosystem, which would complicate project management. Most DeSci projects therefore build on Ethereum to leverage its strong network effects.
· DeSci Infrastructure: Few DeSci projects are built from scratch. Instead, many projects leverage existing frameworks (such as Molecule) to accelerate development. Since most DeSci infrastructure tools are based on Ethereum, the majority of projects in this field also operate on Ethereum.
Based on these reasons, the DeSci projects discussed in this article mostly belong to the Ethereum ecosystem. Next, we will explore some representative projects in each area of DeSci.
5.2.1 Molecule

Molecule is a fundraising and tokenization platform for biopharma intellectual property. Researchers can raise funds from multiple individuals via blockchain, tokenize project IP, and funders can receive IP Tokens proportionate to their contributions.
Molecule's decentralized fundraising platform, Catalyst, connects researchers and funders. Researchers prepare the necessary documentation and project plans, submit projects on the platform, and funders support these proposals with ETH funding. Once the fundraising is complete, IP-NFTs and IP Tokens are issued, and funders can claim these tokens based on their contributions.
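The pro-rata claim step is simple to picture in code. The sketch below is illustrative only: the function, the fixed token supply, and the funder addresses are assumptions for explanation, not Molecule's actual contract logic.

```python
from decimal import Decimal

def allocate_ip_tokens(contributions_eth: dict[str, Decimal],
                       total_ip_tokens: Decimal) -> dict[str, Decimal]:
    """Split a fixed IP Token supply among funders in proportion to the ETH each contributed."""
    total_raised = sum(contributions_eth.values())
    return {funder: total_ip_tokens * amount / total_raised
            for funder, amount in contributions_eth.items()}

# Example: three funders back a proposal with 10, 30 and 60 ETH.
allocation = allocate_ip_tokens(
    {"0xAlice": Decimal(10), "0xBob": Decimal(30), "0xCarol": Decimal(60)},
    total_ip_tokens=Decimal(1_000_000),
)
print(allocation)  # Alice 100_000, Bob 300_000, Carol 600_000
```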

An IP NFT represents a tokenized version of project intellectual property on the blockchain, combining two legal agreements into one smart contract. The first legal agreement is the research agreement signed between researchers and funders. This agreement includes terms such as research scope, deliverables, timeline, budget, confidentiality, intellectual property and data ownership, publication, results disclosure, licensing, and patent conditions. The second legal agreement is the assignment agreement, transferring the research agreement to the IP NFT holder, ensuring that the rights of the current IP NFT holder can be transferred to a new holder.
IP Tokens represent partial governance rights over intellectual property. Token holders can participate in key research decisions and access exclusive information. While IP Tokens do not guarantee revenue sharing from the research, depending on the intellectual property holder, future commercialization profits may be distributed to IP Token holders.

The price of IP Tokens is determined by the Catalyst Bonding Curve, reflecting the relationship between token supply and price. As more tokens are issued, their price increases. This mechanism incentivizes early contributors by allowing them to acquire tokens at a lower cost.
Here are some cases of successful fundraising through Molecule:
The University of Oslo's Square Lab: The Square Lab researches aging and Alzheimer's disease. Through Molecule's IP-NFT framework, the lab received support from VitaDAO to identify and characterize new drug candidates that activate mitochondrial autophagy (mitophagy), which is promising for Alzheimer's research.
Artan Bio: Artan Bio focuses on tRNA-related research. Through Molecule's IP-NFT framework, it received $91,300 in funding from the VitaDAO community.
5.2.2 Bio.xyz

Bio.xyz is a curation and liquidity protocol for decentralized science (DeSci) that acts like an incubator for BioDAOs. Bio.xyz's goals include:
· Planning, creating, and accelerating new BioDAO-funded scientific projects on-chain.
· Providing ongoing funding and liquidity for BioDAOs and on-chain biotech assets.
· Standardizing BioDAO frameworks, tokenomics, and data/product suites.
· Generating and commercializing scientific intellectual property (IP) and data.
BIO token holders vote to decide which new BioDAOs join the ecosystem. Once a BioDAO is approved to join the BIO ecosystem, the token holders who supported it can participate in its initial private token sale, a process similar to a vetted seed round.
The governance token of the approved BioDAO is paired with the BIO token and added to a liquidity pool, addressing BioDAO's concerns regarding governance token liquidity (e.g., VITA/BIO). Additionally, Bio.xyz also operates a bio/acc rewards program providing BIO token rewards to BioDAOs to help them achieve key milestones.
Furthermore, the BIO token serves as a meta-governance token among multiple BioDAOs, enabling BIO holders to participate in the governance of various BioDAOs. Moreover, the BIO network provides $100,000 in funding for incubated BioDAOs, acquiring 6.9% of the BioDAO's token supply for the liquidity pool. This increases the protocol's assets under management (AUM), thereby enhancing the value of the BIO token.
Bio.xyz leverages Molecule's IP NFT and IP Tokens framework to manage and own IP. For example, VitaDAO successfully issued IP Tokens such as VitaRNA and VITA-FAST within the Bio ecosystem. Below is a list of research DAOs currently incubated through Bio.xyz, which will be discussed in detail:
· Cerebrum DAO: Focused on preventing neurodegenerative diseases.
· PsyDAO: Dedicated to advancing consciousness evolution through safe, accessible psychedelic experiences.
· cryoDAO: Contributing to cryopreservation research projects.
· AthenaDAO: Committed to driving women's health research.
· ValleyDAO: Supporting synthetic biology research.
· HairDAO: Collaborating with others to develop new methods for treating hair loss.
· VitaDAO: Focused on research related to human lifespan.
In summary, Bio.xyz plans BioDAOs and provides a token framework, liquidity services, funding, and incubation support. As the IP of BioDAOs in the ecosystem is successfully commercialized, the value of Bio.xyz's funding pool grows, creating a virtuous cycle.
5.3.1 VitaDAO
When mentioning the most well-known research DAO, VitaDAO is often the first to come to people's minds. Its reputation stems from being an early DeSci project and receiving lead investment from Pfizer Ventures in 2023. VitaDAO funds projects dedicated to longevity and aging research, having supported over 24 projects with funding exceeding $4.2 million. In return, VitaDAO receives IP NFTs or company equity and operates IP NFTs using the Molecule.xyz framework.
VitaDAO provides transparency through blockchain, making its funding pool publicly visible. The pool's value is approximately $44 million, including around $2.3 million in equity and $29 million in tokenized IP assets. VITA token holders decide the direction of the DAO through governance votes and can access various health services.
Noteworthy projects supported by VitaDAO include VitaRNA and VITA-FAST. The IP of both projects has been tokenized and is actively traded, with VitaRNA having a market value of around $13 million and VITA-FAST's market value at $24 million. Both projects hold regular meetings with VitaDAO to update on their progress.
VitaRNA: VitaRNA is an IP Token project led by the biotech company Artan Bio. The project received funding in June 2023, released an IP NFT in January 2024, and fractionalized it into IP Tokens. Its research focuses on suppressing nonsense mutations at arginine codons, particularly the CGA codon, which matters for proteins involved in DNA damage repair, neurodegenerative disease, and tumor suppression.
VITA-FAST: VITA-FAST is an IP Token project from Viktor Korolchuk's lab at Newcastle University. The project focuses on discovering new autophagy inducers. Autophagy is a cellular process, the decline of which is a factor in biological aging. Therefore, the project explores anti-aging and related disease treatment by stimulating autophagy, ultimately aiming to enhance human healthspan.
5.3.2 HairDAO
HairDAO is an open-source research network where patients and researchers collaborate to develop treatments for hair loss. According to data from the Scandinavian Biotech Lab, hair loss affects 85% of men and 50% of women over their lifetime. The treatments currently available on the market include Minoxidil, Finasteride, and Dutasteride; notably, Minoxidil was FDA-approved in 1988 and Finasteride in 1997.
Despite the approval of these treatment methods, their effects are limited, often only able to slow down or temporarily stop hair loss rather than cure it. The development of hair loss treatments has been slow due to:
· Complex Causes: Hair loss is influenced by various factors, including genetics, hormonal changes, and immune reactions, making the development of effective targeted therapies challenging.
· High Development Costs: Drug development requires significant time and investment, but since hair loss is not life-threatening, research funding priorities for it are usually lower.
HairDAO incentivizes patients to share their treatment experiences and data in the application, rewarding them with HAIR governance tokens. HAIR token holders can participate in DAO governance votes, enjoy discounts on HairDAO hair care products, and stake tokens for faster access to confidential research data.
5.3.3 Others
CryoDAO: CryoDAO focuses on cryopreservation research, with a funding pool exceeding $7 million and having funded five projects. CRYO token holders can participate in governance voting and have the opportunity for early or exclusive access to funded research breakthroughs and data.
ValleyDAO: ValleyDAO aims to address climate challenges through funding synthetic biology research. Synthetic biology, which uses organisms to sustainably synthesize nutrients, fuels, and drugs, is a key technology for combating climate change. ValleyDAO has funded multiple projects, including research by Professor Rodrigo Ledesma-Amaro at Imperial College London.
CerebrumDAO: CerebrumDAO focuses on brain health research, particularly Alzheimer's disease prevention. Its Snapshot page showcases proposals from various projects seeking funding. Decisions are made through decentralized voting by DAO members.
5.4.1 ResearchHub

ResearchHub is a leading DeSci publishing platform that aims to be the "GitHub for Science." Founded by Coinbase CEO Brian Armstrong and Patrick Joyce, ResearchHub raised a $5 million Series A in June 2023, led by Open Source Software Capital.
ResearchHub is a tool for open publication and discussion of scientific research, incentivizing researchers through its native RSC token for publishing, peer review, and curation. Its key features include:
Funding
Using RSC tokens, users can create funding requests (bounties) asking other ResearchHub users to perform specific tasks. Bounty types include:
· Peer Review: Request for peer review of a manuscript.
· Answer Questions: Request for answering specific questions.

Under the "Funds" tab, researchers can upload research proposals and receive RSC token funding from users.

The Journal section archives papers from peer-reviewed journals and preprint servers, where users can browse the literature and join discussions. However, many peer-reviewed papers remain behind paywalls, so often only the abstract is accessible.

The Hub section is where users can create and join research groups to facilitate discussions on specific research topics.

RH Journal is ResearchHub's in-house journal with an efficient peer review process completed in 14 days and decisions made within 21 days. Additionally, it provides incentive mechanisms for peer reviewers, addressing the common incentive misalignment issues in traditional peer review systems.

The RSC token is an ERC-20 token designed for the ResearchHub ecosystem, with a total supply of 1 billion tokens. RSC tokens drive user engagement and support ResearchHub in becoming a fully decentralized open platform. Their uses include:
· Governance voting
· Tipping other users
· Bounty programs
· Incentives for peer reviewers
· Rewards for curating research papers
5.4.2 ScieNFT
ScieNFT is a decentralized preprint server where researchers can mint their work as NFTs. Published work can range from simple charts and ideas to datasets, artworks, methods, and even negative results. Preprint data is stored on decentralized storage such as IPFS and Filecoin, while the NFTs themselves are minted on the Avalanche C-Chain.
While using NFTs to identify and track ownership of work is an advantage, a notable drawback is the lack of clear benefits of owning these NFTs. Additionally, the market lacks effective curation.
5.4.3 deScier

deScier is a decentralized scientific journal platform. Like Elsevier or Springer Nature, it hosts multiple journals, but unlike those publishers, authors retain 100% of the copyright to their papers, and peer review is still part of the process. A significant limitation, however, is the small number of papers published in its journals and the slow pace at which new papers appear.
5.5.1 Data Lake
Data Lake software enables researchers to integrate various user recruitment channels, track their effectiveness, manage consent forms, and run prescreening surveys, while giving users control over their data. Researchers can easily share and manage patient consent with third parties. The Data Lake Chain, an L3 network built on Arbitrum Orbit, is used to manage patient consent.
5.5.2 Welshare Health

In traditional medical research, the most significant bottleneck is the delay in recruiting clinical trial participants and a lack of patient participation. Additionally, while patients' medical data is highly valuable, there is also a risk of misuse. Welshare aims to address these challenges through Web3 technology.
Patients can securely manage their data, monetize it to earn income, and receive personalized medical services. Conversely, medical researchers can more easily access diverse datasets, facilitating their research.
Through an application built on the Base Network, users can selectively provide data to earn in-app reward points, which can later be converted into cryptocurrency or fiat currency.
5.5.3 Hippocrat
Hippocrat is a decentralized healthcare data protocol that allows individuals to securely manage their health data using blockchain and Zero Knowledge Proof (ZKP) technology. Its first product, HippoDoc, is a telemedicine application that provides medical consultations, combining medical databases, AI technology, and assistance from healthcare professionals. During this process, patient data is securely stored on the blockchain.
5.6.1 Ceramic
Ceramic is a decentralized event streaming protocol that enables developers to create decentralized databases, distributed computation pipelines, authenticated data streams, and more. These features make it ideal for DeSci projects, helping them leverage Ceramic as a decentralized database:
· Data on the Ceramic network is permissionless, allowing researchers to share and collaborate on data.
· Actions on the Ceramic network such as research papers, citations, and reviews are represented as "Ceramic streams." Each stream can only be modified by the original author's account, ensuring intellectual property traceability.
· Ceramic also provides infrastructure for verifiable claims, enabling DeSci projects to adopt its reputation infrastructure.
5.6.2 bloXberg
bloXberg is a blockchain infrastructure established under the leadership of the Max Planck Digital Library in Germany, with participants including ETH Zurich, Ludwig Maximilian University of Munich, and IT University of Copenhagen, among other renowned research institutions.
bloXberg aims to advance scientific progress through various processes in innovative scientific research, such as research data management, peer review, and intellectual property protection. By leveraging blockchain to decentralize these processes, it enhances the transparency and efficiency of research. Researchers can securely share and collaborate on research data using blockchain.
6. Is DeSci Really a Panacea?
We have explored the structural issues in modern science and how DeSci addresses these problems. But hold on a moment. Can DeSci truly transform the scientific landscape and play a central role as the crypto community claims? I don't think so. However, I do believe that DeSci has the potential to play a supportive role in certain areas.
6.1 What Blockchain Can and Cannot Solve
Blockchain is not magic; it cannot solve all problems. We must clearly delineate what blockchain can and cannot solve.
6.1.1 Funding
DeSci is poised to excel in funding under the following conditions:
· Small-scale funding
· Research with commercial potential
The funding scale in the scientific community varies greatly, ranging from tens of thousands to millions or even billions of dollars. For large projects requiring substantial funding, centralized funding from governments or corporations is inevitable. However, small-scale projects can obtain funding through the DeSci platform.
For researchers conducting small-scale projects, cumbersome paperwork and lengthy funding review processes can be overwhelming. In such cases, the DeSci funding platform provides rapid and efficient funding support, making it highly attractive.
However, to increase the likelihood of a research project receiving funding through the DeSci platform, it must have a reasonable commercialization outlook, such as through patents or technology transfer. This provides a motive for public investment in the project. However, most modern scientific research is not focused on commercialization but rather aims to enhance a nation's or corporation's technological competitiveness.
In conclusion, fields suitable for funding on the DeSci platform include biotechnology, healthcare, and pharmaceuticals. Successful research in these areas has high commercial potential. Furthermore, while ultimate commercialization requires substantial funding, the initial stages of research typically require less funding than other fields, making the DeSci platform a favorable option for capital raising.
I doubt whether DeSci can support long-term research. A few researchers may find altruistic, patient funders willing to back long-term work, but such a culture is unlikely to spread widely through the scientific community. Even if a DeSci platform uses blockchain, there is no inherent reason it can sustain long-term funding. If one insists on tying blockchain to long-term research, one possibility is milestone-based funding through smart contracts.
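The logic of milestone-based release is simple to sketch. The Python below is only an illustration of what such a contract's state machine might do; the milestone names, the approval rule, and the amounts are all assumptions, not any existing DeSci protocol's design.

```python
from dataclasses import dataclass, field

@dataclass
class MilestoneEscrow:
    """Toy escrow: funds are released tranche by tranche as funders approve milestones."""
    milestones: list[tuple[str, float]]          # (description, payout)
    approvals_required: int = 3
    released: float = 0.0
    current: int = 0
    approvals: set[str] = field(default_factory=set)

    def approve(self, funder: str) -> float:
        """Record a funder's approval; release the tranche once enough approvals arrive."""
        if self.current >= len(self.milestones):
            raise ValueError("all milestones already paid out")
        self.approvals.add(funder)
        if len(self.approvals) < self.approvals_required:
            return 0.0
        _, payout = self.milestones[self.current]
        self.current += 1
        self.approvals.clear()
        self.released += payout
        return payout

escrow = MilestoneEscrow([("preliminary data", 20_000.0), ("peer-reviewed preprint", 30_000.0)])
for funder in ("alice", "bob", "carol"):
    escrow.approve(funder)
print(f"released so far: {escrow.released}")  # 20000.0 after three approvals
```

The appeal is that funders commit capital up front while payouts stay conditional on verifiable progress; the unresolved question is who attests, on-chain, that a milestone was genuinely met.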
6.1.2 Journal
Ideally, the area where DeSci is most likely to bring innovation is academic journals. Through smart contracts and token incentives, DeSci has the potential to restructure the journal-driven profit model into a researcher-centric model. However, in reality, this will be a challenge.
For researchers, a key factor in their career is publishing papers. In academia, researchers' capabilities are primarily judged by the journals they publish in, citation counts, and h-index. Human nature instinctively relies on authority, a fact that has remained unchanged from prehistoric times to today. For example, an unknown researcher can become a star overnight by publishing an article in top journals such as Nature, Science, or Cell.
Ideally, researchers would be evaluated qualitatively, but qualitative judgment depends heavily on subjective peer recommendation, so in practice evaluation almost inevitably falls back on quantitative metrics. This is precisely what gives journals their enormous power: even though they monopolize the profits, researchers still have to comply. For DeSci journals to gain influence, they must first build authority, and matching the reputation traditional journals have accumulated over a century through token incentives alone is extremely difficult.
While DeSci may not be able to completely change the journal landscape, it can undoubtedly make contributions in certain areas, such as peer review and negative results.
As mentioned earlier, peer reviewers currently have little incentive, which reduces the quality and efficiency of reviews. Providing token incentives to reviewers can improve review quality and raise journal standards.
Furthermore, token incentives could help build a network of journals dedicated to negative results. Since journal prestige matters far less for negative-results venues, token rewards alone may be enough to motivate researchers to publish such findings there.
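As a rough illustration of the reviewer-incentive idea, here is a minimal Python sketch, assuming a journal escrows a submission fee and splits it among reviewers whose reviews are accepted, with a small bonus for fast turnaround. The fee amount, the 30-day window, and the weighting scheme are illustrative assumptions, not a description of any existing platform.

```python
def split_review_rewards(submission_fee: float,
                         reviews: list[dict]) -> dict[str, float]:
    """Distribute an escrowed submission fee among completed reviews.

    Each review dict: {"reviewer": str, "days_taken": int, "accepted": bool}.
    Accepted reviews share the fee, weighted by a timeliness bonus.
    """
    accepted = [r for r in reviews if r["accepted"]]
    if not accepted:
        return {}
    # Weight = base 1.0, plus up to 0.5 bonus for finishing within 30 days.
    weights = {r["reviewer"]: 1.0 + max(0.0, 0.5 * (30 - r["days_taken"]) / 30)
               for r in accepted}
    total = sum(weights.values())
    return {reviewer: submission_fee * w / total for reviewer, w in weights.items()}

rewards = split_review_rewards(
    300.0,
    [{"reviewer": "alice", "days_taken": 10, "accepted": True},
     {"reviewer": "bob",   "days_taken": 45, "accepted": True},
     {"reviewer": "carol", "days_taken": 20, "accepted": False}],
)
print(rewards)  # alice earns more than bob; carol's rejected review earns nothing
```

Whatever the exact weighting, the structural change is the same: reviewing stops being unpaid labor and becomes work that is compensated transparently, on-chain, according to rules everyone can inspect.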
6.1.3 Collaboration
In my view, blockchain is unlikely to do much about the intense competition in modern science. The number of researchers today far exceeds that of the past, and every achievement directly affects career progression, so competition is inevitable. Expecting blockchain to solve the scientific community's collaboration problem as a whole is unrealistic.
In small-scale research DAOs, on the other hand, blockchain can effectively promote collaboration. Researchers in a DAO align around a shared vision through token incentives and can timestamp their achievements on the blockchain to gain recognition for them. I hope to see research DAOs grow in number and activity not only in biotechnology but also in other disciplines.
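For the timestamping point, a minimal sketch of the workflow might look like the following: hash the research artifact locally, then anchor only the digest on-chain so priority can later be proven without revealing the content. The `publish_digest` call and the file name are placeholders for whatever transaction and artifact a given DAO would actually use.

```python
import hashlib
import json
import time

def fingerprint(path: str) -> str:
    """SHA-256 digest of a result file (dataset, notebook, manuscript, ...)."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def timestamp_record(path: str, author: str) -> dict:
    """Build the record a research DAO member would anchor on-chain."""
    record = {
        "author": author,
        "sha256": fingerprint(path),
        "unix_time": int(time.time()),
    }
    # publish_digest(json.dumps(record))  # placeholder: submit the digest on-chain
    return record

# Anyone holding the original file can later recompute its digest and compare
# it with the on-chain record to verify who had the result first.
print(json.dumps(timestamp_record("results.csv", "alice"), indent=2))
```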
7. Final Thoughts: DeSci Needs a Bitcoin Moment
Modern science faces many structural challenges, and DeSci provides a compelling narrative to address these issues. While DeSci may not be able to completely transform the entire scientific ecosystem, it can gradually expand through those who find value in it, including researchers and users.
Ultimately, we may see a balance between TradSci and DeSci. Just as Bitcoin was once dismissed as a toy for computer geeks and now draws major traditional financial institutions into the market, I hope DeSci will earn the same long-term recognition and experience its own "Bitcoin moment."
Original Article Link: Link to Original Article
