Recursion Works — January 2026

TEA Commons Memo

Underutilizing Techno-Economic Analysis puts massive amounts of capital and climate impact at risk — and we believe AI and open data can finally fix that.

01

Underutilizing Techno-Economic Analysis Puts Massive Amounts of Capital and Climate Impact at Risk

Climate technology deployment faces a critical commercialization crisis: 40,000+ teams and $35B+ in capital are at risk because fewer than 20% of early-stage teams use economic analysis to guide their R&D decisions.

With climate impacts intensifying, abatement windows shrinking, diminishing public sector support, and an estimated 45% of 2050's abatement potential tied to technologies not yet deployed at scale, it is critical that limited commercialization resources are efficiently allocated to economically viable, high-impact climate research.

The cost of ignoring or deferring commercial viability is massive: we estimate ~40,000 early-stage / proto-companies and ~500,000 applied science researchers working on climate globally, with $35B+ in public R&D and private capital at risk.

Global impact — number of innovators and deployed capital for climate-related efforts
Figure 1: Global impact — number of innovators to enable and deployed capital for climate-related efforts. Team analysis; primarily 2023 data, OECD, NSF.

The barrier isn't willingness, but accessibility: TEA is seen as complex and expensive, creating a mismatch between high need and prohibitive cost that leaves most scientists navigating critical economic gaps alone.

Our surveys indicate fewer than 20% of pre-seed teams use economics to set R&D milestones; instead they rely on their own heuristics, distrust model outputs, or simply have no model at all.

Techno-economic analysis (TEA) provides the core mental model for commercializing a technology. In our experience coaching 50+ teams, when founders build TEA models, they develop an intuitive understanding of their economics: which assumptions drive their cost structure, where uncertainty matters most, and how to update their model as the venture evolves — so it remains a dynamic framework to inform their commercialization roadmap. This deep knowledge builds necessary internal conviction and establishes credibility with investors and partners.

There are, however, significant barriers that limit widespread TEA adoption. TEA is often seen as a complex, high-precision academic exercise that can present an intimidating cold-start problem for scientists — especially when working on breakthrough technologies with highly idiosyncratic TEA development needs. In-depth, personalized TEA support is the most effective way to build TEA capacity, but is not yet provided by most enabling organizations. While 1:1 coaching and TEA consultants can bridge that gap, they rely on expert time and are expensive. Ultimately, the mismatch between low ability to pay and high cost to serve makes critical support inaccessible for the vast majority of early teams.

Techno-economic analysis adoption among pre-seed climate startups
Figure 2: Techno-economic analysis underpins climate deployment, but fewer than 20% of teams have a TEA or regularly integrate it into their commercialization process. Source: internal team analysis.

02

Our Thesis: AI/LLMs and Accessible, Validated Data Will Enable Widespread TEA Adoption

AI and validated data can finally democratize expert-level TEA support: For the first time, LLMs can deliver personalized, high-quality guidance at scale — but only if grounded in accessible, trustworthy industrial data.

AI/LLMs are the enabling technology that will, for the first time, allow idiosyncratic TEA needs to be served at high quality and at scale through software tooling. Built thoughtfully, such tooling can provide personalized guidance that adapts to each team's unique technology and context while dramatically reducing the cost of delivering expert-level support, making it economically viable to serve the long tail of early-stage teams around the world. However, AI alone isn't enough to drive TEA adoption: underlying the challenge of delivering scalable, personalized TEA support is the need for easily accessible, high-quality assumption data, a prerequisite for any analysis.

The real bottleneck is assumption data, not modeling: Teams can draft a TEA in 10–15 hours, but finding and validating good assumptions delays them by 2–3 months on average, with thousands of teams redundantly solving the same problems.

Over the course of 1,500+ TEA coaching hours, we've observed that teams can build a draft TEA in ~10–15 hours, yet finding and validating good assumptions delays them by 2–3 months on average. Our survey data indicates that 60% of teams report that finding data they can trust is the hardest part of developing their TEA.

At an ecosystem level, the problem isn't just that data is hard to find — it's that every team is independently solving the same problem. A startup researching hydrogen production can spend 10–15 hours finding simple capex data for electrolyzers across Google, public reports, and existing models. Halfway around the world, another team researches the exact same question and spends another 10–15 hours. Across thousands of teams and hundreds of data points, this creates massive redundant effort, and teams still too often arrive at inconsistent or low-quality assumptions that undermine trust in their models, rendering them largely ineffective.

The asymmetry is striking: while each team spends hours hunting for and validating a data point, an expert could gut-check the same information in 5–10 minutes, drawing on their own heuristics and knowing which sources are credible, what ranges are reasonable, and where to look. When a small group of experts validates a data point once, it can potentially serve thousands of teams, saving tens of thousands of hours.
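
The asymmetry above can be made concrete with back-of-the-envelope arithmetic. The figures below are illustrative values taken directly from this memo (10–15 hours of search per data point, a 5–10 minute expert gut-check); the number of teams needing any given data point is a hypothetical input.

```python
# Back-of-the-envelope sketch of the redundancy arithmetic described above.
# All constants are illustrative figures from this memo, not measurements.

HOURS_PER_DATA_POINT = 12.5   # midpoint of the 10-15 h search per data point
EXPERT_MINUTES = 7.5          # midpoint of the 5-10 min expert gut-check

def hours_saved_per_shared_point(teams_needing_it: int) -> float:
    """Hours saved when one expert-validated data point replaces
    independent searches by every team that needs it."""
    independent = teams_needing_it * HOURS_PER_DATA_POINT
    shared = EXPERT_MINUTES / 60  # validate once, reuse everywhere
    return independent - shared

# If even 1,000 teams need the same electrolyzer capex figure:
print(f"{hours_saved_per_shared_point(1_000):,.0f} hours saved")  # ~12,500
```

One shared validation of a single widely-needed data point recovers on the order of 12,500 team-hours in this sketch, which is where the "tens of thousands of hours" claim comes from once several data points are shared.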

Survey data: 60% say finding good data is the hardest part of TEA
Figure 3: Even among top teams, a clear majority say finding good data is the most challenging part of TEA. n = 17, pre-seed, Dec. 2025.

The broader uncertainty in core input assumptions diminishes trust in model outputs, delaying integration into internal decision-making and hurting fundraising and pilot-partnership outcomes. We believe these data challenges must be addressed as the first step toward democratizing quality, personalized TEA support.

The following core beliefs give us conviction that building a repository of industrial data for early-stage climate commercialization is necessary, tractable, and can deliver disproportionate impact:

  • Impact is non-linear and front-loaded. A limited number of industries and applications move the needle on climate impact, with roughly 250 reference classes across those industries and ~200 data points per reference class, many of them (est. 30%+) duplicated across classes. We would prioritize the most impactful data first: the 20% of data points that unlock 80% of the value, delivering substantial impact quickly.
  • Rough accuracy unlocks directional decisions. Rough-but-good-enough data aggregated from publicly available sources is sufficient for early-stage teams to make go/no-go decisions and matches their directional, order-of-magnitude precision needs. ±30% accuracy in validation is quick for experts to provide, keeps collection costs low, and helps teams avoid chasing <5% precision in some parameters while massive uncertainty remains in others.
  • This is operational, not research. This effort is not dependent on further AI developments, but is a massive operational effort that can be coordinated across a coalition of partners. The work is tractable today, and needs a dedicated team to execute quickly.
  • Data makes AI trustworthy. Without validated data, LLMs hallucinate plausible-sounding assumptions that lead teams astray. By grounding future tooling in a validated database, we minimize hallucinations, ensure outputs are traceable to credible sources, and build the trust necessary for teams and external parties to rely on AI-supported TEAs.
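
The sizing estimates in the first bullet above imply the effort is bounded. A quick sketch using the memo's own numbers (~250 reference classes, ~200 data points each, 30%+ duplication, an assumed 80/20 prioritization):

```python
# Rough sizing of the proposed database, using this memo's own estimates.
# Illustrative arithmetic only; real counts will differ.

REFERENCE_CLASSES = 250   # climate-relevant reference classes (memo estimate)
POINTS_PER_CLASS = 200    # data points per reference class (memo estimate)
DUPLICATION = 0.30        # share of points shared across classes (memo estimate)
PARETO_SHARE = 0.20       # "20% of data points unlock 80% of the value"

raw_points = REFERENCE_CLASSES * POINTS_PER_CLASS      # 50,000 raw points
unique_points = round(raw_points * (1 - DUPLICATION))  # ~35,000 unique points
priority_points = round(unique_points * PARETO_SHARE)  # ~7,000 to start with

print(raw_points, unique_points, priority_points)
```

Under these assumptions, an initial high-value tranche is on the order of 7,000 data points: a large but clearly finite operational target for a coalition.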

03

We're Building an Open Source Industrial Data Repository and AI-Enabled Tooling for Early-Stage TEAs

The data problem is tractable and front-loaded: roughly 250 reference classes across key climate industries, each with ~200 data points (30%+ duplicated across classes), mean we can prioritize the 20% of data that unlocks 80% of the value.

The industrial database covers all major climate-relevant industries and the major reference classes within each. Data types per reference class include raw material costs, capex, performance data, product specifications, and carbon intensity. Each data point will include metadata, sources, and variability ranges. Data collection and validation scale through a hybrid AI/expert process, operationalized for ease of use through a chat window.

The open-access industrials database as the foundation for trustworthy TEA tooling
Figure 4: The open-access industrials database is the foundation that makes TEA tooling trustworthy and delivers immediate, independent value.
Industrial database reference classes and assumptions structured by process flows
Figure 5: Industrial database made up of reference classes and assumptions, structured by process flows that serve as TEA scaffolds.

AI-enabled TEA tooling acts as a co-pilot — combining LLM intelligence with a calculation engine to guide teams through building TEAs they understand and trust.

The tool is a ChatGPT-style co-pilot that users interact with through a browser-based chat window and that also integrates into Microsoft Excel, guiding users through building a TEA within the context of their existing workflow. We take advantage of the language-based intelligence of large language models to tailor each TEA exercise to the team's needs, wrapped in a TEA calculation engine that helps eliminate LLM-related errors.
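
The division of labor above — language model for guidance, deterministic code for math — can be sketched minimally. The structure, field names, and numbers below are illustrative assumptions, not Recursion Works' actual engine: the LLM would propose and explain assumptions, while all arithmetic runs through plain code so results are reproducible and free of LLM math errors.

```python
# Minimal sketch of a deterministic TEA calculation engine of the kind
# described above. Hypothetical structure and numbers for illustration only.
from dataclasses import dataclass, replace

@dataclass
class Assumptions:
    capex: float            # installed capital cost, $
    opex_per_year: float    # fixed operating cost, $/yr
    output_per_year: float  # product output, units/yr
    discount_rate: float    # fraction, e.g. 0.08
    lifetime_years: int

def levelized_cost(a: Assumptions) -> float:
    """Levelized cost per unit: annualized capex plus opex over output."""
    r, n = a.discount_rate, a.lifetime_years
    crf = r * (1 + r) ** n / ((1 + r) ** n - 1)  # capital recovery factor
    return (a.capex * crf + a.opex_per_year) / a.output_per_year

def swing(a: Assumptions, field: str, pct: float = 0.30) -> tuple:
    """One-at-a-time +/-30% sensitivity on a single assumption: the kind of
    check that shows which inputs actually drive the cost structure."""
    lo = replace(a, **{field: getattr(a, field) * (1 - pct)})
    hi = replace(a, **{field: getattr(a, field) * (1 + pct)})
    return levelized_cost(lo), levelized_cost(hi)

# Example: a hypothetical small industrial project.
base = Assumptions(capex=1_000_000, opex_per_year=50_000,
                   output_per_year=100_000, discount_rate=0.08,
                   lifetime_years=20)
print(f"base: ${levelized_cost(base):.2f}/unit")
print("capex +/-30%:", swing(base, "capex"))
```

Because every number flows through code like this rather than through the model's token predictions, outputs stay consistent run-to-run and each result is traceable back to the named assumptions that produced it.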

Open-access tooling layered on top of the database to support TEA development
Figure 6: Open-access tooling layers on top of the database to support innovators with developing, refining, and integrating their TEA into decision making.

04

Our Unique Vantage Point

1,500+ hours coaching 50+ teams, together with expertise gained at Breakthrough Energy Fellows, ARPA-E, McKinsey, and Seedstars, give us the insight and operational capacity to execute at scale.

Our core business today is supporting entrepreneurial scientists in research labs and early-stage startups from leading institutions via 1:1 TEA coaching, workshops, and classroom lectures. This service is not meaningfully scalable, but it furthers our ultimate mission of supporting every climate-oriented scientist in the world: it gives us unique insight into early-stage TEA development workflows and keeps us close to team needs as we develop tooling and data. It also forces us to repeatedly refine our approach across a wide range of scenarios, team backgrounds, and industries.

We believe this deep user understanding and intimate knowledge of what stumbling blocks scientists face at which junctures of the TEA development process are prerequisites for building effective AI tooling that actually gets adopted and integrated into early-stage commercialization workflows.

We complement this unique experience with deep expertise across the critical capabilities this effort requires: navigating the earliest stages of entrepreneurship in challenging industries and markets (ARPA-E, Seedstars), modeling for high uncertainty in both startups and Fortune 500s (Breakthrough Energy, McKinsey), and developing AI and data products at scale (Cyence).

We have a strong and growing coalition of 50+ partners from academic institutions, government, investors, and ecosystem enablers.

We're building a coalition of partners across multiple engagement tiers to execute this effort at scale. Early partners include:

  • Funders (capital and strategic guidance): Breakthrough Energy Fellows, Homeworld
  • Collaborators (time, data access, and graduate student resources): MIT, Undaunted (Imperial College), Oxford, Fedtech
  • Co-implementors (hands-on building and distribution roles): External Affairs, Labstart

Our active pipeline includes 50+ additional organizations spanning academic institutions, accelerators, and public sector stakeholders.

Sample of potential partners and coalition roles
Figure 8: Sample of potential partners and coalition roles.

05

High Leverage Opportunity for Philanthropic Funding

We're raising $6–8M in philanthropic funding over 30 months to coordinate a coalition-wide effort building comprehensive industrial data coverage across all major climate sectors and AI-enabled TEA tooling.

Recursion Works will lead this initiative, with funding to support partner organizations who bring deep domain expertise, industry networks, and direct access to early-stage teams in their respective verticals.

The $6–8M investment in the public good and coalition model creates outsized impact across multiple dimensions:

  • Capital efficiency: increases the leverage of ~$100B of public R&D and private capital by ensuring funds flow toward economically viable climate solutions.
  • Climate deployment: accelerates commercialization timelines for technologies representing 45% of 2050's abatement potential.
  • Economic development: more teams build on sound economics, driving industrial innovation and attracting follow-on funding that spurs job creation.
  • Validate & advance AI for climate: freely accessible analytical infrastructure benefits the full breadth of climate-relevant technologies, creating a rising-tide effect across the climate innovation ecosystem while proving that validated industrial data unlocks AI capabilities that may attract future commercial investment.

We welcome any feedback or reactions you have to this memo. If there are teams, organizations, or funders that you believe would be interested in our work, please reach out.