How To Win Markets and Influence Policy
A game theoretic approach to navigating science in the 21st century
This is a guest post from Evan Peikon, who publishes Decoding Biology, a Substack about computational biology, biosensor development and analytics, and network biology. He’s a prolific writer, founder, and scientist, and well worth following.
Science today looks very different than it once did. The fundamental engine of innovation – open-ended basic research – now operates in an increasingly complex ecosystem where money, political power, and corporate interests heavily shape what gets studied and why.
While researchers continue to expand the boundaries of human knowledge, their work now exists in a landscape where scientific achievement, market dynamics, and geopolitical influence have become increasingly interconnected. This transformation demands new analytical frameworks to understand these changes and what they mean for science.
By looking at how scientists, academic institutions, companies, and governments make strategic decisions – like players in a complex game – we can begin to disentangle genuine innovation from marketing narratives, separate authentic technological risks from strategic positioning, and ultimately ensure that resources flow to breakthrough discoveries rather than sophisticated financial instruments.
The New Scientific Landscape
Throughout history, science has been about humanity's search for answers – a collective endeavor to understand how our world really works and why. Today, we refer to this open-ended scientific inquiry, or knowledge for knowledge's sake, as basic research. Basic research may not generate flashy news headlines or media coverage, but history is filled with examples of breakthrough discoveries emerging from research projects that would be dismissed in our current market-driven scientific landscape – one where the boundaries between genuine innovation, marketing hype, and influence have become increasingly blurred.
Take Watson and Crick's discovery of DNA's double helix structure, for example. This breakthrough discovery wasn’t fueled by a desire to make money, but rather by an intrinsic drive to understand the basic molecular unit of heredity in living organisms. Similarly, consider Marie Curie's pioneering work on radioactivity, which exemplifies science in its most elemental form. Despite the immense potential of her discoveries, Marie Curie refused to patent her radium isolation process, believing that basic science should remain unfettered by commercial interests.
These major breakthroughs in biology and physics show how science has traditionally been guided by our basic human desire to learn and expand the boundaries of our knowledge. Yet today's scientific landscape has transformed dramatically. While the fundamental engine of innovation remains unchanged – researchers pursuing open-ended questions and applying their findings to real-world problems – the context surrounding this pursuit has profoundly changed. Science isn't just about expanding human knowledge anymore; it's about who shapes the future, who wields influence, and who controls the tremendous power that emerging technologies represent. This phenomenon demands careful examination, particularly as we witness the emergence of potentially transformative breakthroughs in fusion, artificial intelligence, biotechnology, robotics, and innovative computing – collectively termed FABRIC.
Understanding this new scientific landscape requires more than just examining research methods or breakthrough discoveries. We need to analyze the complex web of incentives, power dynamics, and strategic behaviors that influence how science is conducted, communicated, and commercialized. In simpler terms, we need to take a game theoretic approach and look at science like a strategic game where each participant – researchers, universities, companies, and governments – makes moves based on their own goals and what they think others will do.
The Players and Their Motivations
Consider the key players in today's scientific ecosystem. Individual scientists still pursue knowledge and recognition, but they now operate within a complex network of institutional pressures, funding requirements, and commercial opportunities. Universities increasingly emphasize commercialization potential alongside academic merit. Companies, particularly in deep tech, have to balance genuine innovation with market expectations and regulatory concerns. Governments simultaneously act as funders, regulators, and strategic actors pursuing national interests.
Each of these players is involved in what game theorists would recognize as a multi-level game, where single actions often serve many different strategic purposes at once. For example, when a technology company announces a breakthrough, they're not just sharing scientific progress, they're simultaneously signaling value to investors, positioning themselves relative to competitors, influencing regulatory discussions, and shaping public discourse and perception.
The Risk-Value Paradox: A Case Study in Strategic Messaging
A striking paradox has materialized in recent years: The people who are most vocal about the existential risks of emerging technologies are often the same ones racing to develop the very technologies that they claim could threaten humanity. This dynamic exists across the entire spectrum of FABRIC technologies, each with its own unique manifestation of the risk-value paradox.
We see this pattern clearly in the artificial intelligence sector, where apocalyptic warnings about AI’s risks have become paradoxically intertwined with fundraising and regulatory capture strategies. The logic is compelling yet concerning: if a technology poses an existential threat, it must be extraordinarily powerful, and by extension, extraordinarily valuable. Take Sam Altman, CEO of OpenAI, who has famously stated "I think that AI will probably, most likely, sort of lead to the end of the world. But in the meantime, there will be great companies created with serious machine learning."
This seemingly contradictory stance makes more sense when we understand it as a strategic move that serves multiple purposes. AI companies navigate a complex landscape where they have to simultaneously raise money from investors, influence regulators, maintain a competitive advantage against competitors, and shape public perception. Their messaging about technological potential and associated risks serves multiple strategic purposes simultaneously.
When an AI company emphasizes the transformative potential of their technology, they're attracting capital and talent through dramatic claims about future capabilities. Yet by simultaneously highlighting potential risks, they position themselves as necessary partners in governance discussions, effectively securing a seat at the regulatory table. Their technical capabilities and breakthrough announcements become strategic weapons in the race for market dominance and intellectual property rights, while their careful balancing of optimism and concern helps maintain public trust while generating excitement about their innovations.
This creates a troubling feedback loop in which exaggerating risks actually serves companies' commercial interests. Companies developing potentially transformative technologies find themselves incentivized to emphasize both the revolutionary potential and inherent dangers of their work. This dual narrative attracts capital investment while simultaneously positioning these companies as necessary partners in mitigating the very risks they highlight.
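This incentive structure can be made concrete with a toy normal-form game. The strategies and payoff numbers below are illustrative assumptions, not empirical estimates: two rival firms each choose whether to emphasize or downplay the risks of their technology, and payoffs loosely combine attracted capital with influence over regulation.

```python
# A toy normal-form game sketching why "emphasize risk" can be a
# dominant strategy. All payoffs are hypothetical, for illustration only.
STRATEGIES = ["emphasize_risk", "downplay_risk"]

# PAYOFFS[(row, col)] = (row firm's payoff, column firm's payoff)
PAYOFFS = {
    ("emphasize_risk", "emphasize_risk"): (3, 3),
    ("emphasize_risk", "downplay_risk"): (5, 1),
    ("downplay_risk", "emphasize_risk"): (1, 5),
    ("downplay_risk", "downplay_risk"): (2, 2),
}

def best_response(opponent_move: str) -> str:
    """Return the strategy with the highest payoff against a fixed opponent move."""
    return max(STRATEGIES, key=lambda s: PAYOFFS[(s, opponent_move)][0])

for move in STRATEGIES:
    print(f"best response to {move}: {best_response(move)}")
```

Under these (assumed) payoffs, emphasizing risk is the best response to either move by the rival, making it strictly dominant for both players – precisely the feedback loop described above, where dramatic risk narratives pay off regardless of what competitors do.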
The Risk-Value Paradox Across Technologies
This strategic interplay extends beyond AI to other transformative technologies. Take the robotics industry, for example, where discussions center on both the promise of automated manufacturing and healthcare alongside concerns about job displacement and autonomous weapons systems. Companies in this space often present themselves as necessary partners in ensuring responsible automation while simultaneously pursuing increasingly autonomous systems. Biotechnology companies navigate similar waters with genetic engineering, where the line between medical breakthrough and ethical concern often hinges more on presentation than technical reality.
The challenge for observers – whether policymakers, investors, or the public – is to separate genuine existential risks posed by emerging technologies from strategic positioning, which requires understanding not just the technology itself, but the incentive structures and games being played around it. The same concept applies when assessing reports of technological breakthroughs, especially when we, as outside observers, don’t have all of the information required to assess whether certain claims are grounded in fact.
Separating Signal from Noise
So, how can we distinguish puffery from fact, and genuine risk or scientific progress from carefully disguised marketing? The answer involves understanding the strategic value of different types of claims and behaviors.
Consider the controversy surrounding the 2024 AlphaFold3 paper, which, unlike its predecessor, was published without its code. The issue wasn't that DeepMind wanted to protect their intellectual property – companies have every right to maintain trade secrets and protect their competitive advantages. Rather, the controversy stemmed from their choice to announce their breakthrough in a scientific paper while withholding the code and model details necessary for scientific verification. Instead of using a press release or corporate announcement, they chose to leverage the prestige and credibility of publishing in Nature to promote what was effectively a product announcement. This practice effectively turns respected scientific journals into high-impact advertising platforms.
From a game theory perspective, this withholding of verification mechanisms (i.e., code) mirrors the broader phenomenon of "vaporware" in technology, a practice where tech companies announce impressive products that never actually see the light of day. These types of announcements make strategic sense when their immediate value in attracting investment or deterring competitors exceeds the reputational cost of limited reproducibility or non-delivery.
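The announce-or-stay-quiet calculus above can be written down as a simple expected-value inequality. This is a minimal sketch under assumed, hypothetical numbers – the function name and all parameters are illustrative, not drawn from any real case.

```python
# A toy expected-value model of the "vaporware" decision described above.
# All values are hypothetical assumptions for illustration.

def announce_payoff(immediate_value: float,
                    reputational_cost: float,
                    p_nondelivery: float) -> float:
    """Expected payoff of announcing: immediate gains (investment attracted,
    competitors deterred) minus the reputational cost weighted by the
    probability the claim is never delivered or verified."""
    return immediate_value - p_nondelivery * reputational_cost

# Announcing is rational whenever the expected payoff beats staying quiet (0):
print(announce_payoff(10.0, 4.0, 0.5) > 0)  # high upside: incentive to announce
print(announce_payoff(2.0, 8.0, 0.9) > 0)   # reputational risk dominates
```

The point of the sketch is that limited reproducibility lowers no term on the benefit side: as long as the immediate strategic value is large, the inequality favors announcing even when delivery is doubtful.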
This dynamic extends beyond code availability to warnings of existential risk and statements about technological capabilities. When we evaluate claims about potential risks or breakthroughs, we should consider their strategic value. Risk warnings from technology companies, for example, serve multiple strategic functions – they demonstrate corporate responsibility, justify increased funding, and position companies as necessary partners in governance discussions. Understanding these motivations becomes crucial for evaluating the credibility of such warnings.
The parallel between transparency in messaging and traditional scientific verification remains critical. Scientific claims, or claims of risk, made without supporting information undermine the integrity of scientific communication itself. Just as a scientific paper without reproducible methods fails to advance collective knowledge, claims about technological capabilities or risks without verifiable evidence may serve primarily as strategic positioning. This creates a troubling dynamic where the channels of scientific discourse – papers, conferences, and peer review – are increasingly co-opted as vehicles for marketing and influence. Yet in our current landscape, where scientific papers increasingly function as moves in a complex game of market positioning and influence, traditional scientific norms of transparency and reproducibility often conflict with strategic commercial interests. The challenge ahead isn’t eliminating strategic behavior – that's likely impossible – but developing new frameworks that can preserve the integrity of scientific communication while acknowledging the reality of commercial incentives.
Restoring Scientific Integrity
Open science provides one model for realigning incentives with scientific integrity, particularly when combined with traditional intellectual property protections.
George Church's pioneering work exemplifies how academic leadership can drive open science forward. In 2005, he founded the Personal Genome Project, creating a framework for public and shareable biological samples with associated genome, health, and trait data from volunteers supporting open-ended research. Church's approach demonstrates how making innovations available through both open-source initiatives and commercial partnerships can accelerate scientific progress while creating economic opportunities.
This dual-track approach is increasingly adopted by industry leaders as well. Consider Novartis's approach through its Institutes for BioMedical Research, which develops open-source bioinformatics tools for drug discovery. This isn't altruism; it's strategic recognition that scientific progress accelerates when building blocks are shared. When new algorithms are locked behind proprietary barriers, innovation across the entire field stalls. By making their tools open source, Novartis helps create an ecosystem where the field evolves more rapidly, ultimately benefiting their own research and development efforts.
This approach demonstrates how open source and commercial success aren't mutually exclusive. The patent system's original purpose remains relevant here: incentivizing innovation by granting inventors a limited period of exclusive rights in exchange for public disclosure. When companies can protect their core innovations through patents while making their tools and methods openly available, it creates a powerful alignment between transparency and commercial interests. The ability to patent while maintaining open-source practices gives companies a clear pathway to profit from their innovations while still contributing to scientific progress.
We also need new institutional frameworks that recognize and account for strategic behavior. Research funding mechanisms could be designed to reward reproducibility and transparent negative results, not just positive findings. Regulatory frameworks could require increased transparency from companies developing potentially transformative technologies while creating clear pathways for commercialization that don't compromise scientific integrity. The goal isn't to eliminate strategic behavior – that's impossible – but to channel it in ways that advance both scientific knowledge and commercial innovation.
The Role of Game Theory in Scientific Governance
The frameworks we've discussed, from open science to patent protections, represent important steps forward, but they need to be part of a broader governance approach that explicitly acknowledges the game-theoretic nature of modern science. Instead of taking scientific communication at face value, policymakers must analyze the strategic motivations driving different behaviors and claims.
This approach suggests several practical reforms. First, scientific publications could require explicit disclosure of not just financial interests, but also strategic ones, such as whether code or data withholding serves competitive advantages. Second, funding mechanisms could be redesigned to reward transparency and reproducibility while acknowledging legitimate commercial interests, similar to how Novartis balances open-source tools with proprietary innovations. Third, regulatory frameworks could create clear pathways for companies to protect their competitive advantages without compromising scientific integrity.
Crucially, we need to anticipate how these systems might be subverted. Just as companies have learned to weaponize scientific papers as marketing vehicles and risk warnings as strategic tools, they will find ways to game any new governance structures we create. This demands a dynamic approach to governance design, where we explicitly model potential failure modes and adversarial behaviors.
Consider how citation metrics, initially designed to measure scientific impact, have been corrupted by citation cartels and paper mills. Or how impact factors, meant to assess journal quality, now influence where scientists publish more than the quality of their work. By incorporating game theoretic analysis into the design process itself, we can create more robust systems that acknowledge and channel strategic behavior rather than pretending it doesn't exist.
What It All Means
The future of science depends on our ability to understand and navigate its increasingly complex strategic landscape. By applying game theory to analyze scientific development, we can better distinguish genuine breakthroughs from sophisticated marketing, separate authentic risks from strategic positioning, and design systems that align individual incentives with collective scientific progress.
This doesn't mean abandoning the fundamental principles of scientific inquiry that drove Watson and Crick's discovery of DNA or Marie Curie's work on radioactivity. Rather, it means protecting those principles by understanding the complex games now being played around them. Only by acknowledging the strategic nature of modern scientific development, and designing frameworks that account for it, can we ensure that the pursuit of knowledge continues to serve its essential purpose: advancing human understanding and the welfare of all living beings.