December 2025 | Accelerate Science Now Coalition Members
Response to Office of Science and Technology Policy (OSTP) Request for Information on Accelerating the American Scientific Enterprise
Date: December 19, 2025
Submitted by: Accelerate Science Now
Point of Contact: Joshua New, Director of Policy, SeedAI
Email: josh@seedai.org
Phone: (781) 854-2766
Overview
The United States stands at a pivotal moment for scientific innovation. Advances in artificial intelligence (AI) are creating unprecedented opportunities to accelerate discovery across every scientific domain. At the same time, the federal research enterprise faces structural challenges—administrative inefficiencies, fragmented data infrastructure, barriers to public-private collaboration, and institutional models ill-suited to the speed of modern science—that threaten to slow progress precisely when acceleration is most needed.
Accelerate Science Now appreciates the opportunity to respond to OSTP’s Request for Information on Accelerating the American Scientific Enterprise. Our coalition represents a broad cross-section of the innovation ecosystem, including leading technology companies, research universities, national laboratories, nonprofits, and civil society organizations united by a shared commitment to igniting a new era of rapid scientific discovery.
The recommendations that follow address how the Administration can unlock flexible partnership mechanisms, build data and compute infrastructure for AI-driven science, support novel institutional models, reform grantmaking based on metascience evidence, strengthen regional innovation ecosystems, prepare the workforce for AI-transformed research, and adopt risk-based approaches to research security. We offer these recommendations recognizing that accelerating American science will require sustained collaboration among government, industry, academia, and civil society.
About Accelerate Science Now
Accelerate Science Now is a non-partisan coalition of leaders in industry, academia, civil society, and the research community, charged with igniting a new era of rapid scientific discovery and delivering the benefits to the American people.
Accelerate Science Now members include:
The Align Foundation, Amazon Web Services (AWS), Americans for Responsible Innovation, Anthropic, Arizona State University, Arm, Astera, Bit Biome, Black Tech Street, Broad Institute, Caltech, Carnegie Mellon University, Center for Data Innovation, Cohere, Computing Research Association (CRA), Convergent Research, Digit Bio, Emerald Cloud Lab, Energy Sciences Coalition, Engineering Biology Research Consortium (EBRC), Federation of American Scientists (FAS), Foundation for American Innovation, FutureHouse, Ginkgo Bioworks, Good Science Project, Google DeepMind, Hewlett Packard Enterprise (HPE), Horizon Institute for Public Service, Inclusive Abundance, Information Technology Industry Council (ITI), Institute for AI Policy and Strategy (IAPS), Institute for Progress (IFP), Institute of Electrical and Electronics Engineers (IEEE), Intel, Klyne, Lehigh University, Medra, Meridian, Meta, Microsoft, National Applied AI Consortium (NAAIC), New Mexico A.I. Labs, New Mexico Artificial Intelligence Consortium Academia (NMAIC-Academia), New Mexico State University, NobleReach Foundation, OpenMined, Potato, RenPhil, Rice University, Roadrunner Venture Studios, Roboflow, Samsung, SeedAI, Software & Information Industry Association (SIIA), Syntensor, Systems & Technology Research (STR), Tetsuwan, Transfyr, UbiQD, University of California Berkeley, University of California Irvine, University at Albany, University of Florida, University of Tennessee-Knoxville, University of Wisconsin-Madison, VentureWell.
Accelerate Science Now is led by SeedAI, a non-profit, nonpartisan organization working at the forefront of artificial intelligence policy and governance.
These recommendations do not necessarily represent or reflect the official positions of all coalition members. This document should be understood as a collaborative effort to advance shared objectives, while acknowledging the diversity of viewpoints within our coalition.
Responses to OSTP RFI Questions
(i) What policy changes to Federal funding mechanisms, procurement processes, or partnership authorities would enable stronger public-private collaboration and allow America to tap into its vast private sector to better drive use-inspired basic and early-stage applied research?
Expand and Normalize the Use of Other Transaction Agreements (OTAs). Other Transaction Agreements represent a significantly underutilized mechanism for enabling flexible, mission-oriented public-private collaboration. Although many civilian agencies have held Other Transaction Authority since the 1990s, just over 2% of their contract dollars are awarded through these instruments. 1
NASA’s milestone-based Space Act Agreements demonstrate the transformative impact of OTA-like mechanisms—by aligning incentives around performance and outcomes, NASA helped catalyze the development of commercial space capabilities at a fraction of the traditional cost and timeline. 2
Two key barriers hinder broader OTA adoption: risk aversion among contracting officers who fear protests or professional consequences, and a lack of institutional support including internal policies, training, and personnel with expertise in structuring milestone-based agreements. To address this, the Administration should direct agencies to establish dedicated OTA offices with specialized contracting officers, standardized agreement templates, and clear guidance. OMB should publish model OTA contracts and provide technical assistance to agencies seeking to expand their use of these authorities.
OTAs should be expanded and regularly used to enable multi-party consortia involving nonprofits, labs, startups, and foundations for shared tools and data, rather than limiting their use to mainly bilateral industry partnerships.
Model Non-DOD Agency Other Transaction Authorities after DOD OT Authority and Expand Flexibilities for Rapid Adoption of Proven Technologies. Non-DOD federal agencies should be able to leverage their Other Transaction (OT) authorities with the same flexibilities provided to DOD agencies under 10 USC 4022(f)(1) through (5). Under this DOD OT authority, DOD agencies can award follow-on production contracts or transactions (OTs) to the recipients of an Other Transaction without competitive procedures if the prototype was successfully completed under the initial OT, and that initial OT was awarded under competitive procedures.
This allows DOD agencies to bypass a long procurement process and quickly award a sole source follow-on contract or OT to the recipients of an OT, enabling more rapid technology adoption. This also provides for a seamless transition from R&D to production and entices more commercial companies to participate in government-funded programs because there is a clear path to rapid production and a procurement tool in place to help avoid the valley of death. As of today, only DOD agencies have this authority to award sole-source follow-on production awards for specific technologies developed under OTs.
Any federal agency (DOD or non-DOD) should also be allowed to fund a follow-on production contract or OT regardless of which agency awarded the initial, competitively awarded OT. For example, if recipients of a DARPA OT successfully develop a prototype technology under their initially competed OT, they could receive a sole-source follow-on contract or OT from an HHS customer that desires that technology. This cross-agency procurement authority would enable more coordination across federal agencies, reduce bureaucracy and duplicative investments, drastically cut procurement times, and speed up the adoption of valuable new innovations within the government.
Further, to maximize agency flexibility and speed, OT authority should permit any agency to award a follow-on production contract or OT to any individual company on the team that received the initially competed OT. This would allow agencies to make a rapid follow-on award to a subcontractor whose individual prototype technology was successfully completed under the initial OT and proved to be the technology of greatest value to the agency’s future mission. This gives a federal agency a procurement tool to quickly adopt and field a single proven technology when the rest of a project may have failed, without starting a lengthy acquisition process to procure a technology whose development the Government has already funded under the initial OT.
Reduce Flow-Down Requirements in Cooperative Agreements. A significant barrier to small and medium-sized enterprise (SME) participation in federal cooperative agreements, particularly in defense-related programs, is the administrative and legal burden of complex requirements designed for large contractors. Flow-down provisions—including intricate intellectual property terms and compliance requirements—create substantial disincentives for SME participation. These burdens disproportionately affect smaller firms that lack the in-house legal and administrative capacity of larger contractors, limiting their willingness to engage as collaborators or early adopters of federally funded innovations. Federal agencies should develop greater flexibility in flow-down requirements, establish standardized IP frameworks tailored to SME capabilities, and implement tiered compliance structures that preserve federal interests while broadening participation beyond the largest corporations.
Support Pre-Competitive Infrastructure. Federal funding should explicitly support what economists term “infratechnologies”—the foundational technical infrastructure that enables downstream innovation but is systematically underinvested in by the private sector due to its public-good characteristics. 3
This includes measurement and test methods, scientific and engineering databases, interface standards, quality control techniques, and reference models. In the life sciences and AI domains, this translates to standardized datasets, validated assays, benchmarks, and reference architectures. Because these investments generate widespread spillovers—benefits that accrue broadly across the research ecosystem rather than to individual investors—market forces alone will not produce them at optimal levels. The Administration should direct agencies to prioritize funding for this shared technical infrastructure, recognizing that such investments reduce barriers across the research community and enable the private-sector applied R&D that produces innovations.
(ii) How can the Federal government better support the translation of scientific discoveries from academia, national laboratories, and other research institutions into practical applications? Specifically, what changes to technology transfer policies, translational programs, or commercial incentives would accelerate the path from laboratory to market?
Modernize technology transfer to support scientific infrastructure, not just products. Current tech transfer frameworks are optimized for licensing high-revenue products, but many valuable research outputs—curated datasets, benchmarks, validated methods, and shared pipelines—do not fit this mold. Federal agencies should update tech transfer guidance to incentivize low-friction licensing for these infrastructure-like outputs, which often have greater long-term impact on accelerating downstream discovery than any single commercializable product.
Establish federated validation testbeds. A persistent challenge in translation is the gap between promising laboratory results and real-world performance. Federal agencies should support the creation of validation testbeds where academic discoveries can be assessed under clinical, regulatory, or industrial conditions. These facilities would allow researchers to stress-test findings before committing to costly commercialization pathways, reducing risk for both researchers and private-sector partners.
Support dedicated translation accelerators. Federal research agencies should pilot accelerator entities with dedicated teams possessing commercialization and emerging technology expertise. These entities can reduce commercialization barriers for use-inspired R&D teams and speed translation of promising research into tangible solutions. Operating through block grants, these accelerators could re-grant to multiple convergent teams at various stages to further develop and commercialize solutions aligned to priority technology areas.
Establish Shared National Reproducibility Infrastructure. A significant barrier to translating scientific discoveries into practical applications is the lack of shared infrastructure for tracking and verifying research results. The federal government should create or fund platforms for experiment provenance tracking, version-controlled data management, model documentation, and reproducible computational workflows. This infrastructure should be modular and interoperable, allowing agencies, universities, consortia, and startups to integrate without adopting a monolithic system. By enabling industry, academia, startups, and nonprofit research organizations to collaborate on shared standards and interoperable tooling, the federal government can reduce duplicative effort and lower barriers to entry for smaller organizations seeking to build on federally funded research.
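To make the idea of experiment provenance tracking concrete, the following is a minimal, purely illustrative sketch (in Python) of the kind of machine-readable, hash-linked provenance record that shared reproducibility infrastructure could exchange. The field names, step labels, and hashing scheme are assumptions chosen for illustration, not a proposed federal standard.

```python
# Illustrative sketch only: a minimal, hash-linked provenance record for one
# experimental step. Field names and the hashing scheme are hypothetical.
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    experiment_id: str   # stable identifier for the experiment
    step: str            # e.g., "sample_prep", "assay_run", "analysis"
    inputs: dict         # dataset versions, reagent lots, parameters
    code_version: str    # git commit or container digest used
    outputs: dict        # result files and their checksums
    parent_hash: str = ""  # hash of the preceding step's record
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def digest(self) -> str:
        """Content hash so downstream users can verify the record chain."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

# Example: two linked steps forming a verifiable chain.
prep = ProvenanceRecord(
    experiment_id="EXP-0001",
    step="sample_prep",
    inputs={"dataset": "cells_v3", "protocol": "prep_v2"},
    code_version="git:abc1234",
    outputs={"plate_map.csv": "sha256:..."},
)
assay = ProvenanceRecord(
    experiment_id="EXP-0001",
    step="assay_run",
    inputs={"plate_map": "plate_map.csv", "instrument": "reader-07"},
    code_version="git:abc1234",
    outputs={"readings.parquet": "sha256:..."},
    parent_hash=prep.digest(),
)
print(assay.digest())
```

Because each record references the hash of its predecessor, an independent researcher or agency reviewer could check that a published result is consistent with the full chain of steps that produced it; standardizing that property across agencies, universities, and startups is what modular, interoperable reproducibility infrastructure would provide.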
The Administration should also establish national testbeds where new research technologies—including AI models, lab automation platforms, biological workflows, and high-performance computing environments—can be benchmarked for reproducibility and robustness. These testbeds would help set expectations for a robust and reproducible scientific enterprise while lowering barriers to integration across systems and markets. Such testbeds could be housed at national laboratories or federally funded research centers, with access made available to academic and industry partners.
Mandate and Incentivize Reproducibility and Translatability Plans. Federal research agencies should require grant applicants to include reproducibility and validation strategies as part of their proposals, similar to existing data management plan requirements. These plans should outline how results, code, and data—from both successes and failures—will be made verifiable by independent researchers. Program managers across agencies should be encouraged to evaluate reproducibility as a criterion for both selection and continuation of funding. This shift would help ensure that federally funded research produces outputs that can be reliably built upon by downstream researchers and commercial partners.
Create Incentives for Independent Replication, Validation, and Downstream Productization. The “valley of death” in biomedical and other research is exacerbated by the lack of dedicated funding to demonstrate robustness across contexts and support the transfer and distribution of knowledge and technology. Federal investment can de-risk research for commercialization, but current investments to create market-ready opportunities are insufficient. The Administration should create a fund dedicated to translational science that augments existing research grants and creates a marketplace for innovation. Agencies should set aside a percentage of program budgets for confirmatory independent replication and validation, with payments tied to go/no-go milestones linked to reproducibility and transferability. This approach would improve downstream reliability without slowing innovation.
Additionally, the federal government should fund the infrastructure layer of scientific translation: the data, metadata, and compute infrastructure required to support translatability at scale. This could include a repository for executable science that captures not only experimental successes but also the rich data and executables representing failures, which can prevent wasted effort across the R&D enterprise.
Enforcement mechanisms matter as well. Much federally funded research already requires performers to provide the government access to data; agencies should enforce these requirements by making overhead payments contingent on submission of the required data. Regulatory agencies such as FDA, EPA, and USDA could further align incentives by providing expedited review pathways, priority designations, or other regulatory advantages to performers who demonstrate robust reproducibility practices, transparent documentation, and pre-competitive access to underlying data—including negative data.
(iii) What policies would encourage the formation and scaling of regional innovation ecosystems that connect local businesses, universities, educational institutions, and the local workforce—particularly in areas where the Federal government has existing research assets like national laboratories or federally-funded research centers?
Regional innovation ecosystems that connect local businesses, universities, and the workforce are essential for ensuring the benefits of federally funded research reach all Americans. The Federal government possesses significant tools and authorities to encourage the formation and scaling of these ecosystems, and the Administration should make their effective utilization a priority.
Build on existing regional innovation authorities. Several federal agencies possess authorities specifically designed to catalyze regional innovation ecosystems. NSF’s Regional Innovation Engines program funds regional coalitions that bring together universities, local governments, and industry partners to drive technology-based economic development around community-specific challenges. The Economic Development Administration (EDA) administers programs including the Regional Technology and Innovation Hubs (Tech Hubs) program, which designates and funds regional consortia focused on technology development and manufacturing. DOE’s national laboratory system represents a distributed network of world-class research facilities that can anchor regional innovation efforts. The Administration should prioritize these programs and direct agencies to coordinate their regional innovation efforts to maximize impact and reduce duplication. Regional efforts with appropriate specialized focus areas (e.g., ag-biotech, marine biology, environmental genomics) should be aligned with local industry and workforce to boost the local economy.
Anchor regional hubs around federal research assets. The Federal government’s network of national laboratories, research centers, and agency facilities distributed across the country should serve as anchors for regional innovation ecosystems. DOE national laboratories, NIH-affiliated research centers, USDA research facilities, and NASA centers can all serve as nuclei for regional technology clusters. The National Academies’ NASA at a Crossroads report (2024) recommends that NASA adopt a centralized human capital strategy to maintain technical innovation and expertise. 4
The Administration should build on this recommendation by directing NASA and other agencies to develop explicit strategies for how their regional facilities can better integrate with and support local innovation ecosystems, including through workforce development partnerships with nearby universities and community colleges.
Support shared infrastructure. For regional ecosystems to thrive, small firms, nonprofits, regional universities, and community labs need access to shared wet lab space, computing resources, and data infrastructure. Regional ecosystems will also require training support to use these resources effectively and maximize the value these investments can provide. Without such shared resources, participation is effectively limited to large corporations and major research universities. The Administration should direct agencies to prioritize grants and cooperative agreements that establish shared infrastructure accessible to a broader range of regional participants, including through vouchers or credits for small and medium-sized businesses to access national lab facilities, shared compute, and datasets.
Enable region-specific focus areas aligned with local strengths. Different regions possess distinct industrial bases, workforce capabilities, and research strengths. Regional ecosystems should be encouraged to develop focus areas that leverage these advantages. Federal programs should provide flexibility for regions to define priorities aligned with local industry and workforce rather than imposing uniform national templates. Congress has established two complementary regional innovation programs: NSF Regional Innovation Engines (focused on use-inspired research translation) and EDA Regional Tech Hubs (focused on commercialization and deployment). These programs are still in relative infancy and require continued support to transform the U.S. industrial base. The Administration should establish a strategic place-based innovation framework that pairs federal funding with White House-led efforts to remove regulatory and permitting barriers while increasing FFRDC and national laboratory support through streamlined collaboration mechanisms, shared testbed access, and partnership resources.
Support local conveners and coordinators. Effective regional ecosystems require coordination among diverse stakeholders, but this coordination should not default to university-dominant models that may not serve all participants equally. The Administration should provide flexible support for local conveners, including nonprofits and foundations, to coordinate multi-institution partnerships. These conveners can align priorities, reduce duplication, and ensure more equitable participation across the regional research ecosystem.
(iv) How can Federal policies strengthen the role played by small- and medium-sized businesses as both drivers of innovation and as early adopters of emerging technologies?
Reduce Administrative Burden. SMBs often lack the internal legal, compliance, and grants management capacity of larger institutions. The Administration should reduce administrative burdens on SMBs through standardized forms, pre-qualification processes, and shared compliance services. OMB should publish model contract templates and provide technical assistance to smaller entities involved in public-private partnerships, lowering barriers to entry for startups and smaller research institutions.
Streamline SBIR and STTR Programs. The Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs are important mechanisms for supporting SMB participation in federal R&D, but administrative inefficiencies limit their effectiveness. For example, a National Academies evaluation found that SBIR/STTR processes at NIH are particularly cumbersome and their funding timelines can be too long to enable significant or expedient scientific research. 5
SBIR funding also requires a peer review process that does not necessarily incorporate expertise from industry or biotechnology, even though the program’s goal is to enable small businesses to develop and commercialize promising new technologies. 6
The Administration should work to streamline these programs by accelerating decision cycles, providing clearer feedback to applicants, and establishing more flexible, collaborative pathways for agencies to engage with smaller firms through co-development and early-stage technology testing. The Administration should also reevaluate the appropriateness of peer-review procedures for SBIR funding to encourage expedient and expert review of proposals.
Further ideas for streamlining SBIR and STTR include:
Accelerate review and award timelines to match small-business realities; establish service-level targets (e.g., 60–90 days for Phase I) and report performance publicly.
Enable rolling review within existing quarterly cycles to reduce idle time and unnecessary delay.
Create a single, unified SBIR/STTR submission portal across agencies to reduce administrative burden.
Standardize core application and reporting requirements while preserving agency-specific technical review.
Use staged applications (short concept → invited full proposal) to reduce wasted effort and improve quality.
Provide clearer, more actionable feedback to applicants through structured reviewer comments.
Strengthen pathways from research to real-world testing and adoption via Direct-to-Phase II, bridge mechanisms, and agency-led testbeds.
Bridge the SBIR/STTR Phase I–II Gap. A critical barrier in the SBIR/STTR pipeline is the funding gap between Phase I proof-of-concept awards and Phase II development grants. This gap often causes promising technologies to stall before reaching market readiness because regulatory testing, scale-up activities, and prototype development fall outside the scope of both traditional research grants and early-phase SBIR awards. Federal agencies should establish dedicated bridge funding mechanisms to support proof-of-concept development, prototype validation, and university-based technology testbeds that de-risk technologies for Phase II transitions. Additionally, agencies should expand Direct-to-Phase II mechanisms for technologies with prior validation and ensure that SBIR/STTR award sizes are sufficient to meaningfully advance complex technologies toward commercialization.
SBIR and STTR authorizations have lapsed and require Congressional action to resume activities. 7
OSTP should encourage Congress to act swiftly to reauthorize these programs.
Support Entrepreneurial Pathways for Researchers. The federal government should create stronger pathways for researchers to transition promising discoveries into entrepreneurial ventures. The National Laboratories offer an entrepreneurial leave program that allows staff scientists to take leave to work on a startup while retaining the option to return to their prior position. NSF should explore establishing a similar mechanism for researchers at universities and nonprofit research institutions, providing bridge funding during the period a researcher is on leave. Given the transformative potential of AI in scientific research, this program could be tailored to prioritize AI-related ventures, helping to channel scientific talent toward commercial applications that accelerate discovery.
NSF should also enhance existing programs such as I-Corps and Entrepreneurship Fellows with a dedicated AI focus. These programs already provide valuable support for translating research into practical applications, and an AI-specific track could help address workforce challenges in the scientific community while fostering the development of AI-driven small businesses. Science-based small and medium-sized enterprises demonstrate superior performance in measurable ways, making investments in researcher entrepreneurship a high-leverage opportunity to strengthen the innovation ecosystem.
Improve Access to Shared Infrastructure. Access to advanced computing resources, datasets, and laboratory facilities is essential for SMBs to participate in AI-driven research and development. The Administration should provide vouchers or credits for SMBs to access national laboratory facilities, shared compute resources, and curated datasets. Tiered access models, where publicly funded datasets are paired with computing credits, can help streamline workflows for smaller institutions lacking in-house capacity.
(v) What empirically grounded findings from metascience research and progress studies could inform Federal grantmaking processes to maximize scientific productivity and increase total return on investment? Please provide specific examples of evidence-based reforms that could improve funding allocation, peer review, or grant evaluation.
Diversify Grant Portfolios. There is no one-size-fits-all approach to enabling good science, and federal research agencies need both more tools in the toolkit and more encouragement to leverage underutilized tools to support a robust research portfolio.
Evidence suggests that distributing funding across a larger number of smaller exploratory grants can increase innovation per dollar invested. 8 9
This approach allows agencies to place more “bets” on promising ideas while reducing the consequences of any single project’s failure. This does not obviate the need for large-scale grants – research agencies should treat these as different tools to solve different kinds of problems.
Reduce Administrative Burden Through Harmonization and Streamlining. National studies consistently show that principal investigators spend 42 to 44 percent of their research time on administrative tasks rather than conducting research, a staggering inefficiency that directly reduces scientific productivity and taxpayer return on investment. 10
Much of this burden stems from duplicative reporting requirements, inconsistent definitions and policies across agencies, and unnecessarily complex compliance processes.
Federal agencies should harmonize core requirements to eliminate redundancy and confusion. For example, NIH and FDA maintain different definitions of “clinical trial” and “clinical research,” creating compliance uncertainty for researchers working across both agencies. Similarly, inconsistent implementation of regulations like the revised Common Rule (45 CFR 46) across agencies introduces unnecessary complexity. Standardizing definitions, aligning reporting timelines, and creating unified compliance frameworks would dramatically reduce administrative overhead without compromising accountability or oversight.
The National Academies’ 2025 publication Simplifying Research Regulations and Policies: Optimizing American Science provides a comprehensive roadmap for streamlining regulatory enforcement to lessen burdens created by duplicative and inconsistent policies. 11
OMB and OSTP should work with agencies to implement these recommendations, focusing on areas where harmonization can recover significant researcher time, which translates directly into more experiments, faster discoveries, and stronger global competitiveness.
Expand ARPA-Style Funding Models. Research demonstrates that ARPA-style programs—characterized by empowered program managers, milestone-based funding, and tolerance for failure—generate outsized returns on investment. Federal agencies should expand these approaches beyond their current footprint and expand existing efforts that have proven successful in defense and energy applications. ARPA-style programs are not well-suited to all kinds of research, but are ideal for clearly defined, mission-driven problems with measurable milestones.
Pilot Expert Prediction-based Methodologies. Research demonstrates the utility of prediction markets across disciplines, including economics, public health, and weather forecasting. 12
OSTP, in coordination with a research agency such as DOE, should establish a pilot program exploring whether expert prediction markets can augment traditional review processes for research funding decisions. This pilot should include non-government experts willing to support nontraditional approaches and develop a rigorous evaluative framework comparing outcomes of prediction market-informed projects with those selected through traditional means.
Similarly, federal research agencies should experiment with encouraging proposal reviewers to explicitly state their predictions about the success of a particular research proposal. 13
Over time, this could reveal useful data about the reviewers in the review pool – as research is concluded and predictions validated, certain reviewers may prove to exhibit particular biases about different kinds of research, or prove especially well-suited to reviewing certain kinds of proposals. 14
This would create a flywheel effect for more effective peer-review efforts throughout the federal research ecosystem.
Recognize Infrastructure as Primary Research Outcomes. Federal grantmaking should explicitly incentivize community infrastructure—including curated datasets, measurement standards, benchmarks, and shared tools—as primary research outcomes rather than treating them as secondary to publications. This would increase the return on investment by enabling downstream research to build on shared resources. Grantmaking should also support the long-term maintenance of this high-value infrastructure, not just its initial creation, to ensure it remains a functional and valuable asset to the research ecosystem.
Strengthen the Science-of-Science Community. Federal agencies should support a robust science-of-science research community that can continue developing and applying AI tools to the analysis of science itself. This includes protecting offices with data collection and analysis expertise (such as NCSES, NSF TIP, and NIH OEPR) and exploring how to modernize the federal science metrics enterprise to better track AI-driven discovery. There are existing “science of science” programs within federal research agencies, but they are relatively limited. 15
More explicitly, federal research agencies should be required to publish data that would support a robust metascience ecosystem. Nonsensitive internal data about proposals, peer review scores, and other key information about the processes of supporting research should be made publicly available in a timely manner. Similarly, research agencies should not prohibit outside researchers from publishing data about their own review processes.
In addition to publishing this data, research agencies should also build stronger connection points with science-of-science communities, such as fellowship programs, workshops, rotations, and other formal and informal engagements. Ensuring end-users have a role in influencing the decisionmaking process about what research is useful is important for enabling bold and valuable scientific pursuits and processes that might otherwise get overlooked.
(vi) What reforms will enable the American scientific enterprise to pursue more high-risk, high-reward research that could transform our scientific understanding and unlock new technologies, while sustaining the incremental science essential for cumulative production of knowledge?
Enabling transformative, high-risk research requires reforms to how the federal government identifies, funds, and evaluates ambitious scientific projects. Existing mechanisms for evaluating and reviewing research funding proposals do not always advance high-risk, high-reward research with the highest impact. Reviews can be biased toward incremental, conservative approaches with more guaranteed rates of success rather than revolutionary science and engineering solutions. The following reforms can address these structural barriers while preserving the incremental science essential for cumulative knowledge production.
Reserve dedicated funding for high-risk, high-reward projects. Federal research agencies should reserve a portion of their budgets explicitly for ambitious projects with transformative potential. This funding should be disbursed through rapid mechanisms with reduced administrative burden. Protected budgets signal institutional commitment to risk-taking and insulate ambitious research from competition with safer, more predictable proposals. Agencies that have such programs should publish detailed data about the performance of these funding efforts.
Evaluate high-risk research at the portfolio level. Rather than penalizing individual project failures, agencies should assess high-risk programs based on aggregate outcomes. This approach tolerates individual failures when insights are gained, recognizing that a portfolio of ambitious bets will inevitably include unsuccessful projects. Investigators should face no penalty for honest failure, and agencies should promote dissemination of negative outcomes as learning instruments for the broader scientific community.
Reform peer review to separate feasibility from impact. Current peer review conflates the likelihood of success with the potential value of success, systematically disadvantaging ambitious proposals. Agencies should pilot evaluation frameworks that explicitly separate these criteria, allowing reviewers to recommend funding for high-impact projects even when feasibility is uncertain. Additional reforms could include double-blind review where possible, downweighting pedigree and institutional prestige, and piloting lottery selection among meritorious proposals to reduce bias toward safe, incremental work. Research agencies should also consider how reliance on impact estimates might preclude progress on high-value science for which the impact is difficult to predict or that historically has led to unexpected benefits. 16
Invest in shared infrastructure that reduces risk across the ecosystem. Shared scientific accelerants—including data collaboratives, standardized assays, benchmarks, and computing infrastructure—lower the barriers to pursuing ambitious research. When foundational tools and data are readily available, researchers can take bigger swings on the science itself. Federal investment in this shared infrastructure democratizes access to high-risk research and increases the expected value of ambitious projects.
Agencies should also explore methods for encouraging a cultural shift in research communities that celebrates the publication of null results. This could include both positive feedback mechanisms, such as prizes or public recognition, and negative feedback mechanisms, such as deprioritizing proposals from researchers who routinely do not publish null results.
Use sensible ROI expectations. Federal research programs should ensure that return-on-investment (ROI) expectations and development timelines are appropriately calibrated to the objectives of industrial and pre-competitive research. Overly prescriptive ROI requirements or compressed timelines, especially those that prioritize near-term physical demonstrators, can unintentionally discourage meaningful academic collaboration, limit exploratory research, and reduce participation by institutions best suited to advance foundational innovation. For research-focused programs to succeed, funding structures should preserve flexibility for longer-horizon inquiry, iterative development, and partnerships across academia and industry, recognizing that transformative technologies often require sustained research efforts before commercial outcomes can be fully realized.
(vii) How can the Federal government support novel institutional models for research that complement traditional university structures and enable projects that require vast resources, interdisciplinary coordination, or extended timelines?
Traditional university structures, while essential to the research ecosystem, face inherent constraints that can limit their ability to pursue certain types of ambitious, long-term research. Tenure incentives favor individual achievement over team science. Administrative overhead and grant cycles can impede decade-scale projects. And disciplinary silos make sustained interdisciplinary coordination difficult. The Federal government can support novel institutional models that address these limitations without supplanting the valuable role universities play.
Support mission-driven nonprofit research organizations. Federal agencies should develop mechanisms to fund and partner with mission-driven nonprofit research entities capable of pursuing decade-scale research outside traditional university constraints. Models such as the Howard Hughes Medical Institute and the Max Planck Society demonstrate how independent research organizations can pursue long-term, high-risk work with dedicated teams operating with greater flexibility in hiring, procurement, and project management. The Administration should explore how agencies can leverage existing authorities to co-fund or partner with such entities, particularly through matching programs where government de-risks early milestones while attracting private capital.
Pilot and Expand Novel Partnership Models. The federal government should support a range of novel institutional models for research that complement traditional university structures and enable projects requiring vast resources, interdisciplinary coordination, or extended timelines. No single model fits all research challenges—different problems require different organizational structures.
NSF’s recently announced Tech Labs program represents a promising approach. 17
Tech Labs will support full-time research, development, and innovation teams focused on overcoming persistent barriers to technology commercialization, with operational autonomy, milestone-based funding through Other Transaction Authority, and the flexibility to engage across academia, industry, national laboratories, and nonprofit sectors. With anticipated funding of $10–50 million per team annually, Tech Labs is designed to de-risk emerging technologies that are not yet ready for private investment but have transformative potential. Critically, Tech Labs accommodates diverse organizational structures, including independent pre-formed teams, teams spinning out of academia or industry, and Focused Research Organizations (FROs).
FROs remain a valuable model for time-limited, highly targeted challenges where a dedicated team can address a specific bottleneck over several years. These entities, with dedicated full-time teams of engineers, innovators, and scientists behind visionary leaders, could explore high-risk, high-reward approaches with the energy and agility of a startup while prioritizing the use of productivity-enhancing technologies like AI and automation.
However, other research priorities may require longer time horizons, broader scope, or different governance structures than FROs provide. The Administration should encourage NSF and other agencies to experiment with a portfolio of institutional models calibrated to the nature of each challenge.
DOE’s Genesis Mission offers an opportunity to apply similar principles at scale. Genesis aims to integrate AI with DOE’s national laboratories, supercomputers, and experimental facilities to accelerate scientific discovery. The initiative could incorporate Tech Labs-style partnerships that bring together mission-aligned teams from labs, universities, and industry with the autonomy and flexibility to pursue ambitious goals. Other agencies with significant R&D missions—including NIH, NIST, and DARPA—should explore how milestone-based, team-centric funding models could complement their existing grant programs and accelerate translation of federally funded research into practical applications.
Enable flexible partnership mechanisms. As described above, federal agencies should make greater use of Other Transaction Agreements (OTAs) to enable flexible, mission-oriented partnerships with novel research institutions.
Encourage co-location with federal facilities. The Administration should encourage “federal campus” models that co-locate novel research institutions with NIH, DOE, NIST, and other agency facilities while maintaining independent governance. This approach allows emerging research organizations to leverage federal infrastructure, instrumentation, and expertise while retaining the operational flexibility that distinguishes them from traditional academic structures. National laboratories should be directed to dedicate a percentage of staff time and resources to supporting such co-located entities and regional innovation activities.
Fund shared infrastructure that lowers barriers to entry. Novel institutional models often lack the capital to build expensive research infrastructure from scratch. The Federal government should provide co-funding for pre-competitive infrastructure, including shared wet-lab spaces, compute resources, measurement standards, and open datasets, that allows mission-driven nonprofits and smaller research organizations to participate in ambitious research without duplicating costly facilities. The National AI Research Resource (NAIRR) represents one model for democratizing access to computational resources, and similar approaches could be applied to laboratory infrastructure.
Establish testbeds at national labs for long-term projects. DOE and other agencies should fund testbeds at national laboratories dedicated to long-term projects requiring advanced computing infrastructure and collaborative workflows. These testbeds can serve as neutral ground where novel research institutions, universities, and industry partners collaborate on projects with extended timelines that exceed typical grant cycles.
(viii) How can the Federal government leverage and prepare for advances in AI systems that may transform scientific research—including automated hypothesis generation, experimental design, literature synthesis, and autonomous experimentation? What infrastructure investments, organizational models, and workforce development strategies are needed to realize these capabilities while maintaining scientific rigor and research integrity?
Realizing the transformative potential of AI for scientific research will require coordinated federal investments in infrastructure, new organizational models, and workforce development strategies. The following recommendations address each of these dimensions.
Infrastructure Investments
Data Infrastructure — The foundation for AI-driven scientific research is high-quality, AI-ready data. Currently, even when scientific data is technically available, it often remains unusable for AI applications because it was not originally generated with modeling in mind. Federal agencies should invest in:
Discipline-specific, open-access, curated, standardized, and federated data repositories. The federal government should fund collaborative roadmapping, certification, collection, and sharing of large, high-quality datasets optimized for AI and machine learning applications. 18
Standardized metadata formats, protocols, and validation checklists to enhance reusability and machine readability. Benchmarking programs should follow models like MLCommons.
Infrastructure that captures both experimental successes and failures, preventing wasted effort by the R&D enterprise.
Compute Infrastructure — The National AI Research Resource (NAIRR) is critical to advancing U.S. competitiveness in AI and should receive the resources necessary for continued success. The NAIRR pilot program has demonstrated success, partnering with 14 governmental and over 25 non-governmental organizations and supporting over 250 research projects in more than 40 states. The Administration should build on the pilot’s success by scaling up and expanding the program, leveraging supporters in industry and the nonprofit sector.
Beyond NAIRR, the federal government should support the development of federated AI infrastructure to unify access to compute, data, and AI tools across disciplines and institutions, ensuring geographic and institutional diversity and reducing dependency on single-vendor platforms.
Autonomous Experimentation Infrastructure — To connect AI to the physical experimentation process, the federal government should invest in autonomous research labs and self-driving laboratories. DOE should take a leading role in advancing AI-powered materials discovery and establish a National Center for Autonomous Materials Science to leverage AI-driven autonomous laboratories for rapid innovation in materials and manufacturing. This center should develop standardized, modular lab ecosystems, foster public-private partnerships, and provide funding incentives to overcome adoption barriers. 19
These testbeds should be accessible to academia, nonprofits, and small and medium-sized businesses, and should store records of AI-driven automated experiments along with their replicable protocols.
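As a purely illustrative sketch of what the closed loop in a self-driving laboratory involves, the toy Python example below alternates between a planner that proposes the next experimental conditions and a simulated instrument that returns a measurement. The objective function, parameter ranges, and perturbation-based selection rule are hypothetical stand-ins, not any specific DOE or national laboratory system.

```python
# Toy closed-loop autonomous experimentation sketch (illustrative only).
# A planner proposes conditions, a simulated instrument "measures" them,
# and the loop updates its best-known conditions each iteration.
import random

def run_instrument(temperature_c: float, concentration_m: float) -> float:
    """Stand-in for a real robotic experiment: returns a noisy yield."""
    true_yield = (100
                  - (temperature_c - 80) ** 2 / 20
                  - (concentration_m - 0.5) ** 2 * 40)
    return true_yield + random.gauss(0, 1.0)

def propose_next(best_conditions, step_scale):
    """Planner: perturb the best conditions seen so far (hypothetical rule)."""
    t, c = best_conditions
    return (
        min(150.0, max(20.0, t + random.gauss(0, 15 * step_scale))),
        min(2.0, max(0.01, c + random.gauss(0, 0.3 * step_scale))),
    )

random.seed(0)
best_conditions, best_yield = (60.0, 1.0), float("-inf")
for iteration in range(30):
    candidate = propose_next(best_conditions, step_scale=1.0 - iteration / 40)
    measured = run_instrument(*candidate)
    if measured > best_yield:  # keep improvements
        best_conditions, best_yield = candidate, measured
    # In a real facility, every (candidate, measured) pair would also be
    # logged to the shared data and provenance infrastructure described above.

print(f"best conditions: {best_conditions}, estimated yield: {best_yield:.1f}")
```

Real systems would replace this simple perturbation rule with Bayesian optimization or a learned model, but the loop structure, and the need to capture every proposed and measured condition in shared, interoperable records, is the same.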
Organizational Models
National Lab Testbeds and AI Innovation Hubs — The federal government should continue to support AI infrastructure and testbeds at DOE, NIST, and NSF in partnership with national laboratories. Building on DOE’s existing supercomputing capabilities (e.g., Frontier, Aurora, El Capitan), the Administration should support the DOE FASST Initiative and consider deploying dedicated AI Innovation Hubs that unify mission-aligned national laboratories, universities, industry partners, and STEM education organizations around priority domains such as fusion, quantum materials, climate modeling, and autonomous experimentation.
Federally Supported Evaluation Ecosystems — Federal agencies should create evaluation ecosystems for scientific AI, including benchmarks, leaderboards, and reproducibility protocols. NIST and NSF should develop and publish standardized metrics to assess the trustworthiness, safety, and interpretability of AI systems used in science, including robustness scores, explainability indices, and model audit logs.
DOE should also explore building specialized AI agents that automatically test the robustness of submitted research findings, acting as intelligent reviewers and replication testers.
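One way to make a metric such as a “robustness score” concrete: the short sketch below perturbs a model’s inputs and reports how stable its predictions are. The perturbation scheme and scoring formula are assumptions chosen for illustration, not an existing NIST or NSF standard.

```python
# Illustrative robustness score: fraction of predictions that stay the same
# when inputs are perturbed with small Gaussian noise. The noise level and
# scoring rule here are hypothetical choices, not a published standard.
import numpy as np

def predict(model_weights: np.ndarray, inputs: np.ndarray) -> np.ndarray:
    """Stand-in model: a linear classifier with a zero threshold."""
    return (inputs @ model_weights > 0).astype(int)

def robustness_score(model_weights, inputs, noise_scale=0.05, trials=100, seed=0):
    rng = np.random.default_rng(seed)
    baseline = predict(model_weights, inputs)
    agreements = []
    for _ in range(trials):
        perturbed = inputs + rng.normal(0.0, noise_scale, size=inputs.shape)
        agreements.append(np.mean(predict(model_weights, perturbed) == baseline))
    return float(np.mean(agreements))

rng = np.random.default_rng(42)
weights = rng.normal(size=8)
data = rng.normal(size=(500, 8))
print(f"robustness score: {robustness_score(weights, data):.3f}")
```

An audit log for a scientific AI system could record such a score alongside the exact model version and perturbation settings used, giving human reviewers, and automated replication testers of the kind DOE might build, a concrete and reproducible artifact to check.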
Build Open-Source Ecosystems for AI in Science — Open-source software now underpins much of the AI software stack powering advances in AI for molecules, quantum science, materials discovery, and scientific computing. Federal investment should explicitly support open-source development as core research infrastructure. Federal agencies should expand programs like NSF’s Pathways to Enable Open-Source Ecosystems (POSE) and create new funding mechanisms that support cohorts of open-source contributors working on projects identified by multi-stakeholder initiatives like the AI Alliance. These programs should fund not just specific software projects but the broader community infrastructure — documentation, training materials, governance structures, and contributor pipelines — that makes open-source ecosystems sustainable.
Workforce Development Strategies
Training for Hybrid Scientific Talent — The application of AI to science will fundamentally transform skill requirements for everyone from technicians to research scientists. Federal agencies should develop workforce programs to create generations of “scientific AI engineers” with dual expertise in AI and a specific scientific domain. These individuals can serve as translators between computational scientists and domain experts.
AI literacy should become a required cornerstone of all federally funded training programs. Federal funds should support the creation of modular and accessible high-quality AI training programs serving diverse audiences, from traditional degrees to microcredentials, online certifications, and continuing upskilling.
Aligning Funding Timelines with AI Development Speeds — Funding mechanisms should align with the pace of AI development: rapid seed grants (3–6 months), iterative renewals, and sustained infrastructure support. Traditional multi-year grant cycles can be too slow to keep pace with advances in AI capabilities.
Preserving Scientific Rigor — To maintain research integrity while leveraging AI capabilities:
NSF and the National Academies should conduct research on how scientific research jobs will change with the introduction of AI and develop a report identifying best practices for integration of AI into scientific workflows. This report should inform federal grantmaking and project design, as well as guide workforce development and scientific training programs to ensure that human scientists remain at the center of the discovery process.
Federal agencies should invest in advanced AI explainability research through NSF, NIST, and DOE to improve reliability and accelerate discoveries. Sound science relies on transparent methods that can be rigorously scrutinized, and explainable AI directly supports that goal. Mechanistic interpretability helps demystify AI decision-making by revealing how individual model components contribute to outcomes, enhancing trustworthiness in scientific applications.
As discussed above, federal funding mechanisms should require grant applicants to include reproducibility and validation strategies, similar to data management plans, outlining how results, code, and data will be made verifiable. Program managers should evaluate reproducibility as a criterion for selection and continuation.
(ix) What specific Federal statutes, regulations, or policies create unnecessary barriers to scientific research or the deployment of research outcomes? Please describe the barrier, its impact on scientific progress, and potential remedies that would preserve legitimate policy objectives while enabling innovation.
Several federal policies create friction that impedes scientific research and the deployment of research outcomes. Below we describe key barriers, their impacts, and potential remedies.
Data Sharing and Access Restrictions. Many federal regulations governing data sharing, while designed to protect privacy and security, have become obstacles to creating the AI-ready datasets necessary for modern scientific research. Overly rigid data-sharing and human subjects rules slow the creation of de-identified, AI-ready biological datasets. In healthcare, for example, restrictions intended to protect patient privacy can inadvertently prevent the secure, privacy-preserving use of data that could improve and save lives.
Remedy: Federal agencies should develop and disseminate standardized, privacy-preserving governance templates that enable researchers to share data responsibly without navigating ad hoc compliance processes for each project. These templates should balance legitimate privacy objectives with the need for data access, particularly for AI and machine learning applications where large, well-structured datasets are essential.
Weak Enforcement of Existing Data Sharing Obligations. Paradoxically, while some regulations are too restrictive, others go unenforced. There is endemic under-compliance with data management and sharing obligations for federally funded research. Many research agencies require grantees to develop data management and sharing plans and publish their data in a timely manner, but there are few penalties for noncompliance and much of this data goes unpublished. This undermines reproducibility, limits downstream innovation, and enables wasteful spending on redundant data collection.
Remedy: OMB and OSTP should issue agency-wide guidance and create enforcement mechanisms that ensure public access to research data within 12 months of project completion, including negative results. The culture around scientific publishing discourages scientists from presenting what didn’t work, but negative results are just as important as positive results for informing downstream models and preventing duplicative effort.
Inconsistent Treatment of Cloud and On-Premises Research Infrastructure. The Administration has sought to reduce facilities and administration (F&A) rates in federal grants so that more taxpayer dollars directly support research. However, some federal grant recipients have interpreted F&A guidelines to suggest that only cloud-based infrastructure—not hardware—is subject to F&A fees. This creates a perverse incentive for researchers to purchase their own hardware instead of leveraging cloud-based services that could provide access to AI and other advanced capabilities.
Remedy: As part of OMB’s implementation of the Executive Order on Improving Oversight of Federal Grantmaking, the Administration should revise the Uniform Guidance to clarify that hardware and cloud-based systems should be treated equivalently in calculating F&A rates, with neither subject to the fees.
Grants and Contracts Undervaluing Software, Data, and Infrastructure. Current grant and contract structures often undervalue software, data, and infrastructure as research outputs, hindering the creation of shared tools that could accelerate science across institutions. Research that produces public goods—open datasets, validated protocols, shared computational tools—is often deprioritized relative to traditional publications, even when such outputs would have broader impact.
Remedy: Federal agencies should update grant evaluation criteria to recognize and reward the development of community infrastructure, including datasets, standards, benchmarks, and software, as primary research outcomes.
(x) How can Federal programs better identify and develop scientific talent across the country, particularly leveraging digital tools and distributed research models to engage researchers outside traditional academic centers?
Identifying and cultivating scientific talent beyond traditional academic centers requires deliberate investments in shared infrastructure, distributed research models, and flexible pathways for participation. The federal government should pursue the following strategies:
Democratize Access to Shared Computing, Data, and Training Resources. Federal programs should continue to expand access to shared computing and data resources, particularly for researchers at community colleges, primarily undergraduate institutions, and institutions in underserved regions. The National AI Research Resource (NAIRR) pilot has demonstrated the viability of this approach by providing widely accessible cyberinfrastructure to researchers with limited institutional resources. The Administration should continue to champion and expand the NAIRR, leveraging existing partnerships with governmental and non-governmental organizations to ensure researchers outside major research universities can access the computational and data resources necessary to participate meaningfully in cutting-edge scientific work.
Leverage Existing Workforce Development Infrastructure. Universities play a vital role in preparing America’s scientific and technical workforce at scale. Leading research universities enroll tens of thousands of students annually, including substantial graduate and professional student populations pursuing advanced STEM degrees. University job training capacity extends far beyond traditional degree programs through technical partnerships with community colleges, manufacturing and biotechnology certification programs, and specialized agricultural training.
This comprehensive workforce pipeline, ranging from skilled technicians to doctoral researchers, is indispensable for maintaining American competitiveness. Federal investment in university-industry apprenticeships, credentialing programs, and targeted scholarship initiatives can significantly strengthen this infrastructure. Rather than creating entirely new training systems, federal policy should identify and scale programs that have demonstrated success in developing and placing technical talent across diverse geographic regions and industry sectors.
Expand Distributed Research Models and Virtual Collaboration. Federal agencies should fund and scale distributed research models that enable researchers to participate regardless of geographic location. This includes supporting remote fellowships, virtual labs, and funding for work conducted outside traditional campus settings. Digital tools like federated learning platforms can enable remote collaboration among researchers nationwide, allowing talent in rural areas or at smaller institutions to contribute to major research initiatives without relocating. Cloud-based learning environments, developed in partnership with technology companies, can further extend these opportunities to underserved regions.
Establish Virtual Mentorship Networks. Federal programs should create and support pools of virtual mentors drawn from academic institutions, industry, and national laboratories. These mentors should be incentivized through financial support to serve researchers across a broad spectrum of institutions and career stages. For researchers at under-resourced institutions, mentorship, networking, and administrative support can be as valuable as grant funding itself in enabling meaningful participation in the scientific enterprise.
Support Modular and Accessible Training Programs. Federal agencies should support the development of high-quality, vetted AI and scientific training programs that serve diverse audiences and training needs. This includes traditional degree pathways as well as microcredentials, online certifications, and continuing education for workforce upskilling. These programs should be designed for accessibility and modularity, enabling researchers and technical workers to build relevant skills regardless of their institutional affiliation or prior training.
Expand PI Eligibility and Recognize Non-Traditional Career Paths. Federal grant programs should allow more flexible principal investigator eligibility criteria, recognizing that scientific talent increasingly resides in nonprofits, community labs, and hybrid industry-academic careers. Digital platforms and open calls can help federal agencies reach researchers outside major institutions who might otherwise never learn of funding opportunities or feel eligible to apply.
Fund Distributed Research Experiences. Federal agencies should fund AI and scientific research experiences for undergraduates that are distributed in nature, leveraging virtual collaboration technology to enable participation from students across all types of institutions and regional contexts. These experiences can serve as a pipeline for identifying promising students early and connecting them to further opportunities in the research ecosystem.
(xi) How can the Federal government foster closer collaboration among scientists, engineers, and skilled technical workers, and better integrate training pathways, recognizing that breakthrough research often requires deep collaboration between theoretical and applied expertise?
Breakthrough research increasingly depends on deep collaboration across disciplines and skill levels. The federal government can take several concrete steps to foster closer collaboration and better integrate training pathways.
Incentivize Integrated Project Teams. Federal research agencies should treat cross-role collaboration as a criterion in grant evaluation. Funding tracks should be developed that require or strongly incentivize AI team science initiatives with co-PIs from both AI and domain disciplines, with shared budget control. These integrated project teams should embed experimentalists, computational scientists, engineers, and technicians working together on shared problems.
Support Industry-Academia Exchange. The federal government should expand mechanisms for personnel exchange between sectors. This includes dedicated funding tracks to support industry-academia exchange fellowships, as well as “researcher in residence” programs (academic embeds in industry settings) and “entrepreneur in residence” programs (industry experts embedded in academia) to establish bridges between private and public research.
Expand Technical Workforce Pipelines. Federal programs should support training pathways for skilled technical workers, including apprenticeships and technician-to-engineer advancement programs. This should include co-op programs linking community colleges and vocational institutions to federally funded research projects, ensuring that technical workers are integrated into the research enterprise rather than siloed from it.
Existing programs like NSF’s ExLENT should be expanded to incorporate AI-relevant skills and team science competencies.20
Invest in Collaborative Infrastructure. Physical and virtual shared spaces where theoretical and applied work occur side-by-side can foster organic collaboration. The federal government should support testbeds and shared facilities that bring together domain scientists, AI experts, engineers, and technicians to co-design solutions aligned with real-world needs.
Leverage Public-Private Partnerships for Curricula Development. The transformation of scientific work by AI will require a revolution in how individuals are trained for roles in academia and industry. The federal government should support public-private partnerships for the development of curricula, training programs, and mentorship structures that prepare researchers for collaborative, AI-enabled science.
(xii) What policy mechanisms would ensure that the benefits of federally-funded research—including access to resulting technologies, economic opportunities, and improved quality of life—reach all Americans?
Open Access to Research Outputs. The federal government should fully implement and enforce mandates requiring that publications and datasets produced from federally-funded research be freely and publicly available without embargo periods. Clearer enforcement of data sharing obligations for federal grantees, with penalties for noncompliance, is necessary across the board. OMB and OSTP should issue agency-wide guidance and create enforcement mechanisms that ensure public access to research data within 12 months of project completion, including negative results.
Federal research agencies should increasingly offer access to underlying data via application programming interfaces (APIs), in addition to making raw data available. This could help enable the creation of AI-based tools to help researchers interrogate federally-funded research data and enable additional scientific discovery.
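As a purely illustrative sketch of what this kind of programmatic access could look like, the snippet below queries a hypothetical agency dataset API; the endpoint, query parameters, and response fields are placeholders and do not correspond to any existing federal service.

```python
# Illustrative only: the endpoint and field names below are hypothetical
# placeholders, not an existing federal API.
import json
import urllib.parse
import urllib.request

BASE_URL = "https://data.example-agency.gov/api/v1/datasets"  # hypothetical endpoint

def search_datasets(keyword: str, limit: int = 10) -> list[dict]:
    """Query a hypothetical agency data API for datasets matching a keyword."""
    query = urllib.parse.urlencode({"q": keyword, "limit": limit})
    with urllib.request.urlopen(f"{BASE_URL}?{query}") as response:
        payload = json.load(response)
    # Assumes the API returns JSON of the form {"results": [{"id": ..., "title": ...}]}
    return payload.get("results", [])

if __name__ == "__main__":
    for record in search_datasets("protein folding"):
        print(record.get("id"), "-", record.get("title"))
```

Machine-readable access of this sort is what allows AI-based tools to be layered on top of federally funded research data, rather than requiring each researcher to manually download and reformat raw files.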
Prioritize Public Goods in Funding Decisions. Federal agencies should prioritize funding for research that produces public goods: open tools, datasets, standards, and shared infrastructure. This represents a shift from evaluating research solely on novelty or commercial potential toward recognizing the value of foundational resources that benefit the broader research ecosystem. Grant programs should tie funding to access plans that address affordability, open licensing for public health tools, and deployment pathways for under-resourced communities.
Geographic and Institutional Equity. Realizing the benefits of federally-funded research depends on the ability of researchers across the country to access necessary resources, especially researchers at under-resourced institutions and those historically excluded from certain fields. Current structures often limit participation to the largest corporations and research institutions. Smaller businesses, regional universities, and communities outside major metropolitan and coastal areas must be actively supported and encouraged to contribute.
To address this, the federal government should:
Support shared computing, data, and training resources to democratize access, particularly for community colleges and primarily undergraduate institutions
Deploy tiered access models where publicly funded datasets are paired with computing credits, integrating computing access into dataset hubs to streamline workflows for smaller institutions lacking in-house capacity
Promote recognition and funding for research that addresses regional needs and regional workforce development
Leverage existing federal research assets like national laboratories and federally-funded research centers as anchors for regional innovation ecosystems
(xiii) How can the Federal government strengthen research security to protect sensitive technologies and dual-use research while minimizing compliance burdens on researchers?
Adopt a risk-based, tiered approach to research security. Federal research security requirements should be proportional to actual risk. High-consequence or dual-use research warrants strong controls, but applying the same requirements to low-risk, fundamental research creates unnecessary burden without improving security outcomes. A tiered framework would allow agencies to focus resources where they matter most while preserving openness in the broader scientific enterprise.
Clearly distinguish open scientific infrastructure from restricted research. Shared assets such as open datasets, benchmarks, standards, and evaluation platforms are critical to accelerating science and generally pose low security risk. Explicitly carving out these activities from heightened security requirements would prevent inadvertent suppression of the very infrastructure that enables reproducibility, transparency, and broad participation.
Harmonize research security policies across agencies. Inconsistent disclosure requirements, definitions of foreign affiliation, and training expectations across agencies create confusion and duplicative compliance burdens. Greater alignment would improve compliance quality while reducing administrative overhead, particularly for researchers supported by multiple agencies.
Shift emphasis from paperwork to practical risk mitigation. Compliance regimes that rely heavily on reporting and certifications often consume researcher time without meaningfully reducing risk. Agencies should prioritize practical controls—such as secure research environments, access management, and audit logging—that directly mitigate security concerns while allowing legitimate collaboration to proceed.
Provide centralized support and expertise. Many institutions, nonprofits, and small research organizations lack in-house security expertise. Federally supported templates, training resources, and shared advisory services would improve consistency, reduce institutional burden, and prevent over-restriction driven by uncertainty. Centralized support of this kind also offers an opportunity to consolidate security resources across agencies.
Improve timeliness and predictability of security reviews. Slow or open-ended security determinations can delay projects indefinitely, discouraging collaboration and investment. Agencies should establish clear timelines for reviews, provide transparent guidance on escalation pathways, and ensure that security processes do not become barriers to research.
Treat openness as a strategic asset, not a vulnerability by default. The U.S. scientific enterprise derives strength from openness, collaboration, and talent attraction. Research security policies should be designed to protect genuinely sensitive work while preserving these advantages, recognizing that excessive restrictions on low-risk research can ultimately undermine competitiveness and innovation.
Citations
https://govspend.com/blog/federal-procurements-new-frontier-the-rise-of-otas/
https://oig.nasa.gov/wp-content/uploads/2024/02/IG-14-001.pdf
https://www.nist.gov/system/files/documents/2017/05/09/tassey_eint_paper.pdf
https://nap.nationalacademies.org/catalog/27519
https://www.nationalacademies.org/read/26376/chapter/1#viii
https://goodscience.substack.com/p/comments-on-nih-reform-for-sen-cassidy
https://www.nsbaadvocate.org/post/news-sbir-sttr-programs-still-lapsed-as-ndaa-week-lands-on-capitol-big-questions-for-small-busine
https://journals.plos.org/plosone/article?id=10.1371%2Fjournal.pone.0065263
https://academic.oup.com/rev/article-abstract/25/4/396/2525343
https://pmc.ncbi.nlm.nih.gov/articles/PMC2887040/
https://www.nationalacademies.org/read/29231/chapter/1
https://www.science.org/doi/10.1126/science.1157679
https://goodscience.substack.com/p/why-peer-review-should-be-more-like
https://goodscience.substack.com/p/why-peer-review-should-be-more-like
https://www.nsf.gov/funding/opportunities/sosdci-science-science-discovery-communication-impact
https://goodscience.substack.com/p/the-paradox-of-progress-trying-to
https://www.nsf.gov/news/nsf-announces-new-initiative-launch-scale-new-generation
https://fas.org/publication/collaborative-datasets-life-sciences/
https://doi.org/10.6028/NIST.SP.1320
https://www.nsf.gov/funding/opportunities/exlent-experiential-learning-emerging-novel-technologies
