NIH is talking a lot about implementation science. I should be excited, but I’m nervous. Here's why.
Given that I have spent my career in HIV and implementation research, you might expect me to be overjoyed by NIH’s recent investment in implementation science, particularly with respect to HIV. Indeed, the attention has been remarkable — highlighted in Science and repeatedly emphasized in communications from NIH leadership.
On one hand, I am actually extremely excited. I am a true believer in implementation science. I drink as much of the Kool-Aid as possible. I work on implementation science journals and applied research. I have spent the last decade trying to convince my colleagues to drink it too — arguing that investing in a science of implementation will help us all get closer to ending the HIV epidemic, here and around the world. I have also long argued that implementation science need not stand alone as a siloed field — though there are certainly questions specific to it — but should also inform translational work across clinical, epidemiologic, and even basic research.
But, at this moment, I am also nervous. NIH’s investment is a great opportunity, but we must walk this road carefully if we are to make the most of it. Big investments almost always have unintended consequences. Might we avoid some of them with forethought?
First, it has to be clear that implementation science is only one piece of the implementation puzzle. In our country (in any country), the social fabric, institutions, organizations, financing (e.g., insurance), and policy are the main drivers of implementation success in health. Implementation science plays a supporting role. Sometimes I use a sports analogy: implementation science is the coach, and the implementing world (e.g., departments of health, hospitals, insurance payers, communities) comprises the players on the field. If we are a good coach, we can study the game, identify patterns, and offer guidance to help the system perform better. But just as the coach does not throw strikes or score goals herself, implementation research cannot itself stand in for public health systems. Today, public health mechanisms are being dismantled, public trust in health and healthcare is at historic lows, and distorted incentives have made basics unaffordable even as insurance coverage shrinks. Implementation research can and should address such problems, but it cannot by itself repair what society has broken. A big investment in implementation research alongside the destruction of the organs of society that actually implement sets us up for failure. Without the rest of the team, we are not going to win. For HIV implementation research specifically, rollbacks in Medicaid, heightened stigma driven by rhetoric, and reductions in preventive services will make achieving impact even harder.
Second, implementation science must avoid enabling a “command-and-control” vision of health. Too often, implementation research is framed as finding ways to make individuals (primary care providers, communities) — frequently the least powerful in a system — do what we want. In primary care, for example, doctors are expected to implement ever-growing lists of screenings and tasks, though their time is already overburdened. Even the term implementation sometimes makes me uneasy. Carl May, one of the leading thinkers in the field, reminds us that implementation is about translating the strategic intentions of one group into the behavior of another. While there is undoubtedly a need for managed and disciplined approaches to implementation — whether by managers, policymakers, or communities — it is important to reflect on what this implies for the relationships involved. One risk, if we don’t, is slipping into a kind of bureaucratic instrumentalism: using administrative force to compel people to act in ways they may not otherwise choose. Top-down processes might yield short-term gains, but they also inevitably risk unintended consequences, sometimes catastrophic ones, or long-term public resistance, as was the case during COVID. Even if it takes longer, the process may be more worthwhile — more sustainable — when it is as collaborative and inclusive as possible.
Third, implementation science needs to be very careful not to take systems for granted but rather to interrogate them. One way to look at implementation science is through the lens of structures and agents. That is to say, people, whether they are physicians or patients (i.e., agents), act within structures (e.g., insurance policies, organizational arrangements). Their behavior can be either enabled or constrained by these structures. Implementation science tries to understand these relationships and, at its best, builds better ways for systems and people to work together. When we see that something needs to change (i.e., an implementation gap), we should be careful not to mistake the easiest cog to change in the system for the change that actually needs to happen. Early “real world” HIV research often focused narrowly on patient adherence while neglecting structural barriers like stigma, transport, service quality, and health system access. The danger is that implementation science ends up asking the victims of weak systems to shoulder the burden of change. If implementation research is to add real value, it must focus on structures, contexts, and incentives, not simply on downstream behavior. An emphasis on implementation science at a time when health systems are under attack might tempt studies to investigate how to make users fit into systems, rather than how to make systems fit end-users. In HIV research, studies of retention should focus not only on, say, “risk perception” among patients, but also on the accessibility, appropriateness, and availability of healthcare itself. If not, we might fail as a field even as our individual studies succeed.
Finally, we must recognize that implementation science itself is not fully baked, and we must remain committed to building the scientific scaffolding of the field. Given the aspiration for big and fast wins, there will be pressure to present the field as if it already has definitive methods and solutions — “fully baked cookies” rather than what I think it really is: cookie dough. Our frameworks, theories, and logic models are still interim. They are unequivocally useful, but they are not the final word. If we overemphasize that what we have today will give us solutions, we may stifle the innovation the field needs. Instead, we should continue investing in methodological development and conceptual clarification, refining our tools, and being transparent about the fact that this science is still evolving. Some of these investments will not pay immediate dividends (some might not pay off at all). But implementation is complicated, and science is never a straight line. One recent NIH call for applications mandated the use of a particular logic model. Logic models are tempting — nice and clean. They can be useful in some cases, but in others they can dumb down science. H.L. Mencken (I believe) once said something like, “For every complex problem there is a solution that is simple, neat, and wrong.” Implementation science is complicated because the world is complicated.
And one more thing: as attention to implementation science grows, we must be careful not to lose sight of the science itself. My colleague Lisa Abuogi at Colorado (a leading HIV implementation researcher focused on adolescents and young people) reminded me that peer review (whether at study section or for journal articles) often gets caught up in rigid (and arbitrary) judgments about what “is” or “is not” implementation science, or who “is” or “is not” an “implementation scientist.” Whether as reviewers, editors, or peers, we should assess whether a study advances scientifically sound aims about implementation. Given that how we do implementation science is still evolving, we must not forget the underlying content, innovation, and significance of the work. I sometimes say we need less implementation science and more science of implementation — that is, we should embrace our preference for science about implementation while holding a broad and generous view of the approaches taken, so long as the research is rigorous. While efforts to standardize the field are valuable, if the pendulum swings too far we risk losing sight of the forest for the trees.
In summary, implementation science holds great promise, and NIH’s interest is a rare moment of national attention. I, for one, am eager to jump in. To make the most of it, we must recognize that the science plays a supporting role in a much larger system, resist the temptation to target the most movable piece of the system (rather than the structures that shape behavior), and remain committed to developing the science itself. If we can do this, implementation science might just deliver on some of its promises.
Would love to hear what you think.

Really appreciate this, especially about making sure all the players (i.e., public health) are supported and funded.
Very well said, Elvin! There is tremendous potential for implementation science to help transform research into better health outcomes for people and better systems for implementers. But we should consider the pitfalls and obstacles we create for ourselves (like insisting there is only one true definition or best framework). In addition, implementation scientists have to pause to see how we can better communicate complex IS concepts and approaches back to other researchers and most importantly, the public. Otherwise, we will waste this opportunity and continue to widen the divide and distrust.