The concept of nuclear damage limitation dates back to the early 1960s. Robert McNamara, Secretary of Defense during the Kennedy and Johnson administrations, described it as “the ability to reduce the weight of the enemy attack by both offensive and defensive measures and to provide protection for our population against the effects of nuclear detonations.” In both internal documents and public statements, he asserted that the primary objective of U.S. strategic forces was to deter a nuclear attack on the United States and its allies. If they failed to achieve that objective, however, and nuclear war did occur, their second goal would be to “limit damage to our population and industrial capacity.”
Then, as now, damage limitation could take many forms, including counterforce attacks on an opponent’s nuclear arsenal, ballistic missile defense, air defenses against enemy bombers, anti-submarine warfare, and civil defense. Air defense was emphasized by U.S. planners during the 1950s and early 1960s, but it fell by the wayside once it became clear that Moscow intended to rely primarily on missiles to deliver its strategic nuclear weapons. The U.S. Navy possessed (and likely still possesses) a potent ability to hunt and destroy enemy ballistic missile submarines, but the bulk of the USSR’s strategic forces resided in its fleet of land-based ICBMs. Not long after taking office, the Kennedy administration launched a campaign to invigorate U.S. civil defense programs. However, the initiative fizzled due to a lack of public support. As a result, most discussions of damage limitation during the Cold War focused on counterforce and missile defense.
Deterring a Soviet nuclear attack required the U.S. to have enough survivable weapons to inflict an “unacceptable” amount of damage on the USSR in a retaliatory strike. It was presumed that Moscow, too, sought such a capability. The possession of an assured destruction capability by each side helped preserve nuclear stability, thereby reducing the likelihood of war. Each superpower was deterred from attacking the other by the knowledge that its opponent could launch a devastating counterattack. Thus, U.S. policymakers viewed maintaining the nation’s nuclear deterrent as paramount.
At the same time, there was a widespread view in Washington that the U.S. should also possess some ability to limit damage to itself in case deterrence failed. This was an understandable desire since relying solely on deterrence left the United States vulnerable to a Soviet attack. No one could guarantee that deterrence would always prevail since it ultimately depended on a potential adversary’s state of mind.
The question of what role damage limitation should play in U.S. nuclear planning and the form it should take was—and still is—the subject of considerable debate within the national security community. Some policymakers prioritized it more than others during the Cold War. However, because the U.S. strategic arsenal has always been capable of striking an opponent’s nuclear forces, there has never been a time in the last seven decades when the United States lacked some damage-limiting capability, even if public officials have not always referred to it as such.
Yet damage limitation turned out to be a complicated concept. One problem stemmed from the recognition that no damage-limitation system could be 100 percent effective. In a strategic nuclear conflict with the USSR, some Soviet weapons would inevitably reach U.S. soil, and the resulting death toll would likely number in the tens of millions.
Such a scenario raised a difficult question for policymakers: How much damage-limiting capability should the U.S. seek? If a given capability were sufficient to hold U.S. fatalities in an all-out war to, say, 80 million, would it make sense to pay the high cost of enhancing it further to reduce the expected death toll to 50 million? For those who viewed nuclear war as a real possibility, and who therefore believed that the U.S. should be able to win one if it occurred, saving 30 million Americans seemed a worthwhile goal no matter the cost.
To those who viewed nuclear war as unthinkable, and who therefore rejected nuclear warfighting as a concept, enhancing U.S. damage-limiting capabilities seemed pointless, since doing so did little to strengthen the credibility of the country’s nuclear deterrent. In a crisis in which critical U.S. interests were at stake, would a U.S. president really feel freer to act to protect those interests knowing that “only” 50 million American lives were at risk rather than 80 million? Would the Soviets actually be more deterred if that were the case?
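One way to see why the two camps talked past each other is to put the question in rough expected-value terms. The sketch below is purely illustrative: the 80 million and 50 million figures come from the hypothetical above, while the probabilities of deterrence failing are assumptions made only for the sake of the example.

```python
# Illustrative back-of-envelope sketch; the probability values are assumptions,
# not figures from the article. Expected fatalities averted by an added
# damage-limiting capability scale with the probability that deterrence fails.

def expected_lives_saved(p_war: float, fatalities_without: int, fatalities_with: int) -> float:
    """Expected fatalities averted = P(war) * (reduction in fatalities if war occurs)."""
    return p_war * (fatalities_without - fatalities_with)

# The hypothetical above: the added capability cuts wartime fatalities from 80 to 50 million.
for p_war in (0.10, 0.01, 0.001):
    saved = expected_lives_saved(p_war, 80_000_000, 50_000_000)
    print(f"P(war) = {p_war:>5}: expected lives saved ~ {saved:,.0f}")
```

The particular numbers matter less than the structure of the disagreement: those who saw nuclear war as a real possibility were, in effect, assigning a much higher probability to deterrence failing than those who saw it as unthinkable, so the same capability could look essential to one camp and pointless to the other.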
The two primary forms of damage limitation available to the United States during the Cold War, counterforce and missile defense, each presented its own set of challenges. Using U.S. strategic offensive forces to limit damage to the American homeland depended on destroying Soviet nuclear weapons before they could be launched. If the Soviets attacked first, U.S. missiles and bombers would be unable to limit that initial damage.
A damage-limiting counterforce strike by the U.S. would, therefore, be vastly more effective if the U.S. struck first. However, launching a first strike meant initiating strategic nuclear war, the very thing that U.S. nuclear forces were ostensibly intended to prevent. Indeed, U.S. declaratory policy in the later years of the Cold War seemed to rule out this option. The Pentagon’s 1983 annual report to Congress stated that U.S. strategy “excludes the possibility that the United States would initiate a war or launch a pre-emptive strike against the forces or territories of other nations.”
If the U.S. was attacked first, it could launch a retaliatory counterforce attack. The conventional wisdom was that if the Soviets did launch a first strike, they would likely do so with only a part of their arsenal, keeping many of their strategic weapons in reserve. If so, the U.S. could hit the residual Soviet nuclear forces in a second strike in an attempt to reduce any further damage that could be inflicted on the United States. This option, however, would hardly be straightforward.
If the Soviet first strike were a counterforce attack, it would leave the U.S. with a diminished ability to retaliate against hardened targets (such as ICBM silos). If it were a countervalue strike against American cities, U.S. strategic forces would remain intact, but the damage to the United States in casualties and economic destruction would be enormous. The U.S. president would then have to decide whether to retaliate against Soviet cities or against the USSR’s remaining strategic arsenal.
The possibility of achieving damage limitation through anti-ballistic missile (ABM) defense also received a great deal of attention during the Cold War, just as it does today. Unlike counterforce, it offered a way to actively defend the U.S. homeland from a Soviet attack after it had been launched. Nevertheless, missile defense had its downsides. For one, many strategic planners had severe doubts as to how well such a system would work. It was generally recognized that even an elaborate missile defense system could only be partially effective against a major Soviet attack. Moreover, the tracking radars needed to guide ABM interceptors to their targets would themselves be vulnerable to a Soviet attack. If the Soviets were able to destroy U.S. radar installations in advance of the main attack on the United States, the ABM system would be crippled.
Additionally, developing and deploying a missile defense system was a costly proposition. A 1965 Pentagon study determined that a system capable of protecting 75 percent of the U.S. population in an all-out nuclear war would cost $35 billion, or more than two-thirds of the defense budget at the time. Even then, American fatalities would number close to 50 million. Opponents of missile defense also pointed out that the USSR would almost certainly respond to a U.S. ABM deployment by expanding its strategic arsenal or by adopting relatively inexpensive countermeasures, such as equipping its existing ICBMs with decoy warheads or multiple independently targetable reentry vehicles (MIRVs).
The most compelling argument against emphasizing damage limitation in nuclear planning was that it made war more likely. As noted, each side possessed enough survivable nuclear weapons to inflict devastating retaliation in response to a first strike. Under normal peacetime conditions, each superpower had the choice of either starting a nuclear war with a “bolt-from-the-blue” surprise attack on its opponent or maintaining the status quo. Even if the attacking country believed that a sudden first strike would leave it stronger than its adversary at the end of the conflict, the opposing state’s assured destruction capability would ensure that the attacker suffered catastrophic damage and emerged worse off than before the war. Inaction would, therefore, be the wiser choice.
That calculus could easily change in a crisis, however. During a period of acute tension in which both sides possessed a significant damage-limiting counterforce capability, each nation would have some incentive to strike preemptively to limit the damage that could be inflicted on it, and the risk that one side would do so would rise with the perceived likelihood of war. If nuclear war seemed inevitable—or even highly likely—the apparent choice for each side would be between launching a preemptive attack that destroyed a large share of its opponent’s strategic forces, thereby limiting (but not eliminating) the adversary’s ability to harm it, and permitting the opponent to act first and do the same thing.
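This preemption logic can be made concrete with a stylized two-player sketch. The payoffs below are invented purely for illustration; they are meant only to show how striking first becomes each side’s best response once it believes the other is about to strike, even though mutual restraint leaves both better off.

```python
# Stylized crisis-stability game; all payoffs are illustrative assumptions, not
# figures from the article. Each side chooses to WAIT or STRIKE first. Higher
# payoffs mean less damage suffered. Mutual waiting (no war) is best for both,
# but absorbing the other side's first strike is the worst outcome of all.

# payoffs[(a_choice, b_choice)] = (payoff to A, payoff to B)
payoffs = {
    ("wait",   "wait"):   ( 0,  0),   # status quo: no war
    ("strike", "wait"):   (-5, -8),   # A preempts: both devastated, A somewhat less
    ("wait",   "strike"): (-8, -5),   # B preempts: mirror image
    ("strike", "strike"): (-7, -7),   # near-simultaneous strikes
}

def best_response_of_a(b_choice: str) -> str:
    """A's best move if it were certain of B's choice."""
    return max(["wait", "strike"], key=lambda a: payoffs[(a, b_choice)][0])

print(best_response_of_a("wait"))    # -> "wait": restraint holds if B is expected to wait
print(best_response_of_a("strike"))  # -> "strike": preemption pays once B is expected to strike
```

In this toy setup, stability depends entirely on each side’s expectation of the other: waiting remains the best response only so long as the opponent is expected to wait, which is why a rising perceived likelihood of war is so corrosive.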
Furthermore, worst-case assumptions could feed on themselves, further undermining crisis stability. The U.S., for instance, would be aware that Soviet leaders might believe they could dramatically reduce Soviet fatalities by striking the United States first. Soviet leaders, in turn, would know that the U.S. was aware of that belief and might be tempted to preempt; the U.S. would know that the Soviets knew this; and so on. Decision making in a nuclear crisis would come to resemble a hall of mirrors, and a war could easily occur under such circumstances even if both sides preferred to avoid one.
In such a crisis, worst-case thinking could lead one side to conclude that a first strike by its opponent was imminent and to launch a preemptive counterforce strike of its own, believing, correctly or not, that it was acting in self-defense against a perceived aggressor. Both sides could thus see themselves as the defending state while casting the other as the aggressor.
The possession of an ABM system by one side and not the other would further contribute to crisis instability. Ballistic missile defenses are fundamentally defensive, but defensive weapons can be made to serve offensive purposes. Then-U.S. President Ronald Reagan, arguably history’s most ardent proponent of missile defense, acknowledged as much during his March 1983 speech unveiling his Strategic Defense Initiative. He noted that “if paired with offensive systems, [missile defenses] can be viewed as fostering an aggressive policy, and no one wants that.”
Had the U.S. deployed a missile defense system that was perceived by both sides as being partially effective, and had the Soviets lacked any comparable system of their own, each side would have been presented with an added incentive to strike first during a period of heightened tension. The Soviets’ incentive would stem from their knowledge that if they launched a major counterforce first strike against the United States, they could likely overwhelm U.S. missile defenses and destroy some portion of the U.S. strategic arsenal, thereby limiting the amount of damage they would experience from an American attack. The U.S. would be incentivized to launch a counterforce first strike by the knowledge that its ABM system could significantly reduce the effectiveness of a diminished Soviet retaliatory attack. Again, each side would be aware of the incentive facing its opponent.
In the end, of course, the nuclear war that everyone feared during the Cold War never took place. Both the U.S. and the Soviet Union maintained arsenals with significant counterforce capabilities, but neither superpower ever developed the ability to carry out a completely disarming first strike against the other. Similarly, both countries conducted extensive research into missile defense technologies, but neither ever deployed more than a limited ABM system.
After the Soviet Union dissolved, the U.S. and Russia implemented dramatic reductions in their nuclear arsenals. The two nations developed a relatively friendly relationship over the next two decades, the threat of nuclear conflict receded, and debates over nuclear deterrence and nuclear warfighting were supplanted by topics that seemed more relevant to the post-Cold War world, such as international peacekeeping, ethnic cleansing, and counterterrorism. Now, almost thirty years later, the United States faces three nuclear-armed potential adversaries in Russia, China, and North Korea, as well as a fourth, Iran, which many fear will develop a nuclear capability at some point in the future.
Editor’s note: This is the first piece in a two-part series examining the role of damage limitation strategy in U.S. nuclear war planning. Read part two here.
About the Author
Richard Purcell
Richard Purcell is an independent national security analyst and freelance writer in Washington, D.C. He holds a master’s degree from the Johns Hopkins School of Advanced International Studies (SAIS) and previously worked as a legislative staffer for Senator Richard Durbin (D-IL). His work has appeared in World Politics Review, The National Interest, Arc Digital, War Is Boring, and other outlets. You can follow him on Twitter at @SecurityDilems.