After seeing the movie Oppenheimer, a friend glumly commented, “I certainly don’t like them [nuclear weapons], but what can we do? We can’t put that genie back in its bottle.”
Those of us eager to get rid of nukes hear this a lot and at first glance it seems true; common sense suggests that we’re stuck with J. Robert Oppenheimer’s genie because after all, it can’t be disinvented. But this “common sense” is uncommonly wrong. Technologies have appeared throughout human history, and just as the great majority of plant and animal species have eventually gone extinct, ditto for the great majority of technological genies. Only rarely have they been actively erased. Nearly always they’ve simply been abandoned once people recognized they were inefficient, unsafe, outmoded, or sometimes just plain silly.
Don’t be bamboozled, therefore, by the oft-repeated claim by defense intellectuals that we can’t put the nuclear genie back in its bottle. We don’t have to. Plenty of lousy technologies have simply been forsaken. It’s their usual fate. (Dear readers, please don’t misunderstand. Our argument is NOT that nuclear weapons shouldn’t be actively restricted and eventually abolished. They should. Indeed, doing so is a major goal for basic planetary hygiene. Our point is that we, as a society and as antinuclear activists, shouldn’t be buffaloed by the widespread, influential claim that we are stuck with nukes simply because “that genie is out of the bottle.” Our argument isn’t, in itself, a profound case against nuclear weapons; we have made that point in other contexts, and we intend to keep doing so. Rather, we write to dispute one of the troublesome arguments that, for some people, has inhibited discussion about the necessity of nuclear abolition.)
Now, back to our thesis: that we’re not necessarily stuck with bad genies simply because they have somehow gotten out of their bottles. There are many instructive examples. The earliest high-wheel bicycles, called penny-farthings in England because their huge front wheel and tiny rear one resembled a penny alongside a farthing, were very popular in the 1870s and 1880s. They were not only difficult to ride, but dangerous to fall off.
Between 1897 and 1927, the Stanley Motor Carriage Company sold more than ten thousand Stanley Steamers, automobiles powered by steam engines. Both technologies are now comical curiosities, reserved for museums. Perhaps transportation intellectuals warned at the time that you couldn’t put the Stanley Steamer or penny-farthing genies back in their bottles.
Technological determinism—the idea that some objective technological reality decides what technologies exist—seems persuasive. After all, we can’t disinvent anything, nuclear weapons no less than penny-farthings and Stanley Steamers. There are no disinvention laboratories that undo things that shouldn’t have been done in the first place. To say that nuclear weapons will always be with us because they can’t be disinvented is like saying you will always be alive because you can’t be disborn.
Pessimists clinging to the myth of disinvention also argue that nuclear weapons can never be done away with because the knowledge of how to build them will always exist. Inventing something is conceived as a one-way process in which the crucial step is the moment of invention. Once that line has been crossed, there is no going back.
Again, this is superficially plausible. After all, it’s almost always the case that once knowledge is created or ideas are promulgated, they rarely go away. But there is a crucial difference between knowledge and ideas on the one hand, and technology on the other. Human beings don’t keep technology around (except sometimes in museums) the way they keep knowledge around in libraries, textbooks, and cultural traditions.
Bad ideas may persist in libraries, but not in the real world. The physicist Edward Teller, “father of the hydrogen bomb,” had some bad ideas. He urged, for instance, that H-bombs be used to excavate seaports and to melt Arctic ice to free up the Northwest Passage, while other physicists, including Freeman Dyson, spent years on Project Orion, hoping to design a rocket powered by a series of nuclear explosions. Crappy ideas don’t have to be forgotten in order to be abandoned.
Useless, dangerous, or outmoded technology needn’t be forced out of existence. Once a thing is no longer useful, it unceremoniously and deservedly gets ignored.
To understand how nuclear weapons might fit this mold, and be eliminated, look at technologies more generally, and how they go away. Venture capitalists, for example, are aware that new things don’t become permanent the moment they’re invented, nor do they disappear because they’ve been disinvented. Technologies have a life cycle whose stages aren’t simply birth and death: invention, then (sometimes) adoption, followed by either modification and continuation, or abandonment.
A new device can be utterly brilliant, but if it isn’t widely used, it won’t persist; certainly, it won’t live on forever just because it has been invented. Technologies go away when enough people decide to give up on them. This applies to weapons, too. Stone axes didn’t disappear because people couldn’t make them anymore, or because our ancestors ran out of stone. Iron replaced bronze, steel replaced iron. Spears, blowguns, bows and arrows, matchlock rifles, blunderbusses, the Gatling gun: each went extinct because it was simply abandoned, and for good reason.
Consider the hand mortar. Developed in the 1600s, these guns (sort of like a sawed-off, wide-barreled shotgun) were supposed to fire an exploding grenade at an adversary. At the time, however, triggers that could ignite on impact had not yet been developed, so the hand mortar relied on a somewhat complicated process: You primed the gun, set it down, grabbed the grenade (carefully!), lit its fuse, stuffed it into the gun’s muzzle, shoved it all the way down the barrel, picked up the gun, aimed, and fired.
In theory, hand mortars ought to have been effective weapons. But there were lots of things that could go wrong, and did. The fuse could touch the grenade and detonate it in the barrel. Or the fuse could get doubled on itself as it was stuffed down the barrel, shortening the burn time and again detonating the grenade in the barrel. The gun could misfire, leaving the grenade in the barrel, where it would eventually detonate. (None of these events were healthy for the soldier firing it.) The shock of shooting it could separate the fuse from the grenade, making it no more deadly than a thrown rock. If you misjudged the amount of powder needed to fire the grenade out of the gun, it could either deposit the grenade at your feet or just a few yards away among your own troops, or send it sailing far over the heads of your adversaries.
As a practical matter, too many things could go dangerously wrong with hand mortars; killing a knot of enemy soldiers, if everything went right, simply wasn’t worth the many risks involved. Even though hand mortars had been invented, and even though any madman who wanted to could have armed his forces with them, they had a negligible impact on war fighting. They were never outlawed or disinvented. Being a technology that was both dangerous and not very useful, they were simply abandoned.
And nuclear weapons? They are certainly dangerous, given that deterrence cannot persist indefinitely without someday failing. Bertrand Russell noted that one can imagine watching a tightrope walker balance aloft for five minutes, or even 15, but for a whole year? Or a hundred? At the same time, nuclear weapons have never been very useful, if indeed they have been useful at all, except to benefit those relatively few individuals, civilian and military, whose careers have profited from designing, developing, and deploying them.
So why isn’t it possible to imagine that they will be abandoned—just as other technologies that are dangerous and essentially useless have been? They could readily go extinct even though the memory of how to make them persists.
Yes, nuclear weapons cannot be disinvented. Oppenheimer and his colleagues bequeathed us something that was remarkable, not very useful, and very dangerous. The way to eliminate the danger is to understand that they were never a very good technology to begin with. And to recognize that insofar as they are bad genies they needn’t be stuffed back into their bottles. They can be left to fall of their own weight, or, alternatively, to suffer the fate suggested by Brent Scowcroft—no peacenik—when, in retirement, he was asked what should best be done with them: “Let ‘em rot.”
The word “infinity” is frequently misused and abused. It is too often deployed simply to mean something vast, indefinite, or large beyond conception. But the idea of the infinite is another thing entirely. It is, after all, part of the lore of science that the 19th-century mathematician Georg Cantor wound up in an insane asylum after a life spent trying to measure the size of different infinities.
And yet, with regard to what was really at stake for Dr. J. Robert Oppenheimer and his associates on July 16, 1945, as they aimed to bring about the world’s first nuclear detonation but also harbored some fear that they might bring about the instantaneous end of the world, no lesser word will do.
The blockbuster Oppenheimer film won the 2024 Academy Award for Best Picture on Sunday evening March 10th. In perhaps its most chilling scene, Major General Leslie R. Groves, played by Matt Damon, implores Dr. Oppenheimer, played by Cillian Murphy, to reassure him that there was no possibility that “when we push that button, we destroy the world.” “The chances are near zero,” Oppenheimer replies, rather blithely. Groves’s eyes widen. A look of incredulity appears on his face. “Near zero?” “What do you expect from theory alone?” asks Oppenheimer. “Zero,” says the general in exasperation, “would be nice.”
It Didn't Happen Then, But It May Happen Someday
The real story behind this fictional exchange is, yes, infinitely more terrifying. And it bodes very ill indeed for the future of the human race and life on Earth. Because in the absence of a supranational global authority, in a world caught up instead in what we might call a “forever arms race,” this scenario will surely happen again. Albert Einstein made this point only a month after Nagasaki and Hiroshima. “As long as nations demand unrestricted sovereignty,” he said, “we shall undoubtedly be faced with still bigger wars, fought with bigger and technologically more advanced weapons.” Laser weapons, space weapons, cyber weapons, nano weapons, bio weapons. Fast forward just a few decades, and one cannot even envisage the new adjectives in front of the ancient noun. Perhaps most ominously, consider the arms race already underway to develop artificially intelligent, semi-autonomous robot weapons. While artificial general intelligence is unlikely to destroy the world in a quick flash, the existential threat it poses to humanity (intentionally weaponized or otherwise) is today on everyone’s mind.
And one day, the dice will roll badly. The gamble will go wrong. And then the game will be over forever.
Edward Teller Identified the Scenario and No One Could Convince Everyone to Stop Worrying and Love the Bomb
This episode, in which rational human beings undertook an action even though they believed there was a non-zero possibility that it would bring an abrupt end to all life on Earth, began at a meeting of scientists at UC Berkeley on July 7, 1942. Edward Teller was the first to suggest it. He informed his colleagues that the heat produced by the fission reaction they were planning to generate might, repeat might, ignite a fusion reaction in the hydrogen and deuterium of our single world ocean and in the nitrogen of Earth’s atmosphere. In an instant, temperatures hotter than the sun would extinguish every living thing on Earth. And that would be that.
For the next three years, the leading scientists of the Manhattan Project argued stridently about whether or not this quintessentially apocalyptic event might take place. Everyone seemed to agree that it was extremely unlikely. But no one could get everyone to agree that it was impossible.
The official historian of the Manhattan Project, David Hawkins, said in 1982 that he conducted more interviews with the Manhattan scientists on this topic than on anything else. He said: “Younger researchers kept rediscovering the possibility, from start to finish of the project.” And Hawkins concluded from these conversations that despite an endless number of theoretical calculations, they never managed to confirm with mathematical certainty that setting the world ablaze with a single atomic detonation would definitely not occur.
Hans Bethe insisted that the ignition of the oceans and the atmosphere was “impossible.” But Enrico Fermi was never fully convinced. According to Peter Goodchild, who produced a superb TV miniseries about Oppenheimer in 1980 starring Sam Waterston, Fermi – in the true Socratic fashion of recognizing what he did not know – “worried whether there were undiscovered phenomena that, under the novel conditions of extreme heat, might lead to unexpected disaster.” And on the long drive from Los Alamos to Alamogordo for the Trinity test, Fermi (presumably facetiously) told a companion: “It would be a miracle if the atmosphere were ignited. I reckon the chance of a miracle to be about ten percent.”
The scientists in the film are shown placing bets on whether the Trinity test would unleash that fusion reaction upon the entire globe. (How those who put their money down on the affirmative planned to collect, if they had “won,” was not made entirely clear.) James B. Conant, who had taken leave from the presidency of Harvard University to head the National Defense Research Committee – making him the de facto boss of both Oppenheimer and Groves – recounted his own Trinity experience years later. Conant said that at the moment of detonation, he was most surprised by the duration of the light. It wasn’t a quick bright burst like a camera flash, but persisted for seconds. And what did that lead him to fear? “My instantaneous reaction was that something had gone wrong, and that the thermal nuclear transformation of the atmosphere, once discussed as a possibility and jokingly referred to a few minutes earlier, had actually occurred.”
Daniel Ellsberg, who wrote elaborately about this Manhattan Project dilemma in his masterful 2017 book The Doomsday Machine: Confessions of a Nuclear War Planner, drew the same conclusion: “The Manhattan Project did continue, at full blast (so to speak), but not because further calculations and partial tests proved beyond doubt that there was no possibility of … atmospheric ignition. … No one, including Bethe, was able to convince most others that the ultimate catastrophe was ‘not possible’ … Most of the senior theorists did believe the chance was very small, but not zero.”
They Pushed the Button
But the Manhattan scientists, nonetheless, went ahead. They set off history’s first atomic detonation. They took a chance, however remote, on ending everything. Because of the pressure to end the war, because of the demands of military competition with adversaries present (Japan) and prospective (the Soviet Union), and surely to some extent because of concern for their own positions (“We gave you three years and $2.2 billion and now you want to call the whole thing off?”), they took it upon themselves to gamble the fate of the world.
And perhaps the most disturbing part of this story is who made the decision to take the risk. Because it wasn’t “the American government.” It wasn’t the elected officials of our representative democracy. It wasn’t President Harry S. Truman. Even though the scientists, for three long years, discussed among themselves “the odds” that they might bring about the sudden and fiery end of the world, there is no record, none, that they ever brought their dilemma to the attention of any political authorities. Or even General Groves! In the film, Oppenheimer talks to Einstein about it. But no one ever talked to any democratically-elected political leaders about it.
There are some who dispute whether the Manhattan scientists really did make this choice. “This thing has been blown out of proportion over the years. They knew what they were doing. They were not feeling around in the dark,” says Richard Rhodes, author of the Pulitzer Prize-winning book The Making of the Atomic Bomb. “What a physicist means by ‘near zero’ would be zero to an engineer,” says Aditi Verma, an assistant professor of nuclear engineering and radiological sciences at the University of Michigan.
But the point for us to contemplate for the human future is that human beings can make this choice. Consider, for example, how much more pressure the Manhattan scientists would have felt if the atom bomb had been ready to go sooner. Germany was already vanquished by July 1945, and Japan very nearly so. But when the project began three years earlier, there seemed to the protagonists a quite real possibility that Germany might invent an atom bomb first. What if there was, say, a one in a million chance that pressing the button would incinerate all of creation, but a vastly greater likelihood that failing to press the button would lead to Hitler incinerating American cities?
And Yes, The Stakes Are Truly Infinite
During my time at the Pardee RAND Graduate School and the RAND Corporation – where many of the strategic nuclear doctrines of the Cold War originally were forged – I was taught that the best way to conceptualize “risk” was the probability of a hypothetical event multiplied by the magnitude of its consequences. (And yes, no one could resist calling those of us training to be nuclear use theorists “the NUTs.”)
But even if it wasn’t anywhere near that 10% figure that Fermi tossed out there (doubtless in jest), what is, oh, 0.0001% times infinity? Because the Manhattan scientists took it upon themselves to “risk” ending the existence not only of every living thing on Planet Earth on July 16, 1945, but also of all of us born later, destined never to exist if their wager had gone wrong. An infinity of future lives, human and otherwise. The greatest imaginable crime against humanity, and against the whole circle of life on Earth.
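Put in the RAND terms above, the arithmetic is brutally simple. A minimal way to write it out, using p for the probability of the event and M for the magnitude of its consequences (the 0.0001% figure is purely illustrative, echoing the question just posed):

\[
\text{Risk} = p \times M, \qquad p = 0.0001\% = 10^{-6}, \quad M \to \infty
\;\Longrightarrow\;
\text{Risk} = \lim_{M \to \infty} 10^{-6}\, M = \infty .
\]

However small p may be, any nonzero probability multiplied by an unbounded consequence yields an unbounded risk.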
And it may also be the case, perhaps not likely but assuredly not impossible, that we Earthlings are alone, in all of space and time, past, present, and future. After all, it was that same Enrico Fermi, at a Manhattan Project lunch table, who reportedly posed his famous “paradox.” If the universe is teeming with extraterrestrial intelligence, he reasoned, by now they would have had plenty of time to spread almost everywhere (with self-replicating autonomous robots if not biological beings). So where are they?
Paleobiologists know very little about the origins of life and, most especially, about the frequency or rarity of whatever happened on our lonely blue planet 3.5 billion years ago. The emergence of life, or its elaboration into sentient life, may be something that takes place quite frequently on planets where the conditions are right. Or it may instead be something that happens only once in a trillion trillion trillion tries. We Homo sapiens may, repeat may, be the only beings smart enough, anywhere and ever, to create a social and cultural and technological civilization, to contemplate the nature of the universe, and perhaps one day to fill it with our descendants.
And yet a tiny handful of those human beings took it upon themselves to take the chance, however infinitesimal they might have assessed it to be, of bringing this (possibly) unique blossom of intelligence to an abrupt end. Perhaps then we might say that they risked committing not only an infinite crime against an infinite future, but one against the entirety of the universe as well.
The Logic of Anarchy Prevails
“I believe that our Great Maker is preparing the world … to become one nation,” said U.S. President Ulysses S. Grant during his second inaugural address in 1873, “when armies and navies will be no longer required.” But until the dawn of that day, separate sovereign nations will be forced to compete with each other endlessly, to keep up with one another in the latest weapons technology. As soon as any new kind of weapon is invented, the great military powers feel irresistibly compelled to develop it and to deploy it. Because if they don’t, their adversaries will.
This dynamic is called “the security dilemma” by international relations theorists. “Because any state may at any time use force, all states must constantly be ready either to counter force with force or to pay the cost of weakness,” said one of the pioneers of that academic discipline, Professor Kenneth Waltz of Berkeley and Columbia, in his 1959 classic Man, the State, and War. Carl von Clausewitz advanced the same argument in his On War of 1832 – sounding a bit like Isaac Newton. “Each of the adversaries forces the hand of the other, and a reciprocal action results which in theory can have no limit … So long as I have not overthrown my adversary I must fear that he may overthrow me. I am no longer my own master. He forces my hand as I force his.” The Native American political humorist Will Rogers put it more bluntly in 1929. “You can’t say humanity don’t advance. In every war, they kill you in a new way.”
Forever Would Be Nice
There are many other reasons to lament this eternal competition in developing new instruments of annihilation, of course, besides the prospect that the testing of new weapons will go maximally awry. There is the permanent necessity for groups of humans to devote vast quantities of both toil and treasure to “defending themselves” against other groups of humans – when those resources could do so much to improve the lives of the humans within those groups instead. And if history is any guide, there’s the virtual certainty, every few decades or so, that perpetual preparation for war will degenerate into actual war – with ever more cataclysmic arsenals of doom.
So what are the chances that Trinity will happen again and again, that groups of inventors will gamble the fate of the world again and again, in a world with endless new types of armaments and eternal arms races? What are the chances that we can roll these dice, time and time again, before the atmosphere and the oceans do instantaneously ignite, or before who knows what other apocalyptic scenario -- in the spirit of Enrico Fermi utterly unknown to us today -- will bring about the end of the world tomorrow? What are the odds that our fair species can survive indefinitely if we remain perpetually divided into tribes with clubs, and if every generation or so scientists face the same excruciating choice as J. Robert Oppenheimer faced on July 16, 1945, solely because they cannot escape the inexorable obligation to build bigger and better clubs?
The chances are near zero.
Where are the images of those who died by the bombs that haunt humanity still?
Since the first reviews of the Oppenheimer film appeared, a question has been floating in the ether: Why don’t we see substantial images of the destruction and victims of the Hiroshima and Nagasaki A-Bombs?
Days before the Academy Awards, I had the opportunity to learn the answers to that question. I had just returned from Japan, where I marched with Hiroshima and Nagasaki A-bomb survivors (Hibakusha) and participated in 70th anniversary commemorations of the victims of the Bikini Bravo H-bomb test of March 1, 1954. That bomb was 1,000 times more powerful than the atomic bomb dropped on the people of Hiroshima. It claimed and poisoned the lives of nearly all the inhabitants of Rongelap Atoll, 125 miles from Bikini. It also claimed the lives of Japanese fishermen, irradiated more than 1,000 Japanese fishing vessels, and contaminated much of Japan’s food supply. The resulting 1954-55 petition campaign urging the abolition of nuclear weapons garnered 31.5 million signatures, equivalent to 65% of Japanese voters, and launched the world’s first and likely most influential social movement for a nuclear weapons-free world.
I was carrying these people and this history deep in my bones when Harvard Professor Elaine Scarry, a friend and member of my organization’s board, encouraged me to come to a panel she had organized with Kai Bird, co-author of American Prometheus: The Triumph and Tragedy of J. Robert Oppenheimer, the biography on which the Oppenheimer film is based.
I have known Kai and his late co-author, Martin Sherwin, over the years. Kai is a generous and modest man, and an excellent scholar and biographer. We chatted briefly before the panel, where I learned, and was happy for him, that he would be at the Academy Awards ceremony that night.
In his presentation, Kai explained that the film drew heavily from his book, with many of its lines taken directly from his and Marty’s text—something very unusual for Hollywood. Kai was given only a few hours to review the film’s 200-page script before filming began, and he said that he found only one error, which Christopher Nolan, the film’s director, corrected. Kai and the other panelist, Peter Galison of Harvard’s History of Science Department, described Oppenheimer as brilliant (as a physicist and otherwise), complex, and emotionally fragile, a man who might have been better known for his work on black holes—begun in 1935—had World War II and the Manhattan Project not intervened.
Come the question-and-answer time, after a brief reference to what I had learned and done in Japan, I asked Kai whether Nolan, during production of the film, had any serious conversations about exposing his audiences to what Oppenheimer’s bomb wrought. Kai’s answer was thoughtful and illuminated some of the most disturbing images from the film’s conclusion.
Kai’s direct response was “No”; such discussions did not take place. Kai had earlier explained that the dramatic arc of both the film and the book was the Atomic Energy Commission hearings in which those in power sought to destroy Oppenheimer’s standing as the world’s leading scientist and a very influential public intellectual. Edward Teller, Lewis Strauss of the AEC, and powerful forces in the Pentagon reacted with fury to Oppie’s opposition to developing the hydrogen bomb. Kai explained that the film and book are primarily biographies of Oppenheimer, with, as Elaine put it, the film “telling the story from the point of view of what’s going on in Oppenheimer’s mind,” not from other, broader perspectives.
Kai noted several places in the film where Nolan subtly pointed to both the A-bombs’ devastation and Oppenheimer’s moral misgivings. The first of the film’s references comes shortly after the Trinity test; another comes three months after the A-bombings, when Oppenheimer learned that Japan had been on the verge of surrender at the time his “gadgets” were fired. At one point following the Trinity test, we see Oppenheimer mumbling about those “poor little people,” the innocent Japanese civilians who he knew would be killed and devastated by the A-bombs. At the same time, Kai noted, Oppenheimer was meeting with senior military officials to explain how best to detonate the bombs (altitude, etc.).
Instead of showing us the roasted bodies, people with burnt flesh hanging from their arms, eyeballs hanging from their sockets, and people drowned in cisterns, Nolan gave us the image of Oppenheimer watching a newsreel clip of the devastation, with his face showing his horror at what his bomb had wrought. Perhaps most powerfully, we see a disturbing image from Oppenheimer’s imagination as he speaks to an audience in the Los Alamos assembly hall: a girl’s face melting from the A-bomb’s heat. That face, in fact, was that of Nolan’s daughter, as Elaine Scarry later explained. “This was a very ethical decision on Nolan’s part,” she said, “not to reenact the original harm by disfiguring Japanese faces.”
And Nolan gives us Oppenheimer’s sense of guilt when he meets with President Truman and Secretary Byrnes, confronting them with the truth that they all have blood on their hands.
Elaine closed out this part of the panel discussion by pointing to the resistance in U.S. culture to seeing film scenes in which “the viewer is asked to be sympathetic to the person injured.” In Japan, she explained, even young children are shown horrific photos and images of the A-bomb’s human devastations. She reinforced this by explaining that she and I had arranged an exhibit of framed Hiroshima/Nagasaki A-bomb and Hibakusha posters in a Cambridge public library. The morning after we set up the display, we returned to the library to find that it had been totally rearranged without our permission or knowledge. Each of the posters that included photos of the dead and maimed had been removed.
After the panel, my wife and I resolved to watch the film again. Others who have seen the film, watched the Academy Awards, and shared my question might want to do the same. If nothing else, it will deepen our resolve to eliminate the existential nuclear threat to human survival.