On the morning of April 11, Abu al-Nour was lounging at home in a small town in Syria’s Idlib Province. It was a pleasant day, and his seven children—ages 2 to 23—were playing outside or studying inside. The house was small, but al-Nour was proud of it. He had built it himself and enjoyed having family and friends over to spend time in the big yard. His wife was cooking lunch in the kitchen.
Al-Nour is a farmer, like many of the town’s residents, but since the Syrian civil war started in 2011, fuel and fertilizer prices had shot up well beyond his means. Al-Nour had been getting by with the odd construction job or harvest work here and there. The area had fallen to rebel forces in 2012, and though his village was too tiny for the rebels to bother with much, he’d noticed fighters from the Free Idlib Army and Jaysh al-Izza groups passing through on occasion.
Being in rebel-held territory meant government air strikes. The bombings began in 2012 and got worse in 2014. Many villagers fled out of fear. Others fell deeper into poverty, their businesses ruined by the relentless conflict. When the first air strike hit al-Nour’s neighborhood, he says, it killed eight people from one family. Al-Nour tried to help with rescue efforts, but instead was overcome with grief, unable to move. Afterward, he couldn’t stop imagining what could happen to his family. Finally, five long years into this reality, he heard about a service called Sentry from a friend. If he signed up, it would send him a Facebook or Telegram message to let him know a government warplane was heading his way.
Around noon on that day in April, al-Nour’s phone lit up with an urgent warning: A Syrian jet had just taken off from Hama air base 50 miles away. It was flying toward his village.
He shouted to his family and grabbed the younger children. The group dashed out to a makeshift bomb shelter that al-Nour called his “cave.” Many residents of the heavily bombed areas in Idlib had dug similar shelters—really, just holes in the ground—and fitted them with something like storm-cellar doors.
Al-Nour managed to get all his children into the cave, but not his wife. He kept calling her name as he heard the awful sound of an approaching jet overhead. His wife reached the door to the shelter just as a bomb hit. Al-Nour remembers the door blowing off the cave, everything shaking, and an almost unbearable pressure in his ears. “It smelled of dust and fire,” he says. “The dust was everywhere.”
Shrapnel had pierced his wife’s back. Some of his children were in shock; others were crying. Through the smoke, he could tell that his house was destroyed. Still, everyone was alive. For that he was grateful. “We saw the death with our own eyes,” al-Nour says over the phone through an interpreter. “Without the Sentry warning, my family and I would probably be dead.” (Al-Nour is a pseudonym; he fears using his real name.)
In the seven years since the start of Syria’s civil war, it’s estimated that at least 500,000 Syrians have been killed. That number includes tens of thousands of civilians killed in air strikes carried out by Syrian president Bashar al-Assad’s regime and its allies. (Meanwhile, US and coalition forces are estimated to have killed as many as 6,200 Syrian civilians in their air campaign against ISIS.) Assad’s forces have been accused by the international community of war crimes for indiscriminate bombings. Six million Syrians have fled the country, creating a refugee crisis in the region and the world. International efforts to find a peaceful resolution continue to fail. The Assad regime has slowly regained territory; about two-thirds of the people in Syria currently live in areas under government control. The rest are in places held by an array of rebel groups as well as Kurdish and Turkish forces. Millions of people still live in unending fear of the sound of fighter jets overhead.
The conflict has left many Syrians feeling defeated. Huge swaths of the country have been laid to waste, and the humanitarian crisis isn’t expected to get better with coming government offensives. And yet even if these larger forces are implacable, a small effort can sometimes make a meaningful difference—like helping a family of nine escape with their lives.
The warning that came over al-Nour’s phone was created by three men: two Americans, one a hacker turned government technologist and the other an entrepreneur, and a Syrian coder. The three knew they couldn’t stop the bombings. But they felt sure they could use technology to give people like al-Nour a better chance of survival. They’re now building what you might call a Shazam for air strikes, using sound to predict when and where the bombs will rain down next, opening a crucial window of time between life and death.
As a kid in rural McHenry County, Illinois, John Jaeger didn’t have much to do until his stepdad built him a homebrew 486 computer. It was the late ’80s—still the early days of PCs—and he mostly played videogames. Eventually he found his way onto a BBS with connections to the demoscene, an early underground subculture obsessed with electronic music and computer graphics. By the time he was 15, Jaeger was in deep with hackers, software crackers, and phone phreakers.
“We would exploit weaknesses in computer networks in order to gain administrative privileges and learn how the networks worked,” Jaeger says. He messed around but adds that he didn’t do anything more “destructive” than hack into Harvard’s system to give himself a Harvard.edu email address.
Jaeger took a job at modem manufacturer US Robotics right out of high school, followed by a gig at General Electric Medical Systems. The promise of “good drugs and startup parties” lured him to Silicon Valley in the late ’90s. The adventure, he says, was “forgettable.” He took computer security and network management jobs before working his way up to IT director. “I basically made all the wrong decisions,” he says. “Instead of becoming a multibillionaire, I went and worked for three companies that don’t exist anymore.”
Jaeger moved to Chicago and got a job in the financial industry. He designed and developed a trading platform and did risk management analysis. He was enjoying the work, but then the financial crisis hit. “I saw 20- and 30-year veterans of Wall Street soiling their trousers, genuinely scared,” he says. “It was really humbling.” That experience, he says, turned him off finance. But it was another three years before he finally left the industry.
Through a friend who had worked on President Barack Obama’s reelection campaign, he got an introduction to someone in the State Department. It was 2012, a year after the start of the Arab Spring, and the US government was recruiting people who could bring corporate experience and technical expertise to Syria. Jaeger wasn’t exactly familiar with the civil war that was building. “I had no idea what was going on,” he says. But he wanted to go overseas, so he relocated to Istanbul and basically became a consultant for the people trying to achieve a semblance of normalcy in areas of Syria that weren’t under Assad’s control.
“You had a whole lot of chiropractors and dentists suddenly respond to the needs of their local communities in a way they had never anticipated,” Jaeger says. “These guys need clean water. These guys need power. These folks need medicine.” Jaeger’s job was to help them figure out how to provide services and maintain some stable governance.
In October 2012, he started working with journalists and developing a program to support Syrian independent media. But two years in, the conflict started wearing on him. Jaeger had grown attached to many of his Syrian contacts and mourned when they were killed. Everyone he knew had lost family. It became clear that the biggest problem he could address was the bombing of civilians.
Options for mitigating the damage from air strikes, Jaeger knew, were few. And most were out of his reach. You could stop them. But even the international community had failed to do that. You could treat people after the air strikes hit. Various groups, like Syria Civil Defense, were doing that work. Or you could warn people ahead of time.
That last option seemed within his technical expertise. So he approached the State Department. But when he couldn’t rally any interest in the idea of an early-warning alert system, he left the agency in May 2015. He was convinced he was onto something. But he needed help.
Dave Levin is a Wharton MBA who had worked for the UN Global Compact under Kofi Annan, had been an entrepreneur in the Philippines, and had consulted for McKinsey. In 2014, Levin founded Refugee Open Ware, an organization that helps people start projects using tech to do good in troubled regions. He was working in Jordan on an effort to develop 3-D-printed prosthetics for victims of war when a Syrian activist connected him to Jaeger. Levin flew to Turkey and the two met to talk about Jaeger’s idea. Levin signed on right away. (Refugee Open Ware has since invested in the project, and Levin splits his time between the organizations.)
In November 2015, two months after he met Levin, Jaeger got another lead. An expat friend in Turkey told him there was someone he needed to meet, a Syrian coder who was looking for ways to warn civilians about air strikes. The man, who goes by the alias Murad for safety reasons, grew up in a prominent, largely apolitical family in Damascus.
At university, Murad met people from other parts of Syria, young men and women who hadn’t grown up as sheltered as he had. Their stories of poverty and repression, of relatives imprisoned or killed by the government, shook Murad. He started to understand the grim authoritarian reality of his country.
When the war started, Murad was in his mid-twenties and a recent graduate with a degree in management information systems. He started working with groups that were housing displaced people. Eventually he realized that this activity had made him a target of the regime, and he fled to Jordan. There, he volunteered as a teacher in a refugee camp. But six months later, troubled by stories he heard from Syrians who were fleeing their homes, he felt he had to return.
Once he got back to Syria, Murad began teaching activists how to keep the government from intercepting digital communications. But regime thugs threatened his family, and he had to flee again. This time he went to Turkey. He started organizing schools for the growing community of Syrian refugees there and helping Syria Civil Defense with data management. As the air war ramped up, he saw more and more Syrians arriving mutilated—and traumatized. “This was horrible,” he says. “People without arms or legs.”
Murad had an idea: Start connecting civil defense organizations in different towns so they could better communicate about impending attacks. He mentioned the idea to Jaeger’s friend. Jaeger and Murad soon met for coffee, and Jaeger offered him a job. It came with low pay, long hours, and no job security. Murad was all in.
With a team in place, the group was ready for the most arduous startup task: fund-raising. Jaeger went to VCs, who told him the idea was great—but would never generate billions. They pointed him toward social-impact investors, who told him the idea was great—but they didn’t invest in the “conflict space.” They suggested foundations—which said they didn’t invest in for-profit businesses and sent him to VCs.
Screw it, thought Jaeger. In late 2015, the cofounders put together what they could from their personal bank accounts and managed to get some funding from an angel investor Levin knew. It was time for their startup, which Jaeger had named Hala Systems, to try to make a business out of saving lives.
During World War II, British farmers and pub owners in rural areas along the flight paths of German warplanes would phone ahead to big cities, warning them when the Luftwaffe was on the way. Seventy years later, Syrian civilians set up a similar ad hoc system. People who lived near military bases kept watch; when they saw a warplane take off, they used walkie-talkies to notify other people, who would contact others, spreading the word up the chain. Many of the participants were members of Syria Civil Defense, known as the White Helmets, who also served as rescue workers. But the process was spotty, unreliable. There was no systematic way for observations to come in and warnings to go out.
Jaeger thought that with the right technology it should be possible to design a better system. People were already watching for planes. If Hala could capture that information and connect it with reports of where those planes dropped their bombs, it would have the foundation of a prediction system. That data could be plugged into a formula that could calculate where the warplanes were most likely headed, taking into account the type of plane, trajectory, previous flight patterns, and other factors.
The Hala team started reaching out to the people who were monitoring the planes, including the White Helmets. At the same time, the team hacked together the first iteration of a system that would analyze data from the aircraft monitors, predict where the planes were headed, and broadcast alerts to people under threat of attack. Jaeger and Murad sketched it out, eventually filling up a notebook and using napkins to get the rest down. Jaeger says at first the system was just a bunch of if/then statements, a logic tree, and an Android app.
Basically, if someone saw, for example, a Russian-built MiG-23 Syrian warplane take off from Hama air base, then entered that information into the system—now called Sentry—it would issue a warning via social media with predictions about when an attack could be expected to hit a targeted area. It might estimate that the jet could be headed for the town of, say, Darkush with an ETA of 14 minutes, or Jisr al-Shughur in 13. When more people reported a specific plane as it flew over different locations, Sentry could then send more specific and accurate warnings directly to people in threatened areas.
How the Sentry System Works
Hala’s warning system relies on both human observers and remote sensors to collect data on potential air strikes. The startup is working toward making its network more autonomous, the better to save lives. — Andrea Powell
1. When observers near government air bases spot warplanes taking off, they enter the type of aircraft, heading, and coordinates into an Android app, which sends the info to Hala’s servers.
2. Sensor modules placed in trees or atop buildings collect acoustic data, which helps Sentry confirm the type of plane, its location, and flight path.
3. Software crunches all the data and compares it to past attacks, predicting the likelihood of an air raid, as well as when and where it might occur.
4. If the potential for an air strike is high enough, the system generates an alert that’s broadcast via social media. Hala has also set up air raid sirens that Sentry can activate remotely. The warning system now gives people an average of eight minutes to seek shelter.
5. Using a neural network, an automated system continuously scans Facebook, Twitter, and Telegram for posts that might indicate air strikes.
As the team gathered data, they constantly tweaked the formula. Everything was trial and error. “One of the things we learned early on was that our model for predicting arrival times was super aggressive,” Jaeger says of Sentry before it was released to the public. “It had planes arriving much faster than they actually did.” They couldn’t figure out what was wrong. Then they talked to a pilot who had defected from the Syrian air force. “Oh, that’s not how we fly that plane,” the pilot told Jaeger when the team showed him the system. The program assumed jets would always fly at maximum cruising speed, but the actual speeds were much lower, most likely to conserve fuel. “When we fly that plane, we fly it at exactly these altitudes and speeds at these intervals, using these waypoints,” the pilot said. With that information, the Hala team was able to fine-tune Sentry’s predictions to be accurate to within 30 seconds of the warplane’s arrival.
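The pilot’s correction reduces, at its core, to dead reckoning: the distance from the air base to the town, divided by the speed the aircraft actually flies rather than its maximum cruising speed. Here is a minimal sketch of that kind of estimate; the speed figures, aircraft names, and coordinates are invented for illustration and have nothing to do with Hala’s actual model or data.

```python
# Toy arrival-time estimate: great-circle distance over observed cruise
# speed. All numbers here are illustrative assumptions, not real data.
from math import radians, sin, cos, asin, sqrt

# Hypothetical observed cruise speeds in km/h, reflecting the defector
# pilot's point that jets fly well below maximum speed, likely to
# conserve fuel.
OBSERVED_CRUISE_KMH = {
    "MiG-23": 700,  # assumed figure
    "Su-24": 800,   # assumed figure
}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def eta_minutes(aircraft, base, town):
    """Minutes until the aircraft reaches the town, assuming it flies
    the direct route at its observed cruise speed."""
    dist = haversine_km(base[0], base[1], town[0], town[1])
    return dist / OBSERVED_CRUISE_KMH[aircraft] * 60
```

A real system would also fold in the altitude bands, waypoints, and intervals the pilot described, plus updates as observers report the plane over new locations; this sketch only shows why using the wrong speed makes every ETA too aggressive.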
Precision was essential, Murad says. If Sentry went live too early and was inaccurate, civilians wouldn’t trust it, and it would fail to catch on. But Murad was eager to get it out there. Every day it was in development was another day people could be dying. At this point, part of his job was to watch videos of air strikes and look for eyewitness accounts on social media and in news reports to verify the information they received from people on the ground. Day after day, from Hala’s office, he monitored the aftermath of the strikes—the dead, the wounded and the dying, the bodies, the blood, and the maimed limbs. “You cannot stop crying, you can’t stop yourself,” he says, “and you can’t get used to it.”
Even though the Hala team was still getting by on scant funding, they managed to hire three more Syrians to help Murad look at the video and social media evidence and match it against Sentry’s predictions. But it took hours to verify the trajectory of a specific plane from air base to bombing site. And some days there were dozens of strikes. The new staffers couldn’t keep up. So the team figured they needed to automate the process. Jaeger hired engineers and researchers to develop software that, with the help of a neural network, could search Arabic language media for keywords that would help confirm the location and timing of an air strike. More data on more air strikes meant better information and better predictions.
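The verification stage the team automated can be caricatured as keyword-and-place matching over incoming posts. The article says the real system used a neural network over Arabic-language media; the English stand-in terms, place names, and scoring below are invented purely to show the shape of that filtering step.

```python
# Toy strike-report filter: flag posts that mention both an attack
# indicator and a known place. Terms and places are illustrative
# English stand-ins, not Hala's actual vocabulary.
import re

STRIKE_TERMS = {"airstrike", "warplane", "bombing", "shelling"}
PLACES = {"darkush", "jisr al-shughur", "hama"}

def score_post(text):
    """Return (indicator_hits, places_mentioned) for one post."""
    lower = text.lower()
    words = set(re.findall(r"[\w'-]+", lower))
    hits = len(words & STRIKE_TERMS)
    places = {p for p in PLACES if p in lower}
    return hits, places

def likely_strike_reports(posts, min_hits=1):
    """Keep posts that pair an indicator term with a place name."""
    out = []
    for post in posts:
        hits, places = score_post(post)
        if hits >= min_hits and places:
            out.append((post, places))
    return out
```

Each flagged post would then feed back into the prediction loop, confirming (or contradicting) where a tracked plane actually struck.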
As they were working to get accurate data, they also needed a way to get the warnings out to civilians. Murad wrote scripts for Telegram, Facebook, and Twitter, as well as the walkie-talkie app Zello.
On August 1, 2016, Sentry was ready to go live. The team started small, launching it in part of Idlib Province, which was getting hit hard by air strikes. They reached out to Syrian contacts and shared the news on social media. Volunteers passed out flyers. “Within a day and a half,” Jaeger says, “we got a testimonial video from someone who said, ‘My family is alive because I logged in and I got this message and I moved from my house. The house got blown up, my neighbors got killed.’ ”
He showed me the video, sent to him by someone in Syria. In it, a young man, visibly shaken and standing near a pile of rubble, confirms what happened. When Jaeger first saw it, he cried. “It was the first time we actually realized what we had done,” he says. “One family being saved. It was all worth it.” After that, no one was going to take a break. Levin remembers putting in 90- and 100-hour workweeks. Murad once toiled for three days straight without sleep.
All those hours led to a number of important improvements. Take the warnings. They need to reach as many people as possible, even those without access to cell phones, computers, or radios. Some areas in Syria already had air raid sirens, but they had to be manually activated. That meant running across town. “You’re bleeding off minutes at that point,” Jaeger says. So Hala modified a siren by adding a component that would let Sentry activate it remotely. The team shipped prototypes, each about the size of a cigarette carton, to the White Helmets, who helped test the units by placing them in civil defense bases and hospitals. There are now as many as 150 of these sirens inside the country, and Hala is figuring out how to make them work even during power and internet outages.
The latest addition to Sentry is a sensor module designed to distinguish between airplanes and gauge their speed and direction. Every sound has a unique signature, whether it’s a reggae song, a human voice, or the roar of a warplane. To capture the signatures they needed to train Sentry’s sensors, Jaeger’s team used open source data and field recordings of Syrian and Russian jets. According to Hala, at optimal range Sentry can now identify threatening aircraft about 95 percent of the time.
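The idea that every sound carries a signature can be demonstrated with a deliberately simplified fingerprint: collapse a recording’s spectrum into a coarse band-energy vector and match it against reference vectors. This is a toy illustration of the general technique, not Hala’s classifier, which the article says was trained on field recordings of actual jets.

```python
# Toy acoustic fingerprinting: compare coarse spectral signatures.
# A stand-in for the general idea, not a real aircraft classifier.
import numpy as np

def spectral_signature(samples, n_bands=32):
    """Collapse a mono signal into a normalized band-energy vector."""
    spectrum = np.abs(np.fft.rfft(samples))
    bands = np.array_split(spectrum, n_bands)
    sig = np.array([band.mean() for band in bands])
    return sig / (np.linalg.norm(sig) + 1e-12)

def best_match(samples, references):
    """Return the label whose reference signature is closest, by
    cosine similarity, to the unknown recording's signature."""
    sig = spectral_signature(samples)
    return max(references,
               key=lambda label: float(sig @ spectral_signature(references[label])))
```

For example, a noisy recording of a 440 Hz tone will match a clean 440 Hz reference over a 2,000 Hz one, because its energy falls in the same frequency bands. A production system would add time-windowing, more robust features, and a trained model on top.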
Jaeger is cagey about how many of Hala’s sensor modules are deployed in Syria, but he says they’ve been operational since March. People have placed the briefcase-sized units on rooftops in opposition-held areas, giving clear access to the sound signatures of government warplanes overhead. The modules are still in development but have been made entirely from cheap, off-the-shelf technology. “Ten years ago this was impossible,” Jaeger says, “especially at such a low cost.” What Hala has done, essentially, is give Syrian civilians a radar system—and a better chance of surviving against overwhelming and indiscriminate force.
Jaeger, Murad, and Levin work out of a three-bedroom apartment in a five-story walk-up that has served as Hala’s headquarters since October 2017. Perched on couches, they could pass for cofounders of any startup. A very basic startup: There are a few laptops lying around and not much else. Most coordination with the company’s now 18 employees is done over Slack; many work in cities like London and Washington, DC. Jaeger is fond of mentioning the PhD engineers, researchers, and data scientists he has on his meager payroll.
The company is currently surviving off the initial investment, grants and contributions from the governments of the UK, Denmark, the Netherlands, the US, and Canada, and a small round of funding from friends, family, and a couple of other investors.1
As we talk, Murad pulls out his cell phone. A warning has come in: A Russian warplane is circling Jisr al-Shughur, an opposition-held city. Within a minute, Sentry reports it has activated a siren. Minutes later, Murad pulls up a tweet from a Syrian account confirming that an air strike has hit the city. Hala’s data shows that about 11 minutes elapsed between the siren and the bombing. Later analysis showed no deaths or injuries.
Everything about Sentry hinges on a simple fact: The more time someone has to prepare for an air strike, the greater their chance of survival. And now lots of people are relying on Sentry for that edge: 60,000 follow the Facebook page. Its Telegram channels have 16,400 subscribers. A local radio station broadcasts Sentry alerts. And there are all the people within range of the sirens. In surveys conducted in Syria, Hala found that people need a minimum of 1 minute to seek adequate shelter. Had Abu al-Nour not had time to gather his children, they certainly would have been injured or possibly killed. A few seconds more would have kept his wife from injury. Jaeger says Sentry now averages a warning time of eight minutes.
The team knows they have saved lives. But they also did something they hadn’t foreseen: gathered a critical set of data. “We believe we have the most complete picture of the air war in Syria outside of the classified environment,” Jaeger says. That data is invaluable for groups trying to address human rights issues and war crimes. Hala has already made data available to the UN. “From a prosecution perspective, it’s invaluable,” says Tobias Schneider, a research fellow at the Global Public Policy Institute who studies chemical weapons and war crimes in Syria. “We can now link bombardments and human casualties and all these war crimes; we can connect them to an airplane, which means we can connect them to a pilot, we can connect them to an air base, to an air wing, to a commander.”
An official involved in investigating war crimes at an international human rights organization says Hala has played a key role in identifying the perpetrators of attacks on targets like schools and hospitals: “They have laid the groundwork for the attribution of human rights violations to specific parties and, ultimately, for their accountability.”
Jaeger imagines other valuable applications for Hala’s technology, often to monitor hard-to-govern spaces. It could track poachers in Kenya or help poor countries with border security. Essentially, he says, the tech could be useful wherever sound signatures—gunfire, vehicles—can help monitor wrongdoing. It’s like a mash-up of ShotSpotter’s sensor capabilities and Palantir’s data analytics, but aimed at markets that neither of those companies would likely find lucrative enough.
Of course, it could also be used for other, less beneficent, purposes. One need not look far in the tech sector to find products intended to do good that instead cause a lot of harm. Sure, Sentry could be used to stop poaching or track Boko Haram, but could poachers use similar tech to locate elephants, or could a dictator use it to monitor activists? How do you stop it from getting into the hands of bad actors, from being repurposed to target the very people it was designed to protect? What if the Assad regime figures out how to hack Sentry?
Jaeger acknowledges the potential for misuse. Hala is a for-profit business that wants to offer its services to public and private entities and license its tech to other companies. There’s no telling who might be interested in it and how big an offer might be. Jaeger says that Hala will be picky about its clients. Every technology has many uses, he adds. The team’s only goal is to save lives, he says, and he’s confident they can uphold their mission: “We’re not making things that are inherently dangerous. We’re not making weapons.”
After al-Nour’s home was bombed, he and his family salvaged what they could and relocated to a not-too-distant town. Air strikes followed not long after. They fled to a camp for displaced people. When the conditions there became unbearable, they moved to a house near their home village. Al-Nour has tried to find work in factories but hasn’t had any luck. For a while he thought he’d never go back to his home. His children were terrified to return, and he feels a sort of hatred toward it. But he was spending so much of what little money his family had on rent that he decided to restore the ruined structure. He now spends his days trying to erase traces of the bombs that shattered their lives.
1Updated 8/17/18, 11:35 AM EDT: The story was changed to include the Dutch government as a current source of funding.
Danny Gold (@DGisSERIOUS) is a writer and filmmaker based in Brooklyn.
On Tuesday, July 10, the DOJ announced a landmark settlement with Austin-based Defense Distributed, a controversial startup led by a young, charismatic anarchist whom Wired once named one of the 15 most dangerous people in the world.
Hyper-loquacious and media-savvy, Cody Wilson is fond of telling any reporter who’ll listen that Defense Distributed’s main product, a gun fabricator called the Ghost Gunner, represents the endgame for gun control, not just in the US but everywhere in the world. With nothing but the Ghost Gunner, an internet connection, and some raw materials, anyone, anywhere can make an unmarked, untraceable gun in their home or garage. Even if Wilson is wrong that the gun control wars are effectively over (and I believe he is), Tuesday’s ruling has fundamentally changed them.
At about the time the settlement announcement was going out over the wires, I was pulling into the parking lot of LMT Defense in Milan, IL.
LMT Defense, formerly known as Lewis Machine & Tool, is as much the opposite of Defense Distributed as its quiet, publicity-shy founder, Karl Lewis, is the opposite of Cody Wilson. But LMT Defense’s story can be usefully placed alongside that of Defense Distributed, because together they can reveal much about the past, present, and future of the tools and technologies that we humans use for the age-old practice of making war.
The legacy machine
Karl Lewis got started in gunmaking back in the 1970s at Springfield Armory in Geneseo, IL, just a few exits up I-80 from the current LMT Defense headquarters. Lewis, who has a high school education but now knows as much about the engineering behind firearms manufacturing as almost anyone alive, was working on the Springfield Armory shop floor when he hit upon a better way to make a critical and failure-prone part of the AR-15: the bolt. He first took his idea to Springfield Armory management, but they took a pass, so he rented out a small corner in a local auto repair shop in Milan, bought some equipment, and began making the bolts himself.
Lewis worked in his rented space on nights and weekends, bringing the newly fabricated bolts home for heat treatment in his kitchen oven. Not long after he made his first batch, he landed a small contract with the US military to supply some of the bolts for the M4 carbine. On the back of this initial success with M4 bolts, Lewis Machine & Tool expanded its offerings to include complete guns. Over the course of the next three decades, LMT grew into one of the world’s top makers of AR-15-pattern rifles for the world’s militaries, and it’s now in a very small club of gunmakers, alongside a few old-world arms powerhouses like Germany’s Heckler & Koch and Belgium’s FN Herstal, that supplies rifles to US SOCOM’s most elite units.
The offices of LMT Defense, in Milan, Ill. (Image courtesy Jon Stokes)
LMT’s gun business is built on high-profile relationships, hard-to-win government contracts, and deep, almost monk-like know-how. The company lives or dies by the skill of its machinists and by the stuff of process engineering: tolerances and measurements and paper trails. Political connections are also key. The largest weapons contracts require congressional approval and months of waiting: for political winds to blow in this or that direction, for countries to fall in and out of favor with each other, and for paperwork that was delayed by a political spat over some unrelated point of trade or security to finally go through so that funds can be transferred and production can begin.
Selling these guns is as old-school a process as making them is. Success in LMT’s world isn’t about media buys and PR hits, but about dinners in foreign capitals, range sessions with the world’s top special forces units, booths at trade shows most of us have never heard of, and secret delegations of high-ranking officials to a machine shop in a small town surrounded by corn fields on the western border of Illinois.
The civilian gun market, with all of its politics- and event-driven gyrations of supply and demand, is woven into this stable core of the global military small arms market the way vines weave through a trellis. Innovations in gunmaking flow in both directions, though nowadays they more often flow from the civilian market into the military and law enforcement markets than vice versa. For the most part, civilians buy guns that come off the same production lines that feed the government and law enforcement markets.
All of this is how small arms get made and sold in the present world, and anyone who lived through the heyday of IBM and Oracle, before the PC, the cloud, and the smartphone tore through and upended everything, will recognize every detail of the above picture, down to the clean-cut guys in polos with the company logo and fat purchase orders bearing signatures and stamps and big numbers.
The author with LMT Defense hardware.
Guns, drugs, and a million Karl Lewises
This is the part of the story where I build on the IBM PC analogy I hinted at above, and tell you that Defense Distributed’s Ghost Gunner, along with its inevitable clones and successors, will kill dinosaurs like LMT Defense the way the PC and the cloud laid waste to the mainframe and microcomputer businesses of yesteryear.
Except this isn’t what will happen.
Defense Distributed isn’t going to destroy gun control, and it’s certainly not going to decimate the gun industry. All of the legacy gun industry apparatus described above will still be there in the decades to come, mainly because governments will still buy their arms from established makers like LMT. But surrounding the government and civilian arms markets will be a brand new, homebrew, underground gun market where enthusiasts swap files on the dark web and test new firearms in their back yards.
The homebrew gun revolution won’t create a million untraceable guns so much as it’ll create hundreds of thousands of Karl Lewises — solitary geniuses who had a good idea, prototyped it, began making it and selling it in small batches, and ended up supplying a global arms market with new technology and products.
In this respect, the future of guns looks a lot like the present of drugs. The dark web hasn’t hurt Big Pharma, much less destroyed it. Rather, it has expanded the reach of hobbyist drugmakers and small labs, and enabled a shadow world of pharmaceutical R&D that feeds transnational black and gray markets for everything from penis enlargement pills to synthetic opioids.
Gun control efforts in this new reality will initially focus more on ammunition. Background checks for ammo purchases will move to more states, as policy makers try to limit civilian access to weapons in a world where controlling the guns themselves is impossible.
Ammunition has long been the crack in the rampart that Wilson is building. Bullets and casings are easy to fabricate and will always be easy to obtain or manufacture in bulk, but powder and primers are another story. Gunpowder and primers are the explosive chemical components of modern ammo, and they are difficult and dangerous to make at home. So gun controllers will seize on this and attempt to pivot to “bullet control” in the near-term.
Ammunition control is unlikely to work, mainly because rounds of ammunition are fungible, and there are untold billions of rounds already in civilian hands.
In addition to controls on ammunition, some governments will also try to force the manufacturers of 3D printers and desktop milling machines (the Ghost Gunner is the latter) to refuse to print files for gun parts.
This will be impossible to enforce, for two reasons. First, it will be hard for these machines to reliably tell what’s a gun-related file and what isn’t, especially if distributors of these files keep changing them to defeat any sort of detection. But the bigger problem will be that open-source firmware will quickly become available for the most popular printing and milling machines, so that determined users can “jailbreak” them and use them however they like. This already happens with products like routers and even cars, so it will definitely happen with home fabrication machines should the need arise.
Ammo control and fabrication device restrictions having failed, governments will over the longer term employ a two-pronged approach that consists of possession permits and digital censorship.
First, governments will look to gun control schemes that treat guns like controlled substances (i.e., drugs and alcohol). The focus will shift to vetting and permits for simple possession, much like the gun owner licensing scheme I outlined in Politico. We’ll give up on trying to trace guns and ammunition, and focus more on authorizing people to possess guns, and on catching and prosecuting unauthorized possession. You’ll get the firearm equivalent of a marijuana card from the state, and then it won’t matter if you bought your gun from an authorized dealer or made it yourself at home.
The second component of future gun control regimes will be online suppression, of the type that’s already taking place on most major tech platforms across the developed world. I don’t think DefCad.com is long for the open web, and it will ultimately have as hard a time staying online as extremist sites like stormfront.org.
Gun CAD files will join child porn and pirated movies on the list of content it’s nearly impossible to find on big tech platforms like Facebook, Twitter, Reddit, and YouTube. If you want to trade these files, you’ll find yourself on sites with really intrusive advertising, where you worry a lot about viruses. Or, you’ll end up on the dark web, where you may end up paying for a hot new gun design with a cryptocurrency. This may be an ancap dream, but won’t be mainstream or user-friendly in any respect.
As for what comes after that, it’s the same as asking what comes next for politically disfavored speech online. The gun control wars have now become a subset of the online free speech wars, so whatever happens with online speech in places like the US, UK, or China will happen with guns.
It was a Saturday night last December, and Oleksii Yasinsky was sitting on the couch with his wife and teenage son in the living room of their Kiev apartment. The 40-year-old Ukrainian cybersecurity researcher and his family were an hour into Oliver Stone’s film Snowden when their building abruptly lost power.
“The hackers don’t want us to finish the movie,” Yasinsky’s wife joked. She was referring to an event that had occurred a year earlier, a cyberattack that had cut electricity to nearly a quarter-million Ukrainians two days before Christmas in 2015. Yasinsky, a chief forensic analyst at a Kiev digital security firm, didn’t laugh. He looked over at a portable clock on his desk: The time was 00:00. Precisely midnight.
Yasinsky’s television was plugged into a surge protector with a battery backup, so only the flicker of images onscreen lit the room now. The power strip started beeping plaintively. Yasinsky got up and switched it off to save its charge, leaving the room suddenly silent.
He went to the kitchen, pulled out a handful of candles and lit them. Then he stepped to the kitchen window. The thin, sandy-blond engineer looked out on a view of the city as he’d never seen it before: The entire skyline around his apartment building was dark. Only the gray glow of distant lights reflected off the clouded sky, outlining blackened hulks of modern condos and Soviet high-rises.
Noting the precise time and the date, almost exactly a year since the December 2015 grid attack, Yasinsky felt sure that this was no normal blackout. He thought of the cold outside—close to zero degrees Fahrenheit—the slowly sinking temperatures in thousands of homes, and the countdown until dead water pumps led to frozen pipes.
That’s when another paranoid thought began to work its way through his mind: For the past 14 months, Yasinsky had found himself at the center of an enveloping crisis. A growing roster of Ukrainian companies and government agencies had come to him to analyze a plague of cyberattacks that were hitting them in rapid, remorseless succession. A single group of hackers seemed to be behind all of it. Now he couldn’t suppress the sense that those same phantoms, whose fingerprints he had traced for more than a year, had reached back, out through the internet’s ether, into his home.
The Cyber-Cassandras said this would happen. For decades they warned that hackers would soon make the leap beyond purely digital mayhem and start to cause real, physical damage to the world. In 2009, when the NSA’s Stuxnet malware silently accelerated a few hundred Iranian nuclear centrifuges until they destroyed themselves, it seemed to offer a preview of this new era. “This has a whiff of August 1945,” Michael Hayden, former director of the NSA and the CIA, said in a speech. “Somebody just used a new weapon, and this weapon will not be put back in the box.”
Now, in Ukraine, the quintessential cyberwar scenario has come to life. Twice. On separate occasions, invisible saboteurs have turned off the electricity to hundreds of thousands of people. Each blackout lasted a matter of hours, only as long as it took for scrambling engineers to manually switch the power on again. But as proofs of concept, the attacks set a new precedent: In Russia’s shadow, the decades-old nightmare of hackers stopping the gears of modern society has become a reality.
And the blackouts weren’t just isolated attacks. They were part of a digital blitzkrieg that has pummeled Ukraine for the past three years—a sustained cyberassault unlike any the world has ever seen. A hacker army has systematically undermined practically every sector of Ukraine: media, finance, transportation, military, politics, energy. Wave after wave of intrusions have deleted data, destroyed computers, and in some cases paralyzed organizations’ most basic functions. “You can’t really find a space in Ukraine where there hasn’t been an attack,” says Kenneth Geers, a NATO ambassador who focuses on cybersecurity.
In a public statement in December, Ukraine’s president, Petro Poroshenko, reported that there had been 6,500 cyberattacks on 36 Ukrainian targets in just the previous two months. International cybersecurity analysts have stopped just short of conclusively attributing these attacks to the Kremlin, but Poroshenko didn’t hesitate: Ukraine’s investigations, he said, point to the “direct or indirect involvement of secret services of Russia, which have unleashed a cyberwar against our country.” (The Russian foreign ministry didn’t respond to multiple requests for comment.)
To grasp the significance of these assaults—and, for that matter, to digest much of what’s going on in today’s larger geopolitical disorder—it helps to understand Russia’s uniquely abusive relationship with its largest neighbor to the west. Moscow has long regarded Ukraine as both a rightful part of Russia’s empire and an important territorial asset—a strategic buffer between Russia and the powers of NATO, a lucrative pipeline route to Europe, and home to one of Russia’s few accessible warm-water ports. For all those reasons, Moscow has worked for generations to keep Ukraine in the position of a submissive smaller sibling.
But over the past decade and a half, Moscow’s leash on Ukraine has frayed, as popular support in the country has pulled toward NATO and the European Union. In 2004, Ukrainian crowds in orange scarves flooded the streets to protest Moscow’s rigging of the country’s elections; that year, Russian agents allegedly went so far as to poison the surging pro-Western presidential candidate Viktor Yushchenko. A decade later, the 2014 Ukrainian Revolution finally overthrew the country’s Kremlin-backed president, Viktor Yanukovych (a leader whose longtime political adviser, Paul Manafort, would go on to run the US presidential campaign of Donald Trump). Russian troops promptly annexed the Crimean Peninsula in the south and invaded the Russian-speaking eastern region known as Donbass. Ukraine has since then been locked in an undeclared war with Russia, one that has displaced nearly 2 million internal refugees and killed close to 10,000 Ukrainians.
From the beginning, one of this war’s major fronts has been digital. Ahead of Ukraine’s post-revolution 2014 elections, a pro-Russian group calling itself CyberBerkut—an entity with links to the Kremlin hackers who later breached Democratic targets in America’s 2016 presidential election—rigged the website of the country’s Central Election Commission to announce ultra-right presidential candidate Dmytro Yarosh as the winner. Administrators detected the tampering less than an hour before the election results were set to be declared. And that attack was just a prelude to Russia’s most ambitious experiment in digital war, the barrage of cyberattacks that began to accelerate in the fall of 2015 and hasn’t ceased since.
Yushchenko, who ended up serving as Ukraine’s president from 2005 to 2010, believes that Russia’s tactics, online and off, have one single aim: “to destabilize the situation in Ukraine, to make its government look incompetent and vulnerable.” He lumps the blackouts and other cyberattacks together with the Russian disinformation flooding Ukraine’s media, the terroristic campaigns in the east of the country, and his own poisoning years ago—all underhanded moves aimed at painting Ukraine as a broken nation. “Russia will never accept Ukraine being a sovereign and independent country,” says Yushchenko, whose face still bears traces of the scars caused by dioxin toxicity. “Twenty-five years since the Soviet collapse, Russia is still sick with this imperialistic syndrome.”
But many global cybersecurity analysts have a much larger theory about the endgame of Ukraine’s hacking epidemic: They believe Russia is using the country as a cyberwar testing ground—a laboratory for perfecting new forms of global online combat. And the digital explosives that Russia has repeatedly set off in Ukraine are ones it has planted at least once before in the civil infrastructure of the United States.
One Sunday morning in October 2015, more than a year before Yasinsky would look out of his kitchen window at a blacked-out skyline, he sat near that same window sipping tea and eating a bowl of cornflakes. His phone rang with a call from work. He was then serving as the director of information security at StarLightMedia, Ukraine’s largest TV broadcasting conglomerate. During the night, two of StarLight’s servers had inexplicably gone offline. The IT administrator on the phone assured him that the servers had already been restored from backups.
But Yasinsky felt uneasy. The two machines had gone dark at almost the same minute. “One server going down, it happens,” Yasinsky says. “But two servers at the same time? That’s suspicious.”
Resigned to a lost weekend, he left his apartment and took the 40-minute metro ride to StarLightMedia’s office. When he got there, Yasinsky and the company’s IT admins examined the image they’d kept of one of the corrupted servers. Its master boot record, the deep-seated, reptile-brain portion of a computer’s hard drive that tells the machine where to find its own operating system, had been precisely overwritten with zeros. This was especially troubling, given that the two victim servers were domain controllers, computers with powerful privileges that could be used to reach into hundreds of other machines on the corporate network.
Yasinsky quickly discovered the attack was indeed far worse than it had seemed: The two corrupted servers had planted malware on the laptops of 13 StarLight employees. The infection had triggered the same boot-record overwrite technique to brick the machines just as staffers were working to prepare a morning TV news bulletin ahead of the country’s local elections.
Nonetheless, Yasinsky could see he’d been lucky. Looking at StarLight’s network logs, it appeared the domain controllers had committed suicide prematurely. They’d actually been set to infect and destroy 200 more PCs at the company. Soon Yasinsky heard from a competing media firm called TRK that it had been less fortunate: That company lost more than a hundred computers to an identical attack.
Yasinsky managed to pull a copy of the destructive program from StarLight’s network. Back at home, he pored over its code. He was struck by the layers of cunning obfuscation—the malware had evaded all antivirus scans and even impersonated an antivirus scanner itself, Microsoft’s Windows Defender. After his family had gone to sleep, Yasinsky printed the code and laid the papers across his kitchen table and floor, crossing out lines of camouflaging characters and highlighting commands to see its true form. Yasinsky had been working in information security for 20 years; he’d managed massive networks and fought off crews of sophisticated hackers before. But he’d never analyzed such a refined digital weapon.
Beneath all the cloaking and misdirection, Yasinsky figured out, was a piece of malware known as KillDisk, a data-destroying parasite that had been circulating among hackers for about a decade. To understand how it got into their system, Yasinsky and two colleagues at StarLight obsessively dug into the company’s network logs, combing them again and again on nights and weekends. By tracing signs of the hackers’ fingerprints—some compromised corporate YouTube accounts, an administrator’s network login that had remained active even when he was out sick—they came to the stomach-turning realization that the intruders had been inside their system for more than six months. Eventually, Yasinsky identified the piece of malware that had served as the hackers’ initial foothold: an all-purpose Trojan known as BlackEnergy.
Soon Yasinsky began to hear from colleagues at other companies and in the government that they too had been hacked, and in almost exactly the same way. One attack had hit Ukrzaliznytsia, Ukraine’s biggest railway company. Other targets asked Yasinsky to keep their breaches secret. Again and again, the hackers used BlackEnergy for access and reconnaissance, then KillDisk for destruction. Their motives remained an enigma, but their marks were everywhere.
“With every step forward, it became clearer that our Titanic had found its iceberg,” says Yasinsky. “The deeper we looked, the bigger it was.”
Even then, Yasinsky didn’t know the real dimensions of the threat. He had no idea, for instance, that by December 2015, BlackEnergy and KillDisk were also lodged inside the computer systems of at least three major Ukrainian power companies, lying in wait.
At first, Robert Lee blamed the squirrels.
It was Christmas Eve 2015—and also, it so happened, the day before Lee was set to be married in his hometown of Cullman, Alabama. A barrel-chested and bearded redhead, Lee had recently left a high-level job at a three-letter US intelligence agency, where he’d focused on the cybersecurity of critical infrastructure. Now he was settling down to launch his own security startup and marry the Dutch girlfriend he’d met while stationed abroad.
As Lee busied himself with wedding preparations, he saw news headlines claiming that hackers had just taken down a power grid in western Ukraine. A significant swath of the country had apparently gone dark for six hours. Lee blew off the story—he had other things on his mind, and he’d heard spurious claims of hacked grids plenty of times before. The cause was usually a rodent or a bird—the notion that squirrels represented a greater threat to the power grid than hackers had become a running joke in the industry.
The next day, however, just before the wedding itself, Lee got a text about the purported cyberattack from Mike Assante, a security researcher at the SANS Institute, an elite cybersecurity training center. That got Lee’s attention: When it comes to digital threats to power grids, Assante is one of the most respected experts in the world. And he was telling Lee that the Ukraine blackout hack looked like the real thing.
Just after Lee had said his vows and kissed his bride, a contact in Ukraine messaged him as well: The blackout hack was real, the man said, and he needed Lee’s help. For Lee, who’d spent his career preparing for infrastructure cyberattacks, the moment he’d anticipated for years had finally arrived. So he ditched his own reception and began to text with Assante in a quiet spot, still in his wedding suit.
Lee eventually retreated to his mother’s desktop computer in his parents’ house nearby. Working in tandem with Assante, who was at a friend’s Christmas party in rural Idaho, they pulled up maps of Ukraine and a chart of its power grid. The three power companies’ substations that had been hit were in different regions of the country, hundreds of miles from one another and unconnected. “This was not a squirrel,” Lee concluded with a dark thrill.
By that night, Lee was busy dissecting the KillDisk malware his Ukrainian contact had sent him from the hacked power companies, much as Yasinsky had done after the StarLightMedia hack months before. (“I have a very patient wife,” Lee says.) Within days, he’d received a sample of the BlackEnergy code and forensic data from the attacks. Lee saw how the intrusion had started with a phishing email impersonating a message from the Ukrainian parliament. A malicious Word attachment had silently run a script on the victims’ machines, planting the BlackEnergy infection. From that foothold, it appeared, the hackers had spread through the power companies’ networks and eventually compromised a VPN the companies had used for remote access to their network—including the highly specialized industrial control software that gives operators remote command over equipment like circuit breakers.
Looking at the attackers’ methods, Lee began to form a notion of who he was up against. He was struck by similarities between the blackout hackers’ tactics and those of a group that had recently gained some notoriety in the cybersecurity world—a group known as Sandworm. In 2014 the security firm FireEye had issued warnings about a team of hackers that was planting BlackEnergy malware on targets that included Polish energy firms and Ukrainian government agencies; the group seemed to be developing methods to target the specialized computer architectures that are used for remotely managing physical industrial equipment. The group’s name came from references to Dune found buried in its code, terms like Harkonnen and Arrakis, an arid planet in the novel where massive sandworms roam the deserts.
No one knew much about the group’s intentions. But all signs indicated that the hackers were Russian: FireEye had traced one of Sandworm’s distinctive intrusion techniques to a presentation at a Russian hacker conference. And when FireEye’s engineers managed to access one of Sandworm’s unsecured command-and-control servers, they found instructions for how to use BlackEnergy written in Russian, along with other Russian-language files.
Most disturbing of all for American analysts, Sandworm’s targets extended across the Atlantic. Earlier in 2014, the US government reported that hackers had planted BlackEnergy on the networks of American power and water utilities. Working from the government’s findings, FireEye had been able to pin those intrusions, too, on Sandworm.
For Lee, the pieces came together: It looked like the same group that had just snuffed out the lights for nearly a quarter-million Ukrainians had not long ago infected the computers of American electric utilities with the very same malware.
It had been just a few days since the Christmas blackout, and Assante thought it was too early to start blaming the attack on any particular hacker group—not to mention a government. But in Lee’s mind, alarms went off. The Ukraine attack represented something more than a faraway foreign case study. “An adversary that had already targeted American energy utilities had crossed the line and taken down a power grid,” Lee says. “It was an imminent threat to the United States.”
On a cold, bright day a few weeks later, a team of Americans arrived in Kiev. They assembled at the Hyatt, a block from the golden-domed Saint Sophia Cathedral. Among them were staff from the FBI, the Department of Energy, the Department of Homeland Security, and the North American Electric Reliability Corporation, the body responsible for the stability of the US grid, all part of a delegation that had been assigned to get to the bottom of the Ukrainian blackout.
The Feds had also flown Assante in from Wyoming. Lee, a hotter head than his friend, had fought with the US agencies over their penchant for secrecy, insisting that the details of the attack needed to be publicized immediately. He hadn’t been invited.
On that first day, the suits gathered in a sterile hotel conference room with the staff of Kyivoblenergo, the city’s regional power distribution company and one of the three victims of the power grid attacks. Over the next several hours, the Ukrainian company’s stoic execs and engineers laid out the blow-by-blow account of a comprehensive, almost torturous raid on their network.
As Lee and Assante had noticed, the malware that infected the energy companies hadn’t contained any commands capable of actually controlling the circuit breakers. Yet on the afternoon of December 23, Kyivoblenergo employees had watched helplessly as circuit after circuit was opened in dozens of substations across a Massachusetts-sized region, seemingly commanded by computers on their network that they couldn’t see. In fact, Kyivoblenergo’s engineers determined that the attackers had set up their own perfectly configured copy of the control software on a PC in a faraway facility and then had used that rogue clone to send the commands that cut the power.
Once the circuit breakers were open and the power for tens of thousands of Ukrainians had gone dead, the hackers launched another phase of the attack. They’d overwritten the firmware of the substations’ serial-to-ethernet converters—tiny boxes in the stations’ server closets that translated internet protocols to communicate with older equipment. By rewriting the obscure code of those chunks of hardware—a trick that likely took weeks to devise—the hackers had permanently bricked the devices, shutting out the legitimate operators from further digital control of the breakers. Sitting at the conference room table, Assante marveled at the thoroughness of the operation.
The hackers also left one of their usual calling cards, running KillDisk to destroy a handful of the company’s PCs. But the most vicious element of the attack struck the control stations’ battery backups. When the electricity was cut to the region, the stations themselves also lost power, throwing them into darkness in the midst of their crisis. With utmost precision, the hackers had engineered a blackout within a blackout.
“The message was, ‘I’m going to make you feel this everywhere.’ Boom boom boom boom boom boom boom,” Assante says, imagining the attack from the perspective of a bewildered grid operator. “These attackers must have seemed like they were gods.”
That night, the team boarded a flight to the western Ukrainian city of Ivano-Frankivsk, at the foot of the Carpathian Mountains, arriving at its tiny Soviet-era airport in a snowstorm. The next morning they visited the headquarters of Prykarpattyaoblenergo, the power company that had taken the brunt of the pre-Christmas attack.
The power company executives politely welcomed the Americans into their modern building, under the looming smokestacks of the abandoned coal power plant in the same complex. Then they invited them into their boardroom, seating them at a long wooden table beneath an oil painting of the aftermath of a medieval battle.
The attack they described was almost identical to the one that hit Kyivoblenergo: BlackEnergy, corrupted firmware, disrupted backup power systems, KillDisk. But in this operation, the attackers had taken another step, bombarding the company’s call centers with fake phone calls—possibly to delay any warnings of the power outage from customers or simply to add another layer of chaos and humiliation.
There was another difference too. When the Americans asked whether, as in Kiev, cloned control software had sent the commands that shut off the power, the Prykarpattyaoblenergo engineers said no, that their circuit breakers had been opened by another method. That’s when the company’s technical director, a tall, serious man with black hair and ice-blue eyes, cut in. Rather than try to explain the hackers’ methods to the Americans through a translator, he offered to show them, clicking Play on a video he’d recorded himself on his battered iPhone 5s.
The 56-second clip shows a cursor moving around the screen of one of the computers in the company’s control room. The pointer glides to the icon for one of the breakers and clicks a command to open it. The video pans from the computer’s Samsung monitor to its mouse, which hasn’t budged. Then it shows the cursor moving again, seemingly of its own accord, hovering over a breaker and attempting again to cut its flow of power as the engineers in the room ask one another who’s controlling it.
The hackers hadn’t sent their blackout commands from automated malware, or even a cloned machine as they’d done at Kyivoblenergo. Instead, the intruders had exploited the company’s IT helpdesk tool to take direct control of the mouse movements of the stations’ operators. They’d locked the operators out of their own user interface. And before their eyes, phantom hands had clicked through dozens of breakers—each serving power to a different swath of the region—and one by one by one, turned them cold.
In August 2016, eight months after the first Christmas blackout, Yasinsky left his job at StarLightMedia. It wasn’t enough, he decided, to defend a single company from an onslaught that was hitting every stratum of Ukrainian society. To keep up with the hackers, he needed a more holistic view of their work, and Ukraine needed a more coherent response to the brazen, prolific organization that Sandworm had become. “The light side remains divided,” he says of the balkanized reaction to the hackers among their victims. “The dark side is united.”
So Yasinsky took a position as the head of research and forensics for a Kiev firm called Information Systems Security Partners. The company was hardly a big name. But Yasinsky turned it into a de facto first responder for victims of Ukraine’s digital siege.
Not long after Yasinsky switched jobs, almost as if on cue, the country came under another, even broader wave of attacks. He ticks off the list of casualties: Ukraine’s pension fund, the country’s treasury, its seaport authority, its ministries of infrastructure, defense, and finance. The hackers again hit Ukraine’s railway company, this time knocking out its online booking system for days, right in the midst of the holiday travel season. As in 2015, most of the attacks culminated with a KillDisk-style detonation on the target’s hard drive. In the case of the finance ministry, the logic bomb deleted terabytes of data, just as the ministry was preparing its budget for the next year. All told, the hackers’ new winter onslaught matched and exceeded the previous year’s—right up to its grand finale.
On December 16, 2016, as Yasinsky and his family sat watching Snowden, a young engineer named Oleg Zaychenko was four hours into his 12-hour night shift at Ukrenergo’s transmission station just north of Kiev. He sat in an old Soviet-era control room, its walls covered in beige and red floor-to-ceiling analog control panels. The station’s tabby cat, Aza, was out hunting; all that kept Zaychenko company was a television in the corner playing pop music videos.
He was filling out a paper-and-pencil log, documenting another uneventful Saturday evening, when the station’s alarm suddenly sounded, a deafening continuous ringing. To his right Zaychenko saw that two of the lights indicating the state of the transmission system’s circuits had switched from red to green—in the universal language of electrical engineers, a sign that it was off.
The technician picked up the black desk phone to his left and called an operator at Ukrenergo’s headquarters to alert him to the routine mishap. As he did, another light turned green. Then another. Zaychenko’s adrenaline began to kick in. As he hurriedly explained the situation to the remote operator, the lights kept flipping: red to green, red to green. Eight, then 10, then 12.
As the crisis escalated, the operator ordered Zaychenko to run outside and check the equipment for physical damage. At that moment, the 20th and final circuit switched off and the lights in the control room went out, along with the computer and TV. Zaychenko was already throwing a coat over his blue and yellow uniform and sprinting for the door.
The transmission station is normally a vast, buzzing jungle of electrical equipment stretching over 20 acres, the size of more than a dozen football fields. But as Zaychenko came out of the building into the freezing night air, the atmosphere was eerier than ever before: The three tank-sized transformers arrayed alongside the building, responsible for about a fifth of the capital’s electrical capacity, had gone entirely silent. Until then Zaychenko had been mechanically ticking through an emergency mental checklist. As he ran past the paralyzed machines, the thought entered his mind for the first time: The hackers had struck again.
This time the attack had moved up the circulatory system of Ukraine’s grid. Instead of taking down the distribution stations that branch off into capillaries of power lines, the saboteurs had hit an artery. That single Kiev transmission station carried 200 megawatts, more total electric load than all the 50-plus distribution stations knocked out in the 2015 attack combined. Luckily, the system was down for just an hour—hardly long enough for pipes to start freezing or locals to start panicking—before Ukrenergo’s engineers began manually closing circuits and bringing everything back online.
But the brevity of the outage was virtually the only thing that was less menacing about the 2016 blackout. Cybersecurity firms that have since analyzed the attack say that it was far more evolved than the one in 2015: It was executed by a highly sophisticated, adaptable piece of malware now known as "CrashOverride," a program expressly coded to be an automated, grid-killing weapon.
Lee’s critical infrastructure security startup, Dragos, is one of two firms that have pored through the malware's code; Dragos obtained it from a Slovakian security outfit called ESET. The two teams found that, during the attack, CrashOverride was able to “speak” the language of the grid’s obscure control system protocols, and thus send commands directly to grid equipment. In contrast to the laborious phantom-mouse and cloned-PC techniques the hackers used in 2015, this new software could be programmed to scan a victim’s network to map out targets, then launch at a preset time, opening circuits on cue without even having an internet connection back to the hackers. In other words, it's the first malware found in the wild since Stuxnet that's designed to independently sabotage physical infrastructure.
And CrashOverride isn’t just a one-off tool, tailored only to Ukrenergo’s grid. It’s a reusable and highly adaptable weapon of electric utility disruption, researchers say. Within the malware’s modular structure, Ukrenergo’s control system protocols could easily be swapped out and replaced with ones used in other parts of Europe or the US instead.
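That modular structure is a familiar software pattern: isolate the protocol-specific logic behind a common interface, and retargeting the framework becomes a matter of swapping one module for another. Purely as a hypothetical illustration of the pattern (none of these class names, protocols, or commands comes from the actual malware), such a design might look like this:

```python
# Hypothetical sketch of a modular, protocol-agnostic design, NOT code
# from CrashOverride. It only illustrates why swapping one grid control
# protocol for another can be a small, contained change.
from abc import ABC, abstractmethod


class ProtocolModule(ABC):
    """Common interface every protocol-specific module must implement."""

    @abstractmethod
    def name(self) -> str: ...

    @abstractmethod
    def encode_command(self, command: str) -> bytes: ...


class IEC104Module(ProtocolModule):
    """Stand-in for a module speaking IEC 60870-5-104, used in Europe."""

    def name(self) -> str:
        return "IEC-104"

    def encode_command(self, command: str) -> bytes:
        return f"[iec104]{command}".encode()


class DNP3Module(ProtocolModule):
    """Stand-in for a module speaking DNP3, common in North America."""

    def name(self) -> str:
        return "DNP3"

    def encode_command(self, command: str) -> bytes:
        return f"[dnp3]{command}".encode()


def build_payload(module: ProtocolModule, command: str) -> bytes:
    # The surrounding framework never changes; only the module does.
    return module.encode_command(command)


# Retargeting the framework is a one-line swap of the module:
print(build_payload(IEC104Module(), "open_breaker"))
print(build_payload(DNP3Module(), "open_breaker"))
```

In a design like this, everything outside the module (scanning, scheduling, logging) is reused unchanged, which is what researchers mean when they call the weapon adaptable.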
Marina Krotofil, an industrial control systems security researcher for Honeywell who also analyzed the Ukrenergo attack, describes the hackers’ methods as simpler and far more efficient than the ones used in the previous year’s attack. “In 2015 they were like a group of brutal street fighters,” Krotofil says. “In 2016, they were ninjas.” But the hackers themselves may be one and the same; Dragos’ researchers have identified the architects of CrashOverride as part of Sandworm, based on evidence that Dragos is not yet ready to reveal.
For Lee, these are all troubling signs of Sandworm’s progress. I meet him in Dragos’ bare-bones offices in Baltimore. Outside his office window looms a series of pylons holding up transmission lines; they carry power 18 miles south, Lee tells me, to the heart of Washington, DC.
For the first time in history, Lee points out, a group of hackers has shown that it’s willing and able to attack critical infrastructure. They’ve refined their techniques over multiple, evolving assaults. And they’ve already planted BlackEnergy malware on the US grid once before. “The people who understand the US power grid know that it can happen here,” Lee says.
To Sandworm’s hackers, Lee says, the US could present an even more convenient set of targets should they ever decide to strike the grid here. US power firms are more attuned to cybersecurity, but they are also more automated and modern than those in Ukraine—which means they could present more of a digital “attack surface.” And American engineers have less experience with manual recovery from frequent blackouts.
No one knows how, or where, Sandworm’s next attacks will materialize. A future breach might target not a distribution or transmission station but an actual power plant. Or it could be designed not simply to turn off equipment but to destroy it. In 2007 a team of researchers at Idaho National Lab, one that included Mike Assante, demonstrated that it’s possible to hack electrical infrastructure to death: The so-called Aurora experiment used nothing but digital commands to permanently wreck a 2.25-megawatt diesel generator. In a video of the experiment, a machine the size of a living room coughs and belches black and white smoke in its death throes. Such a generator is not all that different from the equipment that sends hundreds of megawatts to US consumers; with the right exploit, it’s possible that someone could permanently disable power-generation equipment or the massive, difficult-to-replace transformers that serve as the backbone of our transmission system. “Washington, DC? A nation-state could take it out for two months without much issue,” Lee says.
In fact, in its analysis of CrashOverride, ESET found that the malware may already include one of the ingredients for that kind of destructive attack. ESET’s researchers noted that CrashOverride contains code designed to target a particular Siemens device found in power stations—a piece of equipment that functions as a kill-switch to prevent dangerous surges on electric lines and transformers. If CrashOverride is able to cripple that protective measure, it might already be able to cause permanent damage to grid hardware.
An isolated incident of physical destruction may not even be the worst that hackers can do. The American cybersecurity community often talks about “advanced persistent threats”—sophisticated intruders who don’t simply infiltrate a system for the sake of one attack but stay there, silently keeping their hold on a target. In his nightmares, Lee says, American infrastructure is hacked with this kind of persistence: transportation networks, pipelines, or power grids taken down again and again by deep-rooted adversaries. “If they did that in multiple places, you could have up to a month of outages across an entire region,” he says. “Tell me what doesn’t change dramatically when key cities across half of the US don’t have power for a month.”
It’s one thing, though, to contemplate what an actor like Russia could do to the American grid; it’s another to contemplate why it would. A grid attack on American utilities would almost certainly result in immediate, serious retaliation by the US. Some cybersecurity analysts argue that Russia’s goal is simply to hem in America’s own cyberwar strategy: By turning the lights out in Kiev—and by showing that it’s capable of penetrating the American grid—Moscow sends a message warning the US not to try a Stuxnet-style attack on Russia or its allies, like Syrian dictator Bashar al-Assad. In that view, it’s all a game of deterrence.
But Lee, who was involved in war-game scenarios during his time in intelligence, believes Russia might actually strike American utilities as a retaliatory measure if it ever saw itself as backed into a corner—say, if the US threatened to interfere with Moscow’s military interests in Ukraine or Syria. “When you deny a state’s ability to project power, it has to lash out,” Lee says.
People like Lee have, of course, been war-gaming these nightmares for well over a decade. And for all the sophistication of the Ukraine grid hacks, even they didn’t really constitute a catastrophe; the lights did, after all, come back on. American power companies have already learned from Ukraine’s victimization, says Marcus Sachs, chief security officer of the North American Electric Reliability Corporation. After the 2015 attack, Sachs says, NERC went on a road show, meeting with power firms to hammer into them that they need to shore up their basic cybersecurity practices and turn off remote access to their critical systems more often. “It would be hard to say we’re not vulnerable. Anything connected to something else is vulnerable,” Sachs says. “To make the leap and suggest that the grid is milliseconds away from collapse is irresponsible.”
But for those who have been paying attention to Sandworm for almost three years, raising an alarm about the potential for an attack on the US grid is no longer crying wolf. For John Hultquist, head of the team of researchers at FireEye that first spotted and named the Sandworm group, the wolves have arrived. “We’ve seen this actor show a capability to turn out the lights and an interest in US systems,” Hultquist says. Three weeks after the 2016 Kiev attack, he wrote a prediction on Twitter and pinned it to his profile for posterity: “I swear, when Sandworm Team finally nails Western critical infrastructure, and folks react like this was a huge surprise, I’m gonna lose it.”
The headquarters of Yasinsky’s firm, Information Systems Security Partners, occupies a low-lying building in an industrial neighborhood of Kiev, surrounded by muddy sports fields and crumbling gray high-rises—a few of Ukraine’s many lingering souvenirs from the Soviet Union. Inside, Yasinsky sits in a darkened room behind a round table that’s covered in 6-foot-long network maps showing nodes and connections of Borgesian complexity. Each map represents the timeline of an intrusion by Sandworm. By now, the hacker group has been the consuming focus of his work for nearly two years, going back to that first attack on StarLightMedia.
Yasinsky says he has tried to maintain a dispassionate perspective on the intruders who are ransacking his country. But when the blackout extended to his own home four months ago, it was “like being robbed,” he tells me. “It was a kind of violation, a moment when you realize your own private space is just an illusion.”
Yasinsky says there’s no way to know exactly how many Ukrainian institutions have been hit in the escalating campaign of cyberattacks; any count is liable to be an underestimate. For every publicly known target, there’s at least one secret victim that hasn’t admitted to being breached—and still other targets that haven’t yet discovered the intruders in their systems.
When we meet in ISSP’s offices, in fact, the next wave of the digital invasion is already under way. Behind Yasinsky, two younger, bearded staffers are locked into their keyboards and screens, pulling apart malware that the company obtained just the day before from a new round of phishing emails. The attacks, Yasinsky has noticed, have settled into a seasonal cycle: During the first months of the year, the hackers lay their groundwork, silently penetrating targets and spreading their foothold. At the end of the year, they unleash their payload. Yasinsky knows by now that even as he’s analyzing last year’s power grid attack, the seeds are already being sown for 2017’s December surprises.
Bracing for the next round, Yasinsky says, is like “studying for an approaching final exam.” But in the grand scheme, he thinks that what Ukraine has faced for the past three years may have been just a series of practice tests.
He sums up the attackers’ intentions until now in a single Russian word: poligon. A training ground. Even in their most damaging attacks, Yasinsky observes, the hackers could have gone further. They could have destroyed not just the Ministry of Finance’s stored data but its backups too. They probably could have knocked out Ukrenergo’s transmission station for longer or caused permanent, physical harm to the grid, he says—a restraint that American analysts like Assante and Lee have also noted. “They’re still playing with us,” Yasinsky says. Each time, the hackers retreated before accomplishing the maximum possible damage, as if reserving their true capabilities for some future operation.
Many global cybersecurity analysts have come to the same conclusion. Where better to train an army of Kremlin hackers in digital combat than in the no-holds-barred atmosphere of a hot war inside the Kremlin’s sphere of influence? “The gloves are off. This is a place where you can do your worst without retaliation or prosecution,” says Geers, the NATO ambassador. “Ukraine is not France or Germany. A lot of Americans can’t find it on a map, so you can practice there.” (At a meeting of diplomats in April, US secretary of state Rex Tillerson went so far as to ask, “Why should US taxpayers be interested in Ukraine?”)
In that shadow of neglect, Russia isn’t only pushing the limits of its technical abilities, says Thomas Rid, a professor in the War Studies department at King’s College London. It’s also feeling out the edges of what the international community will tolerate. The Kremlin meddled in the Ukrainian election and faced no real repercussions; then it tried similar tactics in Germany, France, and the United States. Russian hackers turned off the power in Ukraine with impunity—and, well, the syllogism isn’t hard to complete. “They’re testing out red lines, what they can get away with,” Rid says. “You push and see if you’re pushed back. If not, you try the next step.”
What will that next step look like? In the dim back room at ISSP’s lab in Kiev, Yasinsky admits he doesn’t know. Perhaps another blackout. Or maybe a targeted attack on a water facility. “Use your imagination,” he suggests drily.
Behind him the fading afternoon light glows through the blinds, rendering his face a dark silhouette. “Cyberspace is not a target in itself,” Yasinsky says. “It’s a medium.” And that medium connects, in every direction, to the machinery of civilization itself.
When the DNA results came back, even Lukis Anderson thought he might have committed the murder.
"I drink a lot," he remembers telling public defender Kelley Kulick as they sat in a plain interview room at the Santa Clara County, California, jail. Sometimes he blacked out, so it was possible he did something he didn't remember. "Maybe I did do it."
Kulick shushed him. If she was going to keep her new client off death row, he couldn't go around saying things like that. But she agreed. It looked bad.
Before he was charged with murder, Anderson was a 26-year-old homeless alcoholic with a long rap sheet who spent his days hustling for change in downtown San Jose. The murder victim, Raveesh Kumra, was a 66-year-old investor who lived in Monte Sereno, a Silicon Valley enclave 10 miles and many socioeconomic rungs away.
Around midnight on November 29, 2012, a group of men had broken into Kumra's 7,000-square-foot mansion. They found him watching CNN in the living room, tied him, blindfolded him, and gagged him with mustache-print duct tape. They found his companion, Harinder, asleep in an upstairs bedroom, hit her on the mouth, and tied her up next to Raveesh. Then they plundered the house for cash and jewelry.
After the men left, Harinder, still blindfolded, felt her way to a kitchen phone and called 911. Police arrived, then an ambulance. One of the paramedics declared Raveesh dead. The coroner would later conclude that he had been suffocated by the mustache tape.
Three and a half weeks later, the police arrested Anderson. His DNA had been found on Raveesh's fingernails. They believed the men struggled as Anderson tied up his victim. They charged him with murder. Kulick was appointed to his case.
As they looked at the DNA results, Anderson tried to make sense of a crime he had no memory of committing.
"Nah, nah, nah. I don't do things like that," he recalls telling her. "But maybe I did."
"Lukis, shut up," Kulick says she told him. "Let's just hit the pause button till we work through the evidence to really see what happened."
What happened, although months would pass before anyone figured it out, was that Lukis Anderson's DNA had found its way onto the fingernails of a dead man he had never even met.
Back in the 1980s, when DNA forensic analysis was still in its infancy, crime labs needed a speck of bodily fluid—usually blood, semen, or spit—to generate a genetic profile.
That changed in 1997, when Australian forensic scientist Roland van Oorschot stunned the criminal justice world with a nine-paragraph paper titled "DNA Fingerprints from Fingerprints." It revealed that DNA could be detected not just from bodily fluids but from traces left by a touch. Investigators across the globe began scouring crime scenes for anything—a doorknob, a countertop, a knife handle—that a perpetrator may have tainted with incriminating "touch" DNA.
But van Oorschot's paper also contained a vital observation: Some people's DNA appeared on things that they had never touched.
In the years since, van Oorschot's lab has been one of the few to investigate this phenomenon, dubbed "secondary transfer." What they have learned is that, once it's out in the world, DNA doesn't always stay put.
In one of his lab's experiments, for instance, volunteers sat at a table and shared a jug of juice. After 20 minutes of chatting and sipping, swabs were deployed on their hands, the chairs, the table, the jug, and the juice glasses, then tested for genetic material. Although the volunteers never touched each other, 50 percent wound up with another's DNA on their hand. A third of the glasses bore the DNA of volunteers who did not touch or drink from them.
Then there was the foreign DNA—profiles that didn't match any of the juice drinkers. It turned up on about half of the chairs and glasses, and all over the participants' hands and the table. The only explanation: The participants unwittingly brought with them alien genes, perhaps from the lover they kissed that morning, the stranger with whom they had shared a bus grip, or the barista who handed them an afternoon latte.
In a sense, this isn't surprising: We leave a trail of ourselves everywhere we go. An average person may shed upward of 50 million skin cells a day. Attorney Erin Murphy, author of Inside the Cell, a book about forensic DNA, has calculated that in two minutes the average person sheds enough skin cells to cover a football field. We also spew saliva, which is packed with DNA. If we stand still and talk for 30 seconds, our DNA may be found more than a yard away. With a forceful sneeze, it might land on a nearby wall.
To gauge just how prevalent stray DNA is, a group of Dutch researchers tested 105 public items—escalator rails, public toilet door handles, shopping basket handles, coins. Ninety-one percent bore human DNA, sometimes from half a dozen people. Even items intimate to us—the armpits of our shirts, say—can bear other people’s DNA, they found.
The itinerant nature of DNA has serious implications for forensic investigations. After all, if traces of our DNA can make their way to a crime scene we never visited, aren't we all possible suspects?
Forensic DNA has other flaws: Complex mixtures of many DNA profiles can be wrongly interpreted, certainty statistics are often wildly miscalculated, and DNA analysis robots have sometimes been stretched past the limits of their sensitivity.
But even as advances in technology solve some of these problems, they have made the problem of DNA transfer worse. Each new generation of forensic tools is more sensitive; labs today can identify people from the DNA in just a handful of cells. And a handful of cells can easily migrate.
A survey of the published science, interviews with leading scientists, and a review of thousands of pages of court and police documents associated with the Kumra case show how secondary DNA transfer can undermine the credibility of the criminal justice system’s most trusted tool. And yet very few crime labs worldwide regularly and robustly study secondary DNA transfer.
This is partly because most forensic scientists believe DNA to be the least of their field's problems. They're not wrong: DNA is the most accurate forensic science we have. It has exonerated scores of people convicted based on more flawed disciplines like hair or bite-mark analysis. And there have been few publicized cases of DNA mistakenly implicating someone in a crime.
But, like most human enterprises, DNA analysis is not perfect. And without study, the scope and impact of that imperfection is difficult to assess, says Peter Gill, a British forensic researcher. He has little doubt that his field, so often credited with solving crimes, is also responsible for wrongful convictions.
"The problem is we're not looking for these things," Gill says. "For every miscarriage of justice that is detected, there must be a dozen that are never discovered."
The phone rang five times.
"Are you awake?" the dispatcher asked.
"Yeah," lied Corporal Erin Lunsford.
"Are you back on full duty or you still light duty?" she asked, according to a tape of the call.
Lunsford had been off crutches for two weeks already, but it was 2:15 am and pouring rain. Probably some downed tree needed to be policed. "Light duty," Lunsford said.
"Oh," she said. "Never mind."
"Why, what are you calling about?" he asked.
"We had a home invasion that turned into a 187," she said. Cop slang for murder.
"Shit, seriously?" Lunsford said, waking up.
Lunsford had served all 15 of his professional years as a police officer at the Los Gatos–Monte Sereno Police Department, a 38-officer agency that policed two drowsy towns. He rose through the ranks and was working a stint in the department's detective bureau. He had mostly been investigating property crimes. Los Gatos, a wealthy bedroom community of Silicon Valley, averaged a homicide once every three or four years. Monte Sereno, a bedroom community of the bedroom community, hadn't had a homicide in roughly 20.
Lunsford got dressed. He drove through the November torrent. He spotted cop cars clustered around a brick and iron gate. An ambulance flashed quietly in the driveway. Beyond it, the lit Kumra mansion.
Lunsford's boss told him to take the lead on the investigation. The on-scene supervisor walked him through the house. Dressers emptied, files dumped. A cellphone in a toilet, pissed on. A refrigerator beeping every 10 seconds, announcing its doors were ajar. Raveesh’s body, heavyset and disheveled, on the floor near the kitchen. His eyes still blindfolded.
An investigator from the county coroner's office arrived and moved Kumra's body into a van. Lunsford followed her to the morgue for the autopsy. A doctor undressed the victim and scraped and cut his fingernails for evidence.
Lunsford recognized Raveesh, a wealthy businessman who had once owned a share of a local concert venue. Lunsford had come to the Kumra mansion a couple times on "family calls" that never amounted to anything: "Just people arguing," he recalled. He had also run into him at Goguen's Last Call, a dive frequented by Raveesh as a regular and Lunsford as a cop responding to calls. Raveesh was an affable extrovert, always buying rounds; the unofficial mayor of that part of town, Lunsford called him.
In the coming days, as Lunsford interviewed people who knew the Kumras, he was told that Raveesh also had relationships with sex workers. Raveesh and Harinder had divorced around 2010 after more than 30 years of marriage, but still lived together.
While Lunsford attended the autopsy, a team of gloved investigators combed the mansion. They tucked paper evidence into manila envelopes; bulkier items into brown paper bags. They amassed more than 100.
Teams specializing in crime scene investigations were first assembled over a century ago, after the French scientist Edmond Locard devised the principle that birthed the field of forensics: A perpetrator will bring something to a crime scene and leave with something from it. Van Oorschot's touch DNA discovery had unveiled the most literal expression imaginable of Locard's principle.
Like those early teams, the investigators in the Kumra mansion were looking for fingerprints, footprints, and hair. But unlike their predecessors, they devoted considerable time to thinking through everything the perpetrators may have touched.
Some perpetrators give thought to this as well. A 2013 Canadian study of 350 sexual homicides found that about a third of perpetrators appeared to have taken care not to leave DNA: killing their victims, for instance, in tidier ways than beating or strangling, which tend to leave behind genetic clues. And it worked: In those “forensically aware” cases, police solved the crime 50 percent of the time, compared with 83 percent for their sloppier counterparts.
The men who killed Kumra seemed somewhat forensically aware, albeit clumsily. They had worn latex gloves through their rampage; a pile of them were left in the kitchen sink, wet and soapy, as though someone had tried to wash off the DNA.
In the weeks after the murder, Tahnee Nelson Mehmet, a criminalist at the county crime lab, ran dozens of tests on the evidence collected from the Kumra mansion. Most only revealed DNA profiles consistent with Raveesh or Harinder.
But in her first few batches of evidence, Mehmet hit forensic pay dirt: a handful of unknown profiles—including on the washed gloves. She ran them through the state database of people arrested for or convicted of felonies and got three hits, all from the Bay Area: 22-year-old DeAngelo Austin on the duct tape; 21-year-old Javier Garcia on the gloves; and, on the fingernail clippings, 26-year-old Lukis Anderson.
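A database hit like these rests on a simple comparison: a forensic DNA profile is essentially a set of allele pairs measured at standard STR locations on the genome, and a hit means the crime-scene alleles line up with someone’s reference profile. As a deliberately simplified sketch of that comparison (the locus names below are real CODIS markers, but every profile and person is invented; real searches use roughly 20 loci and report match statistics):

```python
# Deliberately simplified, hypothetical sketch of a DNA database search.
# A profile maps an STR locus name to an unordered pair of allele
# repeat counts; all profiles and names here are invented.

def normalize(profile):
    # Allele pairs are unordered, so compare them as sets.
    return {locus: frozenset(alleles) for locus, alleles in profile.items()}

def is_hit(scene, reference):
    """True if every locus typed in the crime-scene sample matches."""
    scene, reference = normalize(scene), normalize(reference)
    return all(
        locus in reference and alleles == reference[locus]
        for locus, alleles in scene.items()
    )

crime_scene = {"D8S1179": (12, 14), "TH01": (7, 9), "FGA": (21, 24)}

database = {
    "suspect_a": {"D8S1179": (12, 14), "TH01": (7, 9), "FGA": (21, 24)},
    "suspect_b": {"D8S1179": (10, 13), "TH01": (6, 9), "FGA": (20, 22)},
}

hits = [name for name, ref in database.items() if is_hit(crime_scene, ref)]
print(hits)  # → ['suspect_a']
```

The catch the rest of this story turns on: a comparison this clean says only that the alleles match, not how, or when, the matching DNA arrived at the scene.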
Within weeks of the DNA hits, Lunsford had plenty of evidence implicating Austin and Garcia: Both were from Oakland, but a warrant for their cellphone records showed they'd pinged towers near Monte Sereno the night of the homicide. Police records showed that Austin belonged to a gang linked to a series of home burglaries. And most damning of all, Austin's older sister, a 32-year-old sex worker named Katrina Fritz, had been involved with Raveesh for 12 years. Police had even found her phone backed up on Raveesh's computer. Eventually she would admit that she had given her brother a map of the house.
Connecting Anderson to the crime proved trickier. There were no phone records showing he had traveled to Monte Sereno that night. He wasn't associated with a gang. But one thing on his rap sheet drew Lunsford's attention: A felony residential burglary.
Eventually Lunsford found a link. A year earlier, Anderson had been locked up in the same jail as a friend of Austin's named Shawn Hampton. Hampton wore an ankle monitor as a condition of his parole. It showed that two days before the crime he had driven to San Jose. He made a couple of stops downtown, right near Anderson's territory.
It started to crystallize for Lunsford: When Austin was planning the break-in, he wanted a local guy experienced in burglary. So Hampton hooked him up with his jail buddy Anderson.
Anderson had recently landed back in jail after violating his probation on the burglary charge. Lunsford and his boss, Sergeant Mike D'Antonio, visited him there. They taped the interview.
"Does this guy look familiar to you? What about this lady?" Lunsford said, laying out pictures of the victims on the interview room table.
"I don't know, man," Anderson said.
Lunsford pulled out a picture of Anderson's mother.
"All right, what about this lady here? You don't know who she is?" Lunsford said.
Anderson met Lunsford's sarcasm with silence.
Lunsford set down a letter from the state of California showing the database match between Anderson's DNA and the profile found on the victim's fingernails.
"This starting to ring some bells?" Lunsford said.
"My guess is you didn't think anybody was gonna be home," D'Antonio said. "My guess is it went way farther than you ever thought it would go."
"I don't know what you're talking about, sir," Anderson said.
"You do," Lunsford said. "You won't look at their pictures. The only picture you looked at good was your mom."
Finally, D'Antonio took a compromising tone.
"Lukis, Lukis, Lukis," he said. "I don't have a crystal ball to know what the truth is. Only you do. And in all the years I've been doing this I've never seen a DNA hit being wrong."
Anderson had been in jail on the murder charge for over a month when a defense investigator dropped a stack of records on Kulick's desk. Look at them, the investigator said. Now.
They were Anderson's medical records. Because his murder charge could carry the death penalty, Kulick had the investigator pull everything pertinent to Anderson's medical history, including his mental health, in case they had to ask for leniency during sentencing.
She suspected Anderson could be a good candidate for such leniency. He spent much of his childhood homeless. In early adulthood, he was diagnosed with a mental health disorder and diabetes. And he had developed a mighty alcohol addiction. One day, while drunk, he stepped off a curb and into the path of a moving truck. He survived, but his memory was never quite right again. He lost track of days, sometimes several in a row.
That's not to say his life was bleak. He made friends easily. He had a coy sense of humor and dimples that shone like headlights. His buddies, many on the streets themselves, looked after him, as did some downtown shopkeepers. Kulick and her investigator had spoken to several of them. They shook their heads. Anderson might be a drunk, they said, but he wasn't a killer.
His rap sheet seemed to agree. It was filled with petty crimes: drunk in public, riding a bike under the influence, probation violations. The one serious conviction—the residential burglary that had caught Lunsford's attention—seemed more benign upon careful reading. According to the police report, Anderson had drunkenly broken the front window of a home and tried to crawl through. The horrified resident had pushed him back out with blankets. Police found him a few minutes later standing on the sidewalk, dazed and bleeding. Though nothing had been stolen, he had been charged with a felony and pleaded no contest. His DNA was added to the state criminal database.
The medical records showed that Anderson was also a regular in county hospitals. Most recently, he had arrived in an ambulance to Valley Medical Center, where he was declared inebriated nearly to the point of unconsciousness. Blood alcohol tests indicated he had consumed the equivalent of 21 beers. He spent the night detoxing. The next morning he was discharged, somewhat more sober.
The date on that record was November 29. If the record was right, Anderson had been in the hospital precisely as Raveesh Kumra was suffocating on duct tape miles away.
Kulick remembers turning to the investigator, who was staring back at her. She was used to alibis being partial and difficult to prove. This one was signed by hospital staff. More than anything, she felt terrified. "To know that you have a factually innocent client sitting in jail facing the death penalty is really scary," she said later. "You don't want to screw up."
She knew Lunsford and the prosecutors would try to find holes: Perhaps the date on the record was wrong, or someone had stolen his ID, or there was more than one Lukis Anderson.
So she and the investigator systematically retraced his day. Anderson had only patchy recollections of the night in question. But they found a record that a 7-Eleven clerk called authorities at 7:54 pm complaining that Anderson was panhandling. He moved on before the police arrived.
His meanderings took him four blocks east, to S&S Market. The clerk there told Kulick that Anderson sat down in front of the store at about 8:15 pm, already drunk, and got drunker. A couple of hours later, he wandered into the store and collapsed in an aisle. The clerk called the authorities.
The police arrived first, followed by a truck from the San Jose Fire Department. A paramedic with the fire department told Kulick he had picked up Anderson drunk so often that he knew his birth date by heart. Two other paramedics arrived with an ambulance. They wrestled Anderson onto a stretcher and took him to the hospital. According to his medical records, he was admitted at 10:45 pm. The doctor who treated him said Anderson remained in bed through the night.
Harinder Kumra had said the men who killed Raveesh rampaged through her house sometime between 11:30 pm and 1:30 am.
Kulick called the district attorney's office. She wanted to meet with them and Lunsford.
In 2008, German detectives were on the trail of the "Phantom of Heilbronn." A serial killer and thief, the Phantom murdered immigrants and a cop, robbed a gemstone trader, and munched on a cookie while burglarizing a caravan. Police mobilized across borders, offered a large reward, and racked up more than 16,000 hours on the hunt. But they struggled to discern a pattern to the crimes, other than the DNA profile the Phantom left at 40 crime scenes in Germany, France, and Austria.
At long last, they found the Phantom: An elderly Polish worker in a factory that produced the swabs police used to collect DNA. She had somehow contaminated the swabs as she worked. Crime scene investigators had, in turn, contaminated dozens of crime scenes with her DNA.
Contamination, the unintentional introduction of DNA into evidence by the very people investigating the crime, is the best understood form of transfer. And after Lunsford heard Kulick's presentation—then retraced Anderson's day himself, concluded he had jailed an innocent man, and felt sick to his stomach for a while—he counted contamination among his leading theories.
As the Phantom of Heilbronn case demonstrated, contamination can happen long before evidence arrives in a lab. A 2016 study by Gill, the British forensic researcher, found DNA on three-quarters of crime scene tools he tested, including cameras, measuring tapes, and gloves. Those items can pick up DNA at one scene and move it to the next.
Once it arrives in the lab, the risk continues: One set of researchers found stray DNA in even the cleanest parts of their lab. Worried that the very case files they worked on could be a source of contamination, they tested 20. Seventy-five percent held the DNA of people who hadn't handled the file.
In Santa Clara County, the district attorney's office reviewed the Kumra case and found no obvious evidence of errors or improper use of tools in the crime lab. They checked whether Anderson's DNA had shown up in any other cases the lab had recently handled and could have inadvertently wandered into the Kumra case. It had not.
So they began investigating a second theory: That Raveesh and Anderson somehow met in the hours or days before the homicide, at which point Anderson's DNA became caught under Raveesh's fingernails.
"We are convinced that at some point—we just don't know when in the 24 hours, 48 hours, or 72 hours beforehand—that their paths crossed," deputy district attorney Kevin Smith told a San Francisco Chronicle reporter.
There now exists a small pile of studies exploring how DNA moves: If a man shakes someone's hand and then uses the restroom, could their DNA wind up on his penis? (Yes.) If someone drags another person by the ankles, how often does their profile clearly show up? (40 percent of the time.) And, of utmost relevance to Lukis Anderson, how many of us walk around with traces of other people's DNA on our fingernails? (1 in 5.)
Whether someone's DNA moves from one place to another—and then is found there—depends on a handful of factors: quantity (two transferred cells are less likely to be detected than 2,000), vigor of contact (a limp handshake relays less DNA than a bone-crushing one), the nature of the surfaces involved (a tabletop's chemical content affects how much DNA it picks up), and elapsed time (we're more likely carrying DNA of someone we just hugged than someone we hugged hours ago, since foreign DNA tends to rub off over time).
Then there's a person's shedding status: "Good" shedders lavish their DNA on their environment; "poor" shedders move through the world virtually undetectable, genetically speaking. In general, flaky, sweaty, or diseased skin is thought to shed more DNA than healthy, dry skin. Nail chewers, nose pickers, and habitual face touchers spread their DNA around, as do hands that haven't seen a bar of soap lately—discarded DNA can accumulate over time, and soap helps wash it away.
And some people simply seem to be naturally superior shedders. Mariya Goray, a forensic science researcher in van Oorschot's lab who coauthored the juice study with him, has found one of her colleagues to be an outrageously prodigious shedder. "He's amazing," she said, her voice tinged with admiration. "Maybe I'll do a study on him. And the study will just be called, 'James.'"
She hopes to develop a test to determine a person's shedder status, which could be deployed to assess a suspect's claims that their DNA arrived somewhere innocently.
Such a test could have been useful in the case of David Butler, an English cabdriver. In 2011, DNA found on the fingernails of a woman who had been murdered six years earlier was run through a database and matched Butler's. He swore he'd never met the woman. His defense attorney noted that he had a skin condition so severe that fellow cabbies had dubbed him "Flaky." Perhaps he had given a ride to the actual murderer that day, who inadvertently picked up Butler's DNA in the cab and later deposited it on the victim, they theorized.
Investigators didn't buy the explanation, but jurors did. Butler was acquitted after eight months in jail. Upon release, he excoriated police for their blind faith in the evidence.
"DNA has become the magic bullet for the police," Butler told the BBC. "They thought it was my DNA, ergo it must be me."
Traditional police work would have never steered police to Anderson. But the DNA hit led them to seek other evidence confirming his guilt. "It wasn't malicious. It was confirmation bias," Kulick says. "They got the DNA, and then they made up a story to fit it."
Had the case gone to trial, jurors may well have done the same. A 2008 series of studies by researchers at the University of Nevada, Yale, and Claremont McKenna College found that jurors rated DNA evidence as 95 percent accurate and 94 percent persuasive of a suspect's guilt.
Eleven leading DNA transfer scientists contacted for this story agreed that the criminal justice system must be willing to question DNA evidence. They also agreed on whose job it should be to answer those questions: forensic scientists.
As it stands, forensic scientists generally stick to the question of source (whose DNA is this?) and leave activity (how did it get here?) for judges and juries to wrestle with. But the researchers contend that forensic scientists are best armed with the information necessary to answer that question.
Consider a case in which a man is accused of sexually assaulting his stepdaughter. He looks mighty guilty when his DNA and a fragment of sperm are found on her underwear. But jurors might give the defense more credence if a forensic scientist familiarized them with a 2016 Canadian study showing that fathers' DNA is frequently found on their daughters' clean underwear; occasionally, a fragment of sperm is there too. It migrates there in the wash.
This shift—from reporting on who to reporting on how—has been encouraged by the European Network of Forensic Science Institutes. But the shift has been slow on that continent and virtually nonexistent in the United States, where defense attorneys have argued that forensic scientists—in many communities employed by the prosecutor's office or police department—should be careful to stick to the facts rather than make conjectures.
"The problem is that when forensic scientists get involved in those determinations, they're wrought with confirmation bias," says Jennifer Friedman, a Los Angeles County public defender.
Meanwhile, forensic scientists in the US have resisted the shift, arguing they lack the data to confidently testify about how DNA moves.
Van Oorschot and Gill concede this point. Only a handful of labs in Europe and Australia regularly research transfer. The forensic scientists interviewed for this story say they are not aware of any lab or university in the US that routinely does so.
Funding gets some of the blame: The Australian labs and some European labs get government dollars to study DNA transfer. But British forensic researcher and professor Georgina Meakin of University College London says she must find alternative ways to pay for her own transfer research; the Centre for Forensic Sciences, where Meakin works, has launched a crowdfunding page for a new lab to study trace evidence transfer. In the US, all the grants from the National Science Foundation, the National Institute of Standards and Technology, and the National Institute of Justice for forensics research put together likely total just $13.5 million a year, according to a 2016 report on forensic science by the President's Council of Advisors on Science and Technology (PCAST); of that, very little has been spent looking into DNA transfer.
"The folks with the greatest interest in making sure forensic science isn't misused are defendants," says Eric Lander, principal leader of the Human Genome Project, who cochaired PCAST under President Obama. "Defendants don't have an awful lot of power."
In 2009, after issuing a report harshly criticizing the paucity of science behind most forensics, the National Academy of Sciences urged Congress to create a new, independent federal agency to oversee the field. There was little political appetite to do that. Instead, in 2013, Obama created a 40-member National Commission on Forensic Science, filled it with people who saw forensics from radically different perspectives—prosecutors, defense attorneys, academics, lab analysts, and scientists—and made a rule that all actions must be approved by a supermajority. Naturally, the commission got off to a slow start. But ultimately it produced more than 40 recommendations and opinions. These lacked the teeth of a regulatory ruling, but the Justice Department was obligated to respond to them.
At the beginning, most of the commission's efforts were focused on improving other disciplines, "because DNA testing as a whole is so much better than much forensic science that we had focused a lot of our attention elsewhere," says US district judge Jed Rakoff, a member of the commission.
According to Rakoff and other members interviewed, the commission was just digging into issues touching on DNA transfer when attorney general Jeff Sessions took office last year. In April 2017, his department announced it would not renew the commission's charter. It never met again.
Then, in August, President Trump signed the Rapid DNA Act of 2017, allowing law enforcement to use new technology that produces DNA results in just 90 minutes. The bill had bipartisan support and received little press. But privacy advocates worry it may usher in an era of widespread "stop and spit" policing, in which law enforcement asks anyone they stop for a DNA sample. This is already occurring in towns in Florida, Connecticut, North Carolina, and Pennsylvania, according to reporting by ProPublica. If law enforcement deems there is probable cause, they can compel someone to provide DNA; otherwise, it is voluntary.
If stop-and-spit becomes more widely used and police databases swell, it could have a disproportionate impact on African Americans and Latinos, who are more often searched, ticketed, and arrested by police. In most states, a felony arrest is enough to add someone in perpetuity to the state database. Just this month, the California Supreme Court declined to overturn a provision requiring all people arrested for or charged with a felony to give up their DNA; in Oklahoma, the DNA of any undocumented immigrant arrested on suspicion of any crime is added to a database. Those whose DNA appears in a database face a greater risk of being implicated in a crime they didn't commit.
It was Lunsford who figured it out in the end.
He was reading through Anderson's medical records and paused on the names of the ambulance paramedics who picked up Anderson from his repose on the sidewalk outside S&S Market. He had seen them before.
He pulled up the Kumra case files. Sure enough, there were the names again: Three hours after picking up Anderson, the two paramedics had responded to the Kumra mansion, where they checked Raveesh's vitals.
The prosecutors, defense attorney, and police agree that somehow, the paramedics must have moved Anderson's DNA from San Jose to Monte Sereno. Santa Clara County District Attorney Jeff Rosen has postulated that a pulse oximeter slipped over both patients' fingers may have been the culprit; Kulick thinks it could have been their uniforms or another piece of equipment. It may never be known for sure.
A spokesman for Rural/Metro Corporation, where the paramedics worked, told San Francisco TV station KPIX5 that the company had high sanitation standards, requiring paramedics to change gloves and sanitize the vehicles.
Deputy District Attorney Smith framed the incident as a freak accident. "It's a small world," he told a San Francisco Chronicle reporter.
The trial against the other men implicated in the case moved forward. Austin's older sister, Fritz, testified in trials against him and Garcia. She also testified against a third man, Marcellous Drummer, whose DNA had been found on evidence from the Kumra crime scene months after the initial hits.
During the trials, Harinder Kumra told jurors she was still haunted by the image of the man who split her lip open. "Every day I see that face. Every night when I sleep, when there's a noise, I think it's him," she said. She has sold the mansion. Members of the Kumra family declined to comment for this story.
The DNA in the case did not go uncontested. Garcia's attorney argued that, like Anderson's, Garcia's DNA had arrived at the scene inadvertently. According to the attorney, Austin had come by the trap house where Garcia hung out to pick up Garcia's cousin; the cousin was in on the crime and had borrowed a box of gloves that Garcia frequently used, which is why Garcia's DNA was found on the gloves at the crime scene; the reason Garcia's cellphone pinged towers near Monte Sereno was because his cousin had borrowed it that night. However, the cousin died within weeks of the crime, and therefore wasn't questioned or investigated.
Jurors were not persuaded and convicted Garcia, along with Drummer and Austin, of murder, robbery of an inhabited place, and false imprisonment.
"I get it," says Garcia's attorney Christopher Givens. "People hear DNA and say, oh, sure you loaned your phone to someone."
A jury could have had the same reaction to Anderson, had his alibi not been discovered, Givens says. "The sad thing is, I wouldn't be surprised if he actually pleaded to something. They probably would have offered him a deal, and he would have been scared enough to take it."
Garcia received a sentence of 37 years to life; Drummer and Austin's sentences were enhanced for gang affiliation to life without parole. Garcia and Austin have appeals pending. Fritz received a reduced sentence for her testimony. In 2017 she was released from jail after spending four years in custody.
Lunsford received accolades for his detective work in the Kumra case and has since been promoted to sergeant; his boss, D'Antonio, is now a captain. But Lunsford says his perspective on DNA has forever changed. "We shook hands, and I transferred on you, you transferred on me. It happens. It's just biological," he says.
Based on interviews with prosecutors, defense lawyers, and DNA experts, Anderson's case is the clearest known instance of DNA transference implicating an innocent man. It's impossible to say how often this kind of thing happens, but law enforcement officials argue that it is well outside the norm. "There is no piece of evidence or science which is absolutely perfect, but DNA is the closest we have," says District Attorney Rosen. "Mr. Anderson was a very unusual situation. We haven't come across it again."
Van Oorschot, the forensic science researcher whose 1997 paper revolutionized the field, cautions against disbelieving too much in the power of touch DNA to solve crimes. "I think it's made a huge impact in a positive way," he says. "But no one should ever rely solely on DNA evidence to judge what's going on."
Anderson's case has altered the criminal justice system in a small but important way, says Kulick.
"As defense attorneys, we used to get laughed out of the courtroom if in closing arguments we argued transfer," she says. "That was hocus-pocus. That was made up fiction. But Lukis showed us that it's real."
The cost of that demonstration was almost half a year of Anderson's life.
Being accused of murder was "gut-wrenching," he says. It pains him that he questioned his own innocence, even though, he says, "deep down I knew I didn't do it."
After he was released, Anderson returned to the streets. As is typical in cases where people are wrongly implicated in a crime, he received no compensation for his time in jail. He has continued to struggle with alcohol but has stayed out of major legal trouble since. He's applying for Social Security, which could help him finally secure housing.
Anderson feels certain he's not the only innocent person to be locked up because of transfer. He considers himself blessed by God to be free. And he has advice about DNA evidence: "There's more that's gotta be looked at than just the DNA," he says. "You've got to dig deeper a little more. Re-analyze. Do everything all over again … before you say 'this is what it is.' Because it may not necessarily be so."
The email arrived just as Megan Squire was starting to cook Thanksgiving dinner. She was flitting between the kitchen, where some chicken soup was simmering, and her living room office, when she saw the subject line flash on her laptop screen: “LOSer Leak.” Squire recognized the acronym of the League of the South, a neo-Confederate organization whose leaders have called for a “second secession” and the return of slavery. An anonymous insider had released the names, addresses, emails, passwords, and dues-paying records of more than 4,800 members of the group to a left-wing activist, who in turn forwarded the information to Squire, an expert in data mining and an enemy of far-right extremism.
Fingers tapping across the keyboard, Squire first tried to figure out exactly what she had. She pulled up the Excel file’s metadata, which suggested that it had passed through several hands before reaching hers. She would have to establish its provenance. The data itself was a few years old and haphazardly assembled, so Squire had to rake the tens of thousands of information-filled cells into standardized sets. Next, she searched for League members near her home of Gibsonville, North Carolina. When she found five, she felt a shiver. She had recently received death threats for her activism, so she Googled the names to find images, in case those people showed up at her door. Then she began combing through the thousands of other names. Two appeared to be former South Carolina state legislators, one a firearms industry executive, another a former director at Bank of America.
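The cleanup Squire describes, raking haphazardly assembled cells into standardized sets and then searching them by town, can be sketched in a few lines of Python. This is a minimal illustration, not her actual code: the field names and records below are invented, and a real leak would run to thousands of rows.

```python
import csv
import io

# Invented stand-in for a messy exported spreadsheet: inconsistent
# casing, stray whitespace in headers and values.
RAW = """name,City , state
 John Doe ,GIBSONVILLE,nc
Jane Roe,Burlington, NC
"""

def normalize(row):
    # Strip whitespace and lowercase the keys so every record
    # exposes the same standardized field names.
    return {k.strip().lower(): v.strip() for k, v in row.items()}

def members_near(rows, city):
    # Case-insensitive match on the normalized "city" field.
    return [r for r in rows if r["city"].lower() == city.lower()]

records = [normalize(r) for r in csv.DictReader(io.StringIO(RAW))]
local = members_near(records, "Gibsonville")
```

Only after a pass like this does name-by-name searching (and cross-checking against another database) become reliable, since raw leaked data rarely compares cleanly cell to cell.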
Once she had a long list of people to investigate, Squire opened a database of her own design—named Whack-a-Mole—which contains, as far as anyone can tell, the most robust trove of information on far-right extremists. When she cross-checked the names, she found that many matched, strengthening her belief in the authenticity of the leak. By midafternoon, Squire was exchanging messages via Slack with an analyst at the Southern Poverty Law Center, a 46-year-old organization that monitors hate groups. Squire often feeds data to the SPLC, whose analysts might use it to provide information to police or to reveal white supremacists to their employers, seeking to get them fired. She also sent several high-profile names from the list back to the left-wing activist, who she knew might take more radical action—like posting their identities and photos online, for the public to do with what it would.
Squire, a 45-year-old professor of computer science at Elon University, lives in a large white house at the end of a suburban street. Inside are, usually, some combination of husband, daughter, two step-children, rescue dog, and cat. In her downtime she runs marathons and tracks far-right extremists. Whack-a-Mole, her creation, is a set of programs that monitors some 400,000 accounts of white nationalists on Facebook and other websites and feeds that information into a centralized database. She insists she is scrupulous about not breaking the law or violating Facebook’s terms of service. Nor does she conceal her identity, in person or online: “We shouldn’t have to mask up to say Nazis are bad. And I want them to see I don’t fit their stereotypes—I’m not a millennial or a ‘snowflake.’ I’m a peaceful white mom who definitely doesn’t like what they’re saying.”
Though Squire may be peaceful herself, among her strongest allies are “antifa” activists, the far-left antifascists. She doesn’t consider herself to be antifa and pushes digital activism instead of the group’s black-bloc tactics, in which bandanna-masked activists physically attack white supremacists. But she is sympathetic to antifa’s goal of silencing racist extremists and is unwilling to condemn their use of violence, describing it as the last resort of a “diversity of tactics.” She’s an intelligence operative of sorts in the battle against far-right extremism, passing along information to those who might put it to real-world use. Who might weaponize it.
As day shifted to evening, Squire closed the database so she could finish up cooking and celebrate Thanksgiving with her family and friends. Over the next three weeks, the SPLC, with help from Squire, became comfortable enough with the information to begin to act on it. In the shadowy world of the internet, where white nationalists hide behind fake accounts and anonymity is power, Whack-a-Mole was shining a searchlight. By mid-December, the SPLC had compiled a list of 130 people and was contacting them, to give them a chance to respond before possibly informing their employers or taking legal action. Meanwhile, the left-wing activist whom Squire had separately sent data to was preparing to release certain names online. This is just how Squire likes it. Hers is a new, digitally enabled kind of vigilante justice. With no clear-cut rules for just how far a citizen could and should go, Squire has made up her own.
Squire grew up near Virginia Beach in a conservative Christian family. She has been involved in left-leaning movements since she was 15, when her high school environmental club took a trip to protest the pollution from an industrial pig farm. “I loved the activist community,” she says, “and saying things we weren’t supposed to say.” After getting degrees in art history and public policy from William & Mary, she became interested in computers and took a job as a secretary at an antivirus software company, working her way up to webmaster. She eventually got a PhD in computer science from Nova Southeastern University in Florida and moved to North Carolina to work at startup companies before landing a job teaching at Elon. Between classes she could often be spotted around town waving signs against the Iraq War, and in 2008 she went door to door campaigning for Barack Obama. But Obama’s failure, in her view, to live up to his rhetoric, compounded by the Great Recession, was “the turning point when I just threw in the towel on electoral politics,” she says. She plunged into the Occupy movement, coming to identify as a pacifist-anarchist, but she eventually became disillusioned with that as well when the movement’s “sparkle-fingers” utopianism, as she puts it, failed to generate results. In 2016, she cast a vote for the Green Party’s Jill Stein.
Donald Trump’s campaign, though, gave Squire a new sense of mission: “I needed to figure out what talents I had and what direct actions I could do.” When a mosque in the nearby city of Burlington was harassed by a local neo-Confederate group called Alamance County Taking Back Alamance County, she decided to put her skills to use. ACTBAC was using Facebook to organize a protest against the opening of the mosque, so Squire began scraping posts on the page that threatened to “kick Islam out of America.” She submitted her findings to the SPLC to get ACTBAC classified as a hate group, and to the North Carolina Department of the Secretary of State, which started an investigation into the group’s tax-exempt nonprofit status. She also organized a counterprotest to one of the group’s rallies, and it was at this event and others like it where she first became acquainted with the black-clad antifa activists. She was impressed. “They were a level of mad about racism and fascism that I was glad to see. They were definitely not quiet rainbow peace people.” Over the following months, she began feeding information to some of her new local antifa contacts. As white pride rallies intensified during 2017’s so-called Summer of Hate—a term coined by a neo-Nazi website—Squire began to monitor groups outside of North Carolina, corresponding with anonymous informants and pulling everything into her growing Whack-a-Mole database. Soon, in her community and beyond, antifa activists could be heard whispering about a new comrade who was bringing real, and potentially actionable, data-gathering skills to the cause.
The first big test of Whack-a-Mole came just before the white supremacist Unite the Right rally in Charlottesville on Saturday, August 12. In the weeks before, because of her database, Squire could see that nearly 700 white supremacists on Facebook had committed to attend the rally, and by perusing their posts, she knew they were buying plane tickets and making plans to caravan to Charlottesville. Her research also showed that some of them had extensive arrest records for violence. She sent a report to the SPLC, which passed it on to Charlottesville and Virginia law enforcement. She also called attention to the event on anarchist websites and spread the word via “affinity groups,” secret peer-to-peer antifa communication networks.
“Antifa was a level of mad about racism and fascism that I was glad to see. They were definitely not quiet rainbow peace people.”
The night before the rally, Squire and her husband watched in horror on the internet as several hundred white supremacists staged a torch-lit march in Charlottesville to protest the removal of a statue of Robert E. Lee, chanting “Jews will not replace us!” The next morning, the couple got up at 5 am and drove more than 150 miles through rain and mist to Virginia. At a crowded park, she met with a half-dozen or so activists she knew from North Carolina, some of them antifa, and unfurled a banner for the Industrial Workers of the World. (She’d joined the Communist-inspired labor organization in December 2016, after witnessing what she considered its well-organized response to KKK rallies in North Carolina and Virginia.) Just before 10 am, the white supremacists began marching into Emancipation Park, a parade of Klansmen, neo-Nazis, militia members, and so-called alt-right adherents, armed with everything from homemade plexiglass shields to assault weapons. Squire screamed curses at the white supremacists by name—she knew them because she had their information on file in Whack-a-Mole and had memorized their faces. At one point, a group of clergy tried to blockade the white supremacists, and Squire linked arms with other activists to protect them. A petite woman, she was pushed aside by men with plexiglass shields. Fights broke out. Both sides blasted pepper spray. Squire put on a gas mask she’d been carrying in a backpack, but the pepper spray covered her arms, making them sting.
After the police finally separated the combatants, Squire and dozens of other counterprotesters took to Fourth Street in triumph. But then, a gray Dodge Challenger tore down the street—and rammed into their backs. The driver, who had marched with the white nationalists and was later identified as James Alex Fields, missed Squire by only a few feet. She stood on the sidewalk, weeping in shock, as the fatally injured activist Heather Heyer lay bleeding in the street.
Recounting the event months later, Squire began to cry. “I had all this intelligence that I hadn’t used as effectively as I could have. I felt like I’d wasted a chance that could have made a difference.” When she returned home, she threw herself into expanding Whack-a-Mole.
One morning in December, I visited Squire in her small university office. She had agreed to show me the database. First she logged onto a foreign server, where she has placed Whack-a-Mole to keep it out of the US government’s reach. Her screen soon filled with stacks of folders nested within folders: the 1,200-plus hate groups in her directory. As she entered command-line prompts, spreadsheets cascaded across the screen, each cell representing a social media profile she monitors. Not all of them are real people. Facebook says up to 13 percent of its accounts may be illegitimate, but the percentage of fakes in Squire’s database is probably higher, as white nationalists often hide behind multiple sock puppets. The SPLC estimates that half of the 400,000-plus accounts Squire monitors represent actual users.
Until Whack-a-Mole, monitoring white nationalism online mainly involved amateur sleuths clicking around, chasing rumors. Databases, such as they were, tended to be cobbled together and incomplete. Which is one reason no one has ever been able to measure the full reach of right-wing extremism in this country. Squire approached the problem like a scientist. “Step one is to get the data,” she says. Then analyze. Whack-a-Mole harvests most of its data by plugging into Facebook’s API, the public-facing code that allows developers to build within Facebook, and running scripts that pull the events and groups to which various account owners belong. Squire chooses which accounts to monitor based on images and keywords that line up with various extremist groups.
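The harvesting loop described above can be sketched roughly as follows. Everything here is hypothetical: the function names, the watch terms, and `fake_fetch`, which stands in for a real paginated Graph API call of the kind Squire's scripts would make.

```python
# Illustrative watch terms; Squire's real keyword lists are far larger.
KEYWORDS = {"league of the south", "actbac"}

def flag_account(account_id, fetch_groups, keywords=KEYWORDS):
    """Return the subset of an account's group names matching watch terms."""
    groups = fetch_groups(account_id)  # e.g. one page of a /{id}/groups edge
    return [g for g in groups if any(k in g.lower() for k in keywords)]

def harvest(account_ids, fetch_groups):
    # Build {account_id: matched group names}, keeping only accounts
    # with at least one hit, for insertion into the central database.
    hits = {}
    for acct in account_ids:
        matched = flag_account(acct, fetch_groups)
        if matched:
            hits[acct] = matched
    return hits

# Fake fetcher for demonstration; a real one would call the API and
# follow pagination cursors.
def fake_fetch(account_id):
    data = {
        "u1": ["Gardening Club", "League of the South Chapter 12"],
        "u2": ["Book Swap"],
    }
    return data.get(account_id, [])

result = harvest(["u1", "u2"], fake_fetch)
```

The design matters to her legal claim: by pulling only what the platform's public-facing API exposes, the scripts collect what any developer could, rather than scraping behind logins.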
Most of the Whack-a-Mole profiles contain only basic biographical sketches. For more than 1,500 high-profile individuals, however, Squire fills out their entries with information gleaned from sources like the SPLC, informers, and leaks. According to Keegan Hankes, a senior analyst at the SPLC, Squire’s database “allows us to cast a much, much wider net. We’re now able to take a much higher-level look at individuals and groups.”
In October, after a man fired a gun at counterprotesters at a far-right rally in Florida, SPLC analysts used Squire’s database to help confirm that the shooter was a white nationalist and posted about it on their blog. Because so much alt-right digital data vanishes quickly, Whack-a-Mole also serves as an archive, providing a more permanent record of, say, attendees at various rallies. Squire’s database has proven so useful that the SPLC has begun laying the groundwork for it to feed directly into its servers.
When Squire sends her data to actual citizens—not only antifa, but also groups like the gun-toting Redneck Revolt—it gets used in somewhat less official ways. Before a neo-Nazi rally in Boston this past November, Squire provided local antifa groups with a list of 94 probable white nationalist attendees that included their names, Facebook profiles, and group affiliations. As one activist who goes by the pseudonym Robert Lee told me, “Whack-a-Mole is very helpful. It’s a new way to research these people that leads me to information I didn’t have.” He posts the supposed identities of anonymous neo-Nazis and KKK members on his blog, Restoring the Honor, which is read by journalists and left-wing activists, and on social media, in an effort to provoke the public (or employers) to rebuke them.
Lee is careful, he says, to stop short of full-on doxing these individuals—that is, publicizing more intimate details such as home addresses, emails, and family photos that would enable electronic or even real-world harassment against them. Squire says that’s why she feels comfortable sending him information. Of course, once a name is public, finding personal information is not that hard. In the digital age, doxing is a particularly blunt tool, one meant to terrorize and threaten people in their most private spaces. Celebrities, private citizens, left-wing activists, and Nazis have all been doxed. The tactic allows anonymous hordes of any persuasion to practice vigilante justice on anyone they deem evil, problematic, or just plain annoying. As the feminist videogame developer and activist Zoe Quinn, who has been doxed and brutally harassed online, has written: “Are you calling for accountability and reform, or are you just trying to punish someone—and do you have any right to punish anyone in the first place?”
Squire has been doxed herself. Pictures of her home, husband, and children have been passed around on racist websites. She has received death threats and terrorizing voicemails, including one that repeated “dirty kike” for 11 seconds. Elon University has fielded calls demanding she be fired. On Halloween, Confederate flags were planted in her yard. Still, though Squire fears for her family’s safety, she keeps going. “I’m aware of the risks,” she says. “But it seems worth it. That’s what taking a stand is.”
After Charlottesville, Squire considered, in her anger and grief, publicly releasing the entire Whack-a-Mole database. It would have been the largest-ever doxing of the far right. But she worried about the consequences of misidentification. Instead, she worked with her regular partners at the SPLC and activists she trusts. At one point the SPLC contacted a university about a student whom Squire had identified as a potentially violent member of the League of the South. The university did not take action, and she thought about tossing the student’s name to the ever-ravenous social media mobs. But here too, she reasoned that when you have someone’s life at your fingertips, you need rules. If the university wasn’t willing to act, then neither was she. It was, for her, a compromise, an attempt to establish a limit in a national moment pointedly lacking in limits.
Critics might still argue that public shaming of the kind Squire promotes constitutes a watered-down form of doxing, and that this willingness to take matters into their own hands makes Squire and her cohort no better than vigilantes. As David Snyder, executive director of the First Amendment Coalition, says of Squire’s work: “Is it ethical to digitally stalk people? It may not be. Is it legal? Probably, as long as she doesn’t hack into their accounts and she’s collecting information they post publicly on an open platform like Facebook.” But he warns that limiting speech of anyone, even white supremacists, starts down a slippery slope. “Political winds can shift across time. Liberals who might cheer at a university limiting neo-Nazi speech also have to worry about the flip side of that situation when someone like Trump might penalize them in the future.”
As far as Squire is concerned, there’s a clear difference between protected speech and speech that poses an imminent threat to public safety. “Richard Spencer yelling about wanting a white ethno-state after events like Charlottesville—it’s hard to argue that kind of speech doesn’t constitute danger.”
Ultimately, Squire sees her work as a type of “fusion center”—a government term for a data center that integrates intelligence from different agencies—for groups combating white nationalism. And she admits that she is outsourcing some of the ethical complexities of her work by handing her data off to a variety of actors. “But it’s the same as how Facebook is hypocritical in claiming to be ‘just a platform’ and not taking responsibility for hate. Every time we invent a technology to solve a problem, it introduces a bunch more problems. At least I’m attentive to the problems I’ve caused.” Squire sees herself as having to make difficult choices inside a system where old guidelines have been upended by the seismic powers of the internet. White nationalists can be tracked and followed, and therefore she believes she has a moral obligation to do so. As long as law enforcement keeps “missing” threats like James Alex Fields, she says, “I don’t have any moral quandaries about this. I know I’m following rules and ethics that I can stand up for.”
After Charlottesville, some white supremacist groups did find themselves pushed off certain social media and hosting sites by left-wing activists and tech companies wary of being associated with Nazis. These groups relocated to platforms like the far-right Twitter clone Gab and Russia’s Facebook-lite VK. Squire sees this as a victory, believing that if white nationalists flee to the confines of the alt-right echo chamber, their ability to recruit and organize weakens. “If the knowledge that we’re monitoring them on Facebook drives them to a darker corner of the internet, that’s good,” she asserts.
That doesn’t mean Squire won’t follow them there. She has no plans to stop digitally surveilling far-right extremists, wherever they may be. After Jason Kessler, the organizer of the Unite the Right rally, lost his verified status on Twitter, he joined VK. His first post read, “Hello VK! I’d rather the Russians have my information than Mark Zuckerberg.” The declaration was quickly scooped up by Squire. She had already built out Whack-a-Mole to track him there too.
Correction appended, 1/22/2018, 2:58 PM EDT: A previous version of this story incorrectly stated that Squire sent names from the LOSer Leak spreadsheet to another contact. She only sent them back to the person she received the spreadsheet from.
Facebook deserves a lot of the flak it gets, be it for providing Russian propaganda with a platform or gradually eroding privacy norms. Still, it has some genuine usefulness. And while the single best way to keep your privacy safe on Facebook is to delete your account, taking these simple steps in the settings is the next best thing.
Remember, it's not just friends of friends you need to think about hiding from; it's an army of advertisers looking to target you not just on Facebook itself, but around the web, using Facebook's ad platform. In the video above and the post below, we'll show you how to deal with both.
Limiting who can see which of your posts is an easy first step. On a desktop, go to the little dropdown arrow in the upper-right corner, and click Settings. From there, click on Privacy on the left-hand side. This is where the magic happens.
Under Who can see my stuff, click on Who can see your future posts to manage your defaults. You can make your posts visible to anyone at all, limit them to your friends, or exclude specific friends. You can also quarantine your posts by geography, by current or previous employers or schools, or by groups. Just remember that the next time you change the audience, the new group becomes your default. So double-check every time you post.
This section has other important privacy tools you can fiddle with, including who can look you up with your email address or phone number. We'd recommend not listing either in the first place, but if you do, keep the circle as small as possible. (If you do have to share one or the other with Facebook for account purposes, you can hide them by going to your profile page, clicking Contact and Basic Info, then Edit when you mouse over the email field. From there, click on the downward arrow with two silhouettes to customize who can see it, including no one but you.)
But pay special attention to the option to (deep breath) Limit the audience for posts you’ve shared with friends of friends or public? If you ever had a public account, making it private isn't retroactive. If you want to hide those previously viewable posts, lock this setting down.
Over on Timeline and Tagging you can control what shows up on your own Facebook timeline. Basically, you can’t stop your friends from tagging you (sorry!), but you can stop those embarrassing photos from popping up on your page. At the very least, you should go to Review posts you’re tagged in before the post appears on your timeline, and enable that setting so you can screen any tags before they land on your page.
To test out your changes, go to Review what other people see on your timeline. You can even see how specific people view your page, like your boss or your ex or complete strangers. It also never hurts to take stock of how you present yourself to the world. (Looking at you, people who haven't updated your cover photo since the Obama administration.)
That should about cover your friends. Now onto advertisers, which are like friends, except they never leave you alone, even if you ask nicely.
Ad It Up
In that same Settings panel, head down to Ads. As you probably realized, Facebook knows what you do pretty much everywhere online. So does Google, so do dozens of ad networks you’ve never heard of. You're being tracked pretty much all the time, by everyone, thanks to this here internet.
You can still limit how Facebook uses that information, though. Tired of that lawnmower you looked at following you to Facebook? Turn off Ads based on my use of websites and apps. Saying no to Ads on apps and websites off of the Facebook Companies does the same, except for all the sites Facebook serves ads to around the web. Which is most of them.
Lastly, for some fun insight into how advertisers think of you, click on Your Interests. There you’ll find all the categories Facebook uses to tailor ads for you. You can remove any you don’t like, and marvel at the ones that don’t make any sense. This won't make the ads go away, but at least you can banish all those off-brand kitchen gadgets from your News Feed.
And you’re good! Or at least, as good as can be expected. It’s still Facebook, after all.