by Kate Ferguson, World Policy Review, New York, September 7, 2021
There is extensive evidence that the Chinese government is violating the human rights of ethnic Uyghurs, and that these violations include crimes against humanity and genocide. Satellite imagery, testimonials, demographic data and photographs substantiate the extensive allegations against China, which include the use of mass surveillance technologies throughout Xinjiang province, the arbitrary detention of more than 1 million Uyghurs and other Turkic Muslims, the torture and inhumane treatment of detainees, the separation of children from their parents, systematic sterilization, rape, forced labor and organ harvesting.
While survivors, relatives and diaspora communities have long sought to draw attention to the systematic and widespread violence in Xinjiang, states, multilateral organizations, corporations and many civil society groups have only recently begun considering their response to these grave violations. For more than a year, much of the public debate on China’s treatment of Uyghurs has focused not on what can be done in the face of such brutality, but on how that brutality should be described: Is this a genocide or not? Concrete efforts to end the violence—or even meaningful debates on what those efforts should look like—have been harder to spot.
For advocates like me who work on preventing and responding to genocide and crimes against humanity, this is a familiar state of affairs, and one that raises important questions. Ever since the Holocaust, advocates have invoked the world’s promise that genocide would happen “never again” to mobilize international attention and action. But does securing official determinations that a mass atrocity is a genocide actually lead to constructive policy responses? If not, at a time when political attention to foreign policy is in retreat all over the world, these campaigns may not be the most effective use of our finite resources.
In the mid-2000s, the U.S.-based Save Darfur movement was successful in drawing attention to the Sudanese government’s killing of civilians in western Sudan, and its work prompted the U.S. to designate the violence as a genocide. But the campaign was too blunt to leverage its extraordinary platform to advance the comprehensive and necessarily complex policies the crisis warranted. That is not to say the advocacy movement is to blame for the failures in Darfur. But, as Rebecca Hamilton, a law professor at American University, put it in an interview with Slate, Save Darfur’s story is really “the story of mass movement advocacy” writ large. “It is second-to-none when it comes to sheer awareness-raising,” she explained, “but agility and nuance are rarely its strengths.”
Political genocide designations, like the ones on Xinjiang reached by former President Donald Trump’s administration in the U.S. and passed by the British Parliament earlier this year, do help to raise awareness and mobilize public engagement—but they also suck up political attention and civil society energy. There is only so much space a legislative calendar will ever give to individual foreign affairs issues, however pressing. And so far, efforts to recognize atrocities as genocide have only rarely, and never in isolation, contributed to effective protective or preventative policy decisions.
It is difficult to see what might be different for the populations currently suffering in Xinjiang. As Kate Cronin-Furman, a human rights expert at University College London, has argued, genocide determinations alone have never saved lives. To pretend anything else is disingenuous.
A Diluted Concept of Genocide
The word “genocide” was coined by Polish lawyer Raphael Lemkin in 1944 by bringing together the ancient Greek word genos, meaning “race” or “tribe,” and the Latin cide, or “killing.” It was, he explained, to “signify a coordinated plan of different actions aimed at the destruction of essential foundations of the life of national groups.”
As a lawyer, Lemkin was committed to establishing genocide as a crime so that states could be held accountable. But the definition of it eventually codified in the United Nations’ 1948 Convention on the Prevention and Punishment of the Crime of Genocide, better known as the Genocide Convention, is much narrower than Lemkin had envisioned.
The convention’s definition outlaws actions carried out with the “intent to destroy, in whole or in part, a national, ethnical, racial or religious group,” including but not limited to killing its members, sterilization and transferring children into the care of other groups. During the convention’s drafting, countries within the Soviet bloc and many in Latin America worked to remove “political groups” from the list of protected categories, while the U.K., France and the Netherlands vetoed the inclusion of “cultural genocide,” which includes efforts to eliminate a group’s language, values and cultural practices.
These refinements left the world with a restricted legal framework as the primary definition of the multidimensional phenomenon Lemkin had sought to articulate. In addition to being a lawyer, Lemkin was also a historian and activist; he understood that atrocities are often the culmination of discriminatory policies. Having analyzed the early policies and decrees that preceded the Holocaust, he wrote that he could “read the intentions of the Nazi government” in them, noting in his autobiography that “Hitler was one of the few statesmen in history who proclaimed his intentions many years before he took power.”
Many forget—or do not know—that Lemkin was inspired to create the new term not only by the Holocaust, but also by his study of the systematic persecution, cultural destruction and organized violence faced by Armenians at the hands of the Ottoman Empire in 1915-1917 and by Ukrainians during the human-made Holodomor famine in 1932-1933, as well as by the atrocities committed by the Roman Empire across the ancient world.
“He was always indignant that genocide was associated solely with physical extermination, in whole or part, of a group,” Michael Ignatieff, the former leader of Canada’s Liberal Party, wrote in a 2013 essay. “He always believed that genocide could take non-exterminatory forms, as in the determined attempt he had seen in his native Poland to crush Polish language, culture and faith and turn a people into slaves.”
“Genocide,” then, was simultaneously intended by Lemkin to be a new crime, a new concept of violence and a new call to action. It was meant to give name to the many forms of discrimination that target identity groups—including structural violence. It is a great shame that Lemkin’s vision did not prevail over the legal definition and the popular understanding of it, which connects the term so inextricably to the Holocaust’s concentration camps. In the end, as much as the creation of the word “genocide” has inspired and focused advocacy, it has also brought challenges that actually obstruct more holistic approaches to preventing identity-based mass violence.
As the historian Dirk Moses has explained, the term “genocide” has “distort[ed] our criminal vocabulary with its paralyzingly monumental status as the ‘crime of crimes.’” From a legal perspective, there is no hierarchy between different forms of mass atrocity. In political culture, though, genocide is often assumed to be more severe, while crimes against humanity, as the lawyer Philippe Sands put it, “came to be seen as the lesser evil.”
And this fixation on “genocide” as the top tier in a hierarchy of suffering has had tangible consequences. Despite the fact that the Genocide Convention makes the dual commitment to both punish and prevent genocide, states usually employ a retrospective approach to the crime, in which they wait for the violence to demonstrably reach a certain, recognizable threshold before reaching for the language of genocide. Yet by the time an atrocity reaches the point where it is more easily identifiable as one crime or another, prevention efforts have evidently already failed.
Moving From Distraction to Prevention
When journalists from the British media company ITN reported on a string of concentration camps in Bosnia in 1992, at the height of the Serbian ethnic cleansing campaign against Bosnian Muslims, their reports kicked off a high-profile intellectual debate over whether it was right to draw parallels between the atrocities in Bosnia and those of the Holocaust. Was “genocide” an appropriate word for what was happening? And, by the way, how could we be sure of the veracity of ITN’s images? It was a “debate” that only helped the perpetrators and did nothing for the victims.
The same disingenuous arguments are made today by China, Myanmar, Syria and their supporters to counter condemnations of their own gross human rights violations. Perpetrators and their public relations auxiliaries have homed in on the genocide question, not because there is any merit to their arguments against mass atrocity designations, but because they have learned that doing so can tap into a pre-existing and protracted discourse around the use of the term. It is a tactic that also activates conspiracy theorists and violence deniers, diverts advocates and campaigners toward collecting more evidence, and ultimately helps foreign states reluctant to respond to avoid their international obligations to help protect populations from atrocity crimes. As the challenges of disinformation mount, getting outside governments to stand with victims will only become more challenging.
Naming mass violence is important. It is a means of acknowledging suffering, standing in solidarity and confronting denial. Contrary to popular belief, genocide and crimes against humanity are not rare aberrations but are relatively common. Of today’s major and emerging crises, the vast majority—including those in Syria, Yemen, Libya, Myanmar, South Sudan, the Democratic Republic of Congo, Cameroon, Venezuela, China and Afghanistan—are driven at least in part by the deliberate, violent targeting of civilian groups by political elites. Systematic or widespread discrimination against people because of their race, ethnicity, religion, gender, sexuality, political affiliation, age, disability or class remains a common phenomenon in our modern world. In fact, as the impacts of the COVID-19 pandemic, climate change and democratic backsliding set in, the risk factors for genocide and crimes against humanity are set to increase in the medium and long term. Far from being a holdover from the past, atrocity crimes are becoming more common.
States and advocates, then, must move beyond the distracting conversations about genocide determinations toward prevention. Protecting populations from mass violence is a collective responsibility, as well as an individual one, and—73 years after the signing of the Genocide Convention—it must finally become an international priority.
In 2017, U.N. Secretary-General Antonio Guterres called for member states to integrate atrocity prevention into national policy processes. States should have the ability to make internal determinations that a mass atrocity may be emerging or under way, which can then be used to guide their policies—whether or not these analyses are disclosed to the public. That process should include monitoring the risk factors for atrocities, making assessments and formulating policy responses long before the threshold of atrocity is reached. The U.N. has helpfully produced an imperfect but useful framework that sets out risk indicators for mass atrocities, such as the presence of an exclusionary ideology, the politicization of grievances, social fracture and disputes over resources or territory. This guide can inform national risk assessments. It’s worth noting, too, that the U.N. makes no distinction between risk factors for crimes against humanity versus genocide: They are the same.
Actively choosing not to do this kind of analysis—whether out of a fear of what might be determined or because intelligence agencies lack the necessary capabilities or coordination—leaves states vulnerable to pursuing bad policies based on incomplete intelligence. States should then establish their own strategies for reducing the risks of atrocity crimes and create a playbook for responding when violence is already underway, whether via targeting illicit financial flows, evacuating those at risk or pressuring perpetrators through diplomatic channels.
Governments should also create national coordinating authorities that can contribute to policy and maximize atrocity-prevention efforts across departments, from trade and development to diplomacy and refugee policy. States should similarly center the prevention of mass atrocities in their strategies for each foreign country, including by adopting more systematized ways to collect data on the dynamics of violence and risk factors. Better data allows states to identify opportunities for prevention, protection and punishment that they currently miss—or, at the very least, it would help them to avoid being complicit in international crimes.
The principles and practice of atrocity prevention should also be consciously integrated into job descriptions, job titles and staff trainings, so that a prevention-first way of thinking is fully institutionalized across government offices. Diplomats must be trained to recognize the distinguishing features and causes of mass atrocities. While the U.S. has mandated atrocity-prevention training for its diplomats, this is the exception, not the rule. Indeed, in the aftermath of the dramatic U.S. withdrawal from Afghanistan, one wonders if implementing this program sooner could have led the U.S. to pursue a plan that would have left fewer people at such immediate and acute risk. In recent months, the U.K. Foreign, Commonwealth and Development Office, too, has faced questions about why its staff in China, Ethiopia and elsewhere did not receive atrocity-prevention training.
Relying on awareness-raising campaigns to mobilize political leadership, attention and will toward ending atrocities simply doesn’t work. It never has. Instead, our approach to these crimes should be results-oriented and should recognize that the challenge of making determinations cannot distract from our responsibility to help save lives.
Thanks to the hard work of many advocates and policymakers, efforts to integrate prevention and protection into the working methods of national governments are already becoming more common. A number of “national mechanisms for atrocity crimes prevention” now exist across Latin America and the Great Lakes region of Africa. The U.S., too, has an interagency Atrocities Early Warning Task Force, founded as the Atrocities Prevention Board in 2011 by then-President Barack Obama. But even these structures fail to implement what Lemkin sought to teach us about recognizing early warning signs, raising alarms and assuming the responsibility to interrupt identity-based discrimination before it escalates to mass violence. We have a long way to go.
The threat of mass atrocities is not going anywhere. We need to urgently pivot to meet that challenge by building policy and processes that reduce complicity in mass violence, rather than spending so much time on definitions and determinations. The Uyghur community in Xinjiang, like other communities facing genocide and crimes against humanity around the world, urgently needs not just international attention, but creative, consistent and coordinated work to identify the measures that can reduce suffering and save lives.
Kate Ferguson is the co-executive director of the U.K.-based NGO Protection Approaches and chair of policy at the European Center for the Responsibility to Protect.