Cybersecurity in Development: to mainstream or prioritize?

Blog Post
Jan. 16, 2018

The call to incorporate cybersecurity into development projects and operations is not new. As early as 2010, the United Nations Group of Governmental Experts on Developments in the Field of Information and Telecommunications recognized the importance of building the cybersecurity capacity of nations around the world, particularly in less developed countries. Since then, as information and communication technology (ICT) has increasingly driven development outcomes, the need for more and better cybersecurity capacity building has only grown: nearly every pillar of society, from the economy to governance to social interaction, is or can be touched by ICT.

To many, the importance of cybersecurity in light of these new pan-societal dependencies is a given. Events like WannaCry and the various cybersecurity concerns surrounding the 2016 United States election have catalyzed additional interest and investment in cybersecurity in many resource-rich, fully developed countries. In much of the less developed world, however, cyber insecurity is treated as a longer-term threat to be handled once the full benefits of ICT are being reaped across society. Yet this is the part of the world where the digital economy is growing at nearly twice the speed of the developed world, and where developments in e-voting and e-governance could have an outsized impact on the quality of human life.

Still, in 2016, the World Bank’s World Development Report (WDR) acknowledged, for the first time in the report’s history, the importance of cybersecurity as a concern for international development, noting that “some of the perceived benefits of digital technologies are offset by emerging risks.” Despite that recognition, the question remains: how exactly should cybersecurity be folded into international development writ large? In the past, the development community has incorporated emerging issues as they bubbled to the surface in one of two ways: prioritization or mainstreaming.

Prioritization is the act of identifying a key issue for the development community to focus on. Prominent examples from the last decade include goals outlined in the Millennium Development Goals and the Sustainable Development Goals, such as achieving universal primary education, reducing child mortality, and conserving the oceans. Priorities are typically identified by leading development institutions, like the World Bank Group, and communicated to the broader community through strategy documents like the Millennium Development Goals and Sustainable Development Goals themselves. In most cases, prioritization takes an existing development focus and elevates it for critical attention.

By contrast, mainstreaming applies to an emerging issue that cuts across many or all areas of development but may not receive sufficient focus from the development community on its own. Mainstreaming seeks to fold such an issue into existing development practice as a new equity or consideration. Perhaps the most notable examples of mainstreaming from the past two decades are women’s rights and human rights. In both cases, leaders in the development community, from prominent celebrity voices to major development donors, highlighted the need to consider these basic rights as development activity unfolds.

Because cybersecurity cuts across nearly all sectors of the economy, society, and government, mainstreaming seems like the better fit. The question then becomes: how? Although cybersecurity lacks some of the intrinsic, visceral resonance that human rights possesses as an issue, its mainstreaming in development could follow a template similar to that of human rights.

The mainstreaming of human rights in development was the result of a concerted effort on the part of the human rights movement to “operationalize the relevance of human rights to various fields of development.” The breakthrough was precipitated by two important shifts in approach. The first was a shift of emphasis from the “right-holder” model, which focuses on expanding human rights opportunities for individuals, to the “duty-bearer” model, which seeks to ensure that states and non-state actors understand, respect, protect, and fulfill human rights obligations. The second was a shift from a violations approach, in which the emphasis was on identifying and punishing human rights violators, to a policy approach, which “demands developing new tools to bring human rights concerns into forward-looking policy-making processes,” like Human Rights Impact Assessments (HRIAs).

In fact, the mainstreaming of human rights manifested most visibly in the creation and implementation of HRIAs. An initial push for HRIAs in business came in 2005, when UN Secretary-General Kofi Annan appointed John Ruggie, a noted international relations scholar and a driving force behind the Millennium Development Goals, as the Special Representative on the issue of human rights and transnational corporations and other business enterprises. Ruggie’s mandate included “identifying and clarifying standards of corporate responsibility and accountability with regard to human rights.” This work spilled over into development, where HRIAs came to rest on six essential elements:

  1. A normative human rights framework,
  2. Public participation,
  3. Equality and non-discrimination,
  4. Transparency and access to information,
  5. Accountability mechanisms, and
  6. An inter-sectoral approach.

A good template for cybersecurity impact assessments does not yet exist, but such assessments for corporations, lending institutions, and other development actors, underpinned by essential elements similar to those of HRIAs, could be an important tool for driving forward the conversation about the impact of cybersecurity on development outcomes.