Beyond the Safe Harbour: Navigating the ‘Fake News’ Conundrum

‘From the thicket of relations between fiction and truth we have seen a third term emerge: the false, the non-authentic—the pretence that advertises itself as true.’

— Carlo Ginzburg (2012)

Digital intermediaries, such as Facebook, YouTube, Instagram and Twitter, weave together countries and people in ways undreamed of even a few decades ago. Online platforms, while radically increasing our opportunities for cross-cultural communication, also amplify the consequences of that communication, and not always to beneficent effect. Since the 2016 US presidential elections, the manipulation and abuse of digital platforms by state and non-state actors to influence critical social and political events have attracted global attention.

From the Brexit referendum (Wright, et al., 2019) to the campaign of ethnic cleansing against Myanmar’s Rohingya Muslim minority (Mozur, 2018), a specific concern has been the rise of ‘fake news’ on popular social networking sites and messaging apps. While hoaxes, conspiracy theories and fabricated information have been documented throughout the history of communication, the centrality of platforms in our digitally connected worlds and the scale of ‘fake news’ present an unprecedented challenge.

The label ‘fake news’ is popularly used to refer to a broad range of content perceived as false, as well as deliberately manipulated messages and information. Researchers working on the issue emphasise that the term inadequately captures the different ways in which communication gets distorted, and prefer the term ‘information disorder’ (Wardle and Derakhshan, 2017).

The destructive impacts of information disorder include: (i) enabling foreign interference and propaganda; (ii) undermining trust in institutions that facilitate the functioning of democracy; (iii) disrupting democratic deliberation; and (iv) contributing to violent ethnic and religious nationalism. While much of the discussion on the impact of false and misleading information has focused on its political dimensions, information disorder muddies public discourse and trust on a range of issues.

A recent investigation by The Guardian uncovered that search results for information about vaccines on Facebook and YouTube were dominated by recommendations that steered viewers away from fact-based medical information toward deliberate misinformation meant to undermine trust in vaccines (Wong, 2018). In addition to hosting groups and channels promoting anti-vaccination propaganda, Facebook also accepted thousands of advertising dollars from organisations to target people interested in ‘vaccine controversies’ (Pilkington and Glenza, 2019).

In India, the Ministry of Electronics and Information Technology (MEITy) sent notices, ordering Google and Facebook to remove false and ‘malicious’ content on food safety, noting that ‘such false propaganda…erodes trust in the global food system and potentially has far-reaching public health, social and trade implications’ (Doval, 2019).

The complexities of information disorder raise the question of who, if anyone, is responsible for the impacts and consequences of the various modes of contamination in the information ecosystem, especially when those impacts and consequences are ethically problematic. In December 2018, MEITy announced a review of the regulatory treatment of digital intermediaries, seeking to hold them accountable under law for ‘fake news’. The draft amendments present a striking contrast, both to the way in which the government has approached the regulation of intermediaries, and to the manner in which online platforms view their role regarding contentious content.

In this article, I argue that the technological solutions outlined under the draft regulations are short-sighted interventions that fail to address the broader issue of platform responsibility.

Information disorder is a complex policy problem, and designing interventions to address it necessitates establishing accountability for the proliferation of false and manipulated content. However, this does not mean ceding control to unaccountable private companies when they seem ill-equipped to deal with the protection of speech.

I lay out my argument in three parts. First, I examine efforts to define and delineate types of ‘fake news’. Next, I discuss the role of online platforms in facilitating the spread of contentious content online. In part three, I discuss proposals for increased liability for platforms, focusing on the viability and desirability of the changes being contemplated to the legal framework that governs intermediaries in India.

NEITHER FAKE NOR NEWS

At the outset, one of the most challenging aspects of addressing the phenomenon dubbed ‘fake news’ is defining what constitutes fake news. The expression has been appropriated by politicians to dismiss disagreeable news coverage and is often used as a synonym for inaccurate journalism, propaganda, conspiracy theories, hoaxes, lies, fabricated pictures, and even Internet memes.

Claire Wardle has approached the categorisation of information disorder by distinguishing messages that are true from those that are false, and messages that are created, produced and distributed by agents with an intent to harm from those that are not (Wardle and Derakhshan, 2017). The framework has been immensely useful in distinguishing between various forms of information disorder, and in identifying factors that contribute to the production and consumption of false information.

Under this categorisation, misinformation refers to false content being shared without an intent to harm. Although some misinformation can be relatively innocuous, it can still lead to harm: for example, when unverified information shared during public emergencies or calamities creates panic or disorder. People share misinformation for a variety of reasons, including as validation of their identity and belief systems, or as a form of civic duty. As a BBC study on fake news and citizens highlights, in some instances simply the act of seeking to validate the veracity of misinformation contributes to different types of false messages being shared widely within networks (Chakrabarti, et al., 2018).

Disinformation refers to scenarios where false information is knowingly created and disseminated to sow mistrust and confusion, or to harm a person, social group or country. One striking example of disinformation is the fake video circulated by a member of the Legislative Assembly from the Bharatiya Janata Party (BJP), then in Opposition and now India’s ruling party, which snowballed into the Muzaffarnagar riots of 2013 that claimed 60 lives and displaced more than 40,000 people (Ahuja, 2013). Alarmingly, Prime Minister Narendra Modi’s official mobile application has emerged as a major hub for the dissemination of disinformation, where supporters, including several BJP party members, often post objectionable content, including fake quotes from leaders of rival parties (Bansal, 2019).
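
To make the two axes of the framework concrete, the following minimal Python sketch labels a message by veracity and intent, mirroring the definitions of misinformation and disinformation given above. It is an illustration of the taxonomy only; as the next paragraphs argue, determining either property in practice is precisely the hard part.

```python
from dataclasses import dataclass

@dataclass
class Message:
    is_false: bool       # is the content false or manipulated?
    intends_harm: bool   # was it created or shared with intent to harm?

def categorise(msg: Message) -> str:
    """Label a message along the two axes described above."""
    if msg.is_false and not msg.intends_harm:
        return "misinformation"   # false, shared without intent to harm
    if msg.is_false and msg.intends_harm:
        return "disinformation"   # false, knowingly created to deceive or harm
    return "outside the two categories defined above"

# An unverified rumour forwarded out of a sense of civic duty:
print(categorise(Message(is_false=True, intends_harm=False)))  # misinformation
```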

The focus on truth assumes the existence of a single verifiable version or source of truth. In reality, the line between truth and untruth is often difficult to draw. This becomes immediately evident when we consider the growing problem of deep fakes: hyper-realistic video or audio recordings created with artificial intelligence (AI). This approach also assumes that people share falsehoods because they do not have access to trustworthy information. However, a recent study from the Massachusetts Institute of Technology (MIT) found that the novelty of false or manipulated information, and people’s emotional reactions to it, contribute to false content being shared more widely than accurate reportage online (Vosoughi, et al., 2018).

Similarly, the metric of intent creates the perception of objectivity in the process of evaluating information. In practice, identifying the intent behind misinformation and disinformation, and taking action against such content, entails messy and politically fraught decisions. For example, in 2016, Google amended its policy to restrict advertisements that ‘misrepresent, misstate, or conceal information’. This move has reduced the number of channels that qualify for earning advertising revenue, a trend which impacts both long-standing and new users of the platform. In the absence of clear guidelines or oversight, content creators are being demonetised without explanation.

As the moderating practices of social networking platforms reveal, identifying truth or intent is not an easy task, since a lot of content lies in the middle ground between the intentionally deceitful or misleading, and accurate, factual information.1 Platforms’ strategies based on interpreting truth or intent break down in environments where, in large part, users determine what content they upload, amplify and are exposed to. As the MIT study on rumour cascades on social networking platforms highlights, false information spreads because people share it. In this regard, the motivations of actors or organisations that engage with the content of a post, or amplify it, become as important as identifying the intent of the source of the content (Helberger, et al., 2018).

The significance of social sharing becomes particularly evident when we contemplate the string of attacks across our country that have been fuelled by the circulation of fake messages on social networking sites. More than 30 people across 10 states became victims of vigilante justice, triggered by rumours and fake warnings of kidnappers or organ harvesters that were circulated on WhatsApp. Responding to the violence, the government has sought to allocate the responsibility of curbing the circulation of false information to online platforms.

By and large, the government’s demands have been limited to seeking ‘technological solutions’, such as traceability and the use of AI and machine-learning systems, to track content. This approach towards regulating content on the Internet is as much due to the complexities of online content distribution as it is to the fact that digital intermediaries qualify for ‘safe harbour’ and cannot be held liable for fake news or other types of contentious content on their platforms.

A SHIP IN HARBOUR IS SAFE

Although much of the recent discourse on information disorder in India has revolved around the question of whether platforms can be held accountable for content shared through them, under Section 79 of the Information Technology Act, 2000 (IT Act), intermediaries are shielded from civil or criminal liability for any third-party content made available by, or hosted on, them. In return for this legal immunity or safe harbour, intermediaries have to comply with certain obligations, such as observing statutory due diligence or enforcing ‘notice and takedown’ procedures.

The expansive safe harbour regime in India provides little recourse for regulators, even in instances when platforms have facilitated state-sponsored information warfare (Singh, 2018). A few countries have sought to frame information disorder as a cybersecurity issue, and to introduce penalties when platforms fail to prevent, or actively facilitate, information warfare. However, this response targets only one form of information disorder (state-sponsored disinformation campaigns), leaving other forms of strategic manipulation of information unaddressed.

Section 79 of the IT Act was challenged in the Supreme Court of India (SC) in Shreya Singhal v. Union of India2 (Shreya Singhal case). The SC upheld intermediary safe harbour, and strengthened it by requiring a judicial or executive review of content removal requests. As a result of the standards set by the SC in the Shreya Singhal case, intermediaries are deemed to have no knowledge of unlawful content, and are not required to take down content until the receipt of a government or judicial order. Importantly, intermediaries in India are not required to proactively monitor their platforms to track or remove contentious or harmful content.

The absence of legal requirements has not prevented digital intermediaries from regulating content on their platforms, and most large private companies have set up elaborate commercial content moderation schemes, such as reporting tools, human moderators and automated systems. However, beyond such self-regulatory measures, private Internet corporations appear to be very reluctant to moderate misleading content and hate speech (Caplan, et al., 2018).

Companies such as Facebook, Google and Twitter argue that the information on their platforms is user-generated, and that attempting to control it would make them the ‘arbiters of truth’ and infringe on the free-speech rights of their users. While there are merits to this argument, the reality is that almost all large private companies have developed elaborate content moderation and governance systems for deciding what kinds of speech are permissible, promoted and banned on their platforms. Such internal mechanisms have resulted in wide variations in how moderation is applied to different groups and content categories across platforms. There are numerous examples of platforms being slow to remove harmful content, as well as instances of over-broad censorship of legal and truthful information.

From the legal perspective, digital intermediaries rely on the host–editor dichotomy with the contention that since most of the information on their platforms is user-generated content, their role is limited to providing the infrastructure for hosting content, unlike publishers or creators who take editorial decisions. Although content creators and users of platforms exercise some control over the information they engage with online, the social and technical architecture of digital platforms wields considerable influence in shaping access to information and user engagement online.

OF THE PEOPLE, BY THE PEOPLE, FOR THE PLATFORM?

Digital platforms have expanded the ways in which Indian audiences discover and access information. This trend towards the greater availability of information obfuscates subtler transformations in the production, consumption and distribution of information online. Until recently, only special-effects experts could make realistic-looking and -sounding fake videos. Today, however, cheap, sophisticated and user-friendly editing technologies allow people with little to no technical expertise to create deep fakes.

Tutorials, with step-by-step instructions for people who want to create such content, are widely available across multiple online platforms. Community-based forums such as Quora, which have grown more influential than the websites of established media houses, have made it easier for non-expert influencers and opinion makers to share unverified, false or manipulated content with large audiences (Dasgupta and Sathe, 2018).

While diverse information is becoming more easily available, the distribution of information has become concentrated on a few platforms. The increasing reliance on social networking platforms such as Facebook, Google and Twitter for accessing information has led to the mass consolidation of audiences on these sites. For example, in 2018, Indian audiences spent nearly 47 billion hours on the country’s top five video-streaming apps. WhatsApp, which counts India as its biggest market, with more than 200 million users, has reported that users spend more than two billion minutes each day on audio and video calls.

The concentration of information sources and audiences on opaque and largely profit-driven private platforms is particularly relevant to analysing the problem of information disorder. Platforms derive value from continued user engagement, and because of the economic incentive of retaining audience attention, they are less likely to design or incorporate features that expose individuals to information that challenges their identity or world view. Arguably, by customising and tailoring information, based on users’ histories and preferences, social networking and broadcasting platforms have exacerbated the polarised pluralism underlying the ‘fake news’ problem in India.

The business models and commercial interests pursued by online platforms also encourage the production of content that is ‘click-worthy’ or engaging, irrespective of accuracy or truth. Fake news generates clicks, shares and social engagements, metrics that platforms build on to quantify individuals’ social and political behaviour or gauge their preferences for precision targeting. This dynamic has been exploited by individuals and organisations to benefit financially or politically from spreading false or misleading content.

Considering the influence of digital intermediaries from this perspective, the host–editor dichotomy does not adequately cover the responsibility or accountability of companies for contentious content spreading on their platforms. Over the years, there has been a growing realisation amongst legislators, policymakers and opinion leaders that existing intermediary laws and regulations might be inadequate to address the technological and social changes ushered in by digital platforms.

A slow reconsideration of ‘safe harbour’ or conditional immunity frameworks that shield digital intermediaries from liability for user-generated content is underway. European lawmakers have established the Code of Conduct3 as a way to pressure social networking platforms to crack down on hate speech. In 2017, the German Parliament passed a law requiring social networking platforms to take action against the spread of hate speech, criminal or false material directed at minorities being shared on their platforms.4 In India, MEITy has released the Intermediary Guidelines (Amendment) Rules 2018 (Draft Rules) for public comments.5 The draft amendments, while not yet finalised, have reignited the debate on the rights and responsibilities of digital intermediaries for content on their platforms.

THROUGH THE LOOKING GLASS

Under the existing liability regime in India, digital intermediaries either rely on external sources to report contentious content, or voluntarily regulate and manage content according to their policies and ‘community’ standards. The draft rules seek to dramatically expand the obligations of intermediaries with regard to user-generated content on their platforms. As policymakers assign responsibility to platforms, it is vital to think about how the changes being proposed to the liability framework will impact citizens’ rights and democracy.

Consider the use of the term ‘unlawful’ in the draft rules. This vague term does not provide sufficient clarity on the types of content that should be restricted. In the absence of standards and guidelines for unlawful content, the draft rules ignore the complexities of assessing the legality of content. For example, in content removal cases involving hate speech, satire, parody or defamation, the determination of legality or veracity is highly context-driven, and the nuances that are crucial for the review of such content are hard to code into technology designs. Facebook CEO Mark Zuckerberg, too, has acknowledged that the company’s AI struggles with language dialects and with the context that determines whether a statement qualifies as hate speech, and that while the company may be able to root out hate speech in five to ten years, ‘today we are not there yet’ (Alba, 2018).
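
A deliberately naive sketch illustrates the point. The blocklist filter below (all words and posts are invented for illustration) flags abusive speech, a news report about that speech, and a satirical retort identically, because the pattern it matches carries none of the context on which legality turns.

```python
import re

# Toy blocklist; real moderation systems are far more elaborate, but the
# underlying limitation is the same: a pattern match is context-blind.
BLOCKLIST = {"vermin", "traitors"}

def flagged(post: str) -> bool:
    """Flag a post if it contains any blocklisted word."""
    words = set(re.findall(r"[a-z]+", post.lower()))
    return bool(words & BLOCKLIST)

abuse     = "They are vermin and traitors."                          # abusive speech
reporting = "The minister called protesters 'vermin', critics say."  # news report
satire    = "Ah yes, we 'traitors' who dared to vote."               # irony/satire

for post in (abuse, reporting, satire):
    print(flagged(post), "->", post)   # all three are flagged identically
```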

Leaving the determination of unlawful content to intermediaries goes against the SC’s observations in the Shreya Singhal case: that an intermediary ought not to be placed in the position to decide the legitimacy or legality of information. Given the difficulties of identifying and interpreting information disorder, a more appropriate regulatory intervention would be for MEITy to enable key stakeholders to deliberate and negotiate shared understandings of what constitutes a genuine threat to public safety, and what will enable public access to trustworthy information through an open and transparent process (Helberger, et al., 2018).

Another concern with the draft rules is that the legislative approach to content regulation demonstrates an increasing reliance on technology to take decisions about the legality of online content. For instance, the draft rules direct companies to deploy ‘technology-based automated tools’ for proactively identifying, removing and disabling public access to unlawful content. Platforms have deployed automatic content detection tools to de-rank or remove a wide range of illegal content, including child pornography and copyright violations, or to take action against suspicious accounts posting such content (Gerken, 2019).

While automatic content detection systems have had some success in removing illegal content at scale, such tools and technologies are very expensive. For example, YouTube invested more than $60 million to develop Content ID, its proprietary system for copyright and content management. In India, few digital intermediaries can afford to evolve such technological responses for content moderation. By requiring all types of platforms to implement these filters in order to qualify for safe harbour, the draft rules put young start-ups, which cannot afford to invest in such technologies, at a tremendous competitive disadvantage. Over the long term, this strategy will result in diminished innovation and diversity in Indian media markets, and less choice for citizens.
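
At its core, automated detection of known content is a matching problem: uploads are compared against an index of previously identified material. The sketch below uses exact cryptographic hashes to show the mechanism in miniature; systems such as Content ID instead build robust perceptual fingerprints of audio and video, and that robustness is a large part of what makes them expensive, because, as the toy version shows, an exact match fails the moment content is altered even slightly.

```python
import hashlib

# Index of digests of known items (e.g. content already judged infringing).
reference_db = set()

def register(content: bytes) -> None:
    """Add a known item's digest to the reference index."""
    reference_db.add(hashlib.sha256(content).hexdigest())

def is_known(content: bytes) -> bool:
    """Exact-match lookup of an upload against the reference index."""
    return hashlib.sha256(content).hexdigest() in reference_db

register(b"original infringing clip")
print(is_known(b"original infringing clip"))    # True: exact re-upload is caught
print(is_known(b"original infringing clip!"))   # False: one changed byte evades it
```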

The call for expensive technology-based interventions stems from the belief that digital communications platforms are neutral mediums and will advance sophisticated technical tools in accordance with policymakers’ specifications, as long as they have an obligation under the law to do so. In reality, far from being neutral, predictive models and algorithms used to identify and filter content are vulnerable to the biases of their creators and users: even though social media companies rely on data-driven signals to determine importance or relevance, false content continues not only to exist on these platforms, but also to trend (Caplan, et al., 2018). Producers, consumers and amplifiers of harmful content with economic or political motivations can also easily shift their tactics to avoid detection by automated content moderation systems.

Automatic content moderation technologies also make it difficult for users to understand and challenge content removal decisions. The lack of transparency or accountability in automated systems is particularly relevant, as there is no way to measure how much content platforms remove or deprioritise using such tools, or the impact of their use on limiting contentious content. For instance, Facebook claims to be able to remove 99 per cent of ISIS- and al-Qaida-affiliated content using AI-powered algorithms and human content moderators (Matthews and Pogadl, 2019). The company’s claims have not been independently investigated, and with little to no public information about the workings of the moderation system, there is no way of knowing whether AI or humans are the key to the strategy’s success.

By giving too much weightage to the power of automatic content detection, the draft rules overlook the problems of over-broad censorship, the arbitrary removal of content, and the abuse of platforms’ detection tools for extortion (Geigner, 2019). The growing evidence of bias and discrimination in automated content moderation systems serves as an important reminder that proactive filtering technologies neither can, nor should, become an industry-wide standard.

In the absence of measures to strengthen transparency and accountability, the draft rules hand over the power to set and enforce the appropriate boundaries of public speech to profit-driven technology companies. Information disorder is a complex policy problem, and addressing it requires accounting for the role of both platforms and users in organising cross-cultural communication. The lack of accountability in centralised liability regimes and gaps in the self-regulatory frameworks suggest that new forms of governance, based on a more distributed approach to the allocation of responsibility, are required.

TOWARDS DISTRIBUTED RESPONSIBILITY

The rapid evolution and expansion of information disorder, and the associated risks to society, require comprehensive and durable solutions to meaningfully address the problem. The complexities of information disorder necessitate allocating responsibility to the human, technological and institutional actors that contribute to the creation, distribution and circulation of false and misleading information (Helberger, et al., 2018).

In our current media environment, the distribution of information has become concentrated on platforms such as Facebook, Google and Twitter. From deciding which information is most accessible to audiences, including the type or format of content, to controlling the financial incentives for influencers and opinion makers, the private ordering of online platforms exerts considerable influence in organising and shaping communications. Although digital intermediaries have evolved well beyond their original role as facilitators of digital communication, laws and regulations have not kept pace.

The conditional immunity regime under Section 79 has ensured that digital intermediaries can set standards and processes to govern the speech allowed on their platforms without being required to exercise editorial control or accept liability for that content. While technology companies have benefitted from an open, unconstrained regulatory environment, the safe harbour provision and the requirement of removing content only upon the receipt of a government or judicial order have also shielded technology companies from confronting targeted manipulation, such as information disorder enabled by the socio-technical design of their platforms and services. The lack of oversight of platforms’ information moderation systems, the high costs of approaching courts, and the glacial speed at which the judicial system operates render judicial and executive orders an insufficient recourse for tackling harmful and illegal information online.

The scale of information disorder, and the dynamic context of abuse and misuse of digital platforms, suggest that the current liability regime may be untenable. A re-examination of the different types of intermediaries and their concomitant duties and obligations will bring a measure of accountability to online platforms and service providers. Designing interventions that seek to expand the legal liability of intermediaries for contentious content gets complicated, because even though platforms shape user engagement, they do not determine it (ibid.).

Various types of individual and institutional actors create, distribute and amplify false and misleading information for financial, political and social reasons. Policy interventions to tackle information disorder must put in place systems to identify different state and non-state actors, and hold them accountable for targeting and manipulating public opinion.

Reining in information disorder will require going beyond deputising platforms to proactively remove information based on their own standards and interpretations of unlawful content. This approach not only entrenches the dominance of a few platforms; without oversight or transparency, technology companies will also continue to take opaque content decisions without consequence. The technological solutions outlined under the draft rules do not fix the broader problem: that the design of these socio-technical architectures is geared towards user engagement and rapid content consumption. As Zeynep Tufekci has emphasised, the ability to control the attention of the people is much more powerful than outright censorship (2017).

Instead of leaving private platforms to continue scaling content moderation practices and technologies, policymakers need to intervene to establish responsibility for the business practices of digital intermediaries. Policymakers and regulators should guide technology companies to provide more information about their content moderation practices, including details on the organisation and functioning of internal or external content review bodies. Similarly, platforms should also be required to report on content removal appeals, and establish a right to be heard for users whose information has been removed, including by automated systems. Structural interventions, aimed at introducing transparency and due process, would go a long way in clarifying the role of various types of digital intermediaries with regard to the management of the wide variety of contentious content on their platforms.

In addition to these measures, there are several possible avenues for legislation to improve accountability in the enforcement of content moderation policies, and to encourage platforms to take on responsibility for the protection of consumer rights, public safety and security of communications. Interventions to determine enforceable standards and to guide technology companies to better serve the laws of our country should be developed through an open, multi-stakeholder process.

In the absence of such collaborative and nuanced deliberation, there is a real danger that policymakers will wind up with over-broad or obsolete solutions. Regulators and policymakers must tread carefully in this complex policy area and pursue nuanced, incremental improvements over reactive, catch-all solutions. The release of draft rules provides an opportunity for a thorough discussion on the social responsibility of platforms. Whatever the future of the regulation of digital intermediaries, the consideration of responsibility of platforms will only be one component in a larger effort to combat information disorder.

NOTES

  1. Facebook. 2018. ‘Facing Facts’. https://newsroom.fb.com/news/2018/05/inside-feed-facing-facts/.
  2. Shreya Singhal vs. Union of India. 24 March 2015. Writ Petition (Criminal) No. 167 of 2012. https://indiankanoon.org/doc/110813550/.
  3. In 2016, the European Union launched the online ‘Code of Conduct’ to fight hate speech, racism and xenophobia across Europe. Facebook, Twitter, YouTube and Microsoft were involved in the creation of the Code and have signed up to it, although the terms are not legally binding. The Code establishes ‘public commitments’ for the companies, including the requirement to review the ‘majority of valid notifications for removal of illegal hate speech’ in less than 24 hours, and to make it easier for law enforcement to notify the firms directly. https://ec.europa.eu/info/sites/info/files/code_of_conduct_on_countering_illegal_hate_speech_online_en.pdf.
  4. In 2017, the Bundestag, Germany’s Parliament, passed the Network Enforcement Act (Netzwerkdurchsetzungsgesetz, NetzDG), which allows authorities to fine social media companies that fail to remove hate speech posts, fake news and terrorist content violating German law within 24 hours. In more ambiguous cases, Facebook and other sites have seven days to deal with the offending post. If they do not comply with the legislation, the companies could face a fine of up to 50 million euros ($57.1 million). https://www.dw.com/en/eu-hails-social-media-crackdown-on-hate-speech/a-47354465.
  5. Ministry of Electronics and Information Technology (MEITy). 2018. ‘The Information Technology [Intermediary Guidelines (Amendment) Rules] 2018.’ http://meity.gov.in/content/comments-suggestions-invited-draft-“-information-technology-intermediary-guidelines.

REFERENCES

Ahuja, R. 2013. ‘Muzaffarnagar Riots: Fake Video Spreads Hate on Social Media’, The Hindustan Times, 10 September. https://www.hindustantimes.com/india/muzaffarnagar-riots-fake-video-spreads-hate-on-social-media/story-WEOKBAcCOQcRb7X9Wb28qL.html.

Alba, D. 2018. ‘Why Facebook will Never Fully Solve its Problems with AI’, Buzzfeed News, 11 April. https://www.buzzfeednews.com/article/daveyalba/mark-zuckerberg-artificial-intelligence-facebook-content-pro.

Bansal, S. 2019. ‘Narendra Modi App has a Fake News Problem’, Medium, 27 January. https://blog.usejournal.com/narendra-modi-app-has-a-fake-news-problem-d60b514bb8f1.

Caplan, R., L. Hanson and J. Donovan. 2018. ‘Dead Reckoning: Navigating Content Moderation after Fake News’, Data and Society, 21 February. https://datasociety.net/output/dead-reckoning/.

Chakrabarti, S., L. Stengel and S. Solanki. 2018. ‘Duty, Identity, Credibility: Fake News and the Ordinary Citizen in India’, BBC. http://downloads.bbc.co.uk/mediacentre/duty-identity-credibility.pdf.

Dasgupta, P. and G. Sathe. 2018. ‘After Ruining Twitter, Indians are Turning Quora into a Troll-Fest’, Huffington Post, 28 December. https://www.huffingtonpost.in/entry/twitter-indians-quora-politics_in_5c24c958e4b08aaf7a8e0eb1.

Doval, Pankaj. 2019. ‘Remove “Fake” Content on Food Quality, Govt. tells Facebook’, The Times of India, 21 January. http://timesofindia.indiatimes.com/articleshow/67617097.cms.

Gerken, T. 2019. ‘YouTube’s Copyright Claim System Abused by Extorters’, BBC, 14 February. https://www.bbc.com/news/technology-47227937.

Geigner, T. 2019. ‘YouTube’s Content ID System being Repurposed by Blackmailers’, Celebrity Access, 14 February. https://celebrityaccess.com/2019/02/14/youtubes-contentid-system-being-repurposed-by-blackmailers/.

Ginzburg, Carlo. 2012. Threads and Traces: True False Fictive. US: University of California Press.

Helberger, N., J. Pierson and T. Poell. 2018. ‘Governing Online Platforms: From Contested to Cooperative Responsibility’, The Information Society, 34 (1): 1–14. DOI: 10.1080/01972243.2017.1391913. https://www.tandfonline.com/doi/pdf/10.1080/01972243.2017.1391913.

Matthews, K. and N. Pogadl. 2019. ‘Big Tech is Overselling AI as the Solution to Online Extremism’, The Conversation, 17 September. http://theconversation.com/big-tech-is-overselling-ai-as-the-solution-to-online-extremism-102077.

Mozur, P. 2018. ‘A Genocide Incited on Facebook, With Posts from Myanmar’s Military’, The New York Times, 15 October. https://www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html.

Pilkington, E. and J. Glenza. 2019. ‘Facebook Under Pressure to Halt Rise of Anti-vaccination Groups’, The Guardian, 12 February. https://www.theguardian.com/technology/2019/feb/12/facebook-anti-vaxxer-vaccination-groups-pressure-misinformation.

Singh, P. 2018. ‘Planet-scale Influence Operation Strikes at the Heart of Polarised Indian Polity’, 26 November. https://pukhraj.me/2018/11/26/planet-scale-influence-operation-strikes-at-the-heart-of-polarised-indian-polity/.

Tufekci, Z. 2017. Twitter and Tear Gas: The Power and Fragility of Networked Protest. US: Yale University Press.

Vosoughi, S., D. Roy and S. Aral. 2018. ‘The Spread of True and False News Online’, Science, 9 March. http://science.sciencemag.org/content/359/6380/1146.

Wardle, C. and H. Derakhshan. 2017. ‘Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making.’ https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c.

Wong, J. 2018. ‘Don’t Give Facebook and YouTube Credit for Shrinking Alex Jones’ Audience’, The Guardian, 5 September. https://www.theguardian.com/commentisfree/2018/sep/04/alex-jones-infowars-social-media-ban.

Wright, M., R. Mendick, C. Hope and G. Rayner. 2019. ‘Facebook Paid Hundreds of Thousands to Host Anti-Brexit “Fake News”’, The Telegraph, 18 January. https://www.telegraph.co.uk/news/2019/01/18/facebook-accused-pumping-fake-news-running-ads-claiming-endangered/.

Rising Demands for Data Localization a Response to Weak Data Protection Mechanisms

Don’t Trust Data Localization Exceptions in Trade Agreements to Guarantee Protection of Personal Data

The digital economy relies on the cross-border provision of services and goods, and in the past government trade regulators have embraced the borderless nature of the Internet and adopted light-touch regulation. But with the growing perception of data as the new oil, governments around the world are now flexing their muscles and stepping up efforts to limit or tax cross-border data flows. Multiple countries have enacted laws localizing storage and processing of data within their territory, or subjecting cross-border transfers to strict conditions.

The wave of data localization policies suggests that a marked regulatory shift is underway. Data localization is creating tension within trade negotiations such as RCEP, NAFTA, and TiSA, in which countries like the United States, Singapore, Thailand and Japan, along with tech companies, are seeking to prohibit data localization practices.

Although governments push for data localization to achieve diverse policy goals, there is an inherent conflict between the logic of most data localization efforts and the policy objectives that countries pursue by participating in free trade agreements. Resolving localization demands and reconciling conflicting ideologies and interests may be difficult to achieve through trade agreements.

As in the case of copyright rules in trade agreements, developing trade solutions to data localization is sure to get caught up in the wider socio-politics of trade and Internet governance. Negotiating on data localization for the protection of personal information creates the risk of compromise on protections that should be a minimum guarantee, as countries could lay down localization conditions as a trade-off for respecting privacy rights.

Policy Objectives for Pursuing Data Localization

Government demands for localization are driven by diverse rationales, one of which is security or surveillance concerns. Consider China’s National Security Law, which limits the operation and maintenance of “critical Internet infrastructure” to mainland China as a matter of national and cyber security. Similarly, Vietnam and Indonesia mandate maintaining in-country servers for access by law enforcement agencies.

The desire to attract investment, fuel innovation and create competitive advantage for local companies is another important logic driving localization efforts. When framed in the narrative of economic and employment gains, localization is politically appealing and enjoys the support of local business constituencies. This approach seems to be working for some countries. Google and Amazon Web Services (AWS) have announced data centers in Singapore, Taiwan (province of China)* and Japan. Alibaba Cloud, the cloud computing arm of the Chinese company, announced that it would be setting up data centers in India and Indonesia.

Protection of national autonomy, or efforts to rein in the hegemony of US firms, is also used to drum up support for introducing rules for transfers of data. Last week, India’s telecom regulator issued a consultation paper exploring measures to address cross-border flow of information and jurisdictional challenges in the digital ecosystem. The regulator’s move appears to be triggered by its displeasure with Apple’s refusal to list an app developed by the regulator that tracks users’ messages and call logs to identify spam.

Beyond the economic rationale, there is a growing perception that nations able to control data flows will fare better in the Internet governance order. For developing and developed countries alike, leadership in the digital economy is linked to establishing claims of sovereignty in cyberspace. Nations therefore mandate storage and processing of data within their jurisdiction. In a similar vein, governments may also lay down conditions for allowing transfers of data, such as the company’s nation of incorporation or its principal sites of operations and management. The new Chinese cybersecurity regulation defines the notion of territory based not only on the location of operations, but also on ownership.

Not all localization demands are blanket bans on data transfers or on the use of foreign servers. Establishing local facilities can also be incentivized by raising the costs of data transfer to other jurisdictions, either through tedious procedures or through strict compliance obligations. A recent example is the security review procedure for transfers of personal information laid down under the Chinese cybersecurity law. Other localization laws are narrow in scope. Think of South Korea’s Land Survey Act, which bans exporting local mapping data to foreign companies that do not operate domestic data servers. India’s National Data Sharing and Accessibility Policy requires all data collected using public funds to be stored within the borders of India.

Balancing Data Protection and Data Localization in Trade

Another important issue driving localization demands is privacy and the protection of personal information. The inclusion of commitments prohibiting localization mandates in treaties is promoted by industry groups [PDF] as a victory for user rights, security and the openness of the Internet… but it’s not quite as simple as that. Some countries argue that limiting how personal data can be transferred across borders is one of the only practical ways they have to protect the privacy of their citizens, in the absence of a more comprehensive shared data protection regime between the countries concerned.

Thus concerns about the lack of control over user data and its transfer, processing and storage in jurisdictions with autocratic governments, a weak rule of law, or surveillance programs have led governments to recognize data protection as a legitimate reason to limit the transfer of data. For example, without such exceptions, sensitive health information from Canada and Australia could be processed in jurisdictions with weaker privacy protections. The European Union also maintains that data protection and privacy are legitimate reasons to place limits on cross-border transfer of data, and its Privacy Shield agreement with the United States is its attempt at doing exactly this.

Not surprisingly, there has been strong pushback from the US and large tech firms on this stance. Last week, the Information Technology Industry Council (ITIC), a US-based technology group, alleged that several countries, including India, China, South Korea, Russia, Vietnam, Canada, Mexico and Indonesia, have turned to discriminatory policies and forced localization that unfairly disadvantage American companies. The group has submitted a report to the Trump administration urging it to intervene to remove these barriers to trade.

There is no agreement on where to draw the line between data-protection-based restrictions on data flows that are protectionist and against trade liberalization, and those that are necessary to guarantee the rights of citizens. Privacy experts have argued that data protection is qualitatively different from forced localization, and that the issue of data localization for data protection would disappear if nations implemented stronger privacy laws or adopted baseline best practices. Nevertheless, countries continue to pursue carve-outs for data protection in trade agreements.

Several regional trade agreements under discussion include provisions addressing the cross-border transfer of personal information. Texts and analyses of TTIP, TPP, TISA and NAFTA suggest an emerging strategy on data localization linked to the transfer of personal information: participating nations commit to general obligations not to restrict data flows, not to require localization of infrastructure or facilities, and not to restrict the transfer of ICT goods and services. For the RCEP, which includes countries with strong national localization strategies or ambitions, such as China and India, alongside countries like Australia and Japan that oppose localization, it is as yet unclear how data localization will be treated.

A strategy to harmonize national approaches, followed in the TPP and likely to see adoption in other trade agreements such as NAFTA and RCEP, is to create exceptions for countries to the general obligations against data localization. Exceptions allowing restrictions have to be based on “legitimate public policy concerns” and are expected to provide the flexibility to accommodate national approaches in regional agreements. Not including such exceptions could require certain countries to roll back data protections guaranteed to citizens in order to allow cross-border transfer. Global trade bodies recognize the need for flexibility, and the World Trade Organization provides such exceptions under Article XIV of its General Agreement on Trade in Services (GATS).

Yet the problem with this approach is that it exposes data protection rules to trade complaints about whether those rules are legitimate and proportionate, and these complaints would be heard by a panel of trade lawyers with no particular expertise in privacy law or human rights. A lot depends on how restrictions crafted under these exceptions are implemented. When specifying exceptions, it is important that governments lay down conditions to facilitate the transfer of data where privacy concerns have been adequately addressed. Thinking through, and being critical of, the effectiveness of de-identification measures or thresholds for meaningful informed consent will go a long way in understanding whether restricting data to a jurisdiction is a long-term solution for protecting personal data.

EFF’s Recommendation

We believe that countries should consider measures other than data localization for strengthening data protection in trade agreements. While there is no global framework for data protection, there are regional initiatives such as the Asia-Pacific Economic Cooperation (APEC) Privacy Principles and APEC’s Cross Border Privacy Rules (CBPR) system. Such mechanisms could be a starting point for harmonizing national approaches and building consensus on data protection.

The CBPR features principles and guidelines for the development of a system of voluntary cross-border transfer of personal information in the region. In addition to Canada, Japan, Mexico, and the US, nearly two dozen private companies are also participatory members in the CBPR framework. Earlier this year, South Korea became the fifth member, and Singapore and the Philippines are expected to join in the near future. The incentives for adopting such a template will depend on how far countries can bring their domestic strategies into harmony with global rules. Australia, India, China, Japan and South Korea are large economies in their own right, and their regional ambitions will influence the positions they take in trade negotiations.

Since the APEC privacy principles do not impose obligations on member organizations with respect to privacy, but merely confirm a baseline level of protection, Mexico has asked for more in the NAFTA negotiations, which begin this week. It is pushing for a Privacy Shield-style agreement that would require U.S. companies to abide by Mexico’s stronger data protection rules if they wish to gain access to the benefits of liberalized trade within the NAFTA region. The response from the United States remains to be seen, but we can expect some pushback against this suggestion.

Calls to regulate data localization in trade agreements aren’t going to go away while the factors driving these laws remain, and weak cross-border data protection is one such factor. But data localization isn’t a comprehensive solution to this problem, as it doesn’t guarantee that data will be secure or adequately protected against misuse. Pushing localization for short-term social, political and economic gains could ultimately harm users and innovators.

Given the complex political and cultural contexts driving data localization, reconciliation of the multitude of interests and ideologies will not be easy. Ideally, the privacy and personal data of users would be protected through measures that support a free and open Internet, and that would not be vulnerable to being overturned by trade tribunals that place the free flow of data above the human rights of users. Threading this needle is a challenge in the best of conditions, but doing so under the closed, opaque, and lobbyist-dominated conditions of trade negotiations makes it even harder.

(*) As part of EFF’s application to become an accredited NGO at the United Nations, we have been requested to use that organization’s official terminology: Taiwan, province of China.

Aadhaar: Ushering in a Commercialized Era of Surveillance in India

Since last year, Indian citizens have been required to submit their photograph, iris and fingerprint scans in order to access legal entitlements, benefits, compensation, scholarships, and even nutrition programs. Submitting biometric information is required for the rehabilitation of manual scavengers, the training and aid of disabled people, and anti-retroviral therapy for HIV/AIDS patients. Soon, police in the Alwar district of Rajasthan will be able to register criminals and track missing persons through an app that integrates biometric information with the Crime and Criminal Tracking Network Systems (CCTNS).

These instances demonstrate how intrusive India’s controversial national biometric identity scheme, better known as Aadhaar, has grown. Aadhaar is a 12-digit unique identity number (UID) issued by the government after verifying a person’s biometric and demographic information. As of April 2017, the Unique Identification Authority of India (UIDAI) has issued 1.14 billion UIDs covering nearly 87% of the population, making Aadhaar the largest biometric database in the world. The government asserts that enrollment reduces fraud in welfare schemes and brings greater social inclusion. Welfare schemes that provide access to basic services for marginalized and vulnerable groups are essential. However, unlike countries where similar schemes have been implemented, invasive biometric collection is being imposed as a condition for basic entitlements in India. The privacy and surveillance risks associated with the scheme have caused much dissension in India.

Identity and Privacy in India

Initiated as an identity authentication tool, Aadhaar is now being pushed as a unique identifier to access a range of services, and therein lies the critical problem. The government continues to maintain that the scheme is voluntary, and yet it has galvanized enrollment by linking Aadhaar to over 50 schemes. Aadhaar has become the de facto identity document accepted at private banks, schools, and hospitals. Since Aadhaar is linked to the delivery of essential services, authentication errors or deactivation have serious consequences, including exclusion and denial of statutory rights. But more importantly, using a unique identifier across a range of schemes and services enables the seamless combination and comparison of databases. By using Aadhaar, the government can match existing records such as driving licenses, ration cards, and financial histories to the primary identifier to create detailed profiles. Aadhaar may not be the only mechanism, but essentially it is a surveillance tool that the Indian government can use to surreptitiously identify and track citizens.
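
A minimal sketch makes the mechanics concrete: once several databases are keyed on the same identifier, assembling a dossier is a trivial join. All of the records, field names and values below are invented for illustration.

```python
# Hypothetical databases, each keyed on the same unique ID.
ration_db  = {"1234-5678-9012": {"ration_card": "RC-88", "district": "Alwar"}}
bank_db    = {"1234-5678-9012": {"account": "XX4321", "balance_band": "low"}}
telecom_db = {"1234-5678-9012": {"sim": "98xxxxxx01", "circle": "Rajasthan"}}

def profile(uid: str) -> dict:
    """Merge every record that shares the identifier into a single dossier."""
    merged = {"uid": uid}
    for db in (ration_db, bank_db, telecom_db):
        merged.update(db.get(uid, {}))
    return merged

print(profile("1234-5678-9012"))
# One key unlocks ration, banking and telecom records in a single profile.
```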

This is worrying, particularly in the context of the ambiguity regarding privacy in India. The right to privacy for Indian citizens is not enshrined in the Constitution, although the Supreme Court has located the right to privacy as implicit in the concept of “ordered liberty” and held that it is necessary for citizens to effectively enjoy all other fundamental rights. There is also no comprehensive national framework regulating the collection and use of personal information. In 2012, Justice K.S. Puttaswamy challenged Aadhaar in the Supreme Court of India on the grounds that it violates the right to privacy. The Court passed an interim order restricting compulsory linking of Aadhaar for benefits delivery, and referred the clarification of privacy as a right to a larger bench. More than a year later, the constitutional bench is yet to be constituted.

The delay in sorting out the nature and scope of privacy as a right in India has allowed the government to continue linking Aadhaar to as many schemes as possible, perhaps with the intention of ensuring the scheme becomes too big to be rolled back. In 2016, the government enacted the ‘Aadhaar Act’, passing the legislation without debate, discussion, or even the approval of both houses of Parliament. In April this year, Aadhaar was made compulsory for filing income tax returns and applying for a PAN number, and the decision is being challenged in the Supreme Court. Defending the State, the Attorney-General of India claimed that the arguments on so-called privacy and bodily intrusion are bogus, and that citizens cannot have an absolute right over their bodies. The State’s articulation is chilling, especially in light of the Human DNA Profiling Bill, which seeks the right to collect biological samples and DNA indices of citizens. Such anti-rights arguments are worth noting because biometric tracking of citizens isn’t just government policy – it is also becoming big business.

Role of Private Companies

Private companies supply the hardware, software, programs, and biometric registration services for rolling out Aadhaar to India’s large population. UIDAI’s Committee on Biometrics acknowledges that biometric data are national assets, even though American biometric technology provider L-1 Identity Solutions and the consulting firms Accenture and Ernst & Young can access and retain citizens’ data. The Aadhaar Act introduces electronic Know-Your-Customer (eKYC), which allows government agencies and private companies to download data such as name, gender and date of birth from the Aadhaar database at the time of authentication. Banks and telecom companies use the authentication process to download data, auto-fill KYC forms, and profile users. Over the last few years, the number of companies and applications built around profiling citizens’ personally sensitive data has grown exponentially.

A number of people linked with creating the UIDAI infrastructure have founded iSPIRT, an organisation that is pushing for commercial uses of Aadhaar. Private companies are using Aadhaar for authentication purposes and background checks. Microsoft has announced Skype Lite integration with Aadhaar to verify users. Others, such as TrustID and Eko, are integrating rating systems into their authentication services and tracking users through the platforms they create. In essence, such companies are creating their own private databases to track authenticated Aadhaar users, and they may sell this data to other companies. The growth of companies that share and combine databases to profile users is an indication of the value of personal data and its centrality for both large and small companies in India.

Integrating and linking large biometric collections to each other, and then linking them with traditional data points that private companies hold, such as geolocation or phone numbers, enables constant surveillance. So far, there has been no parliamentary discussion on the role of private companies. UIDAI remains the ultimate authority in deciding the nature, level and cost of access granted to private companies. For example, there is nothing in the Aadhaar Act that prevents Facebook from entering into an agreement with the Indian government to make Aadhaar mandatory for accessing WhatsApp or any of its other services. Facebook could also pay data brokers and aggregators to create customer profiles to add to its ever-growing data points for tracking and profiling its users.

Security Risks and Liability

A series of data leaks have raised concerns about which private entities are involved, and how they handle personal and sensitive data. In February, UIDAI registered a complaint against three companies for storing and using biometric data for multiple transactions. The Aadhaar numbers of over 130 million people, and the bank account details of about 100 million, have been publicly displayed through government portals owing to poor security practices. A recent report from the Centre for Internet and Society (CIS) showed that simply tweaking the URL query parameters of the National Social Assistance Programme (NSAP) website could unmask and display the private information of a fifth of India’s population.
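
The class of flaw the CIS report describes is commonly known as an insecure direct object reference. The sketch below is entirely hypothetical (the NSAP portal’s actual code is not public, and the framework, route and data here are invented), but it shows the pattern: a record fetched directly from a URL query parameter with no authorisation check, so sequential IDs can simply be enumerated.

```python
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Invented stand-in for a beneficiary database with sequential record IDs.
BENEFICIARIES = {1: {"name": "A. Kumar", "aadhaar": "xxxx-xxxx-9012"}}

@app.route("/beneficiary")
def beneficiary():
    record_id = int(request.args.get("id", "0"))
    # Vulnerable pattern: the record is looked up straight from the query
    # parameter, so ?id=1, ?id=2, ... each return someone's data. The fix
    # is an authorisation check before returning the record, e.g. verifying
    # that the logged-in user is permitted to view this specific ID.
    record = BENEFICIARIES.get(record_id)
    if record is None:
        abort(404)
    return jsonify(record)
```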

Such data leaks pose a huge risk because compromised biometrics can never be recovered. The Aadhaar Act establishes UIDAI as the primary custodian of identity information, but is silent on liability in the case of data breaches. The Act is also unclear about notice and remedies for victims of identity theft and financial fraud, and for citizens whose data has been compromised. UIDAI has continued to fix breaches upon being notified, but maintains that storage in federated databases ensures that no agency can track or profile individuals.

After almost a decade of pushing a framework for mass collection of data, the Indian government has issued guidelines to secure identity and sensitive personal data in India. The guidelines could have come earlier and, given the large data leaks of the past, may also be redundant. Nevertheless, it is reassuring to see practices for keeping information safe and the idea of positive informed consent being reinforced for government departments. To be clear, the guidelines are meant for government departments; private companies using Aadhaar for authentication, profiling and building databases fall outside their scope. With political attitudes to corporations exploiting personal information changing the world over, the stakes for establishing a framework that limits private companies from commercialising personal data and tracking Indian citizens are as high as they have ever been.

Mapping MAG: A Study in Institutional Isomorphism

This paper is an update to a shorter piece of MAG analysis conducted in July 2015. At that time our analysis was limited by the MAG membership data made available by the Secretariat. We subsequently wrote to the Secretariat, and this paper is based on the data it shared, including for the years for which membership details were previously unavailable. I delve into the history of the formation of the Multi-Stakeholder Advisory Group (MAG) and the Internet Governance Forum (IGF) to highlight lessons from the past that should be applied in strengthening its present structure. The paper covers three broad areas:

  • History of the formation of the MAG, its role within the IGF structure, the influences that have impinged on its scope of work, and the manner in which its evolution has deviated from its original conceptualisation
  • Analysis of MAG membership (2006-2015): Trends in the selection and rotation of the MAG membership 
  • Recommendations to reform MAG/IGF

The recent renewal of the Internet Governance Forum[2] (IGF) mandate at the World Summit on the Information Society (WSIS)+10 High-Level Meeting[3] was something of a missed opportunity. The discussions focused narrowly on the periphery of the problem – the renewal of the mandate – leaving aside questions of vital importance such as strengthening and improving the structures and processes associated with the IGF. The creation of the IGF as a forum for governments and other stakeholders to discuss policy and governance issues related to the Internet was a watershed moment in the history of the Internet.

In the first decade of its existence the IGF has proven to be a valuable platform for policy debates, a space that fosters cooperation by allowing stakeholders to self-organise to address common areas of concern. But the IGF remains merely a platform for multistakeholder dialogue and is yet to realise its potential under its mandate to “find solutions to the issues arising from the use and misuse of the Internet” as well as to “identify emerging issues […] and, where appropriate, make recommendations”.[4]

From the information available in the public domain, it is evident that the IGF is not crafting solutions and recommendations or setting the agenda on emerging issues. Even if unintended, this raises the disturbing possibility that alternative processes and forums are filling the vacuum created by the unrealised IGF mandate and are helming policy development and agenda-setting on Internet use and access worldwide. This sits uneasily with the fact that there is currently no global arrangement that serves, or could be developed, as an institutional home for global internet governance issues.

Moreover, the economic importance of the internet, as well as its impact on national security, human rights and global politics, has created a wide range of actors who seek to exert their influence over its governance. Given the lack of a centralised global body with the authority to enforce norms and standards across political and functional boundaries, control of the internet is an important challenge for both developed and emerging economies. As the infrastructure over which the internet runs is governed by nation states and their laws, national governments continue to seek to exert their influence on global issues.

Divergence in approaches to regulation and differences in the capacity to engage in processes have led to fragmented approaches to common challenges.[5] Importantly, not all governments are democratic, and some may impose restrictions on content and access that conflict with the open and global nature of the internet. Alongside national governments, transnational private corporations play a critical role in the security and stability of the internet. Much like the state, they too raise the niggling question of how to guard against the guardians.

Corporations’ control of sensitive information, their institutional identity and the secrecy of their operations are all essential to their functioning, but could also erode the practice of democratic governance and the rights and liberties of users online. Additionally, as issues of human rights, access and local content have become interlinked with public policy issues, civil society and academia have become relevant to traditionally closed policy spaces. Considering the variety of stakeholders and their competing interests, concerns about ensuring the stability and security of the Internet have led the international community to pursue a range of governance initiatives.

Implementing a Multistakeholder Approach

At the broadest level, the debate about the appropriate way forward has evolved as a contestation between two models. On the one hand is the state-centric ‘multilateral’ model of participation, and on the other a ‘multistakeholder’ approach that aims for bottom-up participation by all affected stakeholders. The multistakeholder approach sees resonance across several quarters,[6] including a high-level endorsement from the Indian government last year.[7] An innovative concept, the multistakeholder approach fits well within the wider debate about rethinking governance in a globalised world.

Proponents of the multistakeholder approach see it as a democratic process that allows for a variety of views to be included in decision making.[8] Nevertheless, the intertwining of the Internet and society pitches actors and interests at opposing ends. While a multistakeholder approach broadens the scope for participation, it also raises serious issues of representation and accountability. Since multistakeholder processes fall outside the traditional paradigm of governance, establishing legitimacy of processes and structures becomes all the more important.

The multistakeholder concept is only beginning to be critically studied and evaluated. There have been growing concerns, particularly from emerging economies,[9] about a lack of representation in policy development bodies and about issues affecting marginalised communities being overlooked in the policy development process. From this view, the multistakeholder model has created ‘transnational and semi-privatised’ structures and ‘transnational elites’.[10] Such critics describe emerging and existing platforms derived from the multistakeholder concept as ‘an embryonic form of transnational democracy’ occupied by elite actors.[11]

Elite actors may include the state, private and civil society organisations, technical and academic communities and intergovernmental institutions. In the context thus sketched out, the key question that the WSIS+10 Review should have addressed is whether the IGF provides the space for the development of institutions and solutions that are capable of responding to the challenges of applying the multistakeholder concept to internet governance.  The existing body of work on the role of the IGF has yet to identify, let alone come to terms with, this problem.

Applying critical perspectives to the essential structures and processes associated with the IGF becomes even more relevant given its recently renewed mandate. However, the forum’s first planning meeting, scheduled to take place in Geneva this week, is already mired in controversy[12] after a new Chair was named by the UN Secretary General.

The decision to appoint a new Chair was made without any form of public process or any indication of the selection criteria. Moreover, the membership of the “multistakeholder advisory group” (MAG), which decides the content and substance of the forum, was also renewed recently. Problematically, most of the nominations put forth by different constituent groups to represent them were rejected, and individuals were appointed through a parallel top-down and secretive UN process. Of the 55 MAG members, 21 are new but only eight were officially selected by their respective groups.[13]

This paper focuses on the role, structure and functioning of the MAG and highlights issues and challenges in its working so as to pave the way for strategic thinking on its improvement. A tentative beginning towards identifying the levers for change can be made by sifting through the eddies of history to uncover how the MAG has evolved and become politicised.

The paper makes two separate but interrelated claims. First, it argues that, as the de facto bureau essential to the functioning of the IGF, the MAG urgently needs transparency and accountability in its member selection procedure. Striking an optimum balance between expertise and legitimacy in the MAG composition is essential to ensure that workshops and sessions are not dominated by certain groups or interests and that the IGF remains an open, well-functioning circuit of information and robust debate.

Second, it argues for an immediate evaluation of the MAG’s operations given the calls for the production of tangible outcomes. There has been ongoing discussion within the broader community about the role of the IGF, with divisions between those who prefer a narrow interpretation of its mandate and those who want to broaden its scope to provide policy recommendations and solutions.[14]

The interpretation of the IGF mandate, and whether the IGF should make recommendations, has been a sticking point and is closely linked to the question of the IGF’s legitimacy and relevance. Be that as it may, the intersessional work, best practice forums and dynamic coalitions of the last ten years have created a vast repository of information that should feed into the pursuit of policy options and the identification of best practices.

The true test of the multistakeholder model is not only to bring together a wide range of views but also to ensure that accumulated knowledge is applied to address common problems. Implementing a multistakeholder approach and developing solutions necessitates enhanced coordination amongst stakeholder groups and, in the context of the IGF, is contingent on the strength and stability of the MAG to facilitate such cooperation.

The paper is organised in three parts. In the first section I delve into the history of the formation of the MAG. To understand the MAG’s role within the IGF structure it is essential to revisit the influences that shaped its conceptualisation and subsequent evolution over the decade. A critical historical perspective provides the context of the multiple considerations that have impinged on the MAG’s scope of work, of the manner in which its evolution has deviated from original intentions, and of the lessons from the past that should be applied in strengthening its present structure.

The second section analyses trends in the selection and rotation of the MAG membership and traces the elite elements in its composition. The analysis reveals two distinct stages in the evolution of the MAG membership, which has remained significantly homogeneous across stakeholder representation. The final section of the paper focuses on a set of recommendations to ensure that the MAG is strengthened, becomes sustainable and provides the impetus for IGF reform in the future.

Origins of the IGF

The WSIS process was divided into two phases, with the Geneva phase focused on principles of internet governance. The outcome documents of the first phase included a Declaration of Principles and a Plan of Action, adopted by 175 countries. Throughout the process, developing countries such as China, Brazil and Pakistan opposed the prevailing regime that allowed US dominance and control of ‘critical infrastructure’. As the first phase of the WSIS could not resolve these differences, the Working Group on Internet Governance (WGIG) was set up by the UN Secretary General to deliberate and report on the issues.

The establishment of the WGIG was an important development in the WSIS process, not only because of the recommendations it developed to feed into the second phase of the negotiations, but also because of the procedural legitimacy it established through its working. The WGIG embodied the multistakeholder principle in its membership and open consultation processes. WGIG members were selected and appointed in their personal capacity through an open and consultative process. As a result, the membership demonstrated diversity in geography, stakeholder groups represented and gender.

The consultations were open, transparent and allowed a diverse range of views, in the form of oral and written submissions from the public, to feed into the policy process. At its final meeting the WGIG membership divided into smaller working groups to focus on specific issues, and reassembled at the plenary to review, discuss and consolidate sections, which were then approved in a public forum. As the WGIG background paper notes, “The WGIG agreed that transparency was another key ingredient to ensure ownership of the process among all stakeholders.”[15]

The WGIG final report[16] identified a vacuum within the context of existing structures and called for the establishment of a forum linked to the UN. The forum was to be modelled on the best practices and open format of the WGIG consultative processes allowing for the participation of diverse stakeholders to engage on an equal footing. It was in this context that the IGF was first conceptualised as a space for global multistakeholder ‘dialogue’ which would interface with intergovernmental bodies and other institutions on matters relevant to Internet governance.

The forum was conceived as a body that would connect the different stakeholders involved in the management of the internet, as well as contribute to capacity-building for governance in developing countries, drawing on local sources of knowledge and expertise. Importantly, the forum was to promote and assess, on an ongoing basis, the embodiment of WSIS principles in Internet governance processes, and to make ‘recommendations’ and ‘proposals for action’ addressing emerging and existing issues not being dealt with elsewhere. However, as things turned out, the exercise of power between states and institutional arrangements ultimately led to the development of a subtly altered version of the original IGF mandate.

Aftermath of the WGIG Report

The WGIG report garnered much attention and was welcomed by most stakeholders, with the exception of the US government, which, along with private sector representatives such as the Coordinating Committee of Business Interlocutors (CCBI), disagreed with the recommendations.[17] Pre-empting the publication of the report, the National Telecommunications and Information Administration (NTIA) issued a statement in June 2005 affirming its resolve to “maintain its historic role in authorizing changes or modifications to the authoritative root zone file.”[18]

The statement reiterated the US government’s intention to fight for the preservation of the status quo, effectively ruling out the four alternative models for internet governance put forward in the WGIG report. The statement even referenced the WGIG report, stating: “Dialogue related to Internet governance should continue in relevant multiple fora. Given the breadth of topics potentially encompassed under the rubric of Internet governance there is no one venue to appropriately address the subject in its entirety.”[19]

The final report was presented to PrepCom 3 of the second phase in July 2005, and the subsequent negotiations were by far the most significant for the role and structure that the IGF would eventually take. The US stance on its role with regard to the root zone garnered pushback from civil society and from other governments, including Russia, Brazil, Iran and China. However, the most significant reaction came from the European Union, which issued a statement after the commencement of PrepCom 3 in September.

The EU’s position recognised that adjustments were needed in the institutional arrangements for internet governance and called for a new model of international cooperation which would include “the development and application of globally applicable public policy principles.”[20] The US had not anticipated this “shocking and profound change” and, now isolated in its position on international governance of the internet, sent forth a strongly worded letter[21] invoking its long-standing relationship with the EU and urging it to reconsider its stance.

The pressure worked, since the US was in a strong position to stymie the achievement of a resolution from the WSIS process. Moreover, introducing reforms to the internet naming and numbering arrangements was not possible without US cooperation. The letter resulted in the EU backing away from its aggressive stance, and with it the push for the establishment of global policy oversight over domain names and numbers lost its momentum.

The letter significantly impacted the WSIS negotiations and shaped the role of the IGF. By creating a deadlock and applying pressure, the US was able to negotiate a favourable outcome for itself. The last-minute negotiations led to the continuation of the status quo; in exchange, the US provided an undertaking that it would not interfere with other countries’ ccTLDs. The weakened mandate meant that even though the creation of the IGF moved forward under the WSIS process, its direction shifted away from its conceptualisation and origins in the WGIG report.

Institutionalizing the IGF

In 2006, the UN Secretary General appointed Markus Kummer to assist with the establishment of the IGF. The newly formed IGF Secretariat initiated an open consultation to be held in Geneva in February 2006 and issued an open call to stakeholders seeking written submissions as inputs into the consultation.[22] Notably, neither the US government nor the EU sent in a response, and the submissions made by other stakeholders were largely a repetition of the views expressed at WSIS.

The division on the mandate of the IGF was evident in this very first consultation. Private sector representatives such as the CCBI and ICC-Basis, government representatives from OECD countries like Canada, and the technical community, represented by the likes of Nominet and ISOC,[23] opposed the development of the IGF as a platform for policy development. On the other hand, civil society representatives such as APC called for the IGF to produce specific recommendations on issues where there was sufficient consensus.[24]

With reference to the MAG structure, there was again division on whether the “effective and cost-efficient bureau” referred to in the Tunis Agenda should have a narrow mandate limited to setting the agenda for plenary meetings or whether it should have a more substantial role. Civil society stakeholders envisioned assigning the bureau a more substantial role, notably in the Internet Governance Project (IGP) discussion paper released in advance of the February 2006 Geneva consultations.[25]

The paper offered design criteria for the Forum, including specific organisational structures and processes, proposing “a small, quasi-representational decision making structure” for the IGF Bureau.[26] The paper recommended the formation of a twelve-member bureau with five representatives from governments (one from each UN geographic region) and two each from the private sector, civil society, and the academic and technical communities. The bureau would set the agenda for the plenary meeting not arbitrarily through private discussions but driven by working group proposals, and it would also have the power to approve or reject applications for forming working groups.

The proposed structure in the IGP paper, had it been implemented, would have developed the bureau along the lines of the IETF, where working groups develop recommendations that feed into the deliberation process. However, there was a clear divide on the proposed structure, with many stakeholders opposing the establishment of sub-groups or committees under the IGF.[27]

Following the written submissions, the first open consultations on the establishment of the IGF were held in Geneva on 16 and 17 February 2006, chaired by Nitin Desai.[28] The consultation was well attended, with more than 300 participants including 40 representatives from governments, and the proceedings were webcast. The two-day consultation was structured as a moderated roundtable at which most interventions were read from prepared statements, many of which were also tabled as documents and later made available on the IGF website. This, of course, meant a repetition of the views expressed in response to the questionnaire or the WGIG report, and as a consequence there was little opportunity for consensus-building.

Once again there was conflict over whether the IGF should be conceptualised as an annual ‘event’ that would provide space for policy dialogue or as a ‘process’ of engaging with policy issues that would culminate in an annual event. The CCBI reiterated that “[t]he Tunis Agenda is clear that the IGF does not have decision-making or policy-making authority,” and the NRO emphasised that the “IGF must be a multi-stakeholder forum without decision-making attributions.”[29]

William Drake argued for the IGF “as a process, not as a series of one-off meetings, but as a process that would promote collective dialogue, learning, and mutual understanding on an ongoing basis.”[30] Government representatives were split: El Salvador, for example, stated “that the Internet Governance Forum will come up with recommendations built on consensus on specific issues,” and Brazil even characterised the first meeting as “an excellent opportunity to initiate negotiations on a framework treaty to deal with international Internet public policy issues.”[31]

Although a broad consensus was declared on the need for a lightweight multi-stakeholder bureau, there was no consensus on its size, composition and mandate. Nitin Desai held the issue over for further written input, and the subsequent consultation received twelve submissions, with most respondents recommending a body of between ten and twenty-five members. The notable exceptions were the submissions from the Group of 77 and China, which sought a combined total of forty members, half of whom would be governmental representatives.

The discussions during the February consultations and the input received from the written submissions paved the way for what eventually became the MAG. The IGF Secretariat announced the formation of a bureau with forty members and, while not expressly stated, half of these would be governmental representatives. It has been speculated that the decision on the large membership was a result of political wrangling among governments, especially the G77 governments insisting on a large group that would accommodate all the political and regional differences among its members.[32]

IGF Secretariat – Set to Fail?

The unwieldy size of the MAG meant that it would have to rely on the newly constituted Secretariat for organisation, agenda-setting and results. This structure empowered the Secretariat while limiting the scope of the MAG, a group already divided in its interests and agenda. However, the Secretariat was constrained in its services to stakeholders: it had limited resources, as it was not funded by the United Nations and relied upon voluntary donations to a trust fund.[33]

Early donors included the Swiss Agency for Development and Cooperation (SWADC), ICANN and Nominet.[34] Owing to its disjointed sources of funding, the Secretariat was vulnerable to the influence of its donors. For example, the decision to base the Secretariat in Geneva was made to meet a condition attached to the SWADC contribution. Distressingly, of the 20 non-governmental positions in the MAG, most were directly associated with the ICANN regime.

The over-representation of ICANN in the MAG selection was problematic, since the IGF had been conceptualised to address the lack of acceptance of ICANN’s legitimacy in the WSIS process. The lack of independent funding led to a deficit of accountability, demonstrated in instances where it was possible for one of the MAG members to quietly insinuate that private sector support for the IGF and its Secretariat would be withdrawn if reforms unacceptable to that stakeholder group went ahead.[35]

As might perhaps be expected from a Secretariat with such limited resources, its services to stakeholders were confined to maintaining a rudimentary website and responding to queries and requests. The transparency of the Secretariat’s activities was also very limited, most clearly exemplified by the process by which the Advisory Group was appointed.

Constituting the MAG

Following the announcement of the establishment of the MAG, a call for membership of the advisory group was made in March 2006. From the beginning, the nomination process was riddled with a lack of transparency: nominations received from stakeholders were not acknowledged by the IGF Secretariat, nor were the selection criteria made available. The legitimacy of the exercise was also marred by a top-down approach, where the first that nominees heard of the outcome was the Secretariat’s announcement of the selected nominees. The lack of transparency and accountability resulted in a selection and appointment procedure driven by patronage and lobbying.

The political wrangling was evident in the composition of the first MAG, which was expanded to accommodate six regional coordinators personally appointed by Chair Nitin Desai to the Special Advisory Group (SAG). Of the twenty non-governmental positions, most were associated with the naming and numbering regime, including sitting and former Board members and ICANN staff.[36] Participation from civil society was limited, as the composition did not recognise[37] the technical community as a distinct group, including it, along with the academic community, as part of civil society.

The political struggles at play were visible in the appointment of Michael D. Gallagher, the former head of the US Commerce Department’s NTIA. The appointment was all the more relevant since it was Gallagher who, only a few months earlier, had stated that the US government owns the DNS root and has no intention of giving it up. His presence signalled that the US government took the forum seriously enough to ensure its interests were voiced and received attention on the MAG.

Beyond issues of representation, the working of the MAG suffered from a serious lack of transparency: meetings of the Advisory Group were closed, and no reports or minutes were released. The Advisory Group met in May and September in Geneva before the inaugural IGF meeting in Athens. Coordination between members in preparation for Athens was done through a closed mailing list that was not publicly archived. Consequently, the details of the operations of the Advisory Group ahead of the first IGF meeting were known only to its members.

Whatever little has been reported suggests that the Advisory Group possessed little formal authority, operating like a forum where members expressed views and debated issues without the object of taking formal decisions. Decisions were settled by rough consensus as declared by the Chair, and on all matters where there was no agreement the issues were summarised by the Chair in a report to the UN Secretary-General. The Secretary-General would take the report summary into consideration but retained the ultimate authority to make a formal decision.[38]

The UN’s clear deciding role was not so obvious in the early years of the MAG’s existence because of the relatively novel nature of the IGF. Moreover, Nitin Desai, the MAG Chair, and Markus Kummer, head of the IGF Secretariat, were appointed by the UN Secretary General and were on good terms with then-Secretary General Kofi Annan; working together, they acted as de facto selectors of the members of the MAG. Most of the MAG’s core membership in the first five years of its existence was made up of leaders from across the different stakeholder groups, and self-selection within those groups was encouraged to lend broader stability.

Over the last decade, changes in institutional arrangements led the IGF to be moved as a ‘project’ under the UNDESA umbrella, where it is not a core mission but simply one of many conferences that the department handles across the world every year. The core personnel who shepherded the MAG and the IGF from its early days retired, allowing for the creation of a new core membership. The new group of leaders in the MAG membership has emerged partly as a result of the selection and rotation process instituted by UNDESA in appointing a ‘program committee’.

The history presented above helps explain how the MAG was established under the UN umbrella and highlights the key developments that shaped its scope and working. Importantly, the weakened IGF mandate created divergences on whether the MAG should function as a ‘program committee’ limited to selecting proposals and planning the IGF, or as an ‘advisory committee’ with a more substantial role in developing the forum as an innovative governance mechanism. In its conception the IGF was a novel idea, and empowering the MAG and introducing transparency into the selection of members and their workings could perhaps have led to a more democratic and accountable IGF. However, that possibility was foreclosed early on.

The opacity of the appointment processes meant that patronage and lobbying became key to being selected as a member of the MAG. It established the worrying trend of diversity and representation taking precedence over the necessity of ensuring that representatives were appointed through a bottom-up multistakeholder process. Further, distributing the composition to ensure geographic representation severely limited the participation of the technical, academic and civil society communities. In the next section, I focus on the rotation of MAG members over the last ten years to identify and highlight trends that have emerged in its composition.

Analysis of MAG Composition (2006 – 2015)

The primary data for the analysis of the MAG membership has been collected from the membership lists for 2010-2015 available on the IGF website. The membership lists for 2005, 2006, 2007 and 2008 were provided by the UN IGF Secretariat during the course of this research. To the best of my knowledge, this data is yet to be made publicly available and may be accessed here.[39] The Secretariat notes that the MAG membership did not change between 2008 and 2009; this confirmation is the only account of the list of members for both years, as the records were poorly maintained and are therefore unavailable in the public domain.

It is also worth noting that, to the best of my knowledge, no data has been made available by the IGF Secretariat regarding the nomination process or the criteria on which a particular member has been reappointed to the MAG. The stakeholder groups identified for this analysis are government, civil society, industry, the technical community and academia. Any overlap between two or more of these groups, and any movement of individuals between stakeholder groups and affiliations, has been taken into account.

Over the decade of its existence, the MAG has had 196 unique members from the various stakeholder groups. As per the Terms of Reference[40] (ToR) of the MAG, it is the prerogative of the UN Secretary General to select MAG members. There is also a policy of rotating one-third of MAG members every year to encourage diversity and bring new viewpoints into consideration. Diversity within the UN is an ingrained process, with every group expected to be evenly balanced in geographic and gender representation. However, ensuring a diverse membership often comes at the cost of legitimate expertise. Further, it may lead to top-down decision making, where individuals are appointed based on their characteristics rather than their qualifications.

The complexity of the selection process is further compounded by the fact that the IGF Secretariat provides an initial set of recommendations identifying which members should be appointed to the MAG, but the selection and appointment are undertaken by UNDESA civil servants based in New York. Notably, while the IGF Secretariat staff are familiar with and interact with stakeholder representatives at the internet governance meetings and forums regularly held in Geneva, the New York-based UN officials do not share such relationships with constituent groups.

Consequently, they end up selecting members who meet all their diversity requirements and who have put themselves forward through the standard UN open nomination process. The practice of ensuring that UN diversity criteria are met creates tension within the MAG membership, as representatives nominated by the different stakeholder groups, who have more legitimacy within their respective constituencies, are not appointed to the MAG.

The stress on maintaining diversity is evident in the MAG membership’s gradual expansion from an initial group of 46 members in 2006 to a total of 56 members as of 2015. However, the increase in membership has not improved the representation of the technical, academic and civil society constituencies, with only 56 members having been appointed from the three groups over the last decade.

This is problematic considering that, at the time of the MAG’s constitution, the composition did not recognise[41] the technical community as a distinct group, including it along with the academic community as part of civil society. Consequently the three stakeholder groups have been represented collectively in the MAG, and yet account for only 24.77% of the total membership, compared to the government’s share of 39.3% and industry’s share of 35.7%. At the regional level too, membership across the three groups has ranged between 20-25% of the total membership.
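The shares reported above reduce to a simple tally over the membership roster. A minimal sketch of the computation, using made-up placeholder records rather than the actual MAG data:

    from collections import Counter

    # Hypothetical membership records: (member, stakeholder_group, region).
    # Placeholder data for illustration only, not the actual roster.
    members = [
        ("member_01", "government", "WEOG"),
        ("member_02", "industry", "Asia Pacific"),
        ("member_03", "civil society", "Africa"),
        ("member_04", "technical community", "WEOG"),
        ("member_05", "academia", "GRULAC"),
        ("member_06", "government", "Eastern Europe"),
    ]

    group_counts = Counter(group for _, group, _ in members)
    total = len(members)
    for group, count in group_counts.most_common():
        print(f"{group}: {count} members ({100 * count / total:.2f}% of membership)")

The same tally, grouped by region instead of stakeholder group, yields the regional breakdowns discussed below.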

Stakeholder share in MAG

The technical community is the least represented constituency, accounting for only 5% of the total membership, with only 10 members having been appointed over ten years. Of the 10, six were appointed from the WEOG region, and no representatives were appointed from the GRULAC region. Representatives from academia accounted for only 6% of the total membership, with 13 representatives from the group having been appointed to the MAG. Technical community representation was also low from the US, with only two members appointed to the MAG, each serving for a period of three years.

Civil society accounted for only 17% of the total membership, with a total of 33 members, and representation from the constituency was abysmally low across all regions. Civil society representation from the US comprised a total of five members, of whom one served for one year, three served for two years each and only one continued for more than three years. Notably, there have been no academics from the US, which is surprising given that scholarship on internet governance is dominated by US scholars.

Stakeholder representation across regions

Industry was the second largest represented group, with a total of 64 members appointed to the MAG, of whom a whopping 30 were appointed from the WEOG region. Representation was highest across WEOG countries, at 39.47% of the total membership, and the group accounted for 32.4% and 32.5% of the total members from Africa and Asia Pacific respectively. Across Eastern European and GRULAC countries industry representation was very low, accounting for merely 11.53% and 18.18% of the total membership respectively. Industry representation from the US included two members serving one year each, five members who served two years each, two members who continued for three years each, one member who was appointed for five years, and one member who completed the maximum MAG term of eight years.

It is also interesting to note that the industry membership base expanded steadily, spiking in 2012 with a total of 40 representatives from industry on the MAG. When assessed against the trend of the core leadership trickling out in 2012, the sudden increase in industry representation may point to attempts at capture by the stakeholder group in 2012. Industry representation from the US in the MAG was by far the most consistent over the years and had the most evenly distributed appointment terms for members within a group.

Industry representation across regions

Government has been the most dominant group within the MAG averaging a consistent 40% of the total membership over the last 10 years. At a regional level representation on the MAG was highest from Eastern Europe with more than 61% of its total membership comprising of individuals from the government constituency. GRULAC countries appointments to the MAG also demonstrate a preference for government representation with almost 58% of the total members appointed from within this group. The share of government representation in the total membership from Asia Pacific was 47.5% and 32.43% across Africa.

Government representation across regions
Participation from industry and government

Another general policy followed in the selection procedure is that members are appointed for a period of one year, automatically extendable for two further consecutive years depending on their engagement in MAG activities. Some members serving only a one-year term is inevitable under the rotation policy, as new members replace existing ones, often to fill slots that ensure stakeholder, geographic and gender diversity. Given the limited resources made available for coordination between MAG members, one-year appointments may not allow sufficient time to integrate new members into the procedures and workings of UN institutions.

Over the last decade, 24.36% of all appointed MAG members have served a term of only one year. Of the 55 one-year appointments, 26 individuals served their first term in 2015 alone. This includes all nine representatives of civil society, and it could be argued that for a stakeholder group with only 11% of the total membership share, such an overhaul weakens the ability of members to develop linkages, severely limiting their ability to exert influence on decision making within the MAG.

Interestingly, the analysis reveals that one-year terms were a trend in the early years of the MAG, when a core group took on the leadership role and guided activities for newcomers, including negotiating often conflicting agendas. The pattern of one-year appointments was hardly visible from 2008-2012 but picked up again in 2013 and has continued ever since. The trend is perhaps indicative of the movement in the core MAG leadership, as many of the original members retired or moved on to other engagements from 2010.

Importantly, the MAG ToR note that where there is a lack of candidates fitting the desired area, or under exceptional circumstances, a member may continue beyond three years. However, in the formative years of the MAG this exception was the norm, with most members continuing for more than three years. An analysis of the membership reveals that between 2006-2012 an elite core emerged which guided and was responsible for shaping the MAG and the IGF in their present-day format. No doubt some of these members were exceptional talents and difficult to replace; however, the lack of transparency in the nomination system makes it difficult to determine the basis on which these individuals continued beyond the stipulated term.

The analysis also suggests a shift in the leadership core over the last three years, and points to a new leadership group emerging, distinguishable in that most of its members have served on the MAG for three or four years. Members serving for one, two or three years make up more than 75% of the total membership, and 111 individual members have served more than two years on the MAG. This could be the result of the depletion in the membership of those familiar with the internal workings and power structures of the UN, and of the selection and rotation criteria and procedures that have weakened the original composition over the last decade.

Rotating membership might be necessary to prevent capture by any particular constituency or group; on the other hand, more than half of the total members have spent less than three years on the MAG, which makes the composition a shifting structure that limits long-term engagement. Regular rotation of members can also lead to power struggles, as continuing members exercise their influence to ensure that more members from within their constituency groups are appointed. Only seven individuals have completed the maximum term of eight years on the MAG, while 23 individuals have completed five years or more.
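The term-length figures in this section likewise reduce to a tally over the years-served column of the roster. A minimal sketch, again with placeholder numbers rather than the actual data:

    from collections import Counter

    # Hypothetical years-served values for individual members (placeholders).
    years_served = [1, 1, 2, 3, 1, 8, 5, 2, 3, 4, 1, 2]

    term_counts = Counter(years_served)
    total = len(years_served)
    for years in sorted(term_counts):
        share = 100 * term_counts[years] / total
        print(f"{years} year(s): {term_counts[years]} members ({share:.1f}%)")
    print("Served more than two years:", sum(1 for y in years_served if y > 2))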

Finally, in terms of gender diversity, the ratio of male to female members in the total membership is approximately 13:7, or roughly 65% to 35%. Female representatives from WEOG countries dominate, with a total of 29 women having been appointed from the region. Participation of women was lowest across Asia Pacific and Eastern Europe, with only nine and five representatives appointed respectively. There was a better gender balance across countries from Africa and GRULAC, with 12 and 14 women appointed from the two regions respectively.

Further analysis and visualisations derived from the MAG composition, identifying trends in the appointment of individual members, are available on the CIS website. The visualisations include the MAG membership distribution across regions[42] and stakeholder groups[43], the evolution of stakeholder groups over the years[44], stakeholder group distribution across countries[45] and a timeline of the total number of years served by individual members[46]. The visualisations also include a comparison of stakeholder group representatives appointed from India and the USA.[47]

Recommendations: Reforming MAG & the IGF

From April 4-6, 2016, the MAG convened in Geneva for the IGF’s first planning meeting of the year.[48] The meeting marks the beginning of the MAG’s work in planning and delivering the forum, the first in its recently renewed and now extended mandate. This paper is a much needed documentation of its working and processes, undertaken as an attempt to scrutinise whether the MAG is truly a multi-stakeholder institution or whether it has evolved into a closed group of elite members cloaked in a multi-stakeholder name.

There is very little literature on the evolution of, or critiquing, the MAG structure, partly because it is a relatively new structure and partly because its workings are shrouded in secrecy. The above analysis has been conducted with the aim of understanding the MAG’s functioning and the selection of its membership. The paper explores the history of the formation of the IGF and the MAG to identify the geo-political influences that have contributed to the MAG’s evolution and its role in shaping the IGF over the last decade.

In this section I apply the theory of institutional isomorphism developed by DiMaggio and Powell in their seminal paper[49] on organizational theory and social change. The paper posits that as organisations emerge as a field, a paradox arises: rational actors make their organizations increasingly similar as they try to change them. A focus on institutional isomorphism can add a much needed perspective on the political struggle for organizational power and survival that is missing from much of the discourse and literature around the IGF and the MAG.

A consideration of isomorphic processes also leads to a bifocal view of power and its application in modern politics. I believe that there is much to be gained by attending to similarity as well as to variation between organisations within the same field and, in particular, to change in the degree of homogeneity or variation over time. In this paper I have attempted to study the incremental change in the IGF mandate as well as in the selection of the MAG members.

Applying the theoretical framework proposed by DiMaggio and Powell, I identify possible areas of concern and offer recommendations for the improvement of the IGF and the reform of the MAG. I detail these recommendations through the impact of resource centralization and dependency, goal ambiguity, professionalization and structuration on isomorphic change. There is variability in the extent to which, and the rate at which, organizations in a field change to become more like their peers. Some organizations respond to external pressures quickly; others change only after a long period of resistance.

DiMaggio and Powell hypothesize that the greater the extent to which an organizational field is dependent upon a single (or several similar) source of support for vital resources, the higher the level of isomorphism. Their organizational theory also posits that the greater the extent to which the organizations in a field transact with agencies of the state, the greater the extent of isomorphism in the field as a whole. As my analysis reveals, both hypotheses hold true for the IGF, which is currently defined as a ‘project’ of UNDESA. Since the IGF and the MAG are dependent on the UN for their existence, it is not surprising that both structures emulate the UN’s principles for diversity and governmental representation.

It is also worth noting that UN projects are normally not permanent and require regular renewal of mandate and reallocation of resources and budgets. When budget cuts take place, as during the global economic crisis, project funding is jeopardised; indeed, the IGF was left without an executive coordinator or a secretariat due to UN budget cuts.

This led to constituent groups coming together to directly fund the IGF secretariat through a special IGF Trust Fund created under an agreement with the United Nations and administered by UNDESA.[50] The fund was drawn up to expire on 31 December 2015, and efforts to renew contributions to the fund for 2016 are being opposed, with questions raised about the legality of the arrangement.[51]

It is widely rumoured that the third party opposing the contribution is UNDESA itself. Securing guaranteed, stable and predictable funding for the IGF, including through a broadened donor base, is essential for the forum’s long-term stability and its ability to realise its underutilised potential. There have been several suggestions from the community in this regard, including IT for Change’s suggestion that part of the domain names tax collected by ICANN should be dedicated to IGF funding through statutory or constitutional arrangements. Centralisation of resources may lead to power structures being created, and therefore any attempt at IGF and MAG reform must consider the choice between incorporating the IGF as a permanent body with institutional funding under the UN and the implications of that on the forum’s structure.

There are four other hypotheses in DiMaggio and Powell’s framework that may be helpful in identifying levers for the improvement of the IGF and the MAG. The first states that the greater the extent to which goals are ambiguous within a field, the greater the rate of isomorphic change. As my analysis suggests, there is an urgent need to address the decade-long debate on whether the MAG’s scope is that of a programme committee limited to planning an annual forum.

The question is linked to the broader need to clarify whether the IGF will continue to evolve as an annual policy-dialogue forum or whether it can take on a more substantive role that includes offering recommendations and assisting with the development of policy on critical issues related to internet governance. Even the MAG is divided in its interpretation of its roles and responsibilities. A resurgence of the IGF requires that the global community reassess the need for the forum not only against the mandate assigned to it at the time of its conceptualisation but also in light of the newer and more complex challenges that have emerged over the decade.

The second hypothesis holds that the greater the extent of professionalization in a field, the greater the amount of institutional isomorphic change. DiMaggio and Powell measure professionalization by the universality of credential requirements, the robustness of training programs and the vitality of professional associations. As the MAG composition analysis reveals, the structure has evolved in a manner that gives preference to participation from government and industry over participation from civil society and the technical and academic communities.

Since the effect of institutional isomorphism is homogenization, the best indicator of isomorphic change is a decrease in variation and diversity, which could be measured by lower standard deviations of the values of selected indicators in a set of organizations. Such professionalization is evident in the functioning of the MAG, which has taken on a bureaucratic structure akin to other UN bodies, where governmental approval weighs down an otherwise light-weight structure. Further, the high level of industry representation creates distrust amongst other stakeholders and may be one reason the forum lacks legitimacy as a mechanism for governance, as it could be perceived as susceptible to capture.
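As a rough illustration of that measurement, the sketch below computes the standard deviation of a selected indicator across a set of comparable bodies at two points in time; all of the numbers are placeholders, not observed values:

    from statistics import pstdev

    # Hypothetical values of a selected indicator (e.g. share of government
    # appointees, in %) across five comparable bodies in two periods.
    indicator_2006 = [62, 35, 48, 55, 28]   # wide dispersion: heterogeneous field
    indicator_2015 = [44, 41, 46, 43, 40]   # narrow dispersion: homogenised field

    for year, values in (("2006", indicator_2006), ("2015", indicator_2015)):
        print(f"{year}: standard deviation = {pstdev(values):.2f}")

A falling standard deviation over time would signal isomorphic change of the kind DiMaggio and Powell describe.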

The third hypothesis states that the fewer the number of visible alternative organizational models in a field, the faster the rate of isomorphism in that field. The IGF occupies a special place in the UN pantheon of semi-autonomous groups and is often held up as a shining example of the ‘multistakeholder model’, in which all groups have an equal say in decisions. Currently, there is no global definition of the multistakeholder model, which at best remains a consensus framework for legitimising Internet institutions.

It is worth noting that the system of sovereignty where authority is imposed is at odds with the earned authority within Internet institutions. Given the various interpretations of the approach, if multistakeholderism is to survive as a concept then it needs to be understood as a legitimizing principle that is strictly at odds with state sovereignty-based conceptions of legitimacy.[52] Under a true multistakeholder system, states can have roles in Internet governance but they cannot unilaterally declare authority, or collectively assert it without the consent of the rest of the Internet.

Unfortunately, as the MAG membership reveals, the composition is dominated by governmental representatives who seek to enforce territorial authority over issues of global significance. Further, while alternative approaches to its application exist within the ecosystem, they are context-specific and have evolved within unique environments.[53] As critics note, emerging and existing platforms derived from the multistakeholder concept create ‘an embryonic form of transnational democracy’. It is therefore important to recognise that the IGF is a physical manifestation of a much larger ideal, one in which individuals and organisations have the ability to help shape the Internet and the information society to which it is intrinsically connected. This points to the need to study and develop alternative models of multistakeholder governance while continuing to strengthen existing practices and platforms.

As such, the IGF and its related local, national and regional initiatives represent a critical channel for expression, especially in countries where such conversation is not adequately pursued, and they keep discussions of the internet in the public space. However, interaction between the global IGF and national IGFs is yet to be established. The MAG can play a critical role in developing and establishing mechanisms to improve the global IGF’s coordination with regional and national initiatives. A strengthened IGF could better serve national initiatives by providing formal backing and support for them to develop as platforms for engaging with long-standing and emerging issues and identifying possible ways to address them.

DiMaggio and Powell’s final hypothesis holds that the greater the extent of structuration of a field, the greater the degree of isomorphism. As calls for creating structures to govern cyberspace pick up pace, and given the extension of the IGF mandate, its structure and working are in need of an overhaul. More research and analysis is needed to understand whether a preferred approach to multistakeholder participation and engagement is emerging within both the IGF and the MAG.

For example, if a stakeholder group, country or region is not engaging in common dialogue, does the MAG have the mandate to promote and encourage participation? Has a process been established for ensuring the right balance when engaging different stakeholders, and if so, how is such a process initiated and promoted? The data shared by the IGF Secretariat confirmed that there were no records of the nomination procedure, that the membership list was missing for a year, and that in some cases there was confusion about whom the nominees were actually representing.

This opens up glaring questions about the legitimacy of the MAG: on what criteria were MAG members selected and rotated? Was this evaluation based on objective criteria? Did selection follow an open call for nominations, or were members handpicked by the UN? Such analysis will help determine whether there is scope within the current selection procedure to reach out to the wider multistakeholder community, or whether all MAG activities and discussions are restricted to its constituent membership. Clarifying the role of the IGF in the internet governance and policy space is inextricably linked to reforms in the MAG structure and processes, and the questions raised above need urgent attention.

While these issues have been well known and documented for a number of years, there has been no progress on resolving them. Currently there is no website or document that lists the activities conducted by the MAG in furtherance of its ToR, nor does the MAG produce an annual report or maintain a publicly archived mailing list. Important recommendations for strengthening the IGF were made by the UN CSTD working group on IGF improvements.

The group took two years to produce its report identifying problems and offering recommendations that were to be implemented by the end of 2015, yet many of the problems identified within it have still to be addressed. Worryingly, an internal MAG proposal to set up a working group to dig into the delays is being bogged down in discussions over scope and membership, and a similar effort six months ago was also shot down.[54]

The ineffectiveness of the MAG in instituting reform has led to calls for a new oversight body with established bylaws, as the MAG in its present form does not seem up to the task. Further, the opaque decision-making process and the lack of clarity on the scope of the MAG mean that each time it undertakes efforts at improvement, these are thwarted as being outside its mandate. There remains much work to be done in strengthening the MAG structure, as the group that undertakes the day-to-day work of the IGF, and in addressing the many issues that plague the role and function of the IGF. A tentative beginning can be made by introducing transparency and accountability into MAG member selection.


[1] This paper has been authored as part of a series on internet governance and has been made possible through a grant from the MacArthur Foundation.

[2] The Internet Governance Forum. See: http://www.intgovforum.org/cms/

[3] World Summit on the Information Society (WSIS)+10 High-Level Meeting. See: https://publicadministration.un.org/wsis10/

[4] The mandate and terms of reference of the IGF are set out in paragraphs 72 to 80 of the Tunis Agenda for the Information Society (the Tunis Agenda). See: http://www.itu.int/net/wsis/docs2/tunis/off/6rev1.html

[5] Samantha Bradshaw, Laura DeNardis, Fen Osler Hampson, Eric Jardine and Mark Raymond ‘The Emergence of Contention in Global Internet Governance’, the Centre for International Governance Innovation and Chatham House, 2015 See: https://www.cigionline.org/sites/default/files/no17.pdf

[6] Mikael Wigell, ‘Multi-Stakeholder Cooperation in Global Governance’, The Finnish Institute of International Affairs. June 2008, See: https://www.ciaonet.org/attachments/6827/uploads

[7] Arun Mohan Sukumar, ‘India’s New “Multistakeholder” Line Could Be a Game Changer in Global Cyberpolitics’, The Wire, 22 June 2015 See: http://thewire.in/2015/06/22/indias-new-multistakeholder-line-could-be-a-gamechanger-in-global-cyberpolitics-4585/

[8] Background Note on Sub-Theme Principles of Multistakeholder/Enhanced Cooperation, IGF Bali 2013 See: https://www.intgovforum.org/cmsold/2013/2013%20Press%20Releases%20and%20Articles/Principles%20of%20Multistakeholder-Enhanced%20Cooperation%20-%20Background%20Note%20on%20Sub%20Theme%20-%20IGF%202013-1.pdf

[9] Statement by Mr. Santosh Jha, Director General, Ministry of External Affairs, at the First Session of the Review by the UN General Assembly on the implementation of the outcomes of the World Summit on Information Society in New York on July 1, 2015 See: https://www.pminewyork.org/adminpart/uploadpdf/74416WSIS%20stmnt%20on%20July%201,%202015.pdf

[10] Jean-Marie Chenou, Is Internet governance a democratic process? Multistakeholderism and transnational elites, IEPI – CRII Université de Lausanne, ECPR General Conference 2011, Section 35 Panel 4 See: http://ecpr.eu/filestore/paperproposal/1526f449-d7a7-4bed-b09a-31957971ef6b.pdf

[11] Ibid. 9

[12] Kieren McCarthy, ‘Critics hit out at ‘black box’ UN internet body’, The Register 31 March 2016 See: http://www.theregister.co.uk/2016/03/31/black_box_un_internet_body/?page=3

[13] Ibid.

[14] Jeremy Malcolm, Multistakeholder Governance and the Internet Governance Forum, Terminus Press 2008

[15] Background Report of the Working Group on Internet Governance June 2005 See: https://www.itu.int/net/wsis/wgig/docs/wgig-background-report.pdf

[16] Report of the Working Group on Internet Governance, Château de Bossey, June 2005 See: http://www.wgig.org/docs/WGIGREPORT.pdf

[17] Compilation of Comments received on the Report of the WGIG, PrepCom-3 (Geneva, 19-30 September 2005) See: http://www.itu.int/net/wsis/documents/doc_multi.asp?lang=en&id=1818%7C2008

[18] U.S. Principles on the Internet’s Domain Name and Addressing System June 30, 2005 See: https://www.ntia.doc.gov/other-publication/2005/us-principles-internets-domain-name-and-addressing-system

[19] Ibid. 16.

[20] Tom Wright, ‘EU Tries to Unblock Internet Impasse’, International Herald Tribune, 30 September 2005 See: http://www.nytimes.com/iht/2005/09/30/business/IHT-30net.html

[21] Kieren McCarthy, ‘Read the letter that won the internet governance battle’, The Register, 2 Dec 2005 See: http://www.theregister.co.uk/2005/12/02/rice_eu_letter/

[22] United Nations Press Release, ‘Preparations begin for Internet Governance Forum’, 2 March 2006 See: http://www.un.org/press/en/2006/sgsm10366.doc.htm

[23] The Internet Society’s contribution on the formation of the Internet Governance Forum, February 2006 See: http://www.internetsociety.org/sites/default/files/pdf/ISOC_IGF_CONTRIBUTION.pdf

[24] APC, Questionnaire on Convening the Internet Governance Forum (IGF) See: http://igf.wgig.org/contributions/apc-questionnaire.pdf

[25] Milton Mueller and John Mathiason, Building an Internet Governance Forum, 2 February 2006 See: http://www.internetgovernance.org/wordpress/wp-content/uploads/igp-forum.pdf

[26] Ibid.

[27] Supra note 11.

[28] Supra note 20.

[29] Consultations on the convening of the Internet Governance Forum, Transcript of Morning Session 16 February 2006. See: http://unpan1.un.org/intradoc/groups/public/documents/igf/unpan038960.pdf

[30] Ibid.

[31] Ibid.

[32] Milton Mueller, ICANN Watch, ‘The Forum MAG: Who Are These People?’ May 2006 See: http://www.icannwatch.org/article.pl?sid=06/05/18/226205&mode=thread

[33] IGF Funding, See: https://intgovforum.org/cmsold/funding

[34] Supra note 12.

[35] Ibid.

[36] ICANN’s infiltration of the MAG was evident in the composition of the first advisory group, which included two sitting ICANN Board members (Alejandro Pisanty and Veni Markovski); one staff member (Theresa Swinehart); two former ICANN Board members (Nii Quaynor and Masanobu Katoh); two representatives of ccTLD operators (Chris Disspain and Emily Taylor); and two representatives of the Regional Internet Registries (RIRs) (Raul Echeberria and Adiel Akplogan). Even the “civil society” representatives appointed were all associated with either ICANN’s At-Large Advisory Committee or its Noncommercial Users Constituency (or both): Adam Peake of Glocom, Robin Gross of IP Justice, Jeanette Hofmann of WZ Berlin, and Erick Iriarte of Alfa-Redi.

[37] United Nations Press Release, Secretary General establishes Advisory Group to assist him in convening Internet Governance Forum,  17 May 2006 See: http://www.un.org/press/en/2006/sga1006.doc.htm

[38] Jeremy Malcolm, Multi-Stakeholder Public Policy Governance and its Application to the Internet Governance Forum See: https://www.malcolm.id.au/thesis/x31762.html

[39] MAG Spreadsheet, CIS Website See: https://docs.google.com/spreadsheets/d/1uZzfBz9ihj1M0QSvlnORE0nRD62TCRxhA5d1E_RKfhc/edit#gid=1912343648

[40] Terms of Reference for the Internet Governance Forum (IGF) Multistakeholder Advisory Group (MAG) Individual Member Responsibilities and Group Procedures See: http://www.intgovforum.org/cms/175-igf-2015/2041-mag-terms-of-reference

[41] United Nations Press Release, Secretary General establishes Advisory Group to assist him in convening Internet Governance Forum,  17 May 2006 See: http://www.un.org/press/en/2006/sga1006.doc.htm

[42] IGF MAG Membership Analysis, 2006-2015 See: http://cis-india.github.io/charts/2016.04_MAG-analysis/CIS_MAG-Analysis-2016_Treemap.html

[43] IGF MAG Membership – Stakeholder Types and Regions – 2006-2015 See: http://cis-india.github.io/charts/2016.04_MAG-analysis/CIS_MAG-Analysis-2016_StakeholderTypes-Regions.html

[44] IGF MAG Membership – Stakeholder Types across Years – 2006-2015 See: http://cis-india.github.io/charts/2016.04_MAG-analysis/CIS_MAG-Analysis-2016_StakeholderTypes-Years.html

[45] IGF MAG Membership – Stakeholder Types and Countries – 2006-2015 See: http://cis-india.github.io/charts/2016.04_MAG-analysis/CIS_MAG-Analysis-2016_StakeholderTypes-Country.html

[46] IGF MAG Membership Timeline, 2006-2015 See: http://cis-india.github.io/charts/2016.04_MAG-analysis/CIS_MAG-Analysis-2016_Member-Timeline.html

[47] MAG Membership – India and USA – 2006-2015 See: http://cis-india.github.io/charts/2016.04_MAG-analysis/CIS_MAG-Analysis-2016_StakeholderTypes-India-USA.html

[48] MAG Meetings in 2016 See: http://www.intgovforum.org/cms/open-consultations-and-mag-meeting

[49] Paul J. DiMaggio and Walter W. Powell, ‘The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields’, Yale University, American Sociological Review 1983, Vol. 48 (April: 147-160)

[50] United Nations Funds-In-Trust Project Document Project number: GLO/11/X01 Project title: Internet Governance Forum Country/area: Global Start date: 1 April 2011 End date: 31 December 2015 Executing agency: UNDESA Funding: Multi-donor – extrabudgetary Budget: Long-term project framework – budget “A” See: http://www.intgovforum.org/cms/2013/TrustFund/Project%20document%20IGF.pdf

[51] Kieren McCarthy, Critics hit out at ‘black box’ UN internet body, The Register 31 March 2016 See: http://www.theregister.co.uk/2016/03/31/black_box_un_internet_body/?page=2

[52] Eli Dourado, Too Many Stakeholders Spoil the Soup, Foreign Policy, 15 May 2013 See: http://foreignpolicy.com/2013/05/15/too-many-stakeholders-spoil-the-soup/

[53] The IANA transition and NETmundial are some other examples of multistakeholder engagement.

[54] Ibid.

Contestations of Data, ECJ Safe Harbor Ruling and Lessons for India

The European Court of Justice has invalidated a European Commission decision which had previously concluded that the ‘Safe Harbour Privacy Principles’ provide adequate protection for European citizens’ privacy rights in the transfer of personal data between the European Union and the United States. The inadequacies of the framework are not news to the European Commission, and action by the ECJ has been a long time coming. The ruling raises important questions about how claims of citizenship are being negotiated in the context of the internet, and how contestations over personal data are increasingly being employed in that discourse.

IANA Transition Stewardship & ICANN Accountability (I)

This paper is the first in a multi-part series in which we provide a background to the IANA transition and updates on the ensuing processes. It attempts to familiarise people with the issues at stake, and will be followed by a second piece providing an overview of the submitted proposals and of areas of concern that will need attention moving forward. The series is a work in progress and will be updated as the processes move forward. It is up for public comments and we welcome your feedback.

In developing these papers we have been guided by Kieren McCarthy’s writings in The Register, Milton Mueller’s writings on the Internet Governance Project, Rafik Dammak’s emails on the mailing lists, and the constitutional undertaking argument made in the policy paper authored by Danielle Kehl and David Post for the New America Foundation.


Introduction

The 53rd ICANN conference in Buenos Aires was pivotal, as it marked the last general meeting before the IANA transition deadline of 30 September 2015. The multistakeholder process that has been initiated calls on the operational communities to develop transition proposals, to be consolidated and reviewed by the IANA Stewardship Transition Coordination Group (ICG). The names, numbers and protocols communities convened at the conference to finalise the components of the transition proposal and to determine the way forward. The protocol parameters community (IANAPLAN Working Group) submitted its proposal to the ICG on 6 January 2015, and the numbering resources community (CRISP Team) followed on 15 January 2015. The domain names community (CWG-Stewardship) submitted its second draft to the ICG on 25 June 2015. The ICG held a face-to-face meeting in Buenos Aires, and its consolidated proposal to transition the stewardship of the IANA functions is expected to be out for public comment from 31 July to 8 September 2015.

In parallel, the CCWG on Enhancing ICANN Accountability offered its first set of proposals for public comment in June 2015 and organised two working sessions at ICANN’53. More recently, the CCWG met in Paris, focusing on the proposed community empowerment mechanisms, emerging concerns and progress on issues so far. CIS reserves its comments on the CCWG proposal until the second round of public comments expected in July.

This working paper explains the IANA Transition, its history and relevance to management of the Internet. It provides an update on the processes so far, including the submissions by the Indian government and highlights areas of concern that need attention going forward.

How is the IANA Transition linked to DNS Management?

The IANA transition presents a significant opportunity for stakeholders to influence the management and governance of the global network. The Domain Name System (DNS), which allows users to locate websites by translating domain names into their corresponding Internet Protocol addresses, is critical to the functioning of the Internet. The DNS rests on the effective coordination of three critical functions—the allocation of IP addresses (the numbers function), domain name allocation (the naming function), and protocol parameter standardisation (the protocols function).

History of the ICANN-IANA Functions contract

Initially, these key functions were performed by individuals and by public and private institutions, who came together either voluntarily or through a series of agreements and contracts brokered by the Department of Commerce’s National Telecommunications and Information Administration (NTIA) and funded by the US government. With the Internet’s rapid expansion, and in response to concerns about its increasing commercialization as a resource, the need was felt for a formal institution that would take over DNS management. This is how ICANN, a California-based private, non-profit technical coordination body, came to be at the helm of the DNS and related issues. Since then, ICANN has been performing the Internet Assigned Numbers Authority (IANA) functions under a contract with the NTIA, and is commonly referred to as the IANA Functions Operator.

IANA Functions

In February 2000, the NTIA entered into the first stand-alone IANA Functions contract[1] with ICANN as the Operator. While the contractual obligations have evolved over time, they are largely administrative and technical in nature, including the following (a short sketch after the list illustrates the root zone registry that these functions maintain):

(1) the coordination of the assignment of technical Internet protocol parameters;

(2) the allocation of Internet numbering resources;

(3) the administration of certain responsibilities associated with Internet DNS root zone management; and

(4) other services related to the management of the .arpa and .int top-level domains.

ICANN has been performing the IANA functions under this oversight primarily because NTIA did not want to relinquish complete control of DNS management. Another reason was to preserve NTIA’s leverage in ensuring that ICANN’s commitments, conditional to its incorporation, were being met and that it was sticking to its administrative and technical role.

Root Zone Management—Entities and Functions Involved

NTIA’s involvement has been controversial, particularly in reference to the Root Zone Management function, which allows for changes to the highest level of the DNS namespace[2] by updating the databases that represent that namespace. The DNS namespace is the set of names known as top-level domains or TLDs, which may be at the country level (ccTLDs) or generic (gTLDs). This function of maintaining the Root was split into two parts[3], with two separate procurements and two separate contracts: the operational contract for the Primary (“A”) Root Server was awarded to VeriSign, while the IANA Functions Contract was awarded to ICANN.

These contracts created contractual obligations for ICANN as the IANA Root Zone Management Function Operator, in cooperation with VeriSign as the Root Zone Maintainer and NTIA as the Root Zone Administrator, whose authorisation is explicitly required before any request can be implemented in the root zone. Under the IANA Functions contract, ICANN had responsibility for the technical functions for all three communities.

ICANN also had policy-making functions for the names community, such as developing the rules, procedures and policies under which any changes to the Root Zone File[4] were to be proposed, including the policies for adding new TLDs to the system. Policy-making for numbers and protocols rests with the RIRs and the IETF respectively. NTIA’s role in root zone management[5] is clerical and judgement-free with regard to content: it authorises the implementation of requests after verifying that procedures and policies have been followed.

This contract was subject to extension by mutual agreement, and failure to comply with predefined commitments could result in the contract being re-opened to another entity through a Request For Proposal (RFP). In fact, in 2011 NTIA issued an RFP pursuant to ICANN’s Conflict of Interest Policy.[6]

Why is this oversight needed?

The role of the Administrator is critical for ensuring the security and operation of the Internet, with the Root Zone serving as the directory of critical resources. In December 2014, a report revealed 300 incidents of internal security breaches,[7] some of which related to the Centralized Zone Data System (CZDS), where the Internet’s core root zone files are mirrored, and to the WHOIS portal. In view of the IANA transition, and given ICANN’s critical role in maintaining Internet infrastructure, the question that arises is: if NTIA lets go of its Administrator role, which body should succeed it?

Transition announcement and launch of process

On 14 March 2014, the NTIA announced[8] “its intent to transition key Internet domain name functions to the global multistakeholder community”. These key Internet domain name functions refer to the IANA functions. For this purpose, the NTIA asked[9] the Internet Corporation for Assigned Names and Numbers (ICANN) to convene a global multistakeholder process to develop a transition proposal that has broad community support and addresses the following four principles:

  • Support and enhance the multistakeholder model;
  • Maintain the security, stability, and resiliency of the Internet DNS;
  • Meet the needs and expectations of the global customers and partners of the IANA services; and
  • Maintain the openness of the Internet.

The transition process has been split according to the three main communities: naming, numbers and protocols.

Structure of the Transition Processes

ICANN performs both technical functions and policy-making functions. The technical functions are known as the IANA functions, and ICANN performs them for all three communities.

I. Naming function: ICANN performs both technical and policy-making functions for the names community. The technical functions are the IANA functions, while the policy-making functions relate to its role in deciding, among other issues, whether TLDs such as .xxx or .sucks should be allowed. Two parallel streams of work focusing on the naming community are crucial to completing the transition. The first, the Cross-Community Working Group to Develop an IANA Stewardship Transition Proposal on Naming Related Functions (CWG-Stewardship), will enable NTIA to transition out of its role in the DNS. Accountability for the IANA functions is therefore the responsibility of the CWG, while accountability for the policy-making functions is outside its scope. The CWG has submitted its second draft to the ICG.

The second, Cross-Community Working Group on Accountability (CCWG-Accountability) is identifying necessary reforms to ICANN’s bylaws and processes to enhance the organization’s accountability to the global community post-transition. Therefore accountability of IANA functions is outside the scope of the CCWG. The CCWG on Enhancing ICANN Accountability offered its first set of proposals for public comment in June 2015.

II. Numbers function: ICANN performs only technical functions for the numbers community; the policy-making functions for numbers are performed by the RIRs. The CRISP Team focused on the IANA functions for numbers and submitted its proposal to the ICG earlier this year.

III. Protocols function: ICANN performs only technical functions for the protocols community; the policy-making functions for protocols are performed by the IETF. The IETF working group focused on the IANA functions for protocols and submitted its proposal to the ICG earlier this year.

Role of ICG

After receiving the proposals from all three communities, the ICG must combine them into a consolidated transition proposal and then seek public comment on all aspects of the plan. The ICG’s role is crucial because it must build a public record for the NTIA on how the three customer group submissions tie together in a manner that ensures NTIA’s criteria[10] are met and institutionalised over the long term. Further, the ICG’s final submission to NTIA must include a plan to enhance ICANN’s accountability based on the CCWG-Accountability proposal.

NTIA Leverage

Reprocurement of the IANA contract is essential for ICANN’s legitimacy[11] in the DNS ecosystem. The authority to reopen the contract, combined with the separation of policy and operational functions, meant that NTIA could simply direct VeriSign to follow policy directives issued by an entity replacing ICANN if ICANN were deemed non-compliant. This worked as effective leverage for ICANN’s compliance with its commitments, even if it is difficult to determine how this oversight was exercised. Perceptually, this has been interpreted as a broad overreach, particularly in the context of issues of sovereignty associated with ccTLDs and the influence of gTLDs in shaping markets. However, it is important to bear in mind that NTIA authorisation comes after the operator, ICANN, has validated the request, and does not deal with the substance of the request but focuses merely on compliance with the outlined procedure.

NTIA’s role in the transition process

NTIA, in its Second Quarterly Report to the Congress[12] for the period 1 February to 31 March 2015, has outlined some clarifications on the process ahead. It confirmed flexibility in extending the contract, or in reducing the time period for renewal, based on community decision. The report also specified that NTIA would consider a proposal only if it has been developed in consultation with the multistakeholder community. The transition proposal should have broad community support and must not seek to replace NTIA’s role with a government-led or intergovernmental solution. Further, the proposal should maintain the security, stability and resiliency of the DNS and the openness of the Internet, and must meet the needs and expectations of the global customers and partners of the IANA services. NTIA will only review a comprehensive plan that includes all these elements.

Once the communities develop and the ICG submits a consolidated proposal, NTIA will ensure that it has been adequately “stress tested” to ensure the continued stability and security of the DNS. NTIA also added that any proposed processes or structures tested prior to submission will be taken into consideration in its review. The report clarified that NTIA will review and assess the changes made or proposed to enhance ICANN’s accountability before initiating the transition.

Prior to ICANN’53, Lawrence E. Strickling, Assistant Secretary for Communications and Information and NTIA Administrator, posed some questions for consideration[13] by the communities prior to the completion of the transition plan. The issues and questions relating to the CCWG-Accountability draft are outlined below:

  1. Proposed new or modified community empowerment tools—how can the CCWG ensure that the creation of new organizations or tools will not interfere with the security and stability of the DNS during and after the transition? Do these new committees and structures create a different set of accountability questions?
  2. Proposed membership model for community empowerment—have other possible models been thoroughly examined, detailed, and documented? Has CCWG designed stress tests of the various models to address how the multistakeholder model is preserved if individual ICANN Supporting Organizations and Advisory Committees opt out?
  3. Has CCWG developed stress tests to address the potential risk of capture and barriers to entry for new participants of the various models? Further, have stress tests been considered to address potential unintended consequences of “operationalizing” groups that to date have been advisory in nature?
  4. Suggestions on improvements to the current Independent Review Panel (IRP) that has been criticized for its lack of accountability—how does the CCWG proposal analyze and remedy existing concerns with the IRP?
  5. In designing a plan for improved accountability, should the CCWG consider what exactly is the role of the ICANN Board within the multistakeholder model? Should the standard for Board action be to confirm that the community has reached consensus, and if so, what accountability mechanisms are needed to ensure the Board operates in accordance with that standard?
  6. The proposal is primarily focused on the accountability of the ICANN Board—has the CCWG considered accountability improvements that would apply to ICANN management and staff or to the various ICANN Supporting Organizations and Advisory Committees?
  7. NTIA has also asked the CCWG to build a public record and thoroughly document how the NTIA criteria have been met and will be maintained in the future.
  8. Has the CCWG identified and addressed issues of implementation so that the community and ICANN can implement the plan as expeditiously as possible once NTIA has reviewed and accepted it?

NTIA has also sought the community’s input on the timing to finalise and implement the transition plan were it approved. The Buenos Aires meeting became a crucial point in the transition process, as following the meeting NTIA will need to make a determination on extending its current contract with ICANN. Keeping in mind that the community and ICANN will need to implement all work items identified by the ICG and the Working Group on Accountability as prerequisites for the transition before the contract can end, the community’s input is critical.

NTIA’s legal standing

On 25 February 2015, the US Senate Committee on Commerce, Science & Transportation, in its hearing on ‘Preserving the Multi-stakeholder Model of Internet Governance’,[14] heard from NTIA head Larry Strickling, Ambassador Gross and Fadi Chehadé. The hearing sought to plug any existing legal loopholes and to tighten administrative, technical, financial, public policy and political oversight over the entire process, no matter which entity takes up the NTIA function. The most important takeaway from this Congressional hearing came from Larry Strickling’s testimony,[15] in which he stated that NTIA has no legal or statutory responsibility to manage the DNS.

If the NTIA does not have the legal responsibility to act, and its role was temporary, on what basis is it driving the current IANA transition process without the requisite legal authority or Congressional mandate? Historically, NTIA oversight, effectively devised as leverage for ICANN’s fulfilment of its commitments, has not been open to discussion. Concerns have also been raised[16] about the lack of engagement with non-US governments, organisations and persons prior to initiating or defining the scope and conditions of the transition. Therefore, any IANA transition plan must acknowledge this lack of consultation and develop a multistakeholder process as the way forward, even if the NTIA wants to approve the final transition plan.

Need to strengthen Diversity Principle

Following submissions by various stakeholders raising concerns about developing world participation, representation and the lack of multilingualism in the transition process, the Diversity Principle was included by ICANN in the Revised Proposal of 6 June 2014. Given that representatives from developing countries, as well as from stakeholder communities outside the ICANN community, are unable to involve themselves productively in such processes because of the lack of multilingualism or unfamiliarity with its way of functioning, merely mentioning diversity as a principle is not adequate to ensure broad participation. As CIS has pointed out before,[17] issues have been raised about domination by North American and European entities, which results in undemocratic, unrepresentative and non-transparent decision-making in such processes. Accordingly, all discussions in the process should be translated into the native languages of participants in situ, so that everyone participating can understand what is going on. Adequate time must be given for the issues under discussion to be translated and circulated widely amongst all stakeholders of the world before a decision is taken or a proposal is framed. This concern was raised with the recent CCWG proposal, whose comment period was extended because many communities did not have translated texts or adequate time to participate.

Representation of the global multistakeholder community in ICG

Currently, the Coordination Group includes representatives from ALAC, ASO, ccNSO, GNSO, gTLD registries, GAC, ICC/BASIS, IAB, IETF, ISOC, NRO, RSSAC and SSAC. Most of these representatives belong to the ICANN community, and the group is not representative of the global multistakeholder community, including governments. This falls short even of the multistakeholder model which the US government has announced[18] for the transition, and of the multistakeholder spirit of NETmundial. An adequate number of seats on the Committee must be granted to each stakeholder group so that each can coordinate discussions within its own community and ensure wider and more inclusive participation.

ICANN’s role in the transition process

Another issue of concern in the pre-transition process is that ICANN itself has been charged with facilitating the transition. This calls into question the legitimacy of the process, given that suggestions in the proposals envision a more permanent role for ICANN in DNS management. As Kieren McCarthy has pointed out,[19] ICANN has taken several steps to retain the balance of power in managing these functions, which have seen considerable pushback from the community. These include an attempt to control the process by announcing two separate processes[20] – one looking into the IANA transition, and a second into its own accountability improvements – while insisting the two were not related. That effort was beaten down[21] after an unprecedented letter by the leaders of every one of ICANN’s supporting organizations and advisory committees stating that the two processes must be connected.

Next, ICANN was accused of stacking the deck[22] by purposefully excluding groups sceptical of ICANN’s efforts, and by trying to give ICANN’s chairman the right to personally select the members of the group that would decide the final proposal. That too was beaten back. ICANN staff also produced a ‘scoping document’[23] that pre-empted any discussion of structural separation, and once again community pushback forced a backtrack.[24]

These concerns acquire more urgency given recent developments with the community working groups[25] and ICANN’s divisive view of its long-term role in DNS management. ICANN President Chehadé has commented that the CWG is not doing its job,[26] that it is populated with people who do not know anything, and that the “IANA process needs to be left alone as much as possible”. Chehadé also specified that ICANN had begun the formal process of initiating a direct contract with VeriSign to request and authorise changes to be implemented by VeriSign. While ICANN may see itself without oversight in this relationship with VeriSign, it is imperative that proposals bear this plausible outcome in mind and put forth suggestions to counter it.

The update from the IETF on its ongoing negotiation with ICANN over the proposal[27] relating to protocol parameters has also flagged that ICANN is unwilling to agree to any text suggesting that it would relinquish its role in the operation of protocol parameters to a subsequent operator should circumstances demand it. ICANN has stated that agreeing to such text now could put it in breach of its existing agreement with the NTIA. Finally, ICANN Board member Markus Kummer[28] stated that if ICANN were not to approve any aspect of the proposal, this would hinder consensus and the transition would not be able to move forward.

ICANN has been designated the convenor role by the US government on the basis of its unique position as the current IANA functions contractor and the global coordinator for the DNS. However, it is this unique position itself that creates a conflict of interest: in its role as contractor of the IANA functions, ICANN has an interest in the outcome of the process being conducive to ICANN. In other words, there exists a potential for abuse of the process by ICANN, which may tend to steer the process towards an outcome favourable to itself.

There exists, therefore, a strong rationale for defining the limitations of ICANN’s role as convenor. The community has suggested that ICANN should limit its role to merely facilitating discussions and not extend it to reviewing or commenting on emerging proposals from the process. Additional safeguards need to be put in place to avoid conflicts of interest or the appearance of conflicts of interest. ICANN should further not compile comments on drafts to create a revised draft at any stage of the process. Additionally, ICANN staff must not be allowed to be part of any group or committee that facilitates or coordinates the discussion regarding the IANA transition.

How are the Obama Administration and the US Congress playing this?

Even as the issues of separating ICANN’s policy and administrative roles remained unsettled, in the wake of the Snowden revelations NTIA initiated the long-due transition of IANA contract oversight to a global, private, non-governmental multistakeholder institution on 14 March 2014. This announcement immediately raised questions from Congress on whether the transition decision was dictated by technical considerations or by political motives, and whether the Obama Administration had the authority to commence such a transition unilaterally, without prior open stakeholder consultations. Republican lawmakers have raised concerns about the IANA transition plan,[29] worried that it may allow other countries to capture control.

More recently, the Defending Internet Freedom Act[30] has been re-introduced in the US Congress. The bill seeks to have ICANN adopt the recommendations of the three internet community groups on the transition of power before the US government relinquishes control of the IANA contract. It also seeks to have ownership of the .gov and .mil top-level domains granted to the US government, and to have ICANN submit itself to the US Freedom of Information Act (FOIA), legislation similar to the RTI Act in India, so that its records and other information gain some degree of public access. It has also been asserted by ICANN that neither NTIA nor the US Congress will approve any transition plan that leaves open the possibility of a non-US IANA Functions Operator in the future.

Funding of the transition

The Obama administration is also fighting the Republican-backed Commerce, Justice, Science, and Related Agencies Appropriations Act (H.R. 2578),[31] which seeks to block NTIA funding of the IANA transition. One provision of the bill restricts NTIA from using appropriated dollars for the IANA stewardship transition until the end of the fiscal year, 30 September 2015, which is also the base period of the contract currently in force. This peculiar proviso in the omnibus spending bill implies that Congress believes the IANA transition should be delayed for proper deliberation, and not be rushed as ICANN and NTIA are inclined to do.

The IANA transition cannot take place in violation of US federal law that has defunded it within a stipulated time window. At the Congressional Internet Caucus in January 2015, NTIA head Lawrence Strickling clarified that NTIA will “not use appropriated funds to terminate the IANA functions…” or “to amend the cooperative agreement with Verisign to eliminate NTIA’s role in approving changes to the authoritative root zone file…”. This implicitly establishes that the IANA contract will be extended, and Strickling confirmed that there was no hard deadline for the transition.

DOTCOM Act

The Communications and Technology Subcommittee of the House Energy and Commerce Committee amended the DOTCOM Act,[32] a bill which, in earlier drafts, would have halted the IANA functions transition process for up to a year pending US Congressional approval. In its earlier version the bill represented unilateral governmental interference in the multistakeholder process. The new bill reflects a much deeper understanding of, and confidence in, the significant amount of work that the global multistakeholder community has undertaken in planning both for the transition of IANA functions oversight and for the increased accountability of ICANN. The amended DOTCOM Act would require the NTIA to certify – as part of a proposed GAO report on the transition – that “the required changes to ICANN’s by-laws contained in the final report of ICANN’s Cross Community Working Group on Enhancing ICANN Accountability and the changes to ICANN’s bylaws required by ICANN’s IANA have been implemented.” The bill enjoys immense bipartisan support[33] and is being lauded as a prudent and necessary step for ensuring the success of the IANA transition.


[1] IANA Functions Contract <http://www.ntia.doc.gov/files/ntia/publications/sf_26_pg_1-2-final_award_and_sacs.pdf> accessed 15th June 2015

[2] Daniel Karrenberg, The Internet Domain Name System Explained For Nonexperts <http://www.internetsociety.org/sites/default/files/The%20Internet%20Domain%20Name%20System%20Explained%20for%20Non-Experts%20(ENGLISH).pdf> accessed 15 June 2015

[3] David Post and Danielle Kehl, Controlling Internet Infrastructure The “IANA Transition” And Why It Matters For The Future Of The Internet, Part I (1st edn, Open Technology Institute 2015) <https://static.newamerica.org/attachments/2964-controlling-internet-infrastructure/IANA_Paper_No_1_Final.32d31198a3da4e0d859f989306f6d480.pdf> accessed 10 June 2015.

[4] Iana.org, ‘IANA — Root Files’ (2015) <https://www.iana.org/domains/root/files> accessed 11 June 2015.

[5] ‘NTIA’s Role In Root Zone Management’ (2014). <http://www.ntia.doc.gov/files/ntia/publications/ntias_role_root_zone_management_12162014.pdf> accessed 15 June 2015.

[6] Contract (2011) <http://www.ntia.doc.gov/files/ntia/publications/11102011_solicitation.pdf> accessed 10 June 2015.

[7] Kieren McCarthy, ‘Confidential Information Exposed Over 300 Times In ICANN Security Snafu’ The Register (2015) <http://www.theregister.co.uk/2015/04/30/confidential_information_exposed_over_300_times_in_icann_security_snafu/> accessed 15 June 2015.

[8] NTIA, ‘NTIA Announces Intent To Transition Key Internet Domain Name Functions’ (2014) <http://www.ntia.doc.gov/press-release/2014/ntia-announces-intent-transition-key-internet-domain-name-functions> accessed 15 June 2015.

[9] NTIA, ‘NTIA Announces Intent To Transition Key Internet Domain Name Functions’ (2014) <http://www.ntia.doc.gov/press-release/2014/ntia-announces-intent-transition-key-internet-domain-name-functions> accessed 15 June 2015.

[10] NTIA, ‘NTIA Announces Intent To Transition Key Internet Domain Name Functions’ (2014) <http://www.ntia.doc.gov/press-release/2014/ntia-announces-intent-transition-key-internet-domain-name-functions> accessed 15 June 2015.

[11] David Post and Danielle Kehl, Controlling Internet Infrastructure The “IANA Transition” And Why It Matters For The Future Of The Internet, Part I (1st edn, Open Technology Institute 2015) <https://static.newamerica.org/attachments/2964-controlling-internet-infrastructure/IANA_Paper_No_1_Final.32d31198a3da4e0d859f989306f6d480.pdf> accessed 10 June 2015.

[12] National Telecommunications and Information Administration, ‘Report on the Transition of the Stewardship of the Internet Assigned Numbers Authority (IANA) Functions’ (NTIA 2015) <http://www.ntia.doc.gov/files/ntia/publications/ntia_second_quarterly_iana_report_05.07.15.pdf> accessed 10 July 2015.

[13] Lawrence Strickling, ‘Stakeholder Proposals To Come Together At ICANN Meeting In Argentina’ <http://www.ntia.doc.gov/blog/2015/stakeholder-proposals-come-together-icann-meeting-argentina> accessed 19 June 2015.

[14] Philip Corwin, ‘NTIA Says Cromnibus Bars IANA Transition During Current Contract Term’ <http://www.circleid.com/posts/20150127_ntia_cromnibus_bars_iana_transition_during_current_contract_term/> accessed 10 June 2015.

[15] Sophia Bekele, ‘”No Legal Basis For IANA Transition”: A Post-Mortem Analysis Of Senate Committee Hearing’ <http://www.circleid.com/posts/20150309_no_legal_basis_for_iana_transition_post_mortem_senate_hearing/> accessed 9 June 2015.

[16] Comments On The IANA Transition And ICANN Accountability Just Net Coalition (2015) <http://forum.icann.org/lists/comments-ccwg-accountability-draft-proposal-04may15/pdfnOquQlhsmM.pdf> accessed 12 June 2015.

[17] The Centre for Internet and Society, ‘IANA Transition: Suggestions For Process Design’ (2014) <http://cis-india.org/internet-governance/blog/iana-transition-suggestions-for-process-design> accessed 9 June 2015.

[18] The Centre for Internet and Society, ‘IANA Transition: Suggestions For Process Design’ (2014) <http://cis-india.org/internet-governance/blog/iana-transition-suggestions-for-process-design> accessed 9 June 2015.

[19] Kieren McCarthy, ‘Let It Go, Let It Go: How Global DNS Could Survive in the Frozen Lands Outside US Control’ The Register (2015) <http://www.theregister.co.uk/2015/05/26/iana_icann_latest/> accessed 15 June 2015.

[20] Icann.org, ‘Resources – ICANN’ (2014) <https://www.icann.org/resources/pages/process-next-steps-2014-08-14-en> accessed 13 June 2015.

[21] <https://www.icann.org/en/system/files/correspondence/crocker-chehade-to-soac-et-al-18sep14-en.pdf> accessed 10 June 2015.

[22] Richard Forno, ‘[Infowarrior] – Internet Power Grab: The Duplicity Of ICANN’ (Mail-archive.com, 2015) <https://www.mail-archive.com/infowarrior@attrition.org/msg12578.html> accessed 10 June 2015.

[23] ICANN, ‘Scoping Document’ (2014) <https://www.icann.org/en/system/files/files/iana-transition-scoping-08apr14-en.pdf> accessed 9 June 2015.

[24] Milton Mueller, ‘ICANN: Anything That Doesn’t Give IANA to Me Is Out of Scope’ (Internetgovernance.org, 2014) <http://www.internetgovernance.org/2014/04/16/icann-anything-that-doesnt-give-iana-to-me-is-out-of-scope/> accessed 12 June 2015.

[25] Andrew Sullivan, ‘[Ianaplan] Update On IANA Transition & Negotiations With ICANN’ (Ietf.org, 2015) <http://www.ietf.org/mail-archive/web/ianaplan/current/msg01680.html> accessed 14 June 2015.

[26] DNA Member Breakfast With Fadi Chehadé (2015-02-11) (The Domain Name Association 2015).

[27] Andrew Sullivan, ‘[Ianaplan] Update On IANA Transition & Negotiations With ICANN’ (Ietf.org, 2015) <http://www.ietf.org/mail-archive/web/ianaplan/current/msg01680.html> accessed 14 June 2015.

[28] Mobile.twitter.com, ‘Twitter’ (2015) <https://mobile.twitter.com/arunmsukumar/status/603952197186035712> accessed 12 June 2015.

[29] Alina Selyukh, ‘U.S. Plan To Cede Internet Domain Control On Track: ICANN Head’ Reuters (2015) <http://www.reuters.com/article/2015/06/02/us-usa-internet-icann-idUSKBN0OI2IJ20150602> accessed 15 June 2015.

[30] 114th Congress, ‘H.R.2251 – Defending Internet Freedom Act Of 2015’ (2015).

[31] John Eggerton, ‘House Bill Blocks Internet Naming Oversight Handoff: White House Opposes Legislation’ Broadcasting & Cable (2015) <http://www.broadcastingcable.com/news/washington/house-bill-blocks-internet-naming-oversight-handoff/141393> accessed 9 June 2015.

[32] Communications And Technology Subcommittee Vote On The DOTCOM Act (2015).

[33] Timothy Wilt, ‘DOTCOM Act Breezes Through Committee’ Digital Liberty (2015) <http://www.digitalliberty.net/dotcom-act-breezes-committee-a319> accessed 22 June 2015.

DeitY says 143 URLs have been Blocked in 2015; Procedure for Blocking Content Remains Opaque and in Urgent Need of Transparency Measures

In February 2015, the Centre for Internet and Society (CIS) requested the Department of Electronics and Information Technology (DeitY) under the Right to Information Act, 2005 (RTI Act) to provide information clarifying the procedures for blocking in India. We have received a response from DeitY which may be seen here.

In this post, I shall elaborate on this response from DeitY and highlight some of the accountability and transparency measures that the procedure needs. To stress the urgency of reform, I shall also touch upon two recent developments—the response from Ministry of Communication to questions raised in Parliament on the blocking procedures and the Supreme Court (SC) judgment in Shreya Singhal v. Union of India.

Section 69A and the Blocking Rules

Section 69A of the Information Technology Act, 2000, inserted by the 2008 amendment (S69A hereinafter), grants the central government the power to issue directions for blocking access to any information through any computer resource. In other words, it allows the government to block websites on certain grounds. The Government has notified rules laying down the procedure for blocking access online in the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009 (Rules, 2009 hereinafter). CIS has produced a poster explaining the blocking procedure (download PDF, 2.037MB).

There are three key aspects of the blocking rules that need to be kept under consideration:

Officers and committees handling requests

Designated Officer (DO) – Appointed by the Central Government; an officer not below the rank of Joint Secretary.
Nodal Officer (NO) – Appointed by organisations including Ministries or Departments of the State Governments and Union Territories, and any agency of the Central Government.
Intermediary contact – Appointed by every intermediary to receive and handle blocking directions from the DO.
Committee for Examination of Request (CER) – A committee chaired by the DO with representatives from the Ministry of Law and Justice, the Ministry of Home Affairs, the Ministry of Information and Broadcasting, and the Indian Computer Emergency Response Team (CERT-In). It examines each blocking request, along with a printed sample of the allegedly offending information, and makes recommendations to the DO, including on revoking blocking orders; these are taken into consideration for the final approval of the request by the Secretary, DeitY.
Review Committee (RC) – Constituted under rule 419A of the Indian Telegraph Rules, 1951, the RC comprises the Cabinet Secretary, the Secretary to the Government of India (Legal Affairs) and the Secretary (Department of Telecom). The RC is mandated to meet at least once in two months, to record its findings, and to validate that the directions issued are in compliance with S69A(1).

Provisions outlining the procedure for blocking

Rules 6, 9 and 10 create three distinct blocking procedures, each of which must commence within seven days of the DO receiving the request; a schematic sketch of the three routes follows the descriptions below.

a) Rule 6 lays out the first procedure, under which any person may approach the NO and request blocking; alternatively, the NO may itself raise a blocking request. Once the NO of the approached Ministry or Department of a State Government or Union Territory, or of any agency of the Central Government, is satisfied of the validity of the request, it is forwarded to the DO. Requests not sent through the NO of any organisation must be approved by the Chief Secretary of the State or Union Territory, or by the Advisor to the Administrator of the Union Territory, before being sent to the DO.

Upon receiving the request, the DO must acknowledge receipt within twenty-four hours and place the request, along with a printed copy of the alleged information, before the CER for validation. The DO must also make reasonable efforts to identify the person or intermediary hosting the information and, having identified them, issue a notice asking them to appear and submit their reply and clarifications before the committee at a specified date and time, within forty-eight hours of the receipt of the notice.

Foreign entities hosting the information are also informed. The CER gives its recommendations after the intermediary or person has clarified their position (or even in the absence of any representation) and after examining whether the request falls within the scope outlined under S69A(1). Blocking directions are issued by the Secretary, DeitY, after the DO forwards the request and the CER’s recommendations. If approval is granted, the DO directs the relevant intermediary or person to block the alleged information.

b) Rule 9 outlines the procedure under emergency circumstances: the DO, having established the necessity and expediency of blocking the alleged information, submits recommendations in writing to the Secretary, DeitY. The Secretary, upon being satisfied of the justification for, necessity of, and expediency of blocking the information, may issue blocking directions as an interim measure, and must record the reasons for doing so in writing.

Under such circumstances, the intermediary and the person hosting the information are not given the opportunity of a hearing. Nevertheless, the DO is required to place the request before the CER within forty-eight hours of the issuing of directions for interim blocking. Only upon receiving the final recommendations of the committee can the Secretary pass a final order approving the request. If the request is not approved, the interim order is revoked and the intermediary or identified person is directed to unblock the information for public access.

c) Rule 10 outlines the process when a blocking order is issued by a court in India. The DO, upon receipt of the court order, submits it to the Secretary, DeitY, and initiates action as directed by the court.

Confidentiality clause

Rule 16 mandates confidentiality regarding all requests and the actions taken thereon, which places any requests received by the NO and the DO, the recommendations made by the DO or the CER, and any written reasons for blocking or revoking blocking requests outside the purview of public scrutiny. More detail on the officers and committees that enforce the blocking rules and procedure can be found here.

Response on blocking from the Ministry of Communication and Information Technology

The response to our RTI from the E-Security and Cyber Law Group is timely, given the recent clarification from the Ministry of Communications and Information Technology on a number of questions raised by parliamentarian Shri Avinash Pande in the Rajya Sabha. The questions were raised in reference to emergency blocking orders under the IT Act, the current status of the Central Monitoring System, data privacy law and net neutrality. The Centre for Communication Governance (CCG), National Law University, New Delhi has extracted a set of six questions, and you can read the full article here.

The government’s response, as quoted by CCG, clarifies that under rule 9 the Government has issued directions for the emergency blocking of a total of 216 URLs from 1 January 2014 till date; that a total of 255 URLs were blocked in 2014 and no URLs have been blocked in 2015 (till 31 March 2015) under S69A through the committee constituted under the rules; and that a total of 2091 URLs and 143 URLs were blocked in compliance with the directions of competent courts in India in 2014 and 2015 (till 31 March 2015) respectively. The government also clarified that the CER recommended not to block 19 URLs in the meetings held from 1 January 2014 till date, and that two orders have so far been issued revoking blocks on 251 URLs from 1 January 2014 till date. Besides this, CERT-In received requests for blocking objectionable content from individuals and organisations, which were forwarded to the concerned websites for appropriate action; the response, however, did not specify the number of such requests.

We have prepared a table explaining the information released by the government and highlighting the inconsistencies in its response.

Applicable rule and procedure under the Blocking Rules         2014    2015    Total
Rule 6 – Blocking requests from the NO and others              255     None    255
Rule 9 – Blocking under emergency circumstances                –       –       216
Rule 10 – Blocking orders from courts                          2091    143     2234
Requests from individuals and organisations sent to CERT-In    –       –       Not specified
Recommendations not to block made by the CER                   –       –       19
Number of blocked URLs revoked                                 –       –       251

(The figures for rule 9, the CERT-In requests, the CER recommendations and the revocations were provided only as totals from 1 January 2014 till date, without a yearly break-up.)

In a response to an RTI filed by the Software Freedom Law Centre, DeitY said that 708 URLs were blocked in 2012, 1,349 URLs in 2013, and 2,341 URLs in 2014.

Shreya Singhal v. Union of India

In its recent judgment, the SC of India upheld the constitutionality of S69A, stating that it is a narrowly drawn provision with adequate safeguards. The constitutional challenge on behalf of the People’s Union for Civil Liberties (PUCL) considered the manner in which blocking is done, and the arguments focused on the secrecy of the blocking process.

The rules may indicate a requirement to identify and contact the originator of the information, though as one expert has pointed out, there is no evidence of this in practice. The court stressed the importance of a written order so that writ petitions may be filed under Article 226 of the Constitution. In doing so, the court seems to have assumed that the originator or intermediary is informed, and therefore held that any procedural inconsistencies may be challenged through writ petitions. However, this recourse is rendered ineffective not only by procedural constraints but also by the confidentiality clause. The opaqueness introduced by rule 16 severely reins in the recourse available to the originator and the intermediary. While the court notes that rule 16’s confidentiality requirement was argued to be unconstitutional, it does not state its opinion on this question in the judgment. One expert holds the view that the judgment, by implication, requires that requests cannot be confidential; however, such a reading down of rule 16 is yet to be tested.

Further, as Sunil Abraham has pointed out, “block orders are unevenly implemented by ISPs making it impossible for anyone to independently monitor and reach a conclusion whether an internet resource is inaccessible as a result of a S69A block order or due to a network anomaly.” As no comprehensive list of blocked websites, or of the legal orders through which they are blocked, exists, the public has to rely on media reports and RTI requests to understand the censorship regime in India. CIS has previously analysed leaked block lists and lists received in response to RTI requests, which revealed that block orders are riddled with errors and that entire platforms, not just specific links, have been blocked.

While the state has the power to block content, doing so in secrecy and without judicial scrutiny marks a deficiency that remains in the procedure outlined under the blocking rules. The Court could have read down rule 16, save for a narrow set of exceptions; in not doing so, it perhaps overlooked an opportunity to reform the existing system. The blocking of 32 websites is an example of the opaqueness of the system of blocking orders, where the safeguards assumed by the SC are often not observed: there is no access to the recommendations made by the CER, or to the subsequent revocation of the blocking orders. CIS filed its RTI to try and understand the grounds for blocking and the related procedures, and the response has thrown up issues that need urgent attention.

Response to RTI filed by CIS

Our first question sought clarification on the websites blocked on 30 December 2014. The response from DeitY’s E-Security and Cyber Law Group reveals that the websites had been blocked as “they were being used to post information related to ISIS using the resources provided by these websites”. The response also clarifies that the directions to block were issued on 18 December 2014 and that, as of 9 January 2015, after obtaining undertakings from the website owners stating their compliance with the Government and Indian laws, the sites were unblocked.

It is not clear whether the ATS, Mumbai had been intercepting communication or whether someone reported these websites. If the ATS was indeed intercepting communication, then, as per the rules, the RC should have been informed and its recommendations sought. It is unclear whether this was the case, and the response invokes the confidentiality clause under rule 16 to avoid divulging further details. Based on our reading of the rules, court orders should be accessible to the public; without copies of the requests and complaints received, and without knowledge of which organisation raised them, there can be no appeal or recourse available to the intermediary or even the general public.

We also asked for a list of all requests for the blocking of information received by the DO between January 2013 and January 2015, including copies of all files that were accepted or rejected. We also specifically asked for a list of requests under rule 9. The response from DeitY stated that from January 1, 2015 to March 31, 2015, directions to block 143 URLs had been issued based on court orders. The response completely overlooks our request for information covering the two-year period. It also does not cover the blocking orders under rule 6 and rule 9, nor the requests forwarded to CERT-In, which we know of from the ministry’s response to Parliament. Contrary to the SC’s assumption that the originator of information is contacted, it is also clear from DeitY’s response that only the websites had been contacted; the letter states that the “websites replied only after blocking of objectionable content”.

Further, seeking clarification on the functioning of the CER, we asked for its current composition and for the dates and copies of the minutes of all meetings, including copies of the recommendations made. The response merely quotes rule 7 as the reference for the composition and does not provide any names or other details. From the DeitY website, we gather that Shri B.J. Srinath, Scientist-G/GC, is the appointed Designated Officer, though this needs confirmation. While we are already aware of the structure of the CER, which representatives and appointed public officers are guiding the examination of requests remains unclear. Presently, there are 3 Joint Secretaries appointed under the Ministry of Law and Justice, 19 appointed by the Home Ministry, and 3 appointed under the Ministry of Information and Broadcasting. Further, it is not clear which grade of scientist would be appointed to this committee from CERT-In, as the rules do not specify this. While the government has clarified in its answer to Parliament that the committee recommended not to block 19 URLs in the meetings held between 1 January 2014 and the date of the response, it remains unclear who is taking the decisions to block URLs and to revoke blocks. The response from DeitY specifies that the CER met six times between 2014 and March 2015, but stops short of sharing any further information or copies of files on complaints and recommendations of the CER, citing rule 16.

Finally, answering our question on the composition of the RC, the letter merely cites the provision providing for its composition under rule 419A of the Indian Telegraph Rules, 1951. The response clarifies that the RC has so far met once, on 7 December 2013, under the chairmanship of the Cabinet Secretary, with the Secretary, Department of Legal Affairs and the Secretary, DOT. Our request for minutes of meetings and copies of orders and findings of the RC was denied with the simple statement that “minutes are not available”. Under rule 419A, any direction for the interception of any message or class of messages under sub-section (2) of Section 5 of the Indian Telegraph Act, 1885 issued by the competent authority shall contain reasons for the direction, and a copy of the order shall be forwarded to the concerned RC within seven working days. Given that the RC has met just once since 2013, it is unclear whether the RC is not functioning or whether the interception of messages is being guided through other procedures. Further, we do not yet have details or records of revocation orders or of notices sent to intermediary contacts. This restricts citizens’ right to receive information, and DeitY should work to make these records available to the public.

Given the response to our RTI, the Ministry’s response to Parliament and the SC judgment, we recommend that DeitY take the following steps to ensure a procedure that is just, accountable and consistent with the rule of law.

The reading down or revocation of rule 16 needs urgent consideration, for two reasons:

  1. Under Section 22 of the RTI Act, the provisions of that Act override all conflicting provisions in any other legislation.
  2. In upholding the constitutionality of S69A, the SC cited the requirement that the reasons behind blocking orders be recorded in writing, so that they may be challenged by means of writ petitions filed under Article 226 of the Constitution of India.

If the blocking orders, and the meetings of the CER and RC that consider the reasons behind them, are to remain shrouded in secrecy and unavailable through RTI requests, filing writ petitions challenging these decisions will not be possible, rendering this very important safeguard for the protection of online free speech and expression infructuous. In sum, the need for comprehensive legislative reform of the blocking procedures remains, and the government should act to address the pressing need for transparency and accountability. Not only does opacity curtail the strengths of democracy, it also impedes good governance. We have filed a further RTI seeking a comprehensive account of the blocking procedure and the functioning of the committees from 2009 to 2015, and we shall publish any information we receive.

Reading the Fine Script: Service Providers, Terms and Conditions and Consumer Rights

This year, an increasing number of incidents related to consumer rights and service providers have come to light. This blog sets out the facts of the cases and discusses the main issues at stake, namely, the role and responsibilities of providers of platforms for user-created content with regard to consumer rights.

On 1 July 2014, the Federal Trade Commission (FTC) filed a complaint against T-Mobile USA,[1] accusing the service provider of ‘cramming’ customers’ bills with millions of dollars of unauthorized charges. Recently another service provider, Facebook, received flak from regulators and users worldwide after it published a paper, ‘Experimental evidence of massive-scale emotional contagion through social networks’.[2] The paper described Facebook’s experiment on more than 600,000 users to determine whether manipulating user-generated content would affect the emotions of its users.

In both incidents, terms that should ensure the protection of users’ legal rights were used to obtain consent for actions by the service providers that consumers could not have anticipated when agreeing to the terms and conditions (T&Cs). More precisely, both cases point to the underlying issue of how users are bound by T&Cs and, in a mediated online landscape, highlight the need to pay attention to the regulations that govern users’ online engagement.

I have read and agree to the terms

In his statement, Chief Executive Officer John Legere may have referred to T-Mobile as “the most pro-consumer company in the industry”,[3] but the FTC’s finding that many customers never authorized the charges suggests otherwise. The FTC investigation also found that T-Mobile received 35 to 40 per cent of the amount charged for subscriptions to largely innocuous services that customers had been signed up to without their knowledge or consent. Last month, news broke that just under 700,000 users ‘unknowingly’ participated in the Facebook study, and while the legality and ethics of the experiment are being debated, what is clear is that Facebook violated consumer rights by not giving its users the choice to opt in or out, or even the knowledge that such social or psychological experiments were taking place.

Both incidents boil down to the sensitive question of consent. While binding agreements around the world work on the condition of consent, how do we define it and what are the implications of agreeing to the terms?

Terms of Service: Conditions are subject to change 

A legal necessity, terms of service (TOS), as they are also known, are deeply broken as an acceptance mechanism. The policies of online service providers are often too long, and with no shorter or multilingual versions available, they require substantial effort on the part of the user to go through in detail. A 2008 Carnegie Mellon study estimated that it would take an average user 244 hours every year to go through the policies they agree to online.[4] Based on the study, the Atlantic’s Alexis C. Madrigal derived that reading all of the privacy policies an average Internet user encounters in a year would take 76 working days.[5]
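To see how such estimates are constructed, here is a minimal back-of-envelope sketch. The three inputs below are illustrative assumptions of ours, not parameters taken from either study; changing any of them moves the total substantially, which is part of why published estimates vary.

```python
# Back-of-envelope estimate of annual policy-reading time.
# All three inputs are illustrative assumptions, not figures
# from the studies cited above.
POLICIES_PER_YEAR = 1462    # assumed: distinct policies encountered per year
WORDS_PER_POLICY = 2500     # assumed: average policy length in words
READING_SPEED_WPM = 250     # assumed: adult reading speed, words per minute

hours = POLICIES_PER_YEAR * WORDS_PER_POLICY / READING_SPEED_WPM / 60
print(f"~{hours:.0f} hours per year")            # ~244 hours with these inputs
print(f"~{hours / 8:.0f} eight-hour work days")  # ~30 days with these inputs
```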

The cost in time is multiplied by the fact that terms of service change with technology, making it very hard for a user to keep track of all the changes over time. Moreover, many service providers do not even commit to the obligation of notifying users of changes in the TOS. Microsoft, Skype, Amazon and YouTube are examples of service providers that have not committed to any obligation to notify users of changes, and often there are no mechanisms in place to ensure that service providers keep users updated.

Facebook has said that the recent social experiment was perfectly legal under its TOS;[6] the fairness of the conditions under which users consent, however, remains debatable. Facebook claims a broad copyright licence that goes beyond its operating requirements, including the right to ‘sublicense’. The licence also does not end when users stop using the service, unless the content has been deleted by everyone else.

More importantly, since 2007 Facebook has made major changes to its lengthy TOS about every year.[7] And while many point out that Facebook is transparent, since it solicits feedback before changing its terms, its accountability remains questionable, as the results are not binding unless 30 per cent of active users vote. Facebook can, and does, track users and share their data across websites, and it has no obligation or mechanism to inform users of takedown requests.

Courts in different jurisdictions, under different laws, may come to different conclusions regarding these practices, especially on whether changing terms without notifying users is acceptable. Living in a society more protective of consumer rights is, however, no safeguard, as TOS often include a choice-of-law clause that allows companies to select the jurisdiction whose laws govern the terms.

The recent experiment bypassed the need for informed user consent because of Facebook’s Data Use Policy,[8] which states that once an account has been created, user data can be used for ‘internal operations, including troubleshooting, data analysis, testing, research and service improvement.’ While users worldwide may be outraged, legally Facebook acted within its rights, as the decision fell within the scope of the T&Cs that users had consented to. The incident’s most positive impact might be in taking the questions of Facebook’s responsibilities towards protecting users, including informing them of the use of their data and of changes in data privacy terms, to a worldwide audience.

My right is bigger than yours

Most TOS agreements, written by lawyers to protect the interests of the companies, add to the complexities of privacy in an increasingly user-generated digital world. Often intentionally complicated, these agreements conflict with existing data and user rights across jurisdictions and chip away at rights such as ownership, privacy and even the ability to sue. With conditions that allow the terms to be changed at any time, existing users do not have ownership or control over their data.

In April, the New York Times reported on updates to the legal policy of General Mills (GM), the multibillion-dollar food company.[9] The update broadly asserted that consumers interacting with the company in a variety of ways and venues could no longer sue GM, but must instead submit any complaint to “informal negotiation” or arbitration. Since then, GM has backtracked and clarified that the “online communities” mentioned in the policy referred only to those hosted by the company on its own websites.[10] Clarification aside, as Julia Duncan, Director of Federal Programs at the American Association for Justice, points out, the updated terms were so broad that they were open to wide interpretation, and anything that consumers purchased from the company could have been held to this clause.[11]

Data and whose rights?

Following the Snowden revelations, data privacy has become a contentious issue in the EU, and TOS that allow service providers to unilaterally alter the terms of the contract will face many challenges in the future. In March, Edward Snowden sent his testimony to the European Parliament, calling for greater accountability and highlighting that we live in “a global, interconnected world where, when national laws fail like this, our international laws provide for another level of accountability.”[12] Following the testimony came the European Parliament’s vote in favour of new safeguards for the personal data of EU citizens when it is transferred to non-EU countries.[13] The new regulations seek to give users more control over their personal data, including the right to ask for their data from the companies that control it, and seek to place the burden of proof on the service providers.

The regulation places responsibility on companies, including third parties involved in data collection, transfer and storage, and requires greater transparency regarding requests for information. The amendment reinforces the data subject’s right to seek erasure of data and obliges the concerned parties to communicate any rectification of data. Earlier this year, the European Court of Justice (ECJ) also ruled in favour of the ‘right to be forgotten’.[14] The ECJ ruling recognised that the data subject’s rights override the interests of internet users, with exceptions pertaining to the nature of the information, its sensitivity for the data subject’s private life and the role of the data subject in public life.

In May, the Norwegian Consumer Council filed a complaint with the Norwegian Consumer Ombudsman, “… based on the discrepancies between Norwegian Law and the standard terms and conditions applicable to the Apple iCloud service…”, which it argued were “…in breach of the law regarding control of marketing and standard agreements.”[15] The council based its complaint on the results of a study, published earlier this year, which found that terms were hazy and varied across services including iCloud, Dropbox, Google Drive, Jotta Cloud and Microsoft OneDrive. The study found that Google’s TOS allow users’ content to be used for purposes other than storage, including by partners, and that Google retains rights of usage even after the service is cancelled. None of the providers guarantees that data is safe from loss, while many have the ability to terminate an account without notice. All of the service providers can change their terms of service, but only Google and Microsoft give advance notice.

The study also found service providers lacking with respect to European privacy standards, with many allowing themselves to browse user content. Tellingly, Google was fined in January by the French Data Protection Authority, which stated that under its TOS Google “permits itself to combine all the data it collects about its users across all of its services without any legal basis.”

To blame or not to blame

Facebook is facing a probe by the UK Information Commissioner’s Office to assess whether the experiment conducted in 2012 violated data privacy laws.[16] The FTC has asked the court to order T-Mobile USA to stop mobile cramming, provide refunds and give up any revenues from the practice. The existing mechanisms of online consent do not simplify the task of agreeing to multiple documents and services at once, a complexity that multiplies with the involvement of third parties.

Unsurprisingly, T-Mobile’s Legere termed the FTC lawsuit misdirected and blamed the companies providing the text services for the cramming.[17] He felt those providers should be held accountable, despite allegations that T-Mobile’s billing practices made it difficult for consumers to detect that they were being charged for unauthorized services, and despite the fact that T-Mobile shared in the revenues of those third-party providers. Interestingly, this is the first action against a wireless carrier for cramming; the FTC has previously gone after the smaller companies that provide such services.

The FTC charged T-Mobile USA with deceptive billing practices for putting the crammed charges under totals for ‘use charges’ and ‘premium services’, and for failing to highlight that a portion of the charge went to third parties. Further, the company urged customers to take complaints to the vendors and was not forthcoming with refunds. While T-Mobile may be able to share the blame for now, the incident calls its accountability into question, especially as it has since entered a pact with other US carriers, including Verizon and AT&T, agreeing to stop billing customers for third-party services. Even when practices such as cramming are deemed illegal, harm is not necessarily prevented: users often bear the burden of claiming refunds, litigation comes at a cost, and even after being fined, companies may have succeeded in profiting from their actions.

Conclusion 

Unfair terms and conditions may arise when service providers include terms that are difficult to understand or vague in scope. TOS that prevent users from taking legal action, or that negate the provider’s liability for actions that have a direct bearing on users, are also considered unfair. More importantly, any term that is hidden until after the contract is signed, or that gives the provider the right to change the contract to its own benefit, or that grants the provider rights far wider than the user’s, such as a term that makes it very difficult for users to end the contract, creates an imbalance. These issues are further complicated when the companies controlling and profiting from data are doing so with user-generated data provided free to the platform.

In the knowledge economy, web companies play a decisive role: even though they work for profit, that profit is derived from the knowledge held by individuals and groups. In their function of aggregating human knowledge, they collect the outcomes of individual choices and provide opportunities for feedback on them. Consent becomes a critical part of the equation when individual information is harnessed in this way. In France, for example, consent is one of the four conditions necessary for forming a valid contract (article 1108 of the Code Civil).

The cases highlight the complexities inherent in the existing mechanisms of online consent. The question of consent has many underlying layers, such as reasonable notice and the contractual obligations related to consent explored in a Canadian case that looked at whether TOS clauses were communicated reasonably to the user, a topic for another blog. For now, we must remember that by creating and organising social knowledge that furthers human activity, service providers serve a powerful function. And as the saying goes, with great power comes great responsibility.


[1] ‘FTC Alleges T-Mobile Crammed Bogus Charges onto Customers’ Phone Bills’, published 1 July, 2014. See: http://www.ftc.gov/news-events/press-releases/2014/07/ftc-alleges-t-mobile-crammed-bogus-charges-customers-phone-bills

[2] ‘Experimental evidence of massive-scale emotional contagion through social networks’, Adam D. I. Kramer, Jamie E. Guillory and Jeffrey T. Hancock, published March 25, 2014. See: http://www.pnas.org/content/111/24/8788.full.pdf+html?sid=2610b655-db67-453d-bcb6-da4efeebf534

[3] ‘U.S. sues T-Mobile USA, alleges bogus charges on phone bills’, Reuters, published 1 July 2014. See: http://www.reuters.com/article/2014/07/01/us-tmobile-ftc-idUSKBN0F656E20140701

[4] ‘The Cost of Reading Privacy Policies’, Aleecia M. McDonald and Lorrie Faith Cranor, published in I/S: A Journal of Law and Policy for the Information Society, 2008 Privacy Year in Review issue. See: http://lorrie.cranor.org/pubs/readingPolicyCost-authorDraft.pdf

[5] ‘Reading the Privacy Policies You Encounter in a Year Would Take 76 Work Days’, Alexis C. Madrigal, published in The Atlantic, March 2012. See: http://www.theatlantic.com/technology/archive/2012/03/reading-the-privacy-policies-you-encounter-in-a-year-would-take-76-work-days/253851/

[6] Facebook Legal Terms. See: https://www.facebook.com/legal/terms

[7] ‘Facebook’s Eroding Privacy Policy: A Timeline’, Kurt Opsahl, published by the Electronic Frontier Foundation, April 28, 2010. See: https://www.eff.org/deeplinks/2010/04/facebook-timeline

[8] Facebook Data Use Policy. See: https://www.facebook.com/about/privacy/

[9] ‘When ‘Liking’ a Brand Online Voids the Right to Sue’, Stephanie Strom, published in New York Times on April 16, 2014 See: http://www.nytimes.com/2014/04/17/business/when-liking-a-brand-online-voids-the-right-to-sue.html?ref=business

[10] ‘Explaining our website privacy policy and legal terms’, published April 17, 2014. See: http://www.blog.generalmills.com/2014/04/explaining-our-website-privacy-policy-and-legal-terms/

[11] ‘General Mills Amends New Legal Policies’, Stephanie Strom, published in the New York Times. See: http://www.nytimes.com/2014/04/18/business/general-mills-amends-new-legal-policies.html?_r=0

[12] Edward Snowden Statement to European Parliament published March 7, 2014. See: http://www.europarl.europa.eu/document/activities/cont/201403/20140307ATT80674/20140307ATT80674EN.pdf

[13] ‘Progress on EU data protection reform now irreversible following European Parliament vote’, published 12 March 2014. See: http://europa.eu/rapid/press-release_MEMO-14-186_en.htm

[14] European Court of Justice rules Internet Search Engine Operator responsible for Processing Personal Data Published by Third Parties, Jyoti Panday, published on CIS blog on May 14, 2014. See: http://cis-india.org/internet-governance/blog/ecj-rules-internet-search-engine-operator-responsible-for-processing-personal-data-published-by-third-parties

[15] ‘Complaint regarding Apple iCloud’s terms and conditions’, published on 13 May 2014. See: http://www.forbrukerradet.no/_attachment/1175090/binary/29927

[16] ‘Facebook faces UK probe over emotion study’ See: http://www.bbc.co.uk/news/technology-28102550

[17] ‘Our Reaction to the FTC Lawsuit’. See: http://newsroom.t-mobile.com/news/our-reaction-to-the-ftc-lawsuit.htm

ICANN Supporting the DNS Industry in Underserved Regions

Towards exploring ideas and strategies to help promote the domain name industry in regions that have typically been underserved, ICANN published a call for public comments on May 14, 2014. In particular, ICANN sought comments on existing barriers to registrar accreditation and operation, and suggestions on how these challenges might be mitigated. CIS contributed comments on this report, which will be used to determine the next steps in supporting the domain name industry in underserved regions.

Domain names and the DNS are used in virtually every aspect of the Internet, and without the DNS, the Internet as we know it would not exist. The DNS root zone has economic value, and ICANN’s contract with Verisign stipulates that domain names may be sold only via ICANN-accredited registrars. By the indirect virtue of its control of the root, ICANN has the power and capacity to influence the decisions of entities involved in the management and operation of the DNS, including registrars.
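As a concrete illustration of the first point, the snippet below (ours, not part of the CIS comments) uses Python’s standard library to perform the everyday DNS step of mapping a name to routable addresses; every such registered name ultimately traces back, through a registrar and a registry, to the root zone that ICANN coordinates.

```python
import socket

# Resolve a domain name to the IP addresses the network routes on.
# This lookup is the basic service the DNS provides for virtually
# every connection made on the Internet.
for family, _, _, _, sockaddr in socket.getaddrinfo(
        "icann.org", 80, proto=socket.IPPROTO_TCP):
    print(sockaddr[0])
```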

Too far, too many?

We acknowledge some of the efforts at improvement, in particular with reference to barriers to participation in DNS-related business in regions such as Africa and the Middle East, including the creation of a fellowship programme and the increased availability of translated materials. Despite these efforts, however, the gaps in the distribution of DNS registrars and registries across the world have become an issue of heightened concern.

This is particularly true in light of the distribution of registrars: of the 1,124 ICANN-accredited registrars, North America accounts for 765, leaving 359 for the rest of the world, so the US and Canada together have more than double the number of registrars of all other regions combined. To put things further into perspective, 725 of the total are from the United States alone, while the 54 countries of Africa have just 7.

A barrier to ICANN’s capacity-building initiatives has been a lack of trust, given the general view that ICANN focuses on policies that favour entrenched incumbents from richer countries. Without adequate representation from poorer countries, and from the rest of the world’s Internet population, there is no hope of changing these policies or establishing trust. The entire region of Latin America and the Caribbean, with a population of 542.4 million internet users[1] in 2012, has only 22 registrars spread across 10 countries. Europe, with 518.5 million internet users,[2] has 158 registrars, 94 of which are concentrated in Germany, the UK, France, Spain and the Netherlands. The figures paint the most dismal picture for South Asia, in particular India, where just 16 registrars cater to a population of internet users expected to reach 243 million by June 2014.[3]
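To make the disparity concrete, the following quick computation (an illustrative ratio of ours, not an official ICANN metric) combines only the registrar counts and user figures cited above:

```python
# Registrars per 100 million internet users, using only the figures
# cited in the text (India's user figure is the mid-2014 projection).
regions = {
    "Latin America & Caribbean": (22, 542.4),  # registrars, users in millions
    "Europe": (158, 518.5),
    "India": (16, 243.0),
}

for name, (registrars, users_millions) in regions.items():
    ratio = registrars / (users_millions / 100)
    print(f"{name}: {ratio:.1f} registrars per 100 million users")
```

On these figures, Europe has roughly seven and a half times as many registrars per internet user as Latin America and the Caribbean, and more than four times as many as India.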

While we welcome ICANN’s research and outreach initiatives with regard to the DNS ecosystem in underserved regions, without the crucial first step of clarifying the metrics that define an underserved region, these efforts might not have their intended impact. ICANN cannot hope to identify strategies for bridging the gaps in the DNS ecosystem without going beyond the current ICANN community, which, while nominally ‘multistakeholder’ and open to all, grossly under-represents those parts of the world that are not North America and Western Europe.

The lack of registries in the developing world is another significant issue that needs to be highlighted and addressed. The top five gTLD registries are all in the USA, and it is important that users and the community feel that the fees being collected are fair compensation for the services provided. As registries operate in captive markets allocated by ICANN, we invite ICANN to improve its financial accountability by enabling its stakeholders to assess the finances collected on these registrations.

Multistakeholderism—community and consensus

As an organization that holds itself out as a champion of the bottom-up policy development process, and as a private corporation fulfilling a public interest function, ICANN is in a unique position to establish new norms for managing common resources. In theory, under ICANN’s extensive governance rules, the board is a legislative body that is only supposed to approve the consensus decisions of the community, while the staff wields executive control. In reality, however, both the board and the staff have been criticised for decisions that are not backed by the community.

The formal negotiations between ICANN and the Registrar Stakeholder Group Negotiating Team (Registrar NT) over the new Registrar Accreditation Agreement (RAA) are an example of a process that takes a multistakeholder approach but fails on the values of deliberation and pluralistic decision making.[4] ICANN staff insisted on including a “proposed Revocation (or “blow up”) Clause that would have given them the ability to unilaterally terminate all registrar accreditations”, and another proposal sought to give the ICANN Board the ability to unilaterally amend the RAA (identical to a proposal inserted in the gTLD registry agreement, a clause met with strong opposition not only from the Registry Stakeholder Group but from the broader ICANN community).

Both proposals undermine the multistakeholder approach of the ICANN governance framework, as they seek more authority for the Board rather than for the community, or protections for registrars and, more importantly, registrants. The proposed amendments to the RAA were not issues raised by law enforcement, the GAC or the GNSO but by ICANN staff, and they received considerable pushback from the Registrar NT. The bottom-up policy-making process at ICANN has also been questioned with reference to the ruling on vertical integration between registries and registrars, where the community could not even approach consensus.[5] Concerns have also been raised about the extent of the power granted to special advisory bodies handpicked by the ICANN president, the inadequacy of existing accountability mechanisms for providing a meaningful external check on Board decisions, and the lack of representation of underserved regions on these special bodies. ICANN must evolve its accountability mechanisms to go beyond the opportunity to comment on proposed policy and extend to a role for stakeholders in decision making, which is presently a privilege reserved for staff rather than an outcome of bottom-up consensus.

ICANN was created as a consensus-based organisation that would enable the Internet, its stakeholders and its beneficiaries to move forward in the most streamlined, cohesive manner.[6] Through its management of the DNS, ICANN undertakes public governance duties, and it is crucial that it upholds the democratic values entrenched in the multistakeholder framework. Bottom-up policy making extends beyond passive participation and has an impact on the direction of policy. Presently, while anyone can comment on policy issues, only a few have a say in which comments are integrated into outcomes and action. We would like to stress not just improving and introducing checks and balances within the ICANN ecosystem, but also integrating accountability and transparency practices at all levels of decision making.

Bridging the gap

We welcome the Africa Strategy working group and the public community process initiated by ICANN towards building the domain name industry in Africa, and we are sure there will be lessons applicable to many other underserved regions. In the context of this report, CIS wants to examine the existing criteria of the accreditation process. As ICANN’s role evolves and its revenues grow across the DNS and the larger Internet landscape, it is important, in our view, that ICANN review and evolve its processes for accreditation and assess whether they are as relevant today as they were when launched.

The relationship between ICANN and every accredited registrar is governed by the individual RAA, which sets out the obligations of both parties, and we recommend simplifying and improving these agreements. The RAA language is complex, technical and not relevant to all regions, and presently there are no online forms for the accreditation process. While ICANN’s working language will remain English, the present framing has an American bias; we recommend creating an online application process and simplifying the language, keeping it contextual to the region. It would also help if ICANN invested in introducing some standardization across forms: this would reduce the time and effort it takes to go through complex legal documents and contribute to the growth of DNS business.

The existing accreditation process requires registrar applicants to procure US$70,000 or more in working capital for the ICANN accreditation to become effective. Applicants are also required to obtain, and maintain for the length of the accreditation, commercial general liability insurance with a policy limit of US$500,000 or more. The working capital and insurance requirements are quite high and create a barrier to the entry of underserved regions into the DNS ecosystem.

In the absence of appropriate local mechanisms, registrars resort to using US companies for insurance, creating additional foreign currency pressures for themselves. Moreover, the commercial general liability insurance requirement is not limited to registrars’ functioning as registrars, which is perhaps not the most appropriate design. ICANN should, and must, increase efforts to help registrars find suitable insurance providers and to scale down the working capital requirement. Solutions may lie in exploring variable fee structures, adjusted against profits and derived after considering factors such as the cost of managing domain names and sub-domain names, expansion needs, ICANN obligations and services, the financial capacities of LDCs and financial help pledged to disadvantaged groups or countries.
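For illustration only, the sketch below shows what such a variable fee might look like in code; the base fee, rate and cap are invented numbers, not figures proposed by CIS or ICANN.

```python
# Purely hypothetical sliding accreditation fee: a small base amount plus
# a percentage of the registrar's annual revenue, capped at the current
# flat US$70,000 working-capital requirement. All parameters are invented
# for illustration.
def accreditation_fee(annual_revenue_usd: float,
                      base_fee: float = 2_000.0,
                      rate: float = 0.02,
                      cap: float = 70_000.0) -> float:
    return min(base_fee + rate * annual_revenue_usd, cap)

print(accreditation_fee(50_000))      # small registrar in an LDC -> 3000.0
print(accreditation_fee(10_000_000))  # large incumbent -> 70000.0
```

A structure of this shape would let small registrars in underserved regions enter at a fraction of the current cost while leaving the burden on large incumbents unchanged.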

Presently, the start-up capital required is too high for developing countries, and this is reflected in the number of registries in these areas. Any effort to improve the DNS ecosystem in underserved regions must tackle this by scaling down the capital requirement in proportion to the circumstances of the region.

Another potential issue that ICANN should consider is that users getting sub-domain names from local registrars located in their own country are usually taxed on the transaction, whereas online registration through US registrars spares users from paying taxes in their country.[7] This could create a perverse incentive to register sub-domain names online through US registrars. ICANN should push forward efforts to make registrars sustainable by providing incentives for registering in underserved regions and by helping registrars maintain a critical mass of registrants. The Business Constituency (BC), the voice of commercial Internet users within ICANN, could play a role in this, and ICANN should endeavour either to expand the BC’s function or to create a separate constituency for the representation of underserved regions.


[1] Internet Users and Population stats 2012. http://www.internetworldstats.com/stats2.htm

[2] Internet Users and Population stats 2012. http://www.internetworldstats.com/stats4.htm

[3] Times of India IAMAI Report. http://timesofindia.indiatimes.com/tech/tech-news/India-to-have-243-million-internet-users-by-June-2014-IAMAI/articleshow/29563698.cms

[4] Registrar Stakeholder Group Negotiating Team (Registrar NT) Statement Regarding ICANN RAA Negotiations, March 7, 2013. See: http://www.icannregistrars.org/calendar/announcements.php

[5] Kevin Murphy, Who runs the internet? An ICANN 49 primer. http://domainincite.com/16177-who-runs-the-internet-an-icann-49-primer

[6] Stephen Ryan, ‘Governing Cyberspace: ICANN, a Controversial Internet Standards Body’. See: http://www.fed-soc.org/publications/detail/governing-cyberspace-icann-a-controversial-internet-standards-body

[7] Open Root-Financing LDCs in the WSIS process. See: http://www.open-root.eu/about-open-root/news/financing-ldcs-in-the-wsis-process