How Facebook delayed action on inauthentic accounts benefitting BJP MP Vinod Sonkar

Illustration by Shagnik Chakraborty
06 June, 2022

After waiting for about half a year to depose before the parliamentary standing committee on communications and information technology, Sophie Zhang—a former data scientist at Facebook—has shared redacted documentation of her work on inauthentic accounts within the company, with a consortium of Indian news organisations, including The Caravan. This documentation includes internal communication among Facebook employees in late 2019 and early 2020, which reveals that, several months after Zhang flagged a network of inauthentic accounts boosting engagement for the Bharatiya Janata Party leader Vinod Kumar Sonkar, the organisation dragged its feet on taking action against the network, despite repeated reminders from her. Sonkar is a member of parliament from Kaushambi, in Uttar Pradesh, and heads the parliamentary standing committee on ethics.

The documentation that Zhang has made accessible sheds light on how networks of inauthentic or compromised user accounts promoted fake engagement—likes, shares, comments—on Facebook for political parties such as the BJP, the Congress and the Aam Aadmi Party. It also highlights the social-media giant’s inconsistent processes in combating inauthentic behaviour that helps Indian politicians artificially amplify certain kinds of content or profiles, thereby distorting public opinion. Zhang redacted her investigative methodologies from the documents to avoid revealing them to those with vested interests.

The network working to amplify content posted by Sonkar was one of four such networks that Zhang internally highlighted in December 2019. A little over a fortnight after they were identified, Facebook initiated action against three of these networks: two benefitting leaders from the Congress and one supporting leaders from the BJP. It did not take similar measures against the network benefitting Sonkar once it was discovered that Sonkar’s own account was part of the network, which implied that the MP, or someone with access to his account, was likely involved in the inauthentic activity. The network was fairly small, with about fifty accounts, but Zhang noted that it was significant because of “the direct connection to the sitting MP and for FB’s deeply unusual reaction afterwards in which they refused to act despite having already approved a takedown.”

Zhang said, in an interview with me, that her focus on the company’s refusal to take action was not about Sonkar himself but “about the world that Facebook is building for India,” in which violators could enjoy impunity simply because they were in positions of power. “It is effectively saying: You can run IT cells in India, as long as you do it without trying to hide who you are.” The networks Zhang found could very well be the tip of the iceberg. “This is what I found,” she said. “It is likely that there are other networks—associated with even more influential politicians or larger in scope—that have simply not been found yet.”

Zhang noted in the interview that Indian politics “seems to be undergoing an IT cells arms race, in which each political party feels that they cannot unilaterally disarm or the opponents will gain an advantage.” This situation, she added, was complicated further because “Facebook appears to feel comfortable taking down IT cells linked to other political parties, but is hesitant with regards to the BJP, which creates an unfair playing field.”

India is one of the largest markets for Facebook, which has since been rebranded as Meta. Facebook and WhatsApp have over 300 million and 400 million users, respectively, in the country. In April 2020, Facebook announced that it would spend $5.7 billion—its largest foreign investment—for a nearly ten-percent stake in Jio Platforms, the digital arm of Mukesh Ambani’s Reliance Industries, to advance its India operations. However, the organisation has come under increasing scrutiny from the international media as well as civil society for its failure to curb hate speech, disinformation and communal content on its platforms in the country. This has also included allegations of active interference or inaction by Facebook’s senior employees in cases involving BJP politicians.

In August 2020, the Wall Street Journal reported that Ankhi Das, Facebook India’s policy head at the time, had opposed the enforcement of hate-speech rules against a BJP politician because it would “damage the company’s business prospects in the country.” Das resigned a few months later. A report by Time, also published in August 2020, alleged that Facebook had failed to act against a post with Islamophobic content, by a BJP politician in Assam, for a year, until the magazine asked the company for comment. The article alleged that Shivnath Thukral—then Facebook’s public-policy director for India and South Asia, and now the public-policy director for WhatsApp in India—was present at the meeting in which these content violations, among others, were brought up, and left halfway through. (Facebook claimed that Thukral never intended to stay for the entire meeting.)

According to a report in the Indian Express, the BJP was the biggest advertiser on “social issues, elections and politics” on Facebook in India—spending about Rs 4.61 crore between February 2019 and August 2020. Four other groups linked to the BJP were also among the top ten spenders on advertisements during the same period, netting Facebook a total advertisement revenue of Rs 10.17 crore—64 percent of revenue from the top ten spenders in that category. In the financial year ending in March 2020, Facebook India nearly doubled its advertising revenue to Rs 520 crore.

Since late 2021, Zhang has been waiting to offer her testimony regarding her findings on India to the parliamentary standing committee on communications and information technology. In accordance with parliamentary rules, the committee’s chairperson, Shashi Tharoor, sought permission from the Lok Sabha speaker, Om Birla, for Zhang to depose before the committee. It was reported that Tharoor’s request, in November 2021, was unanimously endorsed by the committee, and Zhang made the redacted documents accessible to the panel as early as 5 November. She withheld Sonkar’s identity and refused media requests to access or publish the documentation she had gathered on India. “I believed it would be irresponsible to do so without giving the guardians of the Indian democracy—the Lok Sabha—the right of first refusal over the issues I uncovered,” Zhang wrote in an email to the consortium.

Birla did not respond to Tharoor’s request. (An Indian Express report, published on 1 June, quoted an unnamed source in the Lok Sabha secretariat who claimed that the committee had not sent a formal request seeking the speaker’s permission.) On 20 April, according to a report in The Hindu, the committee concluded its final meeting on “Safeguarding citizens’ rights and prevention of misuse of social/online news media platforms.” Since her testimony had been effectively stalled, Zhang decided to “take the next step forward, and take this issue directly to the Indian people, via the press.”

*

Zhang was first catapulted into the global limelight in September 2020, when Buzzfeed News published excerpts from a searing internal memo, nearly eight thousand words long, that she wrote soon after being fired from Facebook. In the memo, she described in sordid detail the myriad inauthentic networks she had encountered in dozens of countries and the ways in which they were being used to influence political outcomes. In April 2021, Zhang provided The Guardian with evidence of the inauthentic activity she had uncovered, along with documentation that illustrated Facebook’s unwillingness to mitigate such abuse consistently, unless it posed a threat to the company’s business interests or reputation.

This work was technically outside the scope of Zhang’s job profile. She was hired, in January 2018, as a data scientist, and joined a team that dealt with issues of fake engagement—more specifically, likes, comments or shares—particularly those generated through automated bots. The bulk of such activity on Facebook is not political but commercial or personal: individuals who want to be popular on the platform, business enterprises looking to expand their reach and so on. “A much smaller component of the problem by volume, but disproportionately impactful in reality, consists of inauthentic activity acting in the political sphere as crude information operations—from low-quality unsophisticated literal bots to highly sophisticated networks of paid employers who manually use fake assets en-masse on Monday-Friday work-weeks,” Zhang wrote in her memo.

Zhang soon began encountering such activity and worked in her spare time to weed it out. About six months after she joined, The Guardian reported, she discovered that most of the likes on content posted by the account of the then president of Honduras were fake. They were coming through Facebook pages—profiles for organisations, businesses or public personalities—that had been set up to look like user accounts. In August 2019, she realised that, in Azerbaijan, similarly deceptive Facebook pages posing as user profiles were posting content to intimidate dissidents and independent media outfits, and to bolster the reputation of the country’s president.

Often, Zhang said in the interview, it was difficult to ascertain whether inauthentic behaviour boosting a particular individual or organisation stemmed from its beneficiaries or those associated with them. “The average layperson will necessarily presume that the beneficiary is likely responsible or otherwise aware of the activity,” she said. “But in most cases we do not truly know.” In both Honduras and Azerbaijan, the accountability for the inauthentic activity was clear: it appeared to be linked to the Honduran president and the ruling party of Azerbaijan, respectively. Zhang alerted multiple colleagues with the relevant job roles to assess, verify and act on these networks. Yet she was able to elicit a systemic response only over a year after she discovered the networks, when she pushed for the organisation’s attention through posts on internal message boards and presentations at internal events.

A country such as India, on the other hand, presented an entirely different problem. “India is a very large and important nation. Facebook ascribed an importance to India that wasn’t the case in Azerbaijan and Honduras,” Zhang said, in the interview. “On the flip side, because of that importance, there was a lot more political interference. There were many more concerns expressed against taking action regarding accounts connected to Indian politicians, or benefitting them, than accounts associated with the government of Honduras or Azerbaijan. The importance is essentially a double-edged sword.”

*

On 2 December 2019, Zhang reached out to a colleague from Facebook’s threat intelligence team—a small, highly specialised team of investigators, many of them former intelligence operatives, that focussed on coordinated inauthentic behaviour on Facebook’s platforms. Zhang sought the threat intelligence investigator’s advice on instances of coordinated activity by possibly inauthentic accounts in India, which she had first flagged in November 2019. She created an entry with the requisite details on Facebook’s task management system, as recommended by the investigator. Through the entry, Zhang internally reported what she referred to as “three separate networks of highly co-ordinated users.” The largest among these, with 526 user accounts, was working to benefit leaders from the Congress in Punjab. Another network, of 51 accounts, was also working to boost leaders from the Congress, in an unspecified region. A third network, with 65 accounts, was working to amplify leaders from the BJP, in an unspecified region.

The threat intelligence investigator asked a second investigator—who, Zhang said, closely partnered with the threat intelligence team and was familiar with India—to examine the entities. The second investigator verified Zhang’s findings and noted that the inauthentic accounts in the network benefitting the Congress in Punjab were “a part of a manual fake engagement group,” which appeared to contract such compromised or fake user accounts for inauthentic amplification. The investigator recommended the inclusion of a member of Facebook’s site integrity operations team, so that they could evaluate the networks and suggest possible enforcement mechanisms.

By 10 December, Zhang wrote on the thread, she had found a fourth network working to boost content posted by Sonkar through “positive reactions and re-shares of his post.” She tagged the member of the site integrity operations team, the second investigator and a policy manager from India. A day later, the second investigator confirmed that “this is also a case of manual fake engagement” and recommended that the member of the site integrity operations team consider enforcement options for this network as well.

After the member of the site integrity operations team responded to the thread, the consensus by 19 December seemed to be that the compromised or fake user accounts should be put through an identity checkpoint—an enforcement mechanism through which suspicious users are locked out of further activity on Facebook until they provide proof of their identity, failing which their accounts may be permanently disabled. On the same day, a trust-and-safety manager applied the checkpoint review. Zhang responded to note that the manager had taken action against user accounts from the first three networks she had identified, but “we forgot to highlight the additional network on Kaushambi,” referring to the network boosting Sonkar.

On 20 December, the manager replied, “Just wanted to confirm that we are comfortable acting against these actors.” One of the accounts on the network benefitting Sonkar was flagged on Facebook’s cross-check system—through which Facebook reviews content decisions on high-profile users—as a “Government partner” and “High Priority Indian.” According to a September 2021 report in the Wall Street Journal, the cross-check system, “initially intended as a quality-control measure for actions taken against high-profile accounts, including celebrities, politicians and journalists,” had evolved to “shield millions of VIP users from the company’s normal enforcement process.”

Zhang found that the user account the manager had flagged was Sonkar’s. This meant that it was highly likely that the network was tied to him, or to someone with access to his account. She recommended that the manager initiate action against the inauthentic accounts related to the network while excluding Sonkar’s authentic account. Zhang noted that she preferred this route because “I knew that it would be much easier to convince FB to do that. There would be severe resistance to any action that publicly implicated the MP; I had taken the same approach hence in a number of other regions with the goal of minimizing harm, and did not expect any resistance to simply taking down the inauthentic accounts, especially as the action had already been approved.” Zhang asked the threat intelligence investigator for advice, and also tagged the Indian public-policy manager on her comment, “as this appears to be a case of a sitting Lok Sabha MP tied indirectly to inauthentic activity benefitting said Lok Sabha MP.”

Subsequently, Zhang made the third of her five requests to initiate action against the network. She was informed by the second investigator’s manager that there were resource constraints and that the threat intelligence team was occupied with “high risk situations in India surrounding citizen amendment act.” During this time, Zhang also discovered that the network benefitting Congress leaders in Punjab, against which action had been taken, appeared to have returned with new accounts—that is, another network of different user accounts had popped up and was amplifying the same beneficiaries, using a methodology similar to the previous network’s.

On 6 January, the second investigator verified Zhang’s findings on this resurgent network. The investigator noted that the accounts in the network benefitting Sonkar did not seem to be involved in the creation of deceptive or fake content. Zhang replied that, although there was no violating content, “this cluster is absolutely creating inauthentic content.” She included a sample of the comments posted by the inauthentic accounts in the network to illustrate that they were causing harm “by false amplification, especially as they are directly tied to the sitting MP they benefit himself.” In response to a query, Zhang wrote that this initial difference of opinion was likely the result of confusion caused by ambiguous terminology: the second investigator was possibly using the term “inauthentic content” to refer to misinformation, while Zhang was using it to refer to content posted by accounts that were either fake or compromised. The second investigator eventually concurred with Zhang’s reading and suggested that the user accounts be check-pointed. However, in a comment on 7 January, the threat intelligence investigator noted that, since there did not appear to be any politically motivated coordinated inauthentic behaviour, the second investigator could close her work on the task.

A day later, on 8 January, Zhang said, her direct manager’s boss told her over a video call, as part of her biannual performance review, to discontinue her work on networks relating to civic—or political—targets, including those in India. She did not have any documentation of the call but provided the consortium with redacted screenshots of a conversation with the boss’s manager, in which Zhang had registered her disagreement.

Zhang recounted that the boss’s rationale for pulling her off this work was, broadly, that it was not considered valuable. She recalled that he said something along the lines of “if the harm was sufficiently important, it would result in sufficient negative PR for the company to eventually take note and devote more resources to the problem.” Zhang intentionally chose to construe the directive as applying only to new work, so that she could “defend some continuing actions globally,” although, in practice, India was the only significant case she was able to work on between being given this order and being told, a few months later, that she was being fired.

In mid-January, in the run-up to the Delhi assembly elections, a network of fake or compromised accounts began posting in support of the AAP, and Zhang was watching. Curiously, this network appeared to share some of the same accounts that were part of the resurgent pro-Congress network in Punjab. These accounts, Zhang noted, were commenting on posts by politicians from the Delhi BJP. “The apparent goal appears to be representing themselves as voters who are both pro-AAP/pro Modi,” she wrote, “potentially creating a narrative for national-level BJP voters to cross over and support the AAP in local elections.” It is not clear why some of the user accounts that were working to amplify Congress leaders in Punjab were rooting for the AAP’s re-election in Delhi.

*

Towards the end of January, realising that action on the pro-AAP network would not be forthcoming, Zhang used an internal meeting at Facebook’s US headquarters to galvanise a response. The civic summit—a congregation of around a hundred low-to-mid-level employees from Facebook’s integrity teams, as well as some members of its policy and outreach teams—was held in the run-up to the 2020 elections in the United States. Zhang’s presentation, titled “fighting the last war,” argued that Facebook’s reactive approach to inauthentic political activity across the world resulted in a failure to respond to such abuses of the platform in a timely and adequate manner, particularly in the Global South. She brought up the pro-AAP network and highlighted its overlap with some accounts from the network boosting the Congress in Punjab. She noted that, even though India was a Tier 1 country in the company’s classification of at-risk countries, there had been no special organising within Facebook for the Delhi elections.

Her decision to include this case study in her presentation was strategic. She said that she had been “hoping to convince enough employees that the activity was problematic to result in prioritization of the issue.” Zhang recalled that her presentation earned her a standing ovation and a formal admonishment from her manager’s boss for describing his directive for her to stop working on political targets in a way he claimed was misleading. Nevertheless, she succeeded in getting the Indian case the attention she was hoping for within the company.

By 28 January, the Indian public-policy manager had got in touch with Zhang and asked for her recommendations regarding the networks. “These guys are trying to imitate the BJP and beat them at their online game,” the manager noted, presumably referring to the Congress and the AAP. Zhang suggested identifying an employee with the relevant profile to validate her results, so that no action would be taken simply on her word; check-pointing the users involved in the inauthentic activity; and putting processes in place to monitor and mitigate the potential for new accounts being set up for the same inauthentic activity. “To avoid selective response, I strongly recommend also acting at the same time against the separate inauthentic amplification network identified in Kaushambi,” Zhang wrote. The manager responded with the suggestion that they initiate a conversation on the case with the XFN team, Facebook’s cross-functional team.

About a week later, on 3 February, Zhang followed up with the trust-and-safety manager on the thread she had created in the task management system, since the manager had not yet applied checkpoint reviews to the networks benefitting the leaders from the AAP and the Congress, which they had indicated they would enforce days earlier. She also pushed again for an assessment of the network benefitting Sonkar. Zhang noted in a subsequent comment that there were now over eight hundred accounts in the networks benefitting the Congress in Punjab and the AAP in Delhi—about twice their original number.

The second investigator initiated action against the networks boosting leaders from the Congress and the AAP the following day, but did not include the new accounts. Zhang highlighted that this meant the takedown was incomplete and that there was continuing activity on the networks. On 6 February, two days before polling took place in Delhi, the investigator acted against around two hundred other accounts. A few hours later, Zhang provided an additional list of inauthentic accounts from the networks that had not been acted upon. As a precaution, she also included some apparently inauthentic accounts in the pro-Congress network that had not yet posted content supporting the AAP. On 7 February, the second investigator tagged the public-policy manager and noted that these accounts appeared to be linked to the administrators of the Facebook pages of three Congress leaders in Punjab: Arun Dogra, Sunder Sham Arora and Balwinder Singh Laddi. Laddi and Arora have since joined the BJP.

In this case, the public-policy manager acted with alacrity. Within about eight hours, Thukral had been added to the thread and, as the public-policy manager communicated, had approved action on the apparently inauthentic accounts in the pro-Congress network in Punjab. (In November 2021, when asked about Zhang’s allegations regarding the platform’s inconsistent regulation of inauthentic behaviour, Thukral and Facebook India’s legal head reportedly told the parliamentary committee on communications and information technology that her allegations were unsubstantiated and that an internal investigation was being launched into the matter.)

Dogra and Laddi’s pages are run by their children, while Arora’s page is managed by his personal assistant. I spoke to the politicians as well as these administrators. They denied engaging with any inauthentic networks at that time, and said that to the best of their knowledge, no one with temporary access to the pages had done so either. Sonkar did not respond to questions regarding Zhang’s investigation over email or text messages.

On 11 February, Zhang wrote a post-election summary on a closed group called “Indian Elections XFN Team,” summarising the takedown of the networks benefitting the AAP and Congress, without naming either party. She also underlined the continued inaction against the network boosting Sonkar, although she did not identify him or the BJP.

In August 2020, Zhang was told that she was being fired for poor performance. On 7 August, she added a follow-up summary on the task management system, before removing herself from the thread: “The remaining follow-up here in the task regarded an apparent inauthentic cluster of inauthentic accounts focussed on boosting MP Vinod Sonkar (BJP-Kaushambi), which were run out of a network associated with account of MP Vinod Sonkar ... Given the close ties to a sitting member of the Lok Sabha, we sought policy approval for a take-down, which we did not receive, and the situation was not deemed to be a focus for prioritization.”

*

When The Guardian first approached Facebook for a response on Zhang’s claims regarding Sonkar’s network, the company denied inaction and claimed that the “vast majority” of accounts underwent a checkpoint review, and were even permanently removed, in December 2019 and early 2020. No such takedown is reflected in the thread that Zhang had created in the task management system, or in the responses to her consistent follow-ups. When The Guardian pointed to the documents that indicated no such action had been initiated, Facebook changed its response to claim that a portion of the cluster had been removed in May 2020, while the rest of the accounts on the network were being monitored.

Later, the company told The Guardian that a “specialist team” had reviewed the accounts and that, though a few of them did not meet the criteria for removal, they were inactive. The company did not provide The Guardian with any details on this team, or respond to the publication’s queries on why this action had not been documented in the entry that Zhang had created on the task management system.

The Caravan sent Facebook a detailed questionnaire about the redacted internal documentation that Zhang had shared. Facebook did not respond to the specific allegations, or to requests for details or documentation regarding any action that was taken against the network benefitting Sonkar, as well as any details or documentation that would disprove the link between Sonkar’s personal account and the network. A Meta spokesperson said, “We have not been provided the documents and cannot speak to the specific assertions, but we have stated previously that we fundamentally disagree with Ms. Zhang’s characterization of our priorities and efforts to root out abuse on our platform. We aggressively go after abuse around the world and have specialized teams focused on this work. As a result, we’ve already taken down more than 150 networks of coordinated inauthentic behavior. Around half of them were domestic networks that operated in countries around the world, including those in India…”

According to the spokesperson, “this is a completely wrong and misleading assertion and reflects a limited understanding of how we enforce our policies. The decisions around content escalations are not made unilaterally by any one person, including any one member of the India public policy team; rather, they are inclusive of views from different teams and disciplines within the company. The process comes with robust checks and balances built in to ensure that the policies are implemented as they are intended to be and take into consideration applicable local laws…”

The Caravan had earlier asked Facebook about the specialist team that, according to the company’s claims to The Guardian, had reviewed the accounts from this network in May 2020, and whether this review and enforcement had been recorded in the internal task management system. The spokesperson provided a single-line response: “Every content enforcement decision is taken by an XFN team including subject matter experts.”

Since 2020, when Zhang was fired from Facebook and refused a $64,000 severance package so that she could speak about her experience at the company, she has not taken up employment and is working to promote accountability on social-media platforms. When she reached out to news organisations in India, the redacted documents she provided were meticulously annotated, and at various points included earnest apologies to the people of India for not being able to do more in the instances in which she had intervened. They revealed an extraordinary sense of responsibility and investment, which Zhang attributed to her abiding belief in the robust defence of democratic principles.

The trajectory of Zhang’s life also appeared to have instilled in her a particular aversion to enabling any kind of injustice. Her parents had immigrated to the United States, and her father was a staunch Chinese nationalist. In her interview with me, she recalled that she first learnt the Chinese phrase hanjian—race-traitor—when she was around thirteen years old, after being disparaged by her father as one during a political argument.

As a young transgender woman, Zhang faced brutal violence when her father discovered her identity. She locked herself in a bathroom and removed the window screen in preparation to jump out. On second thought, she realised that it was safer for her to “stay within an abusive household and accept his beatings, than to run away and be homeless on the streets.” (A profile of Zhang in the MIT Technology Review noted that, although her father had denied these allegations, multiple people who had known Zhang since high school corroborated her account.)

The next day, Zhang went to school prepared with manufactured explanations she expected to have to offer teachers about the wounds visible on her arms and face, but none were required—no one asked. Authority, Zhang learnt at a very young age, could not always be trusted to do the right thing. “I think a lot of people, Indians included, have experienced slipping through the cracks…someone who was in a position of power, was supposed to defend them, but didn’t,” Zhang said in the interview, adding that such people “completed the letter of the duty, but failed its spirit.”

For Zhang, the traumatic experiences of her childhood cemented a determination to not let anyone slip through the cracks, if she were ever in a position of power. “I never expected this to actually happen of course. I never expected to actually be in a position of authority, and I certainly didn’t expect it to be as difficult as it was,” she said.

This report forms a part of a series on the Facebook Files.

Correction: An earlier version of this article incorrectly stated that Sunder Sham Arora was a Congress leader. He had joined the BJP a day before the publication of this piece. The article also mistakenly identified Facebook’s cross-functional team as its cross-check team in one instance. The Caravan regrets these errors.