Hearings

Print

Hearing Type: 
Open
Date & Time: 
Wednesday, November 1, 2017 - 9:30am
Location: 
Hart 216

Full Transcript

[Senate Hearing 115-232]
[From the U.S. Government Publishing Office]


                                                        S. Hrg. 115-232

                  OPEN HEARING: SOCIAL MEDIA INFLUENCE
                       IN THE 2016 U.S. ELECTION

=======================================================================

                                HEARING

                               BEFORE THE

                    SELECT COMMITTEE ON INTELLIGENCE

                                 OF THE

                          UNITED STATES SENATE

                     ONE HUNDRED FIFTEENTH CONGRESS

                             FIRST SESSION

                               __________

                      WEDNESDAY, NOVEMBER 1, 2017

                               __________

[GRAPHIC NOT AVAILABLE IN TIFF FORMAT]



      Printed for the use of the Select Committee on Intelligence



        Available via the World Wide Web: http://www.govinfo.gov
                    
                    


                    U.S. GOVERNMENT PUBLISHING OFFICE                    
27-398 PDF                  WASHINGTON : 2018                     
          
----------------------------------------------------------------------------------------
For sale by the Superintendent of Documents, U.S. Government Publishing Office, 
http://bookstore.gpo.gov. For more information, contact the GPO Customer Contact Center, 
U.S. Government Publishing Office. Phone 202-512-1800, or 866-512-1800 (toll-free). 
E-mail, gpo@custhelp.com.                     
                    
                    
                  
                    
                    SELECT COMMITTEE ON INTELLIGENCE

           [Established by S. Res. 400, 94th Cong., 2d Sess.]
           
       

                 RICHARD BURR, North Carolina, Chairman
                MARK R. WARNER, Virginia, Vice Chairman

JAMES E. RISCH, Idaho                DIANNE FEINSTEIN, California
MARCO RUBIO, Florida                 RON WYDEN, Oregon
SUSAN COLLINS, Maine                 MARTIN HEINRICH, New Mexico
ROY BLUNT, Missouri                  ANGUS KING, Maine
JAMES LANKFORD, Oklahoma             JOE MANCHIN III, West Virginia
TOM COTTON, Arkansas                 KAMALA HARRIS, California
JOHN CORNYN, Texas
                 MITCH McCONNELL, Kentucky, Ex Officio
                  CHUCK SCHUMER, New York, Ex Officio
                    JOHN McCAIN, Arizona, Ex Officio
                  JACK REED, Rhode Island, Ex Officio
                              ----------                              
                      Chris Joyner, Staff Director
                 Michael Casey, Minority Staff Director
                   Kelsey Stroud Bailey, Chief Clerk
                                CONTENTS

                              ----------                              

                            NOVEMBER 1, 2017

                           OPENING STATEMENTS

Burr, Hon. Richard, Chairman, a U.S. Senator from North Carolina.     1
Warner, Hon. Mark R., Vice Chairman, a U.S. Senator from Virginia     4

                               WITNESSES

Stretch, Colin, Vice President and General Counsel, Facebook.....     7
    Prepared statement...........................................     9
Edgett, Sean, General Counsel, Twitter...........................    16
    Prepared statement...........................................    18
Walker, Kent, Senior Vice President and General Counsel, Google..    38
Prepared statement...............................................    41

                         SUPPLEMENTAL MATERIAL

Exhibits used by Chairman Burr...................................    47
Exhibits used by Vice Chairman Warner............................    54
Exhibits used by Senator Collins.................................    69
Exhibits used by Senator King....................................    79
Answers to questions for the record from Colin Stretch...........   100
Answers to questions for the record from Sean Edgett.............   130
Answers to questions for the record from Kent Walker.............   173
Submission from Senator Harris...................................   199

 
     OPEN HEARING: SOCIAL MEDIA INFLUENCE IN THE 2016 U.S. ELECTION

                              ----------                              


                      WEDNESDAY, NOVEMBER 1, 2017

                                       U.S. Senate,
                          Select Committee on Intelligence,
                                                    Washington, DC.
    The Committee met, pursuant to notice, at 9:34 a.m. in Room 
SH-216, Hart Senate Office Building, Hon. Richard Burr 
(Chairman of the Committee) presiding.
    Committee Members Present: Senators Burr, Warner, Risch, 
Rubio, Collins, Blunt, Lankford, Cotton, Cornyn, Feinstein, 
Wyden, Heinrich, King, Manchin, Harris, and Reed.

   OPENING STATEMENT OF HON. RICHARD BURR, CHAIRMAN, A U.S. 
                  SENATOR FROM NORTH CAROLINA

    Chairman Burr. I'd like to call the hearing to order. Good 
morning.
    I'd like to welcome our witnesses today. Before I introduce 
them, I want to say, on behalf of the full Committee, that our 
hearts and our prayers go out to the individuals in New York, 
the families and the friends of those who were affected by a 
senseless terror act. To most on this Committee, we've come to 
expect this. We spend countless hours working through the 
threats that exist to this country and around the world, and 
it's sad that we've come to the point where, really, nothing 
can happen that surprises us.
    But it's the responsibility of this Committee to work hand-
in-hand with our intelligence community to help to keep America 
safe by providing the tools that they need to accomplish their 
mission. We will continue to do that.
    As is the case that we're here today, and I welcome our 
witnesses, Colin Stretch, Vice President and General Counsel at 
Facebook; Sean Edgett, General Counsel at Twitter; and Kent 
Walker, Senior Vice President, General Counsel at Google.
    For several months now, the media has been fixated on the 
role that social media platforms played in spreading 
disinformation and discord during the 2016 elections. This is 
an opportunity for each of you to tell your respective stories 
and, if necessary, correct the record. My sense is that not all 
aspects of those stories have been told accurately. I'll note 
for the record that this Committee is now having its 
seventeenth open hearing this year, and the twelfth at which 
we'll be discussing Russia and Russia's activities.
    Today, I'm hopeful we can provide the American people with 
an informed and credible assessment of how foreign actors used 
your platforms to circulate lies and to agitate unrest during 
last year's elections.
    I'm also hopeful you'll share with us what your companies 
are doing to make it harder for foreign actors to use your 
platforms' automated accounts and falsified stories to 
influence sentiment in the United States.
    Very clearly, this kind of national security vulnerability 
represents an unacceptable risk, and your companies have a 
responsibility to reduce that vulnerability. While we're on the 
topic of responsibility, I want to use this forum to push back 
on some narratives that have sprung up around the subject. A 
lot of folks, including many in the media, have tried to reduce 
this entire conversation to one premise; foreign actors 
conducted a surgical, executed covert operation to help elect a 
United States president.
    I'm here to tell you this story does not simplify that 
easily. It is shortsighted and dangerous to selectively focus 
on one piece of information and think that that somehow tells 
the whole story.
    We've heard from the media how a series of, I quote, 
``Russian-linked Facebook ads were specifically aimed at 
Michigan and Wisconsin during the lead-up to last year's 
presidential election,'' unquote, and that, quote, ``some of 
those ads targeted specific demographic groups in two states,'' 
unquote. The narrative here is that ads linked to Russia were 
targeted at pivotal states and directly influenced the 
election's outcome.
    What you haven't heard is that almost five times more ads 
were targeted at the State of Maryland than of Wisconsin, 
Maryland, which is targeted by 262 ads in comparison to 
Wisconsin's 55 ads, and Maryland was not up for grabs. It was a 
State the Democrat candidate carried by 26 percent; or that 35 
of the 55 ads targeted at Wisconsin ran prior to the Wisconsin 
primary, before there was an identified Republican candidate; 
and moreover, that not one of those 55 ads mentioned President 
Donald Trump by name; or that the key election State of 
Pennsylvania had fewer ads targeted at it than Washington, 
D.C., where 87 percent of the electorate voted for Hillary 
Clinton; or that the three most heavily targeted states in 
America--Maryland, Missouri, and New York--were all determined 
by at least 18-point margin, and two of them won by Hillary 
Clinton.
    One point the media has gotten correct is that more of 
these geographically targeted ads ran in 2015 than 2016--again, 
before President Trump was identified as the Republican 
candidate for president. But some of the context surrounding 
the more than $100,000 worth of divisive ads on hot-button 
issues purchased by Russian actors is missing. To add some 
detail here where the media has failed to do it and put the 
$100,000 into a frame of reference, the total ad spend for the 
State of Wisconsin was $1,979, with all but $54 being spent 
before the primary--again, before the emergence of a Republican 
candidate. The ad spend in the State of Michigan was $823; 
Pennsylvania, $300.
    To believe the narrative, you have to accept that these 
sophisticated, well-resourced Russian actors studied our 
process, assessed what states would be critical to the election 
result, then snuck and invested all of $300 to execute their 
plan in Pennsylvania--$300. More than five times as much money 
was spent on advertising in California, a State that hasn't 
voted Republican in presidential elections since 1988.
    Even with the benefit of numbers and what can be calculated 
and measured, this is an incredibly complex story. We can look 
at the amount of money spent, the number of ads purchased, and 
draw conclusions about priorities. We can look at the divisive 
content of the ads and the pages that they directed people 
towards, the number of tweets and retweets, and the manipulated 
search results, and draw inferences about the intent of the 
information operation. What we cannot do, however, is calculate 
the impact that foreign meddling in social media had on this 
election, nor can we assume that it must be the explanation for 
an election outcome that many didn't expect.
    I understand the urge to make this story simple. It's human 
nature to make the complex manageable, find explanations, and 
interpret things in ways that conform to your conclusions. But 
that's biased. Pointing to a State and saying that no ads ran 
there after the election doesn't prove intent, or even motive. 
It just shows that no ads ran there after the election.
    This subject is complicated. There's a whole new vocabulary 
that comes with this stuff. Impressions are different than 
views. Views are different than clicks. But there's one thing 
that I'm certain of and it's this: Given the complexity of what 
we've seen, if anyone tells you they've got this all figured 
out, they're kidding themselves. And we can't afford to kid 
ourselves about what happened last year and continues to happen 
today.
    That complexity, I'll note, is exactly why we depend on you 
for expert insight and reliable information. Sixty percent of 
the U.S. population uses Facebook. A foreign power using that 
platform to influence how Americans see and think about one 
another is as much a public policy issue as it is a national 
security concern. Crafting an elegant policy solution that is 
effective, but not overly burdensome, demands good faith and 
partnership between companies and this Committee.
    Just recently, on the basis of a more complete and 
sophisticated analysis, the original estimate that 10 million 
Americans were exposed to Russian-origin content on Facebook 
was increased to 126 million. That tells me that your companies 
are just beginning to come to grips with the scale and the 
depth of the problem.
    That's encouraging, but know this: we do better when you do 
better. I'd urge you to keep that in mind and to work with us 
proactively to find the right solution to a very constant and 
complaining challenge.
    I'll take a moment here to stress what this hearing is and 
is not about. This isn't about relitigating the 2016 U.S. 
presidential election. This isn't about who won or who lost. 
This is about national security. This is about corporate 
responsibility, and this is about the deliberative and 
multifaceted manipulation of the American people by agents of a 
hostile foreign power.
    I'll say it again: agents of a hostile foreign power 
reached into the United States, using our own social media 
platforms, and conducted an information operation intended to 
divide our society along issues like race, immigration and 
Second Amendment rights. What's even more galling is that, to 
tear us apart, they're using social media platforms Americans 
invented, in connection with the First Amendment freedoms that 
define an open and democratic society.
    While it's shocking to think that foreign actors use the 
social networking and communications mediums that are so 
central to our lives today in an effort to interfere with the 
core of our democracy, what is even more troubling is the 
likelihood that these platforms are still being used today to 
spread lies, provoke conflict and drive Americans apart.
    Your three companies have developed platforms that have 
tremendous reach and, therefore, tremendous influence. That 
reach and influence is enabled by the enormous amount of data 
you collect on your users and their activities. The American 
people now need to understand how Russia used that information 
and what you're doing to protect them. Your actions need to 
catch up to your responsibilities.
    We have a lot to get to this morning. I'm going to stop 
here. Again, I want to thank each of our briefers--our 
witnesses today, and I turn to the vice chairman for any 
comments he might have.

 OPENING STATEMENT OF HON. MARK R. WARNER, A U.S. SENATOR FROM 
                            VIRGINIA

    Vice Chairman Warner. Well, thank you, Mr. Chairman, and 
let me also express our concerned thoughts about the tragedy 
yesterday in New York.
    Let me get right at it. In the age of social media, you 
can't afford to waste too much time or, for that matter, too 
many characters, in getting the point across. So I'll get 
straight to the bottom line: Russian operatives are attempting 
to infiltrate and manipulate American social media to hijack 
the national conversation and to make Americans angry; to set 
us against ourselves, and, at their most basic, to undermine 
our democracy. They did it during the 2016 U.S. presidential 
campaign. They are still doing it now. And not one of us is 
doing enough to stop it. That's why we're here today.
    In many ways, the threat is not new. Russians have been 
conducting information warfare for decades. But what is new is 
the advent of social media tools with the power to magnify 
propaganda and fake news on a scale that was unimaginable back 
in the days of the Berlin Wall.
    Today's tools in many ways seem almost purpose-built for 
Russian disinformation techniques. Russia's playbook is simple, 
but formidable. It works like this.
    First, disinformation agents set up thousands of fake 
accounts, groups and pages across a wide array of platforms. 
These fake accounts populate content on Facebook, Instagram, 
Twitter, YouTube, Reddit, LinkedIn, and many other platforms.
    Each of these fake accounts spends literally months 
developing networks of real people to follow and like their 
content, boosted by tools like paid ads and automated bots. 
Most of the real-life followers have no idea that they are 
caught up in these webs. These networks are later utilized to 
push an array of disinformation, including stolen e-mails, 
state-led propaganda like RT News and Sputnik, fake news, and 
divisive content.
    The goal is pretty simple. It's to get this so-called news 
into the news feeds of many potentially receptive Americans and 
to covertly and subtly push those Americans in the directions 
the Kremlin wants to go.
    As someone who deeply respects the tech industry and who 
was involved in that industry for more than 20 years, it's 
taken me quite a bit of time--and I'm still learning--to truly 
understand the nature of this threat. Even I struggle to keep 
up with the language and the mechanics, the difference between 
bots, trolls, and fake accounts; how they generate likes, 
tweets, and shares; and how all these players and actions are 
combined into an online ecosystem.
    What is clear, however, is that this playbook offers a 
tremendous bang for the disinformation buck. With just a small 
amount of money, adversaries use hackers to steal and weaponize 
data, trolls to craft disinformation, fake accounts to build 
networks, bots to drive traffic, and ads to target new 
audiences. They can force propaganda into the mainstream and 
wreak havoc on our online discourse. And if you look back at 
the results, it's a pretty good return on investment.
    So where do we go from here? I believe it will take all of 
us--you, some of the platform companies, the United States 
government, and the American people--to deal with this new and 
evolving threat.
    The social media and innovative tools each of you have 
developed have changed our world for the better. You've 
transformed the way we do everything from shopping for 
groceries to growing small businesses.
    But Russia's actions are further exposing the dark 
underbelly of the ecosystem you have created, and there is no 
doubt that their successful campaign will be replicated by 
other adversaries--both nation-states and terrorists--that wish 
to do harm to democracies around the globe. This is not a 
unique American phenomenon.
    As such, each of you here today needs to commit more 
resources to identifying bad actors and, when possible, 
preventing them from abusing our social media ecosystem. Thanks 
in part to pressure from this Committee, each company has 
uncovered, I believe, only some of the evidence of the ways 
Russians exploited their platforms during the 2016 election.
    For Facebook, much of the attention has been focused on the 
paid ads that Russian trolls targeted to Americans. However, 
these ads are just the tip of a very large iceberg. The real 
story is the amount of misinformation and divisive content that 
was pushed for free on Russian-backed pages, which was then 
spread widely on news feeds of tens of millions of Americans.
    According to the data Facebook has provided, 120 Russian-
backed pages built a network of over 3.3 million people. From 
these now-suspended pages, 80,000 organic unpaid posts reached 
an estimated 126 million real people--more than a third of the 
population.
    This is an astonishing reach from just one group in St. 
Petersburg. And I doubt that the so-called Internet Research 
Agency in St. Petersburg represents the only Russian trolls out 
there.
    Facebook has more work to do to see how deep this goes, 
including into the reach that we've just found in the last 48 
hours of information you've provided, of IRA-backed Instagram 
posts, which, again, if we just take for an example, 80,000 
posts from IRA-based trolls on Facebook, 120,000 pieces of 
content on Instagram, and we don't even have the data on how 
many--how much that content reached.
    The anonymity provided by Twitter and the speed by which it 
shares news makes it an ideal tool to spread disinformation. 
According to one study, during the 2016 campaign, junk news 
actually outperformed real news in some battleground states, 
leading up to Election Day. Another study found that bots 
generated one out of every five political messages posted on 
Twitter over the entire presidential campaign.
    I'm concerned, sir, that Twitter seems to be vastly 
underestimating the number of fake accounts and bots pushing 
disinformation. Independent researchers, people who've 
testified before this Committee, have estimated that up to 15 
percent of active Twitter accounts, or potentially 45 million-
plus accounts, are fake or automated.
    Despite evidence of significant incursion and outreach from 
researchers, Twitter has to date only uncovered a small piece 
of that activity, although I will acknowledge that in the last 
few days your numbers have gone from about 200 accounts to over 
2,700 accounts. And again, I believe there's more to be done.
    Google search algorithms continue to have problems in 
surfacing fake news or propaganda. Though we can't necessarily 
attribute to Russian efforts, false stories and unsubstantiated 
rumors were elevated on Google Search during the recent mass 
shootings in Las Vegas.
    Meanwhile, YouTube has become RT's go-to platform. Google 
has now uncovered 1,100 videos associated with this Russian 
campaign. Much more of your content was likely spread through 
other platforms.
    But it's not just the platforms that need to do more. The 
United States government has thus far proven incapable of 
meeting this 21st-century challenge. Unfortunately, I believe 
this effort is suffering in part because of lack of leadership 
at the top. We have a President who remains unwilling to 
acknowledge the threat that Russia poses to our democracy. 
President Trump should stop actively delegitimizing American 
journalism and acknowledge and address this very real threat 
posed by Russian propaganda.
    I believe that Congress, too, must do more. We need to 
recognize that current law was not built to address these 
threats. I partnered with Senators Klobuchar and McCain on what 
I believe is the most light-touch legislative approach, which I 
hope all my colleagues on this panel will review. The Honest 
Ads Act is a national security bill intended to protect our 
elections from the foreign interference we all want to avoid.
    Finally, but perhaps most importantly, the American people 
also need to be aware of what is happening to our news feeds. 
We all need to take a more discerning approach to what we are 
reading and sharing and who we're connecting with online. We 
need to recognize the person at the other end of that Facebook 
or Twitter argument may not be a real person at all.
    The fact is that this Russian weapon has already proved its 
success and cost-effectiveness. We can be assured that other 
adversaries, including foreign intelligence operatives and 
potentially terrorist organizations, have read this playbook 
and are already taking action. It's why we, collectively, must 
act.
    To our witnesses today, I hope you will detail what we saw 
in the last election and, most importantly, tell us what steps 
you will undertake for us to get ready for the next one. We 
welcome your participation and encourage your commitment to 
addressing this shared responsibility.
    Thank you, Mr. Chairman.
    Chairman Burr. Thank, Senator Warner.
    I'd like to notify Members we will have seven-minute rounds 
today by seniority.
    Gentlemen, if I could ask you to please stand and raise 
your right hand. Do you solemnly swear to tell the truth, the 
whole truth, and nothing but the truth?
    Mr. Stretch. Yes.
    Mr. Walker. I do.
    Mr. Edgett. Yes.
    Chairman Burr. Please be seated.
    Mr. Stretch, we're going to recognize you, then Mr. Edgett, 
then Mr. Walker. Mr. Stretch, the floor is yours.

STATEMENT OF COLIN STRETCH, VICE PRESIDENT AND GENERAL COUNSEL, 
                            FACEBOOK

    Mr. Stretch. Chairman Burr, Vice Chairman Warner and 
distinguished Members of the Committee, thank you for this 
opportunity to appear before you today. My name is Colin 
Stretch and since July 2013 I've served as the General Counsel 
of Facebook. We appreciate this Committee's hard work to 
investigate Russian interference in the 2016 election.
    At Facebook, our mission is to create technology that gives 
people the power to build community and bring the world closer 
together. We are proud that each of you uses Facebook to 
connect with your constituents, and we understand that the 
people you represent expect authentic experiences when they 
come to our platform to share and connect.
    We also believe that we have an important role to play in 
the democratic process and a responsibility to protect it on 
our platform. That's why we take what's happened on Facebook so 
seriously.
    The foreign interference we saw during the 2016 election is 
reprehensible. That foreign actors hiding behind fake accounts 
abused our platform and other internet services to try to sow 
division and discord and to try to undermine our election 
process is directly contrary to our values and everything we 
stand for. Our goal at Facebook is to bring people closer 
together. These foreign actors sought to drive people apart.
    In our investigation, which continues to this day, we have 
found that these actors used fake accounts to place ads on 
Facebook and Instagram that reached millions of Americans over 
a two-year period, and that those ads were used to promote 
pages which in turn posted more content. People shared these 
posts, spreading them still further.
    Many of these ads and posts are inflammatory. Some are 
downright offensive. We know that much of this content is 
particularly hurtful to members of the Facebook community that 
engaged with this content believing it was authentic. People 
should believe content on Facebook is authentic, and should not 
have to worry that they are being exploited in a cynical effort 
to prey on painful fault lines in our society in order to 
inflame discourse in this country.
    In aggregate, the ads and posts we are here today to 
discuss were a very small fraction of the overall content on 
Facebook, but any amount is too much. All of these accounts and 
pages violated our policies, and we removed them.
    Going forward, we are making significant investments. We're 
hiring more ad reviewers, doubling or more our security 
engineering efforts, putting in place tighter ad content 
restrictions, launching new tools to improve ad transparency, 
and requiring documentation from political ad buyers.
    We're building artificial intelligence to help locate more 
banned content and bad actors. We're working more closely with 
industry to share information on how to identify and prevent 
threats, so that we can all respond faster and more 
effectively. And we're expanding our efforts to work more 
closely with law enforcement.
    We know bad actors aren't going to stop their efforts. We 
know we'll all have to keep learning and improving to stay 
ahead of them. We also know we can't do this alone. That's why 
I want to thank you for this investigation. We look forward to 
the conclusions you will ultimately share with the American 
public. And I look forward to your questions.
    [The prepared statement of Mr. Stretch follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Chairman Burr. Mr. Edgett, the floor is yours.

       STATEMENT OF SEAN EDGETT, GENERAL COUNSEL, TWITTER

    Mr. Edgett. Chairman Burr, Vice Chairman Warner and Members 
of this Committee, Twitter understands the importance of the 
Committee's inquiry into Russia's interference in the 2016 
election, and we appreciate the opportunity to appear here 
today. The events underlying this hearing have been deeply 
concerning to our company and the broader Twitter community. We 
are committed to providing a service that fosters and 
facilitates free and open democratic debate and that promotes 
positive change in the world.
    We are troubled by reports that the power of Twitter was 
misused by a foreign actor for the purpose of influencing the 
U.S. presidential election and undermining public faith in the 
democratic process. The abuse of our platform to attempt state-
sponsored manipulation of elections is a new challenge for us 
and one we are determined to meet. Today we intend to show the 
Committee how serious we are about addressing this new threat 
by addressing the work we are doing to understand what happened 
and to ensure that it does not happen again.
    At the time of the 2016 election, we observed and acted on 
instances of automated and malicious activity. As we learned 
more about the scope of the broader problem, we resolved to 
strengthen our systems, going forward. Elections continue all 
the time, so our first priority was to do all we could to block 
and remove malicious activity from interfering with our users' 
experience. We created dedicated teams within Twitter to 
enhance the quality of information our users see and to block 
malicious activity whenever and wherever we find it.
    Those teams continue to work every day to ensure Twitter 
remains a safe, open, transparent and positive platform. We 
have also launched a retrospective review to find Russian 
efforts to influence the 2016 election through automation, 
coordinated activity, and advertising.
    While that review is still underway, we have made the 
decision to review and share what we know today, in the 
interest of transparency and out of appreciation for the 
urgency of this matter. We do so recognizing that our findings 
may be supplemented as we continue to work with the Committee 
staff and other companies, discover more facts, and gain a 
greater understanding of these events.
    My written testimony details the methodology and current 
findings of our retrospective review in detail. We studied 
tweets from the period September 1st to November 15th, 2016. 
During that time, we did find automated and coordinated 
activity of interest. We determined that the number of accounts 
we could link to Russia and that were tweeting election-related 
content was comparatively small: about one one-hundredth of a 
percent of total Twitter accounts at the time we studied.
    One-third of one percent of the election-related tweets 
people saw came from Russian-linked automated accounts. We did, 
however, observe instances where Russian-linked activity was 
more pronounced, and have uncovered more accounts linked to the 
Russian-based Internet Research Agency as a result of our 
review.
    We have also determined that advertising by Russia Today in 
seven small accounts was related to the election and violated 
either the policies in effect at the time or that have since 
been implemented. We have banned all of those users as 
advertisers, and we will donate that revenue to academic 
research into the use of Twitter during elections and for civic 
engagement.
    We are making meaningful improvements based on our 
findings. Last week, we announced industry-leading changes to 
our advertising policies that will help protect our platform 
from unwanted content.
    We are also enhancing our safety systems, sharpening our 
tools for stopping malicious activity, and increasing 
transparency to promote public understanding of all of these 
areas. Our work on these challenges will continue for as long 
as malicious actors seek to abuse our system and will need to 
evolve to stay ahead of new tactics.
    We have heard concerns about Russia actors' use of Twitter 
to disrupt the 2016 election and about our commitment to 
addressing that issue. Twitter believes that any activity of 
that kind, regardless of magnitude, is unacceptable, and we 
agree that we must do better to prevent it. We hope that our 
appearance today and the description of the work we have 
undertaken demonstrates our commitment to working with you, our 
industry partners, and other stakeholders to ensure that the 
experience of 2016 never happens again.
    Cooperation to combat this challenge is essential. We 
cannot defeat this evolving, shared threat alone. As with most 
technology-based threats, the best approach is to combine 
information and ideas to increase our collective knowledge. 
Working with the broader community, we will continue to test, 
to learn, to share, and to improve so that our product remains 
effective and safe.
    I look forward to answering your questions.
    [The prepared statement of Mr. Edgett follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Chairman Burr. Thank you, Mr. Edgett.
    Mr. Walker, the floor is yours.

  STATEMENT OF KENT WALKER, SENIOR VICE PRESIDENT AND GENERAL 
                        COUNSEL, GOOGLE

    Mr. Walker. Thank you very much, Chairman Burr, Vice 
Chairman Warner, Members of the Committee, for the opportunity 
to speak with you today. My name is Kent Walker. I'm Senior 
Vice President and General Counsel at Google. I oversee our 
legal, our policy, our trust and safety, and our Google.org 
teams. I've worked at the intersection of technology, security, 
and the law for over 25 years, starting my career as an 
assistant U.S. attorney for the U.S. Department of Justice 
focused on technology crimes.
    Let me start my conversation with you today by joining your 
earlier comments, acknowledging the victims and families of the 
awful attack in New York yesterday. As a New York employer, we 
know how strong and tough New Yorkers are and we look forward 
to do anything we can to help.
    Turning to the issues before the Committee today, Google 
believes that we have a responsibility to prevent the misuse of 
our platforms, and we take that very seriously. Google was 
founded with the mission of organizing the world's information 
and making it universally accessible and useful. The abuse of 
the tools and platforms we build is antithetical to that 
mission.
    Google is deeply concerned about attempts to undermine the 
democratic elections. We are committed to working with 
Congress, law enforcement, others in our industry, and the NGO 
community to strengthen protections around elections, to ensure 
the security of users, and to help combat disinformation. We 
recognize the importance of this Committee's mandate, and we 
appreciate the opportunity to share information and talk about 
solutions.
    Of course, disinformation and propaganda campaigns aren't 
new and have involved many types of media and publications over 
the years. And for many years, we've seen attempts to interfere 
with our online platforms. We take these threats very 
seriously. We've built industry-leading security systems, and 
we've put those tools into our consumer products as well.
    Back in 2007, we launched the first version of our Safe 
Browsing tool, which helps protect users from phishing, 
malware, and other attacks. Today, Safe Browsing is used on 
more than 3 billion devices worldwide.
    If we suspect that users are subject to government-
sponsored attacks, we warn them about that. And last month, we 
launched our Advanced Protection Program, which helps protect 
those at greatest risk of attack, like journalists, business 
leaders, and politicians.
    We face motivated and resourceful attackers, and we are 
continually evolving our tools to stay ahead of ever-changing 
threats. Our tools don't just protect our physical and network 
security, but they also detect and prevent attempts to 
manipulate our systems. On Google News, for example, we use 
fact-check labels to help users spot fake news. For Google 
search, we have updated our quality guidelines and evaluations 
to help surface more authoritative content from the web. We've 
updated our advertising guidelines as well to prohibit ads on 
sites that misrepresent themselves.
    And on YouTube, we employ a sophisticated spam and security 
breach detection system, designed to detect anomalous behavior 
and catch people trying to inflate view counts of videos or 
numbers of subscribers. And as threats evolve, we will 
continually adapt in order to understand and prevent new 
attempts to misuse our platforms.
    With respect to the Committee's work on the 2016 election, 
we've looked across our products to understand whether 
government-backed entities were using our products to 
disseminate information in order to interfere with the U.S. 
election. While we did find some deceptive activities on our 
platform associated with suspected government-backed accounts, 
that activity appears to have been relatively limited. Of 
course, any activity like this is more than we would like to 
see. We've provided the relevant information to the Committee, 
have issued a public summary of the results of our review, and 
we will continue to cooperate with the Committee's 
investigation.
    Going forward, we will continue to expand our use of 
cutting-edge technology to protect our users and will continue 
working with governments to ensure that our platforms aren't 
abused. We will also be making political advertising more 
transparent, easier for users to understand, and even more 
secure. In 2018, we'll release a transparency report showing 
data about who is buying election ads on our platform and how 
much money is being spent. We'll pair that transparency report 
with a database, available for public research, of election and 
ad content across our ads products.
    We're also going to make it easier for users to understand 
who bought the election ads they see on our networks. Going 
forward, users will be able to easily find the name of any 
advertiser running an election ad on Search, YouTube, or the 
Google Display Network through an icon on the ad. We'll 
continue enhancing our existing safeguards to ensure that we 
permit only U.S. nationals to buy U.S. election ads.
    We already tightly restrict which advertisers can serve ads 
to audiences based on political leanings. Moving forward, we'll 
go further by verifying the identity of anyone who wants to run 
an election ad or use our political interest-based tools, and 
confirming that that person is permitted to run that ad.
    We certainly can't do this alone. We'll continue to work 
with other companies to better protect the collective digital 
ecosystem. And even as we take our own steps, we remain open to 
working on legislation that promotes electoral transparency.
    Moreover, our commitment to addressing these issues extends 
beyond our services. We've offered in-person briefings and 
introduced a suite of digital tools designed to help election 
websites and political campaigns protect themselves from 
phishing, unauthorized account access, and other digital 
attacks. We're also increasing our long-standing support for 
the bipartisan Defending Digital Democracy Project.
    Let me conclude by recognizing the importance of the work 
of this Committee. Our users, advertisers, and creators must be 
able to trust in their security and safety. We share the goal 
of identifying bad actors who attempted to interfere with our 
systems and abuse the electoral process.
    We look forward to continued cooperation both with the 
Members of this Committee and with our fellow companies to 
provide access to tools that help citizens express themselves, 
while avoiding abuses that undercut the integrity of elections.
    Thank you again for the opportunity to tell you about our 
ongoing efforts. We look forward to our continuing work with 
Congress on these important issues, and we are happy to answer 
any questions you might have.
    [The prepared statement of Mr. Walker follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Chairman Burr. Mr. Walker, thank you for your testimony.
    The Chair would recognize himself and share with Members 
that I'm going to talk about one specific ad that--it's not 
going to count to my seven minutes, and the Vice Chairman is 
going to do the same at the beginning of his, to sort of set 
the stage for much of what we'll talk about today.
    As an example, I'd like to highlight one specific case with 
real-world implications involving two different Facebook 
groups, both of which are associated with the Russian Internet 
Research Agency.
    You'll see the first board that is up.
    [The material referred to follows:]
 [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    The first group's called ``The Heart of Texas,'' with over 
250,000 followers. This account promoted pro-Texas causes and 
included posts many would characterize as anti-immigration or 
anti-Muslim. The tagline for this group, as referenced in the 
top left-hand corner of the first chart, is ``Texas: homeland 
of guns, barbecue, and your heart,'' with the words ``time to 
secede'' emboldened on the Texas flag.
    Turning to the second group, which is in the bottom right-
hand, it's called ``The United Muslims of America,'' with over 
328,000 followers. This account claimed to be pro-Islamic 
themes. The tagline for this group, as referenced in the 
bottom-right corner of the first chart, is ``I'm a Muslim and 
I'm proud.''
    So, if I could have the second board up. The Heart of Texas 
group created a public event on Facebook, to occur at noon, May 
21st, 2016, at the Islamic Center in Houston, Texas, to stop, 
quote, ``to stop the Islamization of Texas,'' unquote. The same 
group then placed an advertisement on Facebook to promote their 
event, with over 12,000 people viewed.
    The United Muslims of America subsequently created an event 
on Facebook to occur at noon, May 21st, 2016, at the Islamic 
Center in Houston, Texas, to, I quote, ``save Islamic 
knowledge''--same time, same place as the Heart of Texas event. 
The group then placed an advertisement, targeting people in 
Houston, Texas, area to promote their event to support the 
Islamic Center. More than 2,700 people viewed this ad.
    If I could have the third board.
    On May 21st, 2016, local news captured the events as they 
unfolded, reporting on the protest stage by the Heart of Texas 
group and resulting counter-protest. The pictures you see on 
the third board are from the streets in front of the Islamic 
Center in Houston, Texas.
    What neither side could have known is that Russia trolls 
were encouraging both sides to battle in the streets and create 
division between real Americans. Ironically, one person who 
attended stated, ``The Heart of Texas promoted this event, but 
we didn't see one of them.'' We now know why. It's hard to 
attend an event in Houston, Texas, when you're trolling from a 
site in St. Petersburg, Russia. Establishing these two 
competing groups, paying for the ads, and causing this 
disruptive event in Houston cost Russia about $200.
    Mr. Stretch, you commented yesterday that your company's 
goal is bringing people together. In this case, people were 
bought together to foment conflict, and Facebook enabled that 
event to happen. I would say that Facebook has failed their 
goal. From a computer in St. Petersburg, Russia, these 
operators can create and promote events anywhere in the United 
States and attempt to tear apart our society. I'm certain that 
our adversaries are learning from the Russian activities and 
even watching us today. Simply put, you must do better to 
protect the American people and, frankly, all of your users 
from this kind of manipulation.
    My time can start now. I have one simple question, yes or 
no from each of you. I'll start with Mr. Stretch and work my 
way to your left.
    The Federal Election Campaign Act prohibits any foreign 
national from spending funds in connection with any Federal, 
State, or local elections in the United States. Doesn't this 
law prohibit your publication of this content?
    Mr. Stretch.
    Mr. Stretch. Prohibit publication of the content we've 
seen?
    Chairman Burr. Does FEC law apply to Facebook?
    Mr. Stretch. Certainly, FEC law, yes, applies to----
    Chairman Burr. Prohibits foreign dollars influencing an 
election?
    Mr. Stretch. It prohibits foreign actors from using really 
any medium, including Facebook, to influence a foreign--a U.S. 
election.
    Chairman Burr. So FEC law applies to Facebook?
    Mr. Stretch. Yes, it does.
    Chairman Burr. Mr. Edgett.
    Mr. Edgett. It applies to Twitter as well.
    Chairman Burr. It applies Twitter.
    Mr. Walker.
    Mr. Walker. Yes, sir.
    Chairman Burr. Great.
    The prevalence of social media use among military members, 
who spend so much time outside the country, deployed away from 
friends, away from family, seems a likely target for foreign 
intelligence agencies who want to collect details on U.S. force 
movements, deployments, and other sensitive insight. Do you 
monitor your platforms for indications that your users in the 
U.S. military are targeted in any way?
    Mr. Stretch.
    Mr. Stretch. Senator, yes, and I would say that that sort 
of--that sort of security work really falls into the 
traditional cybersecurity work that we've long been focused on. 
We've had a threat intelligence team for years now that has 
been focused on tracking foreign actors, and it's exactly that 
sort of threat that we believe has historically been an area of 
focus for our adversaries, and likewise an area of focus for us 
on the defensive side.
    Chairman Burr. Mr. Edgett.
    Mr. Edgett. Similar to Mr. Stretch, we've been focused on 
that type of threat for years. We're also focused on education 
on the other side and helping law enforcement and military 
personnel understand how to use Twitter and both its benefits 
and its risks.
    Chairman Burr. Mr. Walker.
    Mr. Walker. We've been looking at cyber espionage for some 
years, and so this is all in focus. Because we're not a social 
network, we may not have as much visibility as to whether 
individual users of our service are veterans or not, but that 
would certainly be an area of concern.
    Chairman Burr. These questions are for Facebook, Mr. 
Stretch. In a blog published September 6th, 2017, Alex Stamos, 
Facebook's Chief Security Officer, wrote that the company had 
discovered about 3,000 political ads that were paid for through 
470 fake accounts and pages that likely operated out of Russia. 
Facebook shut down these accounts on the grounds that they were 
inauthentic. Had these accounts not violated Facebook's 
prohibition against fake accounts, would they have been shut 
down?
    Mr. Stretch. Senator, many of them would have, because many 
of them violated other policies related to the type of content 
that's permitted on the platform. The authenticity issue is the 
key.
    Referring to the content you surfaced earlier, it pains us 
as a company, it pains me personally, to see that we were--that 
our platform was abused in this way. People in this country 
care deeply in--about issues of public concern, and it's one of 
the strengths of our country that people are so willing to 
speak freely about them. The fact that foreign actors were able 
to use our platform to exploit that openness is a deeply 
painful lesson for us, and one we're focused on learning from 
going forward.
    Chairman Burr. Does it trouble you that it took this 
Committee to get you to look at the authentic nature of the 
users and the content?
    Mr. Stretch. Senator, we are certainly troubled--I'd say 
more than troubled--by the evidence of abuse of our platform 
during 2016, and we're certainly grateful for the Committee's 
investigation and the attention you're bringing to this issue. 
We think it's very important.
    We do believe that it's a larger issue than any one 
company, and we believe that, going forward, there are 
opportunities, not just for us to do better, but for us to work 
together to make sure we're all addressing this threat 
appropriately.
    Chairman Burr. What characteristics would indicate that an 
account or a page is likely operated out of Russia?
    Mr. Stretch. There are a number of characteristics that can 
signal potential location. The most obvious one that is 
typically the most reliable is location information that's 
transmitted by the user's browser when they access Facebook. 
It's also the most easily manipulable.
    There are many other signals that similarly will suggest 
location, but, because of the way the internet is architected, 
can also be faked. Our job is to look not just for the signals 
that are in plain sight, but understand how they can be 
manipulated, and look for patterns of activity that reveal 
efforts to abuse our platform that are shrouded, both 
geographically and in other ways.
    Chairman Burr. Mr. Edgett, your vice president at Twitter 
stated that Twitter's expanding its team and resources and 
building new tools and processes to combat automated Twitter 
accounts, or bots. What is Twitter's process for identifying a 
bot?
    Mr. Edgett. We have a lot of data behind sort of the things 
you see on Twitter that looks at the activity of an account--
and remember, there are hundreds of millions of accounts--the 
activity of an account as it relates to other accounts. So as 
you or I, Senator, tweet, our activity looks pretty normal. As 
an automated account tweets thousands of times an hour, or logs 
in thousands of times a day, that looks pretty suspicious.
    So our technology is looking for that anomaly that 
differentiates sort of normal accounts from automated accounts. 
But spammers and bad actors are getting better at making 
themselves look more real.
    Chairman Burr. So what percentage of accounts on Twitter 
are actually bots and not real people?
    Mr. Edgett. So we do a monthly audit of this and 
investigation, and determined that for years less than 5 
percent of our accounts are false accounts or spam.
    Chairman Burr. What happens to accounts on Twitter that are 
suspended by Twitter? Is there an indefinite status?
    Mr. Edgett. Once we suspend an account, they're--especially 
an automated account--they're typically permanently banned from 
the platform. And we also do work to link those accounts with 
new accounts that may pop up. So the more we investigate and 
look into this and build the web of information around the 
signals we're seeing from these accounts, the better we get at 
linking those accounts and stopping them before they get on the 
platform.
    Chairman Burr. My time has expired, but I'm going to ask 
you to submit in writing for the record Twitter's assessment of 
why independent assessments of the number of bots on Twitter 
constantly, consistently, are higher than the 5 percent that 
you've stated today, if you would provide that for the record.
    Mr. Edgett. Happy to provide that for the record and 
address it today.
    Chairman Burr. Thank you.
    Vice Chairman.
    Vice Chairman Warner. Thank you, Mr. Chairman.
    I also want to demonstrate, but I'd also--as we're getting 
ready--we have had testimony before this Committee from a 
representative of NATO that fake and bot accounts on Twitter 
are more in the 12 percent to 15 percent account. A vast number 
of research studies--you know, 320 million active Twitter 
accounts--even if you assume 10 percent, you're still talking 
30-plus million potential accounts that could be misused and 
abused.
    If we could put up the chart here, this is another example 
of how people are kind of lured in.
    [The material referred to follows:]
[GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Vice Chairman Warner. The first ad is an ad that is pretty 
benign. It's obviously targeted towards Christians. It's a--
it's an ``Army of Jesus'' Facebook ad, 217,000 followers. You 
like that page, and here's what happens: you would get a series 
of, for the most part, relatively benign Bible quotes or other 
items. This ad appeared in October of 2016.
    Late October, early November, suddenly, this benign site, 
in addition to your Bible quotes, suddenly we're getting these 
other posts, not paid ads, but posts from this organization. 
This message, obviously the bottom one would've gone to the 
217,000 followers. We have no idea how many times it was liked 
or shared with other individuals.
    Again, we've got two different examples of the type of 
tools, how people are lured in, and then once they're lured 
into what they think is a pro-Texas or pro-Muslim or here pro-
Jesus account--and then they are manipulated by foreign actors 
and foreign agents.
    Go ahead and start my time.
    First of all, I hear all your words, but I have more than a 
little bit of frustration that many of us on this Committee 
have been raising this issue since the beginning of this year, 
and our claims were frankly blown off by the leaderships of 
your companies, dismissed, said, there's no possibility, 
nothing like this happening, nothing to see here.
    It bothers me, if you're really committed to trying to work 
with us to resolve this, that it took until this Committee 
continually went at you, and it was July and early August when 
you made your first presentations. And, candidly, your first 
presentations were less than sufficient and showed in my mind a 
lack of resources, a lack of commitment, and a lack of genuine 
effort.
    Candidly, your companies know more about Americans in many 
ways than the United States government does. And the idea that 
you had no idea any of this was happening strains your 
credibility.
    So my first question is this, and I want a yes or no 
answer, not a filibuster. Will you commit to continue to work 
with this Committee to provide additional information and 
additional documents as needed as we continue to explore this 
challenge and threat on a going-forward basis?
    We go right down the line. Mr. Stretch.
    Mr. Stretch. Yes.
    Vice Chairman Warner. Mr. Edgett.
    Mr. Edgett. Absolutely.
    Vice Chairman Warner. Mr. Walker.
    Mr. Walker. Absolutely.
    Vice Chairman Warner. Next, one of the things that I 
continue--again, and I will commend you here--that from the 
first our friends at Facebook, you identified 470 accounts, 
3,000 ads. And most of the work, at least it appears to me, 
from at least Twitter and Facebook, has all been derivative of 
that initial data dump.
    And again, this is a yes or no question: Do you believe 
that any of your companies have identified the full scope of 
Russian active measures on your platform? Yes or no?
    Mr. Stretch. Senator, our investigation continues. So I 
would have to say no, certainly not with certainty.
    Vice Chairman Warner. Mr. Edgett.
    Mr. Edgett. No, and we're still working on this.
    Vice Chairman Warner. Mr. Walker.
    Mr. Walker. I believe we haven't done a comprehensive 
investigation, but, as Mr. Stretch says, these are ongoing 
issues, and we continue to investigate.
    Vice Chairman Warner. Let me start again with Facebook 
here. You've identified 470 accounts from one troll farm in St. 
Petersburg. There have been plenty of press reports of other 
troll farms in Russia. There have been reports of other 
activities that were Russian-controlled in Central Europe and 
Eastern Europe. In meetings with your leadership, as you became 
more aware of this problem, you aggressively promoted the fact, 
for example, that you took down 30,000 accounts around the 
French elections. Now, you say not all of those were Russian-
related.
    Have you gone back and cross-checked those Russian-related 
accounts that you took down in France to see if any of those 
accounts were active in the American election?
    Mr. Stretch. Senator, the 30,000 accounts that we took down 
in----
    Vice Chairman Warner. The accounts that were related to 
Russian accounts that you took down. Your leadership in fact 
bragged about how proactive you were in the French election 
process. Did you check those accounts to see if any of them 
were active in the American elections?
    Mr. Stretch. Senator, the system that ran to take down 
those accounts, which were fake accounts of, really, all type 
and for any purpose, has--is now active worldwide----
    Vice Chairman Warner. Have you----
    Mr. Stretch [continuing]. And has been operated----
    Vice Chairman Warner. Just please answer my question. Have 
you reviewed the accounts you took down in France that were 
Russian-related to see if they had played any role in the 
American election?
    Mr. Stretch. Senator, I apologize. I'm trying to answer the 
question.
    Vice Chairman Warner. Well, the answer is yes or no. I 
don't want a long explanation. I want to know if you have done 
this. I've been signaling this to you for some time. We wanted 
to make sure that you would review those accounts. We wanted to 
make sure--the 470 accounts that paid for the 3,000 ads, you 
said these were all accounts, except for one, that were paid 
for in rubles. Did you even run those accounts to see if any of 
those accounts were paid for with dollars or euros or other 
currencies?
    Mr. Stretch. Senator, those--let me try to state it----
    Vice Chairman Warner. Yes or no?
    Mr. Stretch. So we have, and we continue----
    Vice Chairman Warner. Mr. Stretch, yes or no?
    Mr. Stretch. Yes, we are looking and have looked at every 
possible indication of Russian activity in the 2016 election, 
and the investigation continues.
    Vice Chairman Warner. Sir----
    Mr. Stretch. That includes any evidence we've identified 
from those 30,000 accounts, as well as a number of----
    Vice Chairman Warner. All those accounts have been run, 
that database has been run, to see if any of those accounts 
were active in the United States?
    Mr. Stretch. I will have to come back to you on that, 
Senator.
    Vice Chairman Warner. Sir, we've had this hearing scheduled 
for months. I find your answer very, very disappointing.
    On the question of--we just discovered, and I appreciate 
this, you had 80,000 views in terms of Russian--views on 
Facebook. We now discovered in the last 48 hours 120,000 
Russian-based posts on Instagram. Have you done any of the 
similar analysis on those 120,000 posts? We know the 80,000 
ended up reaching 126 million Americans. Have you done that 
same analysis on the 120,000 posts on Instagram?
    Mr. Stretch. Yes, Senator, we have.
    Vice Chairman Warner. And how many Americans did those 
touch?
    Mr. Stretch. The data on Instagram is not as complete, but 
the data that we do have indicates that, beginning in October 
of 2016, those Instagram posts reached an additional 16 million 
people, in addition to the 126 million people that we 
identify----
    Vice Chairman Warner. So now we're seeing the Russian 
activities roughly at 150 million-plus Americans, without 
knowing how many times they were re-shared.
    Mr. Stretch. If I can add that the time period prior to 
October 16th, where our data is less reliable, would yield an 
incremental 4 million. So all told, that gets you to 
approximately a little less than 150 million. That's correct, 
Senator.
    Vice Chairman Warner. Mr. Edgett, on the Twitter account, 
you--there was one activity--and this was not something that 
happened during 2016. Again, I agree with the Chairman. We're 
not here to relitigate 2016. But there was a fake Tennessee 
Republican account, TEN-GOP. The irony was this account had 
154,000 followers; the real Tennessee GOP party had 13,000--
13,400 followers, I believe, based on your numbers.
    I find very interesting, there have been some people who 
have said, ``Well, people should be able spot these fake 
accounts.'' Well, if they're able to spot these fake accounts, 
you had the President's communications director retweeting this 
account, Kellyanne Conway. You had the President's son, Donald 
Trump, Jr., retweeting this account.
    My question is, why did it take so long to take this down 
when the Tennessee Republican party was asking you repeatedly?
    Mr. Edgett. Yeah, that--and that was an absolute miss, and 
we've gotten better since. We've refined our policies around 
impersonation and parody accounts----
    Vice Chairman Warner. Let me just close with this. My 
time's about up.
    Mr. Edgett. Sure.
    Vice Chairman Warner. We've looked on this subject, on 
political information and disinformation. But the same way that 
these bots and trolls and click farms and fake pages, groups, 
algorithm gaming can be used in politics, these same tools can 
and have been used, I believe, to assist financial frauds 
around stock schemes. I think there is a lot of this activity 
in broad-based digital advertising. I think we've seen some of 
this in schemes to sell counterfeit prescription drugs, as well 
as the ability to encourage folks to download malware. I 
believe this is a real challenge, and to get this right we're 
going to need your ongoing cooperation.
    Thank you, Mr. Chairman.
    Chairman Burr. Senator Risch.
    Senator Risch. Thank you.
    Gentlemen, thank you for coming today. By now it's probably 
pretty obvious to everyone that this Committee has spent lots 
and lots and lots of time on this, both as it relates to the 
election and on these kinds of things not related to the 
political process here in the country. But we have spent a lot 
of time, and I think have been able to reach some conclusions 
on this.
    No-one's exempt. I come from a State that's a lot smaller, 
Idaho. They tried to do exactly in Idaho exactly what was done 
in Texas, where they tried to promote a meeting where they had 
two conflicting sides, and no one showed up. So there was no 
success.
    In Idaho, just like Texas, it had absolutely nothing to do 
with the 2016 presidential election. It was simply a cultural 
type of acrimony that they were attempting to promote.
    The Chairman talked about the news reports that allege that 
the Russians used social media to promote a particular 
candidate, and may even, some of those, suggest that it changed 
the result of the election. But this whole thing goes a lot 
deeper than that. One of the things we've discovered--and I 
think you probably are aware of this--that you can't look at 
those ads and say, ``Okay, they were all promoting one 
particular candidate.'' There were ads going both ways, for and 
against both candidates, by the Russians. I'm going to get back 
to that in just a second.
    But the other thing that I think that we've come to a 
conclusion on, and very early, is that the U.S. isn't the only 
country suffering from this. The Europeans, France, Austria, 
Germany, just among others, have suffered from the exact same 
thing, and that is Russian attempted interference with their 
domestic affairs.
    I put a section in the sanctions bill, the Iran, Russia, 
North Korea sanctions bill, that requires the Executive Branch 
to do a study on this, on the effect in Europe, because they 
were much more overt in Europe than they were here--most of the 
work they did here was covert--and probably because in European 
countries there is actually a fair amount of Russian sympathy 
where they can mobilize these people, not so much here in the 
United States. Obviously some, but not nearly, what is there.
    So I want to come at this from a different perspective. The 
2016 elections got a lot of the politicians riled up because it 
went after the political process. But my conclusion is, and I 
think most people here would agree with me, that--and indeed, 
Senator Warner referred to this--that this is a lot deeper than 
just the elections.
    There are a lot of things that the Russians are trying to 
do, and not just inject themselves into the electoral process. 
It seems to me that, after you step back and look at this and 
say, ``What's going on here? What is the motivation? What are 
they doing?'' I always look at something from an objective 
standpoint--``What is their objective? What are they trying to 
accomplish?'' And you walk away from it just shaking your head, 
because we Americans don't think the same way they think about 
promoting our country.
    So the conclusion I've reached is that the Russians are 
doing what they've done all along, long before your technology 
even existed, and that is trying to sow discord, simply trying 
to sow discord.
    My question to each of you is: have you tried to analyze 
what the Russians were trying to accomplish here, not only in 
the 2016 elections, but in these other kinds of ads, with the 
discord? What are your personal views on that, whether 
they've--what they're trying to accomplish?
    Mr. Stretch.
    Mr. Stretch. Senator, it's very difficult for us to ascribe 
motive. It's I think why this Committee's work is so important. 
We've tried to provide you as much information as we can, and 
we hope that, with your visibility into other sources of 
information, you will be able to help the American people have 
a better assessment of what the motive is. We think that'll 
help all of us do better to prevent this sort of activity in 
the future.
    Senator Risch. Would you agree with me that the motive 
isn't obvious here, given the difference in the way they handle 
these things?
    Mr. Stretch. Yes, I would agree with that.
    Senator Risch. Mr. Edgett.
    Mr. Edgett. I would agree with that, as well. I mean, based 
on what we've seen, the advertisements from Russia Today, the 
types of content that was being put out by the IRA, also the 
automated account content, looks as if it's merely focused on 
divisiveness. But we're still investigating this issue, and 
look forward to working with this Committee to help put the 
whole picture together.
    Senator Risch. Mr. Walker.
    Mr. Walker. The large majority of the material we saw was 
in the socially divisive side, rather than direct electoral 
advocacy, yes.
    Senator Risch. And that has really been the focus of the 
media, that, oh, this was all about the 2016 election. You 
agree with me that this is much broader than that and is, as 
you say, divisive--aimed at divisiveness, or aimed at discord? 
Would you all agree with that?
    Mr. Edgett. Yes, and that's a problem we're trying to 
tackle every day.
    Senator Risch. Mr. Stretch, do you agree with that?
    Mr. Stretch. Yes, I would agree and note that the time 
period in question and the activity we saw even continued after 
the election.
    Senator Risch. Mr. Walker.
    Mr. Walker. That seems reasonable, hard for us to know and, 
again, ultimately for the Committee to decide.
    Senator Risch. I appreciate that. And as I said, my view of 
this is this a whole lot broader than simply the 2016 election.
    Mr. Walker, I have a specific question for you. I think I 
heard you say that you're enacting a policy where only a U.S. 
national can buy an election ad. Is that correct?
    Mr. Walker. That's correct.
    Senator Risch. Okay. What about other countries? Obviously, 
you operate in places other than the United States. Can a U.S. 
national buy an ad, for instance, for a French or German or 
Austrian campaign?
    Mr. Walker. I haven't studied the laws of individual 
countries, but we are not confining our work to the U.S. We are 
looking at other elections around the world to make sure that 
we do whatever we can to minimize electoral inference.
    Senator Risch. So what you're going to do is try to confine 
people to their own elections in their own countries? Is that 
pretty much your objective?
    Mr. Walker. Certainly that's the case for the United 
States, and any other country around the world where that's the 
law that's true, yes.
    Senator Risch. I think that's going to be a big challenge 
for you, but good luck, and I wish you well in that endeavor.
    Thank you, Mr. Chairman.
    Chairman Burr. Senator Feinstein.
    Senator Feinstein. Thanks, Mr. Chairman.
    I sat in the Judiciary hearing yesterday. It was a 
subcommittee hearing, but was able to ask some questions. And I 
want to just make a personal comment, because I've been very 
proud, and I know Senator Harris is as well, to represent this 
tech community from California. But I must say, I don't think 
you get it--I think the fact that you're general counsels, you 
defend your company--that what we're talking about is a 
cataclysmic change. What we're talking about is the beginning 
of cyber warfare. What we're talking about is a major foreign 
power with the sophistication and ability to involve themselves 
in a presidential election and sow conflict and discontent all 
over this country.
    We are not going to go away, gentlemen, and this is a very 
big deal. I went home last night with profound disappointment. 
I asked specific questions. I got vague answers, and that just 
won't do.
    You have a huge problem on your hands, and the United 
States is going to be the first of the countries to bring it 
your attention, and others are going to follow, I'm sure, 
because you bear this responsibility. You've created these 
platforms and now they are being misused, and you have to be 
the ones to do something about it, or we will.
    And this Committee is Intelligence. It's different from 
yesterday, so they're privy to different facts, and they're 
very potent facts.
    Let me go back to a couple of questions that I asked 
yesterday. Mr. Edgett, yesterday, you testified that Twitter 
only began to remove voter suppression posts that told people 
they could vote by texting or tweeting after you found out 
about them from other Twitter users.
    These were illegal tweets. Waiting for users to alert 
Twitter isn't sufficient. I'll give you another chance. What is 
Twitter doing to proactively identify illegal voter suppression 
tweets?
    Mr. Edgett. Thank you for letting me address that. We're 
constantly improving, not only on our technology around 
automated accounts that are trying to amplify these types of 
messages----
    Senator Feinstein. That's not enough.
    Mr. Edgett [continuing]. But also on putting people and 
technology on the content and the behavior and trying to make 
our workflows, our reporting flows, more efficient and using 
artificial intelligence to prioritize things like the illegal 
voter suppression ads and other things that we see on the 
platform, and taking those down faster.
    We are getting better, but this is a problem that we are 
focused on getting better at every day.
    Senator Feinstein. Well, you have to find a way to prevent 
them from going up.
    Mr. Edgett. That's right. And that's why we tend to----
    Senator Feinstein. That's the problem.
    Mr. Edgett. Right. That's why we tend to focus on behavior 
behind the accounts, to know before the content goes up. We've 
seen--we've seen great strides in other areas not related to 
that, and so we're trying to take that same solution to this 
problem set.
    Senator Feinstein. Mr. Walker, I asked your colleague 
yesterday why Google didn't immediately revoke Russia Today's 
preferred status after the intelligence community determined 
and publicly stated that RT was a part of the Russian 
government's efforts to interfere in our election.
    Mr. Salgado told me that RT only lost its preferred status 
because of a, quote, ``drop in viewership,'' end quote, not 
because it was part of the Kremlin's propaganda machine. This 
response was deeply troubling, and frankly, did not answer my 
question. So here it is again. Why didn't Google take any 
action regarding RT after the intelligence community assessment 
came out in January of 2017?
    Mr. Walker. Let me start by----
    Senator Feinstein. I'm sorry, your mic isn't on.
    Mr. Walker. Senator, let me start by responding to your 
initial comments to assure you we take this and have taken this 
issue very seriously. The question of cyber espionage is one 
that we have been working on for some years, publicly and 
privately, working with other companies and working on our own 
to identify some of these threats. This is one manifestation of 
that, but not the only one.
    With regard to RT, we recognize the concerns that have been 
expressed about RT and concerns about its slanted coverage. 
This is of course a question that goes beyond the internet. RT 
is covered--its channel is on major cable television stations, 
on satellite television stations. Its advertising appears in 
newspapers, magazines, airports. It's run in hotels in pretty 
much every city in the United States.
    We have carefully reviewed the content of RT to see that it 
complies with the policies that we have against hate speech, 
incitement to violence, et cetera. So far, we have not found 
violations, but we continue to look.
    Beyond that, we think that the key to this area is 
transparency--that Americans should have access to information 
from a wide variety of perspectives, but they should know what 
they're getting. And so we already on Google provide 
information about the government-funded nature of RT. We're 
looking at ways to expand that to YouTube and potentially other 
platforms.
    Senator Feinstein. Well, as you might guess, I'm really not 
satisfied with that. That's been the trend of the testimony all 
along. I think we're in a different day now. We're at the 
beginning of what could be cyber war. And you all, as a policy 
matter, have to really take a look at that and what role you 
play.
    I think my time is almost up. Let me try one more. A 
British report recently concluded that social media platforms 
such as Facebook, Twitter, and YouTube failed to remove 
extremist material posted by banned jihadist and neo-Nazi 
groups, even when that material was reported. The source for 
this is the British Parliament Home Affairs Select Committee.
    Last night, we saw a horrific attack on innocent people in 
New York by an individual who may have been radicalized online. 
We know one person, who is Anwar al-Awlaki, with 75,000 hits, 
the major radicalizer in the United States on the internet.
    I'm working on legislation to require tech companies to 
report known terrorist activity on their platforms to law 
enforcement and to provide law enforcement with civil 
injunction authority.
    So thank you, Mr. Chairman.
    Chairman Burr. Thank you, Senator Feinstein.
    Senator Rubio.
    Senator Rubio. Thank you.
    Thank you all for being here.
    Mr. Stretch, I want to ask you--and it relates to all this 
in the following way, but let me work there. Guo Wengui is a 
whistleblower and a critic of the Chinese government, and his 
Facebook account was blocked, and Facebook has informed us and 
has said publicly that he violated terms of service. I think he 
published personal identifying information about individuals, 
and that violated the terms of service, so--and I understand 
that argument.
    My question--so what I want to be clear is, was there any 
pressure from the Chinese government to block his account?
    Mr. Stretch. No, Senator. We reviewed a report on that 
account and analyzed it through regular channels using our 
regular procedures. The blocking was not of the account in its 
entirety, but I believe was of specific posts that violated our 
policy.
    Senator Rubio. But you can testify today that you did not 
come under pressure from the Chinese government or any of its 
representatives, or people working for them, to block his 
account or to the block whatever it is you blocked?
    Mr. Stretch. I want to make sure I'm being precise and 
clear. We did receive a report from representatives of the 
Chinese government about the account. We analyzed that report 
as we would any other and took action solely based on our 
policies.
    Senator Rubio. Facebook is not allowed to operate in China. 
Is that correct?
    Mr. Stretch. Yes, that's correct. Our consumer services are 
blocked in China, that's correct.
    Senator Rubio. Okay. There have been press reports that 
Facebook may have potentially developed software to suppress 
posts from appearing in people's news feeds in specific 
geographic areas. And the speculation is it's being done for 
the purposes of getting into the Chinese market.
    Is that accurate? Has Facebook developed software to 
suppress posts from appearing in people's news feeds in 
specific geographic areas?
    Mr. Stretch. Senator, as you know, we are--we are blocked 
in China, so any software we have is certainly not operative 
there. We do have many instances where we have content reported 
to us from foreign governments that is illegal under the laws 
of those governments. So a great example of this is Holocaust 
denial in Germany, for example. And our position with respect 
to reports like that is, if there is content that's visible in 
a country that violates local law and we're on specific notice 
of that content, we deploy what we call geoblocking, or I.P. 
blocking, so that the content will not be visible in that 
country, but remains available on the service.
    Senator Rubio. So, for example, if criticizing a government 
is illegal in that country, you have the capability to block 
them from criticizing the government and thereby gaining entry 
into that country and being allowed to operate?
    Mr. Stretch. We have the capability to ensure that our 
service complies with local law, that's accurate. We take a 
very nuanced approach to reports of illegal content. We believe 
our mission is to enable people to share and connect, and we 
believe that political expression is at the core of what we 
provide. And so----
    Senator Rubio. What if that political expression is illegal 
in the country?
    Mr. Stretch. So, in the vast majority of cases where we are 
on notice of locally illegal content, it has nothing to do with 
political expression. It's things like blasphemy in parts of 
the world that are--that prohibit blasphemy.
    Senator Rubio. We could probably do a whole hearing on that 
topic. But here's why that's related to what we're talking 
about today: terms of service is the reason why he was knocked 
off. All of your companies have terms of service.
    Is a foreign influence campaign a violation of the terms of 
service of any of the three companies represented here today? 
If you can prove that someone is doing it on behalf of a 
foreign government, seeking to interfere in an election, does 
that violate your terms of service?
    [Pause.]
    Any of you? Any of the three companies, in terms of being 
able to operate or post things, and particularly Twitter and 
Facebook?
    Mr. Edgett. Generally, it would violate a number--we don't 
have state-sponsored manipulation of elections as one of our 
rules. But generally the other types of rules, like 
inflammatory ads content, would take down most of these, these 
posts. So we don't outright ban it, but----
    Senator Rubio. Well, let me ask you this. I've read that 
you can buy a bot army from between $45 to $100. Is buying 
and--if you can prove that someone's bought up and put together 
a bot army, would that be a violation of terms of service?
    Mr. Edgett. Those would violate our terms of service around 
the use of automated accounts, and those are the things that 
we're catching every day. We're blocking 450,000 suspicious 
logins a day. We're challenging 4 million accounts every week, 
to make sure that they're actually real people. But we have--we 
have terms of service around----
    Senator Rubio. I didn't get an answer on the face--is that 
a violation of terms of service, to buy for a foreign influence 
campaign or to put together a bunch of fake ads and put them 
together?
    Mr. Stretch. That campaign violates our terms and our 
policies in a number of ways. And we do not permit automated 
means for accessing the site, so using the bots likewise would 
be a violation.
    Senator Rubio. Okay. If someone goes on and posts the 
Social Security number and date of birth of an individual, 
that's a violation of terms of service, correct?
    Mr. Stretch. For Facebook it is, yes.
    Senator Rubio. I would imagine for all the platforms.
    Mr. Edgett. It is.
    Senator Rubio. What about if someone goes online and posts 
classified information, illegally obtained, that threatens the 
lives of individuals or methods, or potentially disrupts the 
ability to disrupt a plot that can endanger the lives of 
people? Is posting that online or posting that in one of your 
platforms a violation of terms of service? It happens 
sometimes. I don't know if you're aware of it.
    Mr. Edgett. We work with law enforcement all the time on 
matters like that, and balance free speech rights, obviously, 
with those of--obviously, an imminent threat, we would--we 
would take very seriously and act on right away.
    Senator Rubio. I guess my point is, personal identifying 
information is illegal to post it, right? It threatens 
someone's identity. It's also illegal to steal and reveal 
classified information. And I'm just curious if that's also a 
violation of terms of service, since in fact it could have 
real-life implications on individuals who could be compromised 
because of that release.
    Do we have any evidence that Russian accounts uploaded U.S. 
voter registration data and used it in conjunction with custom 
audiences to target specific voters by name? Do any of you have 
any information that registered voter data was uploaded and 
used to customize advertising or messaging to individual 
voters?
    Mr. Edgett. We haven't seen evidence of that.
    Mr. Stretch. The same is true for Facebook.
    Senator Rubio. And my last question is--the scope of this 
was not limited to 2016 or even the presidential race. As an 
example, I think, with the help of some of the companies here, 
we've identified Being Patriotic, LGBT United, United Muslims 
of America, Stop A.I., Heart of Texas; all were used to attack 
my campaign during the primary. What's interesting, though, is 
on the 3rd of July and on the 8th of August, after the primary 
but when I chose to run for reelection, one of those, LGBT 
United, attacked again.
    So my point being these operations, while we're talking 
about the 2016 presidential race, they're not limited to 2016, 
and they were not limited to the presidential race, and they 
continue to this day. They are much more widespread than one 
election. It is about our general political climate. Is that 
correct?
    Mr. Stretch. I would certainly agree with that statement, 
Senator.
    Senator Rubio. Okay. Thank you.
    Chairman Burr. Senator Wyden.
    Senator Wyden. Thank you, Mr. Chairman.
    With the current fascist leadership of Russia 
enthusiastically undermining our democracy, America must defend 
the values that made us great and aggressively confront this 
espionage and the enemies that sponsor it. The tools of the 
espionage range from political ads, to issue ads, sock puppets 
to fictional news stories, and from rallies, to protests, to 
marches, all presented under false pretenses.
    While the Supreme Court has ruled that Congress may place 
some limits on strictly political advertising, the other 
activities I just mentioned are beyond the reach of government 
and government regulation in a free society.
    To fight back against this espionage, Americans have to 
rely on our marketplace of ideas and the institutions that 
support it. Gentlemen, today you three represent those 
institutions.
    Now, you've discussed your response to these attacks, but 
it is self-evident, in relation to the power your platforms now 
have, in the past election you failed. And this is especially 
troubling because the same Federal law that allowed your 
companies to grow and thrive, the Section 230 law, gives you 
absolute legal protection to take action against those who 
abuse your platforms to damage our democracy.
    The same algorithms that power your companies can be used 
to identify the behavior indicative of these attacks, including 
fake accounts and fake news stories, and identify the source of 
money purchasing your ads.
    Now, I'm of the view ads are a small part of a much bigger 
problem. Fake users posting stories on Facebook, videos on 
YouTube, links on Twitter can be used by foreign and domestic 
enemies to undermine our society. You need to stop paying lip 
service to shutting down bad actors using these accounts. 
You've got the power and Congress has given you the legal 
protection to actually act and deal with this.
    So I want to start with a couple of quick yes or no 
questions and just go right down the row for the three of you. 
Mr. Walker, are you satisfied with your platform's response to 
foreign interference in the 2016 election? Yes or no? Just yes 
or no.
    Mr. Walker. We--we are constantly doing better.
    Senator Wyden. Is the answer no?
    Mr. Walker. We could have done more, but I think we are 
doing more today and have done more since the election.
    Senator Wyden. I'll take that as a no.
    Mr. Edgett.
    Mr. Edgett. No, we need to do more.
    Senator Wyden. Mr. Stretch.
    Mr. Stretch. The same is true.
    Senator Wyden. Okay.
    Do you all have--and we'll start with you, Mr. Walker--the 
technical ability and resources to better respond to future 
misinformation campaigns? Yes or no?
    Mr. Walker. Yes. The safe harbors and the Good Samaritan 
laws are important underpinnings for all of this. And we are 
doing more, we have done more to combat fake news----
    Senator Wyden. Mr. Edgett, yes or no?
    Mr. Walker. Yes.
    Mr. Edgett. Yes.
    Senator Wyden. Mr. Stretch.
    Mr. Stretch. Yes, and I would add, though, that I do 
believe we need information-sharing among industry, as well as 
working with the government, to enable us to do this 
effectively.
    Senator Wyden. All right.
    Gentlemen, specifically now describe the changes you're 
going to pursue that respond to not just the ads, but the sock 
puppets, the hoaxes, and the confidence operations? We'd like 
to walk out of here knowing the changes you're going to support 
going forward.
    Mr. Walker.
    Mr. Walker. Sure. Let me give you four on the ad side and 
three on the non-ad side.
    Senator Wyden. Quickly.
    Mr. Walker. Absolutely. The transparency report that we 
talked about for ads; an archive of content that--of all ads' 
content that's available; icons that make information on the 
site available to users as to who sponsors an ad; and enhanced 
verification techniques.
    When it comes to non-ads material, fake news, we're 
improving our algorithms, our rater guidelines, and the signals 
we use. We're using fake news fact-check labels to improve 
users' ability to evaluate fake news, and we're looking at our 
ads policies to improve and toughen rules against sites that 
misrepresent their nature.
    Senator Wyden. Mr. Edgett.
    Mr. Edgett. Coming out of the 2016 election and early this 
year, our CEO asked our entire engineering product and design 
teams, which make up a large majority of the company, to tackle 
the problem of safety, abuse, and misinformation on our 
platform, and to drop everything else that we're doing and to 
figure this out. We formed we call an information quality team.
    Senator Wyden. Those are three sentences. What are the 
changes?
    Mr. Edgett. Yes. We formed an information quality team 
focused on looking at both behavior and content and seeing how 
we could stop bad actors from using automated activity to 
amplify their message. We have just announced new transparency 
rules around not just political ads, but all advertisements, to 
educate not just American citizens, but our worldwide users.
    We are also continuing to collaborate with law enforcement 
and committees like this to make sure we're putting the right--
--
    Senator Wyden. I know very few specifics from that answer.
    Mr. Stretch.
    Mr. Stretch. Senator, let me try four things.
    First, today there are 10,000 people working at Facebook on 
safety and security across our security product and community 
operations teams. By the end of 2018, there will be more than 
20,000.
    Second, we announced last week a series of ad transparency 
steps, drawing on the ideas in the Honest Ads Act that Senator 
Warner talked about earlier, that will bring much greater 
visibility to advertising generally and particularly to 
political advertising.
    Third, we are tightening our ad policies to limit 
divisiveness and to limit violence in the use of our ad tools.
    And fourth, we're standing up an organization to enable 
better industry sharing of threat information and also to help 
us work better with law enforcement so that we can share 
information and expertise in order to address this threat going 
forward.
    Senator Wyden. My last question is, it's not clear that you 
all or the public understand the degree of this sophisticated 
and manipulative intelligence operation. The Russians created 
Facebook pages, posted YouTube videos, all trying to appeal to 
specific audiences.
    Some of the content wasn't fake. It was intended to gather 
an audience and gain trust. It told people that they were 
already receptive to that, after gaining that trust, you could 
execute the espionage, for example, by gathering liberals and 
then discouraging them from voters.
    Mr. Stretch, I'd like you to confirm that this technique 
was used in the election.
    Mr. Stretch. Senator, we've provided all the information we 
can about the content that we've identified on the system. I 
think to make the sort of assessment you're describing really 
requires this Committee's work to look at all of the online and 
offline activity that would be necessary to effectuate a 
campaign like that.
    Senator Wyden. My time has expired. We have specific cases 
that that was used. I would like in writing, within a week, 
what you're doing about it.
    Thank you, Mr. Chairman.
    Chairman Burr. Senator Collins.
    Senator Collins. Thank you, Mr. Chairman.
    It is very clear that Russian activities on your social 
media platforms go far beyond the paid political ads that 
appeared last year. The primary purpose of Russia's active 
measures is to exploit and to aggravate the divisions in 
American society and to undermine public confidence in our 
democratic institutions. And those efforts have not stopped. 
They continue to this very day. As Senator Risch has pointed 
out, no area of the country is immune.
    So let me give you an example, and we've passed it out to 
you, by describing three unpaid posts from Facebook pages 
created by the Russians that refer to the governor of Maine, 
Paul LePage.
    [The material referred to follows:]
[GRAPHICS NOT AVAILABLE IN TIFF FORMAT]

    Senator Collins. There are two negative posts related to 
the governor on one Russian Facebook page, called Williams & 
Calvin, that appeared in August of 2016. There's a video of 
comments made by Maine's governor from that same month. And the 
post in part says the following: ``LePage called up white 
people to kill blacks. After this statement, we can clearly see 
what kind of people serve in American government: white racist 
supremacy--that's for sure. The only way to avoid mass killings 
of black people is to fire LePage and all who have the same 
racist beliefs from American government.''
    There was a second post on the same website about 10 days 
later. Let me just read part of that: ``It is not a secret that 
America is the country of white supremacy, and people like 
LePage must be replaced from their positions in the government. 
America doesn't need racist politicians. Black people are tired 
of white supremacy.''
    Then, this year, this year, in August of 2017, Maine's 
governor was the subject of a positive post on a different 
Russian-backed Facebook page, called Being Patriotic. In this 
case, the post defended comments that the government made at 
the time about Confederate monuments.
    The post ends with its own incendiary conclusion. It says: 
``When even the governor is not safe from leftist haters, then 
what can we say about ordinary citizens? Liberals are now 
acting like terrorists. They try to intimidate everyone who 
disagrees with them. Hope our police will take appropriate 
measures against these cowards.''
    Now, let me point out something. Our governor was not up 
for reelection last year; he is term-limited. He cannot run for 
reelection as governor. And yet these comments were made both 
last year and just a few months ago.
    And the posts are just three among 80,000 that reveal the 
Russian playbook of playing both sides off against each other 
and of sowing discord and division with inflammatory rhetoric. 
And there were other posts that involved lower-level officials 
in the State of Maine that we found as well. And the Russians 
continue to push this kind of divisive rhetoric to this very 
day.
    So my question to you is: what are you, as American 
companies, doing to effectively counter unpaid content posted 
by the Russians that is clearly designed to specifically 
polarize and anger the American people? And I would argue that 
you have a special obligation here, given your reach in 
American society and the fact that you are patriotic American 
companies.
    Mr. Stretch.
    Mr. Stretch. Senator, we agree that we have a special 
responsibility here. We value the trust that users place in our 
services. And when they show up to connect with friends and 
family and to discuss issues, they need to know that the 
discourse they see is authentic.
    What is so painful about this type of content is that it 
exploits truly and passionately held views and then inflames 
them to create more discord and more distrust.
    To prevent this, we are investing much more heavily in 
authenticity. We believe that one of the cornerstones of 
Facebook is that users are known by their real names and so 
that creates a level of authenticity in the discourse that 
users can trust when they come to the platform. This sort of 
content erodes that trust and it's contrary to everything we 
stand for as a company. As Americans, it's particularly painful 
because it is so exploitative of the openness of our society.
    And so the investment we are making and the commitment we 
are making is to ensure that our authenticity policy is more 
effectively policed and monitored to prevent exactly this sort 
of behavior.
    Senator Collins. Mr. Edgett, what is Twitter doing?
    Mr. Edgett. We're focusing on a number of things. The one 
we see the greatest strides in and where we see the greatest 
effect and protections from our users is on the amplification 
side, in the use of automated accounts. These bad actors need 
an audience for their voice, and generally they don't have a 
followership. So they are trying to use activity on the 
platform to automate and amplify their voices.
    So we're looking behind the message and behind the content 
at the behavior of doing that, and have been successful in 
doubling our effectiveness of doing that, year over year, and 
looking at the behavior, taking down millions of accounts every 
single week because they're not actually humans, they're 
actually----
    Senator Collins. Well, this just happened in August of this 
year. This isn't something old.
    Mr. Edgett. Right. We continue to try to stay ahead of 
their activities. We're also looking at things like coordinated 
human activity, where real people are coming together, like the 
IRA, and actually putting out divisive content like this. We 
are able to link those accounts and take action on them as we 
learn, not just what they're saying, but what's behind. What's 
behind it only we can see on the Twitter side. We've had great 
strides on the terrorism front in that regard, and we believe 
we can apply the same techniques and methodologies to this, 
this problem.
    Senator Collins. Mr. Walker. Thank you.
    Mr. Walker. We're also very concerned about this kind of 
deceptive and divisive content. We remove it immediately from 
our services and we have removed these.
    Going forward, and actually already, we have engaged a 
number of things to avoid the problem of fake news: changes to 
our algorithms, improving the training that our raters get in 
evaluating quality, labeling fake news where we can find it, 
working with third parties, et cetera.
    Senator Collins. Thank you.
    Chairman Burr. Senator Heinrich.
    Senator Heinrich. Thank you, Chairman.
    Mr. Stretch, I want to start with you. Last month, 
President Trump called Russian-purchased Facebook ads a hoax. 
I've looked at those Russian-sponsored Facebook ads. I 
certainly hope you've had a chance to review them. Are they in 
fact a hoax?
    Mr. Stretch. All the information we've provided to the 
Committee did run on Facebook, so----
    Senator Heinrich. It's a yes or no answer. I know you're a 
lawyer; it's hard. But----
    [Laughter.]
    Mr. Stretch. No. The existence of--those ads were on 
Facebook and were not a hoax.
    Senator Heinrich. So in the interest of just clearing this 
up and giving the American people some transparency into this, 
so that they can see the nature of what typically gets used to 
divide the American populace, why not simply release those 
Russian-financed Facebook ads to the public? Redact the 
pictures, but release the contents, so that people can 
understand how this works?
    Mr. Stretch. Senator, we believe this Committee is really 
best placed to determine what information to release. We stand 
ready to assist in that, in that effort. We agree that the more 
people can see the type of content that ran and the divisions 
that were sought to be exploited, the better.
    Senator Heinrich. Well, I think we have a disagreement on 
this Committee as to whether or not to release those. I would 
urge all of you as platforms to consider that kind of activity 
as well.
    I want to move on to Russia's RBC magazine, which recently 
revealed that St. Petersburg's troll factory employed hundreds 
of trolls, including 90 at the, quote-unquote, ``U.S. desk'' 
alone, and spent about $2.3 million in 2016 to meddle in U.S. 
politics, actually contacted U.S. activists directly and 
offered them thousands of dollars to organize protests.
    Your platform--your platforms are all global. They're not 
just U.S. platforms. And there is substantial open-source 
reporting right now suggesting that similar divisive activity 
may be occurring, for example, in the Catalonian region of 
Spain right now.
    What are each of you doing right now to make sure that your 
platforms aren't being used in similarly divisive ways across 
the globe, to sow discord in Western democracies? And in 
particular with the Catalonian example, are you familiar with 
what you're doing there?
    Mr. Stretch. Senator, we are focused on preventing this 
form of abuse globally. So when we say we have an obligation to 
protect the platform from being used for abuse, that's a global 
obligation. So we are focused on elections as they appear on 
the calendar, including the Catalonian election that occurred 
recently, as well as the other elections that are on the 
calendar going forward.
    We're focused on ensuring that all actors on the platform 
comply with local law, as Mr. Walker suggested earlier, and we 
are focused on making sure that any foreign threat actors that 
are seeking to undermine democracy anywhere are removed from 
the platform.
    Senator Heinrich. Have each of you taken, had to take 
corrective action against actors in that debate who are not who 
they purported to be?
    Mr. Stretch.
    Mr. Stretch. Senator, the key I'd say progress we've made 
is----
    Senator Heinrich. That's a yes or no, once again.
    Mr. Edgett.
    Mr. Edgett. I believe so, but I'll need to follow up with 
your staff.
    Senator Heinrich. Thank you.
    Mr. Walker.
    Mr. Walker. We're constantly removing fraudulent and 
deceptive accounts from our services. I'm not familiar with the 
specifics there.
    Senator Heinrich. You can get back to us.
    Mr. Edgett, given the discussion we've had about automated 
Twitter accounts and bots--and the range is obviously very 
wide, but we know that's a problem. And you made an assertion 
earlier that I want to come back to and just make sure it's 
accurate. Do you require at Twitter, by service agreement, that 
profiles are linked to real names, real people, or some other 
way to make sure that those go back to real human beings, from 
Social Security numbers to other unique identifiers?
    Mr. Edgett. We do not. We require some information at sign-
up, but we don't require you to verify your identity. We have 
services that verify identities on the platform.
    Senator Heinrich. Why on Earth not?
    Mr. Edgett. Because we see the power of Twitter being used 
by folks like political dissidents, embedded journalists in 
difficult countries who use the ability to not have to identify 
themselves by name, like on other platforms, to speak their 
truth to power. We see that----
    Senator Heinrich. So the reason is for social dissidents 
and people in third world countries or where there is a hostile 
government regime? It is not your business model? You're not 
reliant, for example, on those automated accounts to generate 
revenue?
    Mr. Edgett. We don't rely on--there's some good automation 
on the platform and I'm happy to talk about that. But we do not 
rely on this, the bad, malicious automation that we're talking 
about here.
    Senator Heinrich. If I were running a political campaign 
today and I were to advertise on local television, on cable 
television, in print or on the radio, or even through the mail, 
I would have to have a ``paid for by'' disclaimer on those ads. 
Now, Mr. Walker I believe has already addressed this issue. But 
is there any policy reason that online social media ads, given 
how effective and influential they have clearly become, 
shouldn't meet that same level of transparency?
    Mr. Edgett. We agree with the transparency efforts and last 
week announced that we're creating a transparency center, not 
just for political ads, which will have even more information 
than all ads, but a transparency center for all ads, so that 
you can see not just the ad that you've seen and why it's been 
targeted to you, but all of the other ads created from that 
same advertiser.
    On the election front, you'll also be able to see who's 
paying for the ad, how much they've spent on this ad campaign 
and all ad campaigns, and you'll--able to see what the 
targeting criteria are, so to better educate around why these 
ads are on the platform.
    Senator Heinrich. I appreciate that, Mr. Edgett.
    Mr. Stretch.
    Mr. Stretch. The same is true for Facebook. We are working 
both on political ad transparency, enabling more visibility 
into campaign ads by third parties, and also enabling campaigns 
to meet their disclosure obligations in their--in their online 
communications.
    Senator Heinrich. Thank you, Mr. Chairman.
    Chairman Burr. Senator Blunt.
    Senator Blunt. Thank you, Mr. Chairman.
    So, Mr. Edgett, in response to Mr. Heinrich's question, 
there was a lot of information that you could get based on that 
policy, if you pursue it, like all the other ads they ran, how 
much. Would you get that by going to another spot? Surely 
that's not all right there on the ad.
    Mr. Edgett. Obviously we're a character-constrained 
platform, so we will be identifying very clearly whether or not 
something is a political ad, so that you can--you can see it 
right away. And then, depending on if you're on a web browser 
or on a mobile phone, you'll have to hover over or click on a 
spot to then see a sort of a full transparency center that 
gives you all that information right away.
    Senator Blunt. So would you be able, on the ad itself, to 
go ahead and put enough of a disclosure there that it's clear, 
when you're looking at the ad, who paid for it and how to find 
more information out about who paid for it?
    Mr. Edgett. We're still working through the technical 
details, but believe we'll be able to get that in front of----
    Senator Blunt. Mr. Walker, are you trying to do anything 
similar to that?
    Mr. Walker. We are. Our idea is to have an icon that a user 
can click on, so it's immediately available to them when they 
see the ad.
    Senator Blunt. But it wouldn't necessarily be--would any 
information be on the ad except the icon?
    Mr. Walker. It depends on the format of the ad. There may 
be, in display context or video context, where it makes more 
sense. In a very small amount of space, the FEC has struggled 
with figuring out appropriate disclosure requirements. We 
continue to look at that.
    Senator Blunt. And you're looking at the other disclosure 
requirements that other ads have to have on other media as 
you're considering this?
    Mr. Walker. Of course, broadcast, newspapers, and online.
    Senator Blunt. Mr. Stretch.
    Mr. Walker. Yes.
    Mr. Stretch. The same is true for Facebook.
    Senator Blunt. Looking at this same thing.
    Well, on--when you're talking--and I think I'll start with 
Mr. Edgett on this. When you're talking about Russians, are you 
referring to the Russian government, or any Russian citizen, or 
people who paid in rubles?
    You mentioned the IRA, which I assume is not either your 
individual retirement account or the Irish Republican Army. So 
how do you know they're Russians? And what are you looking for 
there when you're talking about Russians in retrospect?
    Mr. Edgett. Right. That's a great question. We are looking 
for signals. Not everyone identifies themselves as a Russian, 
especially these malicious actors. So we're looking at things 
like whether they registered in country in Russia. Do they have 
a Russian phone number? Are they on a Russian mobile carrier? 
Do they have a Russian email address? Are they coming in from a 
Russian I.P.? Have they ever logged in--you'll see in our 
retrospective work, we looked at have you ever logged in at any 
time from Russia.
    There are some technical--some technical challenges with 
that. The trail sometimes goes cold at data centers, where 
information is being processed. We saw about 14----
    Senator Blunt. Okay. Looking back at what the so-called--
the Russians, however we're defining that, did during the 
election, when you're saying the Russians paid for these ads, 
these are ads paid for by the Russians because you've now gone 
back and checked groups like the Internet Research Agency, and 
you now know that's a Russian group?
    Mr. Edgett. Well, on the advertising side we also have some 
additional data around banking information, because folks are 
paying for these ads. But we didn't link the IRA accounts to 
advertising in the election. But what we did was found nine 
advertisers, based on the signals I talked about and also 
banking information. Largely it was Russia Today, who we have 
since removed as advertisers from the platform.
    Senator Blunt. Let's see if they got their money's worth. 
Everybody here has been involved in one way or another in 
buying advertising. I've always had some sense that in 
advertising you pretty much got what you paid for.
    Mr. Stretch, how much money did the Russians spend on ads 
that we now look back as either disruptive or politically 
intended? Is that $100,000? Is that----
    Mr. Stretch. It's approximately $100,000.
    Senator Blunt. I meant from your company.
    Mr. Stretch. Yes, approximately $100,000.
    Senator Blunt. How much of that did they pay before the 
election?
    Mr. Stretch. The----
    Senator Blunt. I've seen the number 44,000. Is that right, 
56 after, 44 before?
    Mr. Stretch. The ad impressions ran 46 percent before the 
election, the remainder after the election.
    Senator Blunt. Forty six percent? Well, if I had a 
consultant that was trying to impact an election and spent only 
46 percent of the money before Election Day, I'd be pretty 
upset about that, I think.
    So they spent $46,000. How much did the Clinton and Trump 
campaigns spend on Facebook during--I assume before the 
election?
    Mr. Stretch. Yes.
    Senator Blunt. They were more organized than the other 
group.
    Mr. Stretch. Combined, approximately $81 million.
    Senator Blunt. Eighty one million dollars. And before the 
election?
    Mr. Stretch. Yes.
    Senator Blunt. So $81 million. I'm not a great 
mathematician, but $46,000 out of $81 million, would that be 
like five one-thousandths of one percent? It's something like 
that.
    Mr. Stretch. It's a--it's a small number by comparison, 
certainly.
    Senator Blunt. Very small number by comparison. So the fact 
that we're talking about it today, it certainly seems like they 
got their money's worth after the election, whether they got it 
before or not. We're still talking about five one-thousandths 
of one percent of the Facebook money spent. And that was just 
by Clinton and Trump? Or was that all the presidential 
candidates put together?
    Mr. Stretch. No, those were the----
    Senator Blunt. The $81 million?
    Mr. Stretch. Those were the Clinton and Trump campaigns, 
combined.
    Senator Blunt. Well, probably got more attention here than 
they did.
    I know Ferguson, Missouri, and Baltimore were a couple of 
big targets in a lot of these ad campaigns. Is there a way you 
could do this to where you principally target viewers in the 
St. Louis area for Ferguson or in Maryland, for Baltimore?
    Mr. Stretch. It's important to distinguish between our ad 
tools and the organic tools. Our ad--our ad tools do permit 
geographic targeting of content. Approximately 25 percent of 
the ads that we've identified and turned over to the Committee 
were geographically targeted to a region smaller than the 
United States. Most of them were targeted on a State basis.
    Organic content--unpaid posts, if you will--are not 
geographically targeted.
    Senator Blunt. And some of those targeted dollars were 
spent in states where the election turned out not to be close 
at all. Is that right?
    Mr. Stretch. That's correct.
    Senator Blunt. The other questions that we'll get to, maybe 
in--maybe in writing later, but this--on the free media, I 
think we have to be very thoughtful here about who decides 
what's voter suppression and what's not; who decides what level 
of speech is acceptable and what's not. It's an unbelievable 
obligation that the government's never been very good at, and 
an unbelievable obligation that it sounds like to me your 
companies are all being asked to assume. And that'll be an 
ongoing discussion, I think, of whether that's possible or not, 
and the questions and problems that arise when somebody does 
begin to decide what's acceptable to talk about and what's not, 
and what discourages voters and what doesn't.
    I would think the general election process these days would 
discourage voters from participating, so maybe that would just 
mean none of it could be discussed. But we will see how that 
goes.
    Thank you, Mr. Chairman.
    Chairman Burr. Senator King.
    Senator King. Thank you, Mr. Chairman.
    Gentlemen, you've done a good job this morning. I must say, 
though, I'm disappointed that you're here and not your CEOs, 
because we're talking about policy and the policies of the 
companies. And it's fine to send general counsel, but I think, 
if you could take a message back from this Committee, if we go 
through this exercise again, we would appreciate seeing the top 
people who are actually making the decision.
    I want to begin with two quotes, and I generally don't read 
quotes, but these are so apt. The first one says this: 
``Nothing was more to be desired than that every practicable 
obstacle should be opposed to cabal, intrigue, and corruption. 
These most deadly adversaries of republican government might 
naturally have been expected to make their approaches from more 
than one quarter, but chiefly from the desire of foreign powers 
to gain an improper ascendant in our councils.'' That's 
Alexander Hamilton in the 68th Federalist. He saw this coming.
    The other is a more recent quote from a fellow named 
Vladimir Kvachkov, who's a former GRU officer. And he said: ``A 
new type of war has emerged in which armed warfare has given up 
its decisive place in the achievement of the military and 
political objectives of war to another kind of warfare: 
information warfare.'' And that is exactly what we're talking 
about here today.
    I appreciate the Chair and the Vice Chair giving us the 
context of what we're doing. Their visual demonstrations I 
think were very vivid. And the warfare is the division of our 
society. And it's not only us; it's the entire West.
    We know that the Russians were involved in the French 
election. We know that they were involved in the German 
elections. We are now learning that they're involved in the 
separation of Spain, and my understanding is they've set up 
shop in Scotland, which is talking about an independence vote 
from Great Britain. So this is a sophisticated, worldwide 
strategy that applied here in 2016.
    There is one other piece I'd like to add to what the Chair 
and the Vice Chair did, and that is that it's still happening. 
This is a service of the German Marshall Fund called, 
interestingly, ``Hamilton 68,'' that follows hashtags on a 
daily basis. And I just picked a day in September to show. 
These are the hashtags that are being propagated, not created 
by Russians, but these are 600 Russian websites that are using 
these are the hashtags on these particular days.
    [The material referred to follows:]
    [GRAPHIC] [TIFF OMITTED] T7398.139
    
    Senator King. The interesting thing, Syria's up there; 
clearly, Russia has an interest there. But then we have the 
NFL; and then we have ``boycott the NFL''; then we have ``Stand 
for our anthem''; we have, ``Make America great again''; 
Russia, ``Take a knee.'' In other words, they were tweeting on 
both sides of the NFL dispute in order to exacerbate the 
divisions.
    One witness said to this Committee that their strategy is 
to take a crack in our society and turn it into a chasm. And 
that's exactly what we saw in 2016. My point here is it hasn't 
stopped and it won't stop.
    So we have to figure out what to do about it. And it seems 
to me that there are three possibilities, one of which you can 
make a significant contribution to; the other two, frankly, are 
up to us.
    The first is a technical defense, the kind of thing you've 
already been talking about today: checking identities, 
identifying the source of this kind of information. I want to 
pursue that in a minute.
    The second is we as a society have to understand when we're 
being conned. I spent some time a year ago in Eastern Europe, 
before our election. And the Eastern European politicians, all 
they wanted to talk about was Russian meddling in their 
elections. And I said, ``How do you defend yourself? You can't 
undo the internet. You can't turn off the TV.''
    They said, ``All of our people now get it, that this is 
what the Russians are doing. And when they see one of these 
postings, they say, `Oh, it's just the Russians again.' ''
    We have to develop that level of sophistication so that we 
know when we're being misled. And to me, this is the mentality 
we all apply at the checkout counter of the supermarket and we 
see a tabloid that says a movie star had a two-headed baby. We 
say, ``Oh, that's just a tabloid.'' We need to apply that same 
kind of sensibility to these kinds of fake news, misleading and 
purposeful distortions.
    The third thing that we have to determine, I think, is that 
this country has to have some kind of cyber warfare deterrent 
capacity. Right now, there's no price to be paid for meddling 
in our democracy; and our adversaries have to understand that, 
if they're going to undertake a campaign like this, there will 
be a price to be paid; there will be results; and if they do X, 
we are going to do Y to them. Right now, that doesn't exist, 
and all of what the Russians did last year has basically been a 
free pass. And I think that's a very difficult problem.
    Now, let me ask the technical question. Mr. Stretch, can 
you guys--could you, for example, require a date line on a 
posting that said where it comes from, just like a news story 
says ``Moscow, September 23rd''? Is there some way to identify 
the source of information as it comes across your news feeds?
    Mr. Stretch. Senator, it's a great--it's a great question. 
We do permit users to identify the geographic location of the 
post. We don't require it. There are oftentimes privacy 
considerations that would prevent a user from----
    Senator King. You could require it by country, couldn't 
you? That's----
    Mr. Stretch. There are many uses of our services, Senator, 
where requiring people to designate their physical location 
could be problematic.
    I would make two other points. One is, because of the way 
the internet is architected, your geographic location is--can 
be disguised. That's something we need to work on in order to 
make sure we're not being fooled, because I think your larger 
point is an excellent one. The geography of the location of the 
user, paired with the content they're serving----
    Senator King. It's part of the information.
    Mr. Stretch [continuing]. Is part of the information, and 
we need to do a better job tuning our system to be more 
sensitive to that.
    Senator King. Mr. Walker, you said the people should know 
what they're getting. When we get information, we know it's in 
a--it's in a newspaper, we see the name of the author, we see 
the date line. We're in a new information distribution world 
here, and we need to think about how to apply some of the 
principles that have helped us to assess that information. And 
I hope that you all will continue to develop policies, just as 
the newspaper business did 100 years ago, that help your 
customers to analyze and assess the validity of the data. The 
problem now is we're just taking what comes as it comes.
    I'll end with this: I have a quote on my kitchen wall that 
my wife found, and it says, ``The great problem with quotes on 
the internet is determining whether they're authentic.'' 
Abraham Lincoln.
    [Laughter.]
    Thank you, Mr. Chairman.
    Chairman Burr. Senator Lankford.
    Senator Lankford. Gentlemen, thank you for being here. I 
hope you hear loud and clear from this Committee there are lots 
of questions. You're asking lots of questions. This is not an 
opposition to free speech, though. This is actually a battle to 
try to protect free speech.
    We want to have good American dialogue, and the fear is 
that your platforms are being used by foreign actors that want 
to abuse our free speech. If two Americans have a disagreement, 
let's have at it. Let's walk it through as two Americans. If an 
outsider wants to be able to come do it, we do have a problem 
with that. And we're trying to be able to work through that.
    So we're grateful that you're here and to be able to walk 
through this, and we look forward to cooperation together and 
to be able to figure out how we actually resolve some of these 
extremely complicated issues.
    I do want to be able to push on a little bit this issue 
about the type of ads and the type of content. Mr. Edgett, you 
mentioned from your testimony that in Twitter, of the 131,000 
tweets posted during the time--and I assume that means that 
September to November time period that you were tracking--only 
9 percent of those tweets from those Russia-targeted accounts 
were actually election-related; the others were all social 
engagement, other issues. And I think that's being lost in the 
conversation, that only 9 percent of the tweets were election-
related.
    Now, my question is, for all of your platforms, what are 
you seeing from Russian-related accounts that you're tracking 
now and trying to be able to pull down or identify? What are 
the social issues that are being discussed right now from those 
sites?
    You can go back for the last six months if you want to. But 
give me some examples of the type of social issues that they're 
engaging with. One has already been mentioned by Senator King 
and that's the NFL, either boycott the NFL or take a knee on 
that. Has that been actively pursued on your sites? Give me 
examples of topics here.
    Mr. Stretch. So one example we saw following the election 
was an effort by the accounts we've identified to inflame some 
of the post-election demonstrations we saw. So some of the 
accounts turned to questioning the Electoral College, as an 
example.
    Senator Lankford. Okay. So let me walk through multiple 
examples here. So Electoral College is one of them. Some of 
them, it's been publicly reported that the site was used to try 
to organize events that were election protest events in certain 
cities, that--was sending out messages saying, ``We're all 
going to protest at this spot,'' but obviously it was created 
by a Russian group.
    What else? NFL? Yes or no on that?
    Mr. Edgett. We've seen that activity.
    Senator Lankford. Okay. What else? What other issues?
    Mr. Walker. We've seen more limited use of our services, 
Senator, but among that I would say police shootings and racial 
issues.
    Senator Lankford. Okay. What else?
    Mr. Stretch. Certainly immigration has remained a topic 
throughout.
    Senator Lankford. Okay. Any other issues?
    [No response.]
    This is part of the reason that multiple members--and 
Senator Heinrich mentioned it earlier--we really do want these 
ads to get out in the public space. We think there's great 
value for all of your platforms to be able to say, ``This is 
the type of content that foreign actors are actually trying to 
put out, that are divisive content. When we put it out, that's 
one thing; when you put it out, it's completely different.''
    We think there's a great benefit for you to be able to say, 
when you're aware of things, ``Please note, this is the type of 
issue that's been coming out, that's coming out, and this is 
what it looks like,'' and so people can say, ``That's the type 
of thing I've seen before,'' or they can say, ``I've actually 
liked on that before and didn't have any idea that that was 
Russian-related.'' That'd be helpful.
    Can I ask a question? When were you aware of Russian 
activities on your platform during the election time? During 
the election or before your election? When were you aware that 
entities within the Russian government, whether that be the 
Internet Research Agency or other individuals that you knew of 
that were Russian, either government-related or at least 
policy-related, were involved in election issues on your 
platform? Before or after the election? And all three of you 
can say. Before or after?
    Mr. Stretch. We were aware of Russian state actors active 
on the platform prior to and through the election, separate 
from the Internet Research Agency. And we communicated with law 
enforcement about our concerns at the time. These actors were 
engaged in more traditional cyber threat activity, focusing on 
account compromise, as well as trying to direct attention to 
stolen information that was hosted on other sites.
    Senator Lankford. How far back? Early 2016, 2015?
    Mr. Stretch. We had seen activity as early as 2015.
    Senator Lankford. Okay. Mr. Edgett.
    Mr. Edgett. Twitter also saw activity from the IRA on our 
platform and took large-scale action to take down accounts in 
2015. We generally are aware of intelligence community reports, 
so became aware of the activity in the report that came out in 
January of this year; and then obviously, through the 
retrospective of work, have uncovered what we think is the full 
extent. We're continuing to look and to research that issue.
    Senator Lankford. Okay. Mr. Walker.
    Mr. Walker. We've been looking at cyber espionage issues, 
accounts compromise issues, for many years. It was only after 
the issuance of the report in January that we took a deeper 
dive on the particular kinds of things we're talking about here 
today.
    Senator Lankford. One of the things that we're trying to 
address is getting additional information. Getting the ads I 
mentioned before would be very helpful. Getting the type of 
posts that have been done would be very helpful to the American 
people to be able to see.
    Getting the statistical information would also be helpful. 
The statistics that you've given us are the number of accounts 
that are related to this, but it's not giving us the breadth 
and the depth of those accounts once they're activated. So for 
instance, it's helpful to know that one percent of those 
accounts were Russian-related. But what we're not getting 
statistically is was that one percent the first one percent? 
They actually created it, launched it, and then millions of 
people saw it after that? Are they in the middle, stirring it 
and advancing it? Or were they beginning it?
    That would be helpful to us as well. And I know you have 
that data to be able to see if they started it or were they in 
the middle of broadcasting it. So just giving us the percentage 
of users that shared it doesn't help us. Knowing when in that 
process and what happened after that is what helps us.
    You all have done a lot of work on terrorism, on Islamic 
extremism, on the advance of ISIS. You've done a lot of work on 
child pornography, on human trafficking, on the sales of 
illegal drugs on your sites. We're asking for help on this area 
as well. And this is something that I would hope that we do not 
have to engage legislatively or, if we do, it's the lightest 
possible touch. This is something you have been actively 
engaged in with other topics, and we look forward to 
cooperation in this as well, so that you all are managing that.
    But I do hope in the days ahead we continue to protect a 
platform for free speech, to allow individuals to be able to 
speak their opinions, whether its agreed or disagreed on by 
other Americans, so Americans can engage in that dialogue. 
Thank you.
    Chairman Burr. Senator Manchin.
    Senator Manchin. Thank you, Mr. Chairman.
    Thank you all. RT, the Russian state news organization that 
Federal intelligence officials call the Kremlin's principal 
international propaganda outlet, RT America TV broadcasts 
negative programs, derogatory information about the United 
States. It's essentially information warfare directed against 
the United States. According to the unclassified intelligence 
report released on the 2016 elections, the Kremlin staffs RT 
and closely supervises RT's coverage, recruiting people who can 
convey Russian strategic messaging.
    Additionally, the Kremlin spends $190 million a year on the 
distribution and dissemination of RT programming. Can you all 
answer, do any of your organizations allow RT to purchase space 
or advertise on your accounts?
    Mr. Edgett. We've off-boarded, which means we've banned, 
Russia Today and their related accounts from advertising on the 
Twitter platform.
    Senator Manchin. That's effective as of when?
    Mr. Edgett. That was effective as of a week or so ago.
    Mr. Walker. We have reviewed the RT accounts. As you know, 
RT advertises extensively in newspapers, magazines, airports 
across the United States. We've not found violations of our 
accounts. But we are focused on making sure that there is 
enhanced transparency with regard to government-funded 
broadcasting of all kinds, whether that's RT or Al Jazeera or 
other sources.
    Senator Manchin. Mr. Stretch.
    Mr. Stretch. The same is true for Facebook, Senator.
    Senator Manchin. Well, it says here that RT uses Google's 
YouTube, Facebook, and Twitter as the main distributors of 
their content. So you all have been the main go-to to get their 
propaganda out to the United States and do harm to us.
    Mr. Walker. That may be true online, Senator, but of 
course, RT is covered by--RT channels are included on major 
cable television networks, satellite television networks, hotel 
television choices, et cetera. So it is a problem that goes 
beyond the web.
    Senator Manchin. Let me ask you this question, if I may. 
Kaspersky Labs, their antivirus software, do any of you all 
have it on your personal computers? Your personal computers, 
are you using Kaspersky Labs?
    Mr. Walker. No, sir.
    Senator Manchin. Does your company use Kaspersky?
    Mr. Walker. I do not believe so.
    Senator Manchin. Could you check and find out?
    Mr. Walker. We will--we will follow up.
    Senator Manchin. How about any of the other two?
    Mr. Edgett. Not aware, but we'll follow up with your staff.
    Mr. Stretch. Likewise.
    Senator Manchin. Do you all have any doubts about Russia's 
involvement to interfere in our 2016 presidential election? Do 
you have any doubts at all that Russia intervened, interfered, 
had a profound outcome, basically, on our outcome to our 
election? Do you all have any thoughts on that? Do you have any 
doubts about that?
    Mr. Stretch. We have no doubt that there were attempted 
efforts at interference. It's something we're focused on 
addressing going forward. In terms of whether it had an effect 
on the outcome, that's not something we're in a position to 
judge.
    Senator Manchin. Well, let me ask this: Are you or your 
CEOs concerned about the threat and damage your companies can 
do to the U.S. with your far-reaching power, and that you have 
been identified as the major distributors of fake news? Are you 
concerned about that? Do your CEOs talk about the threats to 
the United States of America, where you're domiciled? Or is it 
basically just a business model that you're worried about?
    Mr. Edgett. We're deeply concerned. This is an issue that 
we talk about constantly. As I said earlier, the first part of 
this year, we pointed our entire engineering product and design 
team on tackling the issues of information quality, abuse of 
our systems and protecting our users.
    Mr. Walker. I'd join that. It's absolutely a very serious 
issue. The North Star of Google is to provide accurate, 
comprehensive, relevant information to people. We don't always 
get that right. But we have tens of thousands of engineers who 
are trying to improve our algorithms, improve the rating 
guidelines we use for individuals, address the problem of 
breaking news, which is a very challenging one to get right 
when there's just not a lot of content out there. So we take 
this very seriously.
    Senator Manchin. You all would agree, then, I guess, with 
the legislation that's been introduced by different members and 
bipartisan Members of this Committee and other committees, that 
you all should be regulated and overseen the same as we do 
other news medias?
    Mr. Stretch, you have any idea, a comment? Are you all 
going to fight back on that, lobby against it, or are you going 
to basically support the legislation that's going to be needed 
to make sure that the American people are getting the facts and 
not fake news?
    Mr. Edgett. Senator, we stand ready to work with the 
Committee on legislation. Any particular pieces of legislation, 
we're certainly happy to talk about.
    Senator Manchin. You've seen the legislation that we've put 
out there. You all three are lawyers. I'm sure you're watching 
it very closely, you know, the legislation that we have. Do you 
agree with the exemption? Would you support a change in law 
that treats all political advertising on the internet the same 
as print and broadcast ads, to require identification of 
sponsors, basically the legislation we're putting forth?
    Mr. Edgett. We're very supportive of the direction of the 
Honest Ads Act, and have had really productive conversations--
--
    Senator Manchin. You all we will be speaking in support of 
these pieces of legislation?
    Mr. Edgett. Yeah, we have some fine-tuning that we'd love 
to talk about. But we, as you saw, put out our own transparency 
center that very much aligns with the information that the 
Honest Ads Act was asking us to provide to you.
    Senator Manchin. According to an Oxford University study 
released in October, Russian trolls are now targeting American 
military personnel and veterans on Twitter and Facebook. 
Allegedly, these trolls are pushing fake news and injecting 
them into veterans groups and to active-duty personnel.
    In fact, one fake Facebook page was highlighted in a Stars 
& Stripes article from October the 18th. The page was called 
``Vietnam Vets of America,'' and it had attracted a follower 
network of nearly 200,000. The real veterans organization, 
Vietnam Veterans of America, called it an impostor page and 
another example of how military and veterans are being targeted 
with disinformation.
    A week after the original article was reported on October 
18th, Facebook reportedly took down the site, ostensibly for 
violating intellectual property of the real veterans service 
organization.
    So I would ask Facebook and Twitter, is the story accurate? 
Did you all know about this? And are you seeing our military 
and our veterans being targeted? And how come it took so long 
to take it down?
    Mr. Stretch. Senator, we're intensely proud of the use of 
Facebook by our military. You asked----
    Senator Manchin. We're not talking about that----
    Mr. Stretch. Right.
    Senator Manchin. We're talking about the people who are 
targeting them.
    Mr. Stretch. Yes. And we are very focused on making sure 
that it remains an authentic experience for them. We receive 
many reports of inauthentic behavior. We try to act on them 
quickly. We're trying to improve our tools to detect it even 
before it's reported to us. I'm not familiar with the 
particular--the particular page you've described.
    Senator Manchin. Stars and Stripes?
    Mr. Stretch. Yes, but I----
    Senator Manchin. I hope you will check into that.
    Mr. Stretch. I will certainly check into it, Senator.
    Senator Manchin. Let me just say this, as my time's 
wrapping up. You can see this is not a Democrat or Republican 
issue. This is an American issue that we're concerned about, 
the security of our Nation.
    We're getting hit from every way you possibly can imagine. 
And you all are the largest, one of the largest distributors of 
news. And there can be no doubt that it has to be authentic and 
true. You cannot allow what's going on against the United 
States of America.
    You are on the front lines with us, and we're doing 
everything we can to support our military, our veterans, all 
the people who put their lives on the line. And what you're 
doing by allowing this fake stuff to come across, this 
misleading, this damaging information, is really threatening 
the security and the sovereignty of our nation.
    I would hope that your CEOs--and I agree with Senator King. 
I wish that your CEOs would be here. They need to answer for 
this. It can't be a business model. It's got to be a security 
issue.
    Thank you.
    Chairman Burr. Senator Cotton.
    Senator Cotton. Thank you, gentlemen, for your appearance 
this morning.
    Mr. Edgett, I want to discuss Twitter's history of 
cooperation with our intelligence community. Last year, in an 
open hearing before this Committee, I asked then-CIA director 
John Brennan about Twitter's decision to prohibit a subsidiary 
called Data Miner from working with our intelligence community. 
Director Brennan stated that he was disappointed in Twitter's 
decision.
    But at the same time that we learned that Twitter was 
refusing to work with the CIA and the rest of the intelligence 
community, we also learned that Twitter was pitching Russia 
Today and Sputnik, propaganda arms of the Kremlin, to sell 
advertisements for profit.
    So, in essence, last year Russia was beginning its covert 
influence campaign against the United States, and Twitter was 
on the side of Russia, as opposed to the national security 
interests of the United States. How can your company justify 
this pattern of behavior to its fellow citizens?
    Mr. Edgett. We work frequently and hard with law 
enforcement all the time. We do have global policies that 
prohibit the use of our data hoses or publicly available data 
around tweets for purposes of surveillance. But we allow law 
enforcement to use Data Miner and Twitter products around news 
alerts, first response technology to see what's going on in an 
area if a 911 call is made and an emergency responder is going 
somewhere. But we do not allow surveillance based on Twitter 
data.
    Senator Cotton. Did Twitter cut off the CIA and the 
intelligence community from Data Miner last year?
    Mr. Edgett. We asked that our policy of surveillance be 
applied consistently to all organizations, and I believe that 
Data Miner has been enforcing that policy.
    Senator Cotton. Has it cut off RT and Sputnik?
    Mr. Edgett. As to Russia Today, when we approached Russia 
Today last year to talk about our advertising products and to 
sell them our advertising services, they were approached as a 
regular media organization like a BBC or an NPR.
    Senator Cotton. Do you consider RT to be a regular media 
organization?
    Mr. Edgett. Obviously not now. Coming out of the DNI report 
earlier this year and the retrospective work that we've done 
most recently, we don't. And that's why we have banned Russia 
Today from advertising on the Twitter platform.
    Senator Cotton. So there's a difference, though, between 
the advertising question, which was improvident, and the use of 
Data Miner. According to a Wall Street Journal report to which 
Director Brennan was responding, Twitter CEO Jack Dorsey vetoed 
the Data Miner-CIA contract at the last minute because he 
objected to the, quote, ``optics'' of continuing to help U.S. 
intelligence agencies.
    That Wall Street Journal report also said, though, that 
customers still getting Data Miner include RT. Is that an 
accurate report? John Brennan had no reason to doubt its 
accuracy.
    Mr. Edgett. I'm--I don't--I don't have the information, but 
will follow up on Russia Today's use of Data Miner's products, 
which is the third party where we have a relationship. I 
believe Mr. Dorsey wanted to make sure that our policies were 
being applied consistently around surveillance.
    Senator Cotton. Do you see an equivalency between the 
Central Intelligence Agency and the Russian intelligence 
services?
    Mr. Edgett. We're not offering our service for surveillance 
to any government.
    Senator Cotton. So you will apply the same policy to our 
intelligence community that you apply to an adversary's 
intelligence services?
    Mr. Edgett. As a global company, we have to apply our 
policies consistently.
    Senator Cotton. This reminds me of the old line from the 
Cold War of one who did not see a distinction between the CIA 
and the KGB on the other hand, because the KGB officer pushed 
an old lady in front of an oncoming bus, the CIA officer pushed 
the old lady out from the path of the oncoming bus, because 
they both go around pushing old ladies.
    I hope that Twitter will reconsider its policies when it's 
dealing with friendly intelligence services in countries like 
the United States and the U.K. as opposed to adversarial 
countries like Russia and China.
    Would Twitter entertain the possibility of once again 
allowing the intelligence community to use Data Miner?
    Mr. Edgett. We do today, for purposes of news alerts and 
first response technology, getting information on certain 
areas. We do not allow anyone--our policy is not to allow 
anyone, for the purposes of user privacy, to use our technology 
to run surveillance.
    Senator Cotton. Okay, let's move on to another hostile 
intelligence service. Other than Vladimir Putin and Russia, I 
can't think of anyone who was more involved in efforts to 
influence our election last year than Julian Assange and 
WikiLeaks. The current director of the CIA, Mike Pompeo, as 
well as this Committee in our annual Intelligence Authorization 
Act, has labeled WikiLeaks a non-state hostile intelligence 
service who aids hostile foreign powers like the Kremlin. Yet, 
to my knowledge, Twitter still allows them to operate 
uninhibited. Is that accurate?
    Mr. Edgett. We have terms of service and rules that apply 
to all users and will apply those consistently and without 
bias. We take action on accounts like WikiLeaks----
    Senator Cotton. Is it bias to side with America over our 
adversaries?
    Mr. Edgett. We're trying to be unbiased around the world. 
We're obviously an American company and care deeply about the 
issues that we're talking about today. But as it relates to 
WikiLeaks or other accounts like it, we make sure that they're 
in compliance with our policies, just like every other account, 
and have and will continue, if we need to take------
    Senator Cotton. So you'll be unbiased towards WikiLeaks and 
Julian Assange, but you'll take down videos of people like 
Marsha Blackburn, a Republican running for the United States 
Senate?
    Mr. Edgett. Marsha Blackburn's video was never removed from 
the Twitter platform. She ran that tweet and that video as an 
advertisement, and we have different standards for our 
advertisements than we do for the organic tweets and content on 
our platform, because we're serving ads to users who haven't 
asked to follow Representative Blackburn or others and we want 
to make sure that that's a positive experience. And so our 
policies have a different standard. And in that case, we had 
users reporting that it was inflammatory and upsetting, and it 
was initially taken down.
    We're making these tough calls all of the time, and in that 
case we reversed the decision and allowed the advertisement to 
continue to run. But we never took down Representative 
Blackburn's tweet or allowed her not to convey that message to 
those who were following her and to engage in the dialogue with 
her.
    Senator Cotton. Mr. Edgett, I know that you're the acting 
general counsel, not the general counsel. And if you were the 
general counsel, these decisions are made at the CEO and the 
board of directors level. But I have to say most American 
citizens would expect American companies to be willing to put 
the interests of our country above, not on par with, our 
adversaries--countries like Russia and China, or non-state 
actors like WikiLeaks, or individuals like Julian Assange.
    As many other Members of this Committee have expressed, I 
think your companies have accomplished amazing things for our 
country and its citizens, and made our lives better in many 
ways. I also support the channels that you've created for free 
speech, especially for some oppressed or persecuted people 
around the world.
    But this kind of attitude I would submit is not acceptable 
to the large majority of Americans, and it's going to be part 
of what would lead to unwise or imprudent regulation, not 
sensible and smart regulation.
    My time's expired.
    Chairman Burr. Senator Harris.
    Senator Harris. Thank you.
    California is home to many of the world's most successful 
technology companies and we're proud of that. And we also know 
that with that great success comes great responsibility. Your 
companies, therefore, have a great responsibility to the 
American public.
    And, as you know, you are the modern town square and the 
modern postmaster. You are the phone company and the Yellow 
Pages. You are the newspaper and the radio broadcaster and the 
television station, and you are the emergency alert system. 
Your decisions fundamentally inform public discourse.
    So our Nation's enemies have used your platforms in a way 
that has been designed to create and disseminate and advertise 
hateful rhetoric, with the intent and the effect of disrupting 
our democracy. And that, of course, is why we're all here.
    I have several questions, but I'd like to start with what I 
think is, frankly, an elephant in the room. And I'm holding up 
the SEC Form 10-Q that has been filled out by each of your 
companies. And your response to this is pretty much the same, 
but I have in front of me that from Facebook. But Twitter and 
Alphabet have the same information.
    There's a section here which requests, ``What are the risks 
related to the business?'' And it reads, ``If we fail to retain 
existing users or add new users, that will be a problem for 
us.'' It goes on to say, ``We generate substantially all of our 
revenue from advertising. The loss of marketers or reduction in 
spending by marketers could seriously harm our business.''
    It goes on to say, ``Our advertising revenue could also be 
adversely affected by a number of other factors, including 
adverse legal developments relating to advertising, including 
legislative and regulatory developments, and developments in 
litigation.''
    So my question to you is about American ads, not the 
Russian ads, American ads that run on your platforms. There are 
legitimate ads that appeared alongside of the Russian placement 
and propaganda pages on Facebook, on Twitter and even on 
YouTube.
    So can you please tell me that, as it relates to those 
advertisements on Facebook, on Twitter or in YouTube, how you 
are addressing that, and in particular, how much money did you 
make off of the legitimate ads that ran alongside the Russia 
propaganda?
    And we can start with Alphabet or with Google, please. And 
that would be the advertisements that ran before your videos on 
YouTube.
    Mr. Walker. Sure. The total amount of advertising we 
discovered across our platforms was $4,700 from the Russian 
sources.
    Senator Harris. That's not my question.
    Mr. Walker. I understand.
    Senator Harris. My question is American advertising or 
legitimate advertising. How much money did you make from 
legitimate advertising that ran alongside the Russia 
propaganda?
    Mr. Walker. A de minimis amount, Senator. I don't have it 
in front of me. We'd be happy to follow up.
    Senator Harris. Okay. What about for Twitter?
    Mr. Edgett. I don't have the data, but I will follow up 
and------
    Senator Harris. Have you not looked into that?
    Mr. Edgett. I believe--are you asking how much advertising 
revenue we made for the period, totally? Or----
    Senator Harris. I'm asking how much advertising revenue did 
you receive from legitimate advertisers that advertised 
alongside or in connection with Russian propaganda?
    Mr. Edgett. We haven't done that analysis, but we'll follow 
up and work on that.
    Senator Harris. Okay. What about Facebook?
    Mr. Stretch. The same is true for Facebook, Senator.
    Senator Harris. You've not done that calculation?
    Mr. Stretch. We've not done that analysis.
    Senator Harris. I find that difficult to understand, 
because it would seem to me that we would figure out how much 
you've profited from Russian propaganda on your platforms. So 
please do follow up with us as soon as possible on that.
    And also, it is critically important that this Committee 
have access to all of the information it needs to understand 
the Russia propaganda in the 2016 election. So will you each 
commit to retaining records, as you are required to do from the 
minimum standard of media rating research, but do that and 
extend it beyond the 11 months that they require and extend it 
through the completion of our investigation into Russia's 
interference in the 2016 election? Will you commit to keeping 
those records during the duration of our investigation?
    Mr. Walker. We will commit--we will keep all relevant 
records to this investigation and provide them to the 
Committee, yes.
    Mr. Edgett. Same goes for Twitter.
    Senator Harris. Thank you.
    Mr. Walker. Yes.
    Senator Harris. And as for all three of you, can you please 
name the senior executive who is responsible in your operation 
for countering state-sponsored information operations? And if 
you do not have one, please indicate that, as well.
    Mr. Walker. It's a challenging question, because we have a 
number of people across different teams, including our cyber 
espionage teams, as well as our trust and safety teams. I would 
say our chief security officer is one such person. Another 
person would be the head of our trust and safety team. Then we 
also have separate teams at YouTube who work in these areas.
    Senator Harris. So I take it you have not designated an 
individual as part of your executive team who's responsible 
specifically for state-influence operations?
    Mr. Walker. I will take responsibility for that, Senator.
    Senator Harris. Okay, I appreciate that.
    Mr. Edgett. That role--there's two people filling that role 
at Twitter. The first is our general counsel. I'm currently our 
acting general counsel, so currently it is me. But also our 
head of our Twitter product, the Twitter product that we all 
use, has taken responsibility for safety, abuse, and 
information quality on the platform. So I feel like that's 
directly related to your question.
    Senator Harris. Okay. But I'd like you each to appreciate 
and everyone to appreciate that this is a very specific issue 
with its own pathology, requiring a great amount of resources, 
because we are talking about state-sponsored activity. This is 
not about an individual conducting this activity and then you 
need to review it.
    So as it relates to state-sponsored information operations, 
I'm requesting that you name whoever is responsible now, but as 
we go forward that you designate in your operation someone at 
the executive level who is responsible specifically for those 
types of operations, understanding that, as we know now, there 
are governments that are willing to put incredible amount of 
resources into manipulating the American public.
    And it is beyond what you might need to review in terms of 
activity on your sites that involves issues of posting 
inappropriate images and things of that nature.
    Mr. Stretch.
    Mr. Stretch. Senator, we have a chief security officer and 
a threat intelligence team that's acutely focused on this 
threat. I will take responsibility for our overall response to 
this threat.
    Senator Harris. And how many of your employees are 
dedicated to addressing state-sponsored operations 
specifically? And if there aren't, please follow up in terms of 
what you're prepared to dedicate to that.
    Mr. Stretch. This is a--it's a harder question, because 
there are so many vectors that we're investing in. I stated 
earlier that we have 10,000 people at Facebook across a number 
of teams who are focused on safety and security generally, and 
we're doubling that number. The number of people who think of 
this as their full-time job is something I'll have to come back 
to you on.
    Senator Harris. Okay, I appreciate that. And for each of 
the companies, we'd appreciate that.
    You can create automated systems that detect foreign 
propaganda. For example, you can determine whether a user is 
active during Moscow business hours, or connects through a VPN, 
or registers with a fishy voice over an IP telephone number. 
And you can feed those signals into a machine that can actually 
create an algorithm that can allow us to indicate or figure out 
if propaganda is actually being pushed through. Have you done 
that as it relates to state-sponsored manipulation of 
elections?
    Mr. Edgett. So our technology is agnostic. We have the 
technology you're talking about, which is an algorithm that 
helps us catch the bad actors based on their pattern and 
behavior, and also connect accounts so that if they start new 
accounts or new networks of accounts, we get those before they 
tweet.
    We want to catch that activity all over Twitter. That, 
having automated accounts, malicious actors on Twitter, is a 
bad experience for our users. So we've been tackling that 
problem for years, and the challenge is as we get better, these 
actors get better. And so it's a constant game of cat and mouse 
and one-upmanship.
    But we are committed every single day to making sure that 
we are removing those actors from our platform.
    Senator Harris. My time is running out, so perhaps we can 
just have quick answers for the remaining folks.
    Mr. Walker. Our answer would be similar.
    Mr. Stretch. The same.
    Senator Harris. Thank you.
    Chairman Burr. Members should be aware that there has been 
a series of votes started about seven minutes ago, two votes. 
We've got two Members left. We're going to move through those 
and wrap up.
    And at this time, I would ask unanimous consent that all 
Members be allowed for seven days to submit questions to our 
witnesses today.
    [No response.]
    Without objection, so ordered.
    Senator Cornyn.
    Senator Cornyn. Thank you for being here.
    It strikes me that the United States is operating at a 
tremendous disadvantage. We are a free and open society. We 
believe in freedom of the press, freedom of expression, and we 
respect the privacy rights of individual--of individuals. Our 
opponents have the opposite view. They view information as a 
tool of warfare, while denying their own citizens access to the 
sort of freedoms--the press, individual thought, and 
expression--that we celebrate in this country. But it seems to 
me that you're at the intersection of this problem, which 
brings us to why you're here today.
    I'll restate the Chairman's comments when he started by 
quoting H.L. Mencken who said, ``For every complex problem, 
there is a clear, simple, and wrong answer.'' And so we need to 
be very careful, I think, in how we deal with this.
    But I do think the public needs to understand how your 
platforms operate. My friend Senator Harris talked about your 
being the town square and the newspaper, the radio station. But 
you are more than just a publisher of information. As you point 
out, most of the income that your companies earn is from data 
mining. You know more about individual Americans than anybody 
else, including their own government. And, of course, you vow 
to protect that information and their privacy at the same time.
    But you use it in order to target ads. Many of us here on 
the panel have used your platforms in political campaigns to 
make sure that our message gets to voters who we think might be 
receptive to our point of view or our platform.
    But I'd like to know as a general matter how do you 
distinguish between somebody, like Senator Feinstein alluded 
to, using social media as a means to recruit and incite lone-
wolf terrorists like the one that plowed down unsuspecting New 
Yorkers yesterday, killing 8 and injuring 11 others, between 
that person and a foreign government using your platforms in 
order to pursue the sort of disinformation and active measures 
that caused so much confusion and polarization in our election?
    And how do you distinguish between the way you treat those 
people on your platforms and how you treat sex traffickers, who 
are targeting young girls and selling them, in essence, for 
sex? How do you distinguish between those three?
    Mr. Walker. Senator, I'd say none of those activities are 
acceptable on our platforms. We have strong policies against 
abuse of platforms to promote hate, hate speech, incitement of 
violence, sex trafficking, human trafficking, et cetera. In 
addition, when you have the added layer of a deceptive actor, 
like a foreign government, trying to push its points of view in 
deceptive ways, without identifying itself, etcetera, that 
separately violates another layer of our policies.
    Mr. Edgett. It goes back to what we talked about earlier. 
We also have the same policies prohibiting all of this. We've 
had the best success so far looking at the behavior and the 
signals we see only on the Twitter side, around how accounts 
are linked and the activity of those accounts, where they're 
logging in from, what they're doing with each other, to stop 
these bad actors before they're even able to tweet.
    A good example of this is on the terrorism side. We are now 
able to automatically detect and take down 95 percent of 
terrorist accounts, 75 percent of those before they tweet for 
the first time. So because we're focusing on behavior and not 
the content, we don't have to wait for them to get their 
messages out. We take those down proactively before they're 
able to speak.
    Mr. Stretch. I would just add, Senator, that of the three 
forms of abuse of the platform you describe--child safety, 
terrorism, and foreign interference in elections--the first two 
we've been working on for quite a long time as an industry and 
with government, and I think we have a proven track record of 
working well together, as an industry and with government, to 
make sure we're taking appropriate steps to address those 
abuses. We have work to do, but we have made progress.
    This threat of foreign interference in the elections is 
something where we do need to up our game, I think, as a 
company and as an industry and working with government. But the 
success we've made or the successes we've had on those first 
two forms of abuse gives me some reason for optimism going 
forward, with respect to our ability to address the foreign 
interference threat.
    Senator Cornyn. For each of you, have the terms of service 
of your company changed at all since the 2015, 2016 actions of 
Russia? Your terms of service?
    Mr. Walker. Yes. We have changed our policies with regard 
to ads that are appropriate, as well as content that's 
available on YouTube.
    Senator Cornyn. Focused on active measures or on other 
matters?
    Mr. Walker. Our terms have to do with terms around 
advertising and terms about what acceptable content is. We are 
simultaneously using additional tools behind the scenes to 
identify material.
    Mr. Edgett. It's very similar at Twitter.
    Mr. Stretch. The same is true for Facebook.
    Senator Cornyn. Why should your companies be treated any 
different than the press from a legal accountability 
standpoint?
    Mr. Edgett. We believe, as a user-generated content 
platform, that the rules around Section 230 provide a platform 
to our users around free speech and expression, and don't 
require us to take a bias on removing content that we fear will 
violate certain rights. And so we work actively to prohibit 
things like violence and terrorism and abuse and harassment. 
And you'll see how we were tackling this problem with urgency 
and seriousness.
    But we believe that, as a user-generated content platform, 
we want to allow the free expression and debate without the 
interference of some of the things you're talking about.
    Senator Cornyn. So you believe you should be treated from a 
legal standpoint differently than a newspaper, cable TV show, 
or a radio show?
    Mr. Edgett. Yes. We're not producing the content; we're 
allowing users to upload. We have a lot of great journalists 
and news organizations who are putting content on our platform 
to share, linking back to their sites. We're offering the 
service to allow that interchange, that information sharing.
    Senator Cornyn. That may well be a distinction that is lost 
on most of us, that you're just a platform for other people to 
express their views, as opposed to being a publisher in your 
own right of those, of those views.
    Finally, let me just ask each of you, please, to continue 
to work with us on the Stop Enabling Sex Traffickers Act of 
2017. As you know, this deals with the Communications Decency 
Act, which has been used as a legal bar to those who have been 
victims of sexual abuse, when they seek to bring the people who 
facilitated that sex trafficking to justice. And I think there 
is a way that, working together, we can come up with something 
that protects the victims, but also maintains the freedom of 
the internet. And I would just encourage each of you to 
continue working with us on that, so we can reach an acceptable 
outcome.
    Thank you, Mr. Chairman.
    Chairman Burr. Senator Reed.
    Senator Reed. Thank you, Mr. Chairman.
    For all the panelists, but starting with Mr. Stretch: When 
you discover a deceptive foreign government presentation on 
your platform, my presumption is from what you've said today 
you'll stop it and take it down. Do you feel an obligation in 
turn to notify those people who have accessed that? And can you 
do that? And shouldn't you do that?
    Mr. Stretch. Senator, we feel an obligation, as you say, 
first to stop the activity; second, to investigate it further, 
to fan out, essentially, from the account to make sure we're 
taking an expansive view of the investigation to try to capture 
any related activity; third, to share threat information with 
the industry and with the government so we can all do a better 
job; and then, fourth, to bring the issue to the attention of, 
in this case, this Committee.
    And the content itself, we've said we're supportive of this 
Committee making it publicly available. The question of 
reaching out to individuals who may have seen it is a much more 
difficult and complex one. But we believe our commitment to 
transparency on this issue generally should address that.
    Senator Reed. Well, potentially you could do that, I 
presume, or you could invest the resources to do it. And as a 
result, frankly, reporting to us about the nefarious activities 
of Russia is not going to immediately translate to the 
thousands or apparently 126 million people who saw the message 
and thought it was legit.
    You have the, I presume, the technical skill to do that. 
Again, apropos Senator Cornyn, you know, you'll see in the 
newspaper, ``We correct the statement we made the other day; it 
was wrong,'' or ``It was deliberately wrong.'' And I think you, 
given the First Amendment, you can live with that, I hope.
    Mr. Stretch. I'm sorry, Senator. Could you repeat the 
question?
    Senator Reed. Well, the question goes back to having an 
obligation under the First Amendment to notify people who you 
know have been deliberately misled by a foreign government--not 
just us, not just law enforcement.
    Mr. Stretch. The technical challenges associated with that 
undertaking are substantial, particularly because much of the 
data work underneath our estimate of the number of people who 
may have been exposed to this relies on data analysis and 
modeling.
    That said, we do believe transparency in this area is 
important, and we are supportive of making as much of this 
information available to the public as the Committee deems 
warranted.
    Senator Reed. Mr. Edgett. I deem it warranted, for whatever 
that's worth.
    Mr. Edgett.
    Mr. Edgett. It's an interesting proposition. We have a team 
dedicated to information quality and how we present information 
on the platform. We see, as an open platform, active dialogue 
around a lot of this false information, fake information right 
away. So when you're seeing the tweets, you're also seeing a 
number of replies to it, showing people where to go, where 
other information is that's accurate. But we will definitely 
take that idea back to explore how we could implement a process 
like that.
    Senator Reed. Mr. Walker, your platform?
    Mr. Walker. We're somewhat differently positioned in that, 
because were not primarily a social network, many of our users 
are not logged in at the time they access content. So it's 
difficult for us to know exactly who has seen what. But we too 
will take it under consideration.
    Senator Reed. Thank you.
    With respect to bots, what I've gleaned from the testimony 
is that you can technically identify a bot system operating on 
your platform; and then, am I right to assume that you will 
shut them down? Any bot system, you will shut down? 
Particularly a government-related bot system?
    Mr. Walker. When we refer to bots, it's primarily a Twitter 
issue. I'm not familiar with bots on Google per se.
    Senator Reed. Well, let's go to Twitter. I'm not the 
technical expert, but YouTube is your subsidiary, I presume.
    Mr. Walker. That's correct, yes.
    Senator Reed. And a lot of the hits on YouTube I presume 
were generated by electronic devices, not people, so that the 
RT program that was attacking Secretary of State Clinton had a 
2 percent over-the-air audience, but a huge number of hits on 
YouTube. And as a result, you know, you are being unwittingly 
or wittingly used by bot systems all of the time.
    Mr. Walker. A bot, an automation, an automated view of 
content, really isn't sort of the core issue. What people tend 
to do on YouTube is try to drive up their perceived view counts 
to make themselves appear to be more popular than they are.
    Senator Reed. And they do it by using electronic networks.
    Mr. Walker. That's certainly right. And so we--yes, 
across--and this is a problem not limited to this context. Many 
people would like to make themselves appear more popular than 
they are. So we have a lot of sophisticated tools that are 
precisely designed to combat that kind of phenomenon, yes.
    Senator Reed. And you will, if you find it, reduce the 
number of hits so it's no longer trending?
    Mr. Walker. Either that or remove people from our services 
for abuses of our terms of service, yes.
    Senator Reed. So you do have to deal with these bot 
networks, and you're dealing with it.
    Mr. Walker. In that sense, that's correct, yes.
    Senator Reed. Okay.
    Bots?
    Mr. Edgett. Similarly, we remove as many bad automated 
accounts as we can find. As I said earlier, some of these bad 
actors are trying to get more sophisticated, and so we're 
staying ahead of that by learning from the automated accounts 
that we're seeing. But we will remove them, and have technology 
to make sure that automated accounts aren't gaming trends, so 
that--the trending hashtags that people are seeing on the 
platform. We will remove their content from search and from the 
timelines, and we will remove them permanently from the system, 
you know, once we are able to investigate.
    Senator Reed. Mr. Stretch, please.
    Mr. Stretch. Senator, we--apologies.
    We prohibit automated account creation, and in doing so 
we're always looking for evidence of accounts being created en 
masse and engaging in either of the behaviors that Mr. Walker 
and Mr. Edgett identified.
    Senator Reed. Thank you.
    Just a quick question, because my time is going. This is a 
daunting effort. We're being attacked, and you have to go on 
the offense, the counter-offense, because the way we've 
structured this system, we have very limited government role in 
your regulation, your activities, et cetera.
    And it comes down ultimately to resources. So I would like 
to follow up officially. But what percent of your revenue are 
you devoting to these activities? I mean proactive activities, 
not, you know, just if someone complains enough we'll take it 
down, but finding bots, thinking about notifying recipients of 
clearly bad information. So what percent do you think, Mr. 
Stretch, right now?
    Mr. Stretch. Senator, I cannot give you a percentage. I can 
tell you that the company is committed to getting this right. 
Our most expensive resources are--resources are people and, as 
I stated earlier, we are doubling the amount of people who will 
be focused on these efforts in the coming year.
    Senator Reed. Can you get us a number, please, in writing?
    Mr. Stretch. Yes.
    Senator Reed. Thank you.
    Mr. Edgett, please.
    Mr. Edgett. We also dedicate a lot of resources to this, 
and I'll follow up with your staff.
    Senator Reed. Mr. Walker, similarly?
    Mr. Walker. Similarly.
    Senator Reed. Thank you.
    Thank you.
    Chairman Burr. The Chair recognizes the Vice Chair.
    Vice Chairman Warner. Thank you, Mr. Chairman. I'll be very 
brief. I know our time is limited.
    One, I want to acknowledge Senator Reed I think raised a 
very good question. And if you were in a medical facility and 
you got exposed to a disease, the medical facility would have 
to tell the folks who were exposed. The comment as well about 
TV and radio making corrections. I do think it's an interesting 
question about what obligation you might have.
    I just have to tell you, I think there has been some 
progress made and I appreciate some of your efforts. I still 
find it very disturbing that it appears that, at least from 
Twitter and Facebook, the sense is that all the Russian active 
measures only originated with one single troll farm in St. 
Petersburg, and it still appears that most of the work that you 
have provided us is derivative of your initial reports.
    I was hoping very much that you would come in today and 
either say that was absolutely all of it or we've identified 
other troll farms or other entities. And I think we've got a 
lot more work to do.
    Thank you so much, Mr. Chairman, for this hearing.
    Chairman Burr. Thank you. Thank you, Senator Warner.
    We've come to the close of this and let me just make a 
couple of statements, if I can. If, for some reason, you need 
antitrust waivers to collaborate with each other, please let us 
know. More importantly, seek for the waiver yourself. This is 
going to take an overall effort to minimize--I'm not going to 
use the term ``eliminate''--to minimize the damage and, more 
importantly, the impact of what Russia is doing, did do, and 
what others will do next year.
    I firmly believe that all three of your companies have a 
new perspective on security and that you've got your varying 
degrees of changes that are good. The challenge is if it fails 
the impact of that failure is significantly different than it 
was in the 2016 elections. I need you to know that up front.
    You all acknowledge that FEC law applies to you, but it 
hasn't been lost on me that all of you asked for an exemption 
from the applicable FEC law. So I sort of am reminded that a 
portion of the content posted on Facebook by foreign actors 
appeared to support one candidate or another. Clearly, it falls 
within the lines of what the FEC law was there for. And, as you 
know, Federal campaign law requires disclosures of sources for 
ads. I have to put that little thing, ``Paid for by.''
    Mr. Edgett, you said your guys are adopting something 
similar. I applaud you on that. And I'm sure, if I asked why 
you didn't apply it, you might have told me before that the FEC 
law didn't apply to you because they had a hung jury at the 
FEC. They didn't decide it did, and they didn't decide it 
didn't. Or maybe it's the excuse that it was small, impractical 
items, therefore they had an exception to the disclosure.
    Let me make it perfectly clear: There is no exception to 
the disclosure as it relates to foreign money used to influence 
U.S. elections. That is a national security issue. It is a 
direct attempt to infiltrate the democracy that we have here. 
And if it's not stated in the law, it should be your company's 
responsibility to take it on head-on.
    So in the future I hope that, if there's a takeaway from 
this, it's that everybody's going to adhere to FEC law. If 
you're a media outlet and you've known that it applied to you; 
if for some reason you ever questioned whether it was foreign 
money, then you probably didn't run the ad. I hope that none of 
your platform is conditional upon you not verifying where that 
money's coming from.
    So the one thing that I didn't hear today, and I hope you 
will take it back, is the effort to certify who's paying for 
these things. I, like others, do not want the government to 
stipulate to any of you what content, especially political 
content, should look like. By the same token, you're the front 
line before anything else has to kick in to certify that 
foreign money is not finding its way into influencing U.S. 
elections.
    I wish I could leave today and believe the only thing we 
have to worry about are elections. The truth is we've spent a 
tremendous amount of time as two old guys trying to figure out 
what bots were and things that I don't use. But I have to 
understand them in a way that as a policymaker makes decisions 
that are best for the future of my kids and my grandchildren, 
to make sure they've got access to your platforms, to make sure 
that they can experience things I never dreamed about 
experiencing and am too old to understand.
    But I recognize the fact that I can't be influenced just 
because I don't understand it, because I've got to match my 
capital with the intellect of all of you at the table and the 
people that work at all your companies. That is our future.
    Don't let nation-states disrupt our future. And you're the 
front line of defense for it. Please take that back to your 
companies.
    This hearing is adjourned.
    [Whereupon, at 12:30 p.m., the hearing was adjourned.]

                         Supplemental Material
[GRAPHICS NOT AVAILABLE IN TIFF FORMAT]

  
                                  [all]