Hearings

Print

Hearing Type: 
Open
Date & Time: 
Wednesday, August 1, 2018 - 9:30am
Location: 
Hart 216

Witnesses

Dr.
Todd
Helmus
Senior Behavioral Scientist
RAND Corporation
Ms.
Renee
DiResta
Director of Research
New Knowledge
Ms.
Laura
Rosenberger
Director
Alliance for Securing Democracy at The German Marshall Fund of the United States
Dr.
Philip
Howard
Director
Oxford Internet Institue

Full Transcript

[Senate Hearing 115-397]
[From the U.S. Government Publishing Office]


                                                        S. Hrg. 115-397

                   OPEN HEARING ON FOREIGN INFLUENCE
               OPERATIONS' USE OF SOCIAL MEDIA PLATFORMS
                     (THIRD PARTY EXPERT WITNESSES)

=======================================================================

                                HEARING

                               BEFORE THE

                    SELECT COMMITTEE ON INTELLIGENCE

                                 OF THE

                          UNITED STATES SENATE

                     ONE HUNDRED FIFTEENTH CONGRESS

                             SECOND SESSION

                               __________

                       WEDNESDAY, AUGUST 1, 2018

                               __________

      Printed for the use of the Select Committee on Intelligence
      
      
 [GRAPHIC NOT AVAILABLE IN TIFF FORMAT]     


        Available via the World Wide Web: http://www.govinfo.gov
                    
                    
                               __________
                               

                    U.S. GOVERNMENT PUBLISHING OFFICE                    
30-957 PDF                  WASHINGTON : 2018                     
          
-----------------------------------------------------------------------------------
For sale by the Superintendent of Documents, U.S. Government Publishing Office, 
http://bookstore.gpo.gov. For more information, contact the GPO Customer Contact Center, 
U.S. Government Publishing Office. Phone 202-512-1800, or 866-512-1800 (toll-free).
E-mail, gpo@custhelp.com. 
                    
                    
                    
                    
                    
                    
                    
                    SELECT COMMITTEE ON INTELLIGENCE

           [Established by S. Res. 400, 94th Cong., 2d Sess.]

                 RICHARD BURR, North Carolina, Chairman
                MARK R. WARNER, Virginia, Vice Chairman

JAMES E. RISCH, Idaho                DIANNE FEINSTEIN, California
MARCO RUBIO, Florida                 RON WYDEN, Oregon
SUSAN COLLINS, Maine                 MARTIN HEINRICH, New Mexico
ROY BLUNT, Missouri                  ANGUS KING, Maine
JAMES LANKFORD, Oklahoma             JOE MANCHIN III, West Virginia
TOM COTTON, Arkansas                 KAMALA HARRIS, California
JOHN CORNYN, Texas
                 MITCH McCONNELL, Kentucky, Ex Officio
                  CHUCK SCHUMER, New York, Ex Officio
                    JOHN McCAIN, Arizona, Ex Officio
                  JACK REED, Rhode Island, Ex Officio
                              ----------                              
                      Chris Joyner, Staff Director
                 Michael Casey, Minority Staff Director
                   Kelsey Stroud Bailey, Chief Clerk
                                
                                
                                
                                CONTENTS

                              ----------                              

                             AUGUST 1, 2018

                           OPENING STATEMENTS

Burr, Hon. Richard, Chairman, a U.S. Senator from North Carolina.     1
Warner, Mark R., Vice Chairman, a U.S. Senator from Virginia.....     3

                               WITNESSES

Helmus, Dr. Todd, Senior Behavioral Scientist, The Rand 
  Corporation....................................................     5
    Prepared statement...........................................     7
DiResta, Renee, Director of Research for New Knowledge...........    16
    Prepared statement...........................................    19
Kelly, John, CEO and Founder of Graphika.........................    25
    Prepared statement...........................................    27
Rosenberger, Laura, Director, Alliance for Securing Democracy, 
  German Marshall Fund of the United States......................    30
    Prepared statement...........................................    32
Howard, Philip, Director of the Oxford Internet Institute........    89
    Prepared statement...........................................    91

                         SUPPLEMENTAL MATERIAL

Responses to Questions for the Record by:
    Todd Helmus..................................................   134
    Renee DiResta................................................   142
    John Kelly...................................................   148
    Laura Rosenberger............................................   150
    Philip Howard................................................   159
Charts introduced by members.....................................   163

 
                   OPEN HEARING ON FOREIGN INFLUENCE
                    OPERATIONS' USE OF SOCIAL MEDIA PLATFORMS 
                    (THIRD PARTY EXPERT WITNESSES)

                              ----------                              


                       WEDNESDAY, AUGUST 1, 2018

                                       U.S. Senate,
                          Select Committee on Intelligence,
                                                    Washington, DC.
    The Committee met, pursuant to notice, at 9:32 a.m. in Room 
SH-216, Hart Senate Office Building, Hon. Richard Burr 
(Chairman of the Committee) presiding.
    Present: Senators Burr, Warner, Risch, Collins, Blunt, 
Lankford, Cotton, Cornyn, Feinstein, Wyden, Heinrich, King, 
Manchin, and Harris.

   OPENING STATEMENT OF HON. RICHARD BURR, CHAIRMAN, A U.S. 
                  SENATOR FROM NORTH CAROLINA

    Chairman Burr. I'd like to call the hearing to order.
    I'd like to welcome our witnesses today: Dr. Todd Helmus, 
Senior Behavioral Scientist at the RAND Corporation; Renee 
DiResta, Director of Research at New Knowledge; John Kelly, CEO 
and founder of Graphika; Laura Rosenberger, Director of the 
Alliance for Securing Democracy at the German Marshall Fund; 
and Dr. Phil Howard, Director of the Oxford Internet Institute.
    Welcome to all of you. I thank you for being here today and 
for your willingness to share your expertise and insights with 
this Committee and, more importantly, with the American people.
    We're here to discuss a threat to the Nation that this 
Committee takes every bit as seriously as terrorism, weapons of 
mass destruction, espionage and regional instability. Today 
we're talking about how social media platforms have enabled 
foreign influence operations against the United States.
    Every member of this Committee and the American people 
understand what an attack on the integrity of our electoral 
process means. Election interference from abroad represents an 
intolerable assault on the democratic foundation this republic 
was built on.
    The Committee, in a bipartisan fashion, has addressed this 
issue head on. In May, we released the initial findings of our 
investigation into Russia's targeting of election 
infrastructure during the 2016 election.
    Today's hearing is an extension of that effort. But in some 
ways it highlights something far more sinister, the use of our 
own rights and freedoms to weaken our country from within. It's 
also important that the American people know that these 
activities neither began nor ended with the 2016 election. As 
you can see on the one graph on display to my left, your right, 
the Kremlin began testing this capability on their domestic 
population several years ago, before using it against their 
foes in the Near Abroad and on the United States and Western 
democracies.
    Even today, almost two years after the 2016 election, 
foreign actors continue an aggressive and pervasive influence 
campaign against the United States of America. Nothing 
underscores that fact more than yesterday's announcement by 
Facebook that they've identified over 30 new accounts that are 
not only causing chaos in the virtual domain, but also creating 
events on our streets with real Americans unknowingly 
participating.
    These cyber actors are using social media platforms to 
spread disinformation, provoke societal conflict and undermine 
public faith in democratic institutions. There does not seem to 
be much debate about that.
    I think it's also the case that social media isn't going 
anywhere anytime soon. It's part of how we exchange ideas, we 
stay connected, it binds us as a community, it gives voice to 
those that are voiceless. Social media is the modern public 
forum, and it's being used to divide us.
    This was never about elections. It is about the integrity 
of our society.
    So how do you keep the good while getting rid of the bad? 
That's the fundamental question in front of this Committee and 
in front of the American people. And it's a complex problem 
that intertwines First Amendment freedoms with corporate 
responsibility, government regulation and the right of 
innovators to prosper from their own work.
    Sixty percent of the U.S. population uses Facebook. A 
foreign power using the platform to influence how Americans 
see/think about one another is as much a public policy issue as 
it is a national security concern.
    Crafting an elegant policy solution that's effective but 
not overly burdensome demands good faith and partnership 
between social media companies and this Committee. We hope to 
hear from those innovators in September, because you can't 
solve a problem like this by imposing a solution from 3,000 
miles away. This requires a thoughtful and informed public 
policy debate and this Committee is uniquely positioned to 
foster that debate.
    Last November, when we first welcomed the social media 
companies in an open hearing, I stressed then what this debate 
is and is not about. This isn't about relitigating the 2016 
U.S. presidential elections. This isn't about who won or who 
lost. This is about national security. This is about corporate 
responsibility. And this is about the deliberate and 
multifaceted manipulation of the American people by agents of a 
foreign hostile government.
    I thank you again for being here, for the work that you've 
done. Your analytic and technical expertise is indispensable to 
us getting this right. We cannot possibly formulate the right 
solution without first knowing the extent of the problem.
    I'm hopeful this morning that as you offer your insights 
and your findings, that you'll also share your recommendations. 
We can't afford ineffective half-measures, let alone nothing at 
all.
    While it's shocking to think that foreign actors used 
social networking and communication mediums that are so central 
to our lives in an effort to interfere with the core of our 
democracy, what is even more troubling is that it's still 
happening today. Nothing less than the integrity of our 
democratic institutions, processes and ideals is at stake.
    With that, I turn to the Vice Chairman.

OPENING STATEMENT OF HON. MARK R. WARNER, VICE CHAIRMAN, A U.S. 
                     SENATOR FROM VIRGINIA

    Vice Chairman Warner. Thank you, Mr. Chairman, and I also 
want to welcome our witnesses today.
    This Committee has invested a significant amount of time, 
focus, and energy, both in public and behind closed doors, in 
uncovering and exposing Russian information warfare in our own 
backyard.
    It is clear that our efforts have increased Americans' 
understanding of what the Russians did in 2016 and how they 
sought to attack us through the use of social media. It was 
pressure brought by this Committee that led Facebook, Twitter 
and YouTube to uncover malicious activity by the Russian-backed 
Internet Research Agency. These revelations eventually resulted 
in the indictments of 13 Russian individuals and three Russian 
companies by the Special Counsel's Office in February of this 
year.
    Social media oversight has not typically been a function of 
our Committee and, for that matter, any Committee. I have no 
problem acknowledging that the terminology of this world--bots, 
spam, click bait, API, trolls--does not always come naturally 
to all of us. But thanks to bipartisan determination to 
understand what happened in 2016 and a commitment to stopping 
it from happening again, we have been able to accomplish a lot. 
We have helped reveal the Russian playbook, we have raised 
public awareness regarding the threat, and we have succeeded, 
however incremental, in pressuring each of these companies to 
take steps to address the problems on their platforms.
    That's the good news. The bad news is that we've got a lot 
more work to do. Twenty-one months after the 2016 election and 
only 3 months before the 2018 elections, Russian-backed 
operatives continue to infiltrate and manipulate social media 
to hijack the national conversation and set Americans against 
each other. They were doing it in 2016; they are still doing it 
today.
    That was made just evident yesterday, as the Chairman 
noted, when Facebook announced the takedown of 32 new pages and 
accounts that had connections to Russian-backed operations, and 
those accounts had hundreds of thousands of followers.
    In our previous hearings on Russian disinformation, we 
outlined the Russian playbook in the 2016 elections. We 
discussed how Russian operatives set up thousands of fake and 
automated accounts on Facebook, Instagram, Twitter, YouTube and 
others, in order to build networks of hundreds of thousands of 
real Americans. These networks pushed an array of 
misinformation, including stolen e-mails, state-led propaganda, 
fake news and divisive content, onto the newsfeeds of as many 
potentially receptive Americans as they could. And you will 
note out here today from our experts that they were extremely 
successful in that effort.
    These active measures have two things in common: first, 
they're effective; and second, they're cheap. For just pennies 
on the dollar, they can wreak havoc in our society and in our 
elections.
    And I'm concerned that, even after 18 months of study, we 
are still only scratching the surface when it comes to Russia's 
information warfare. Much of the initial focus was on paid 
advertisements, but it quickly became clear that these ads 
represented a tiny percentage of the IRA's activity compared to 
the hundreds of thousands of free Facebook and Instagram posts, 
pages and groups, and millions of tweets from IRA-backed 
accounts.
    Today, it is becoming clearer that IRA activity represents 
just a small fraction of the total Russian effort on social 
media. In reality, the IRA operatives were just the incompetent 
ones who made it easy to get caught. Who else is still out 
there actively attacking us? Are there other troll farms? What 
about the actual Russian intelligence services? I hope we'll 
hear from the experts today how much further out they think 
this Russian disinformation effort goes.
    I'm also concerned that the United States government is not 
well-positioned to detect, track or counter these types of 
influence operations on social media. These types of asymmetric 
attacks--which include foreign operatives appearing to be 
Americans, engaging in online public discourse--almost by 
design slipped between the seams of our free speech guarantees 
and our legal authorities and responsibilities.
    Again, I hope our witnesses will recommend ideas for better 
tackling this problem while also protecting our constitutional 
rights as Americans.
    All the evidence this Committee has seen to date suggests 
that the platform companies, namely, Facebook, Instagram, 
Twitter, Google and YouTube, still have a lot of work to do. 
Now, before I went into politics I spent more than 20 years in 
the tech business and I have tremendous respect for these 
companies and what they represent. And when they are at their 
best, they are a symbol of what this country does best: 
innovation, job creation, changing the world.
    I've been hard on them, though, that's true. But it's 
because I know they can do better to protect our democracy. 
They have the creativity, expertise, resources, and 
technological capability to get ahead of these malicious 
actors.
    That's why, as the Chairman mentioned, we'll be hosting 
senior executives from Facebook, Twitter, and, yes, Google, for 
a hearing on September 5th to hear the plans they have in 
place, to press them to do more, and to work together to 
address this challenge.
    That's because it's only going to get harder. As digital 
targeting continues to improve, and as new advances in 
technology and artificial intelligence--one that I'm 
particularly concerned on, like deep fakes--continue to spread, 
the magnitude of the challenge will only grow.
    I know today we'll focus on what happened in 2016 and what 
is happening now, but Russian active measures have revealed a 
dark underbelly of the social media ecosystem. These same tools 
that spread misinformation can negatively affect other aspects 
of our lives.
    I think we need to start pushing ourselves beyond just 
recognizing the problems and start to press actual policy ideas 
forward. I'm interested in hearing some of those policy options 
that might help us address broader challenges posed by the 
growth and dominance of a few social media companies.
    For example, does a user have the right to know if they are 
interacting with a person or a bot online? Do companies have a 
responsibility to ensure more transparency of how they collect, 
use, and secure user data? Do users have enough control over 
their own personal data?
    I hope, as a panel of experts here, you can help this 
Committee to lead and to begin to shape a bipartisan 
responsibility to this ongoing, as the Chairman has indicated, 
national security threat.
    Thank you, Mr. Chairman.
    Chairman Burr. I thank the Vice Chairman.
    Before I move to the testimony from our witnesses, some 
Committee housekeeping. After testimony, members will be 
recognized for five minutes by seniority, and I will hold that 
to five minutes today.
    We have five votes that are scheduled for 11 a.m. I'll make 
sure that all members today are able to ask these witnesses 
their questions. I would ask members that, when you need to 
leave to vote, would you be expeditious in coming back if 
you're in the queue to ask questions, and the Chair will work 
with each one of you to let you know where we think you'll be 
in the sequence.
    The Chair will announce he's going to miss the first two 
votes to stay here and keep the continuity of the hearing going 
so that we can get through as many members as we possibly can.
    With that, Dr. Helmus, I'll recognize you and we'll go from 
your right to left from there on. Dr. Helmus, the floor is 
yours.

 STATEMENT OF TODD HELMUS, Ph.D., SENIOR BEHAVIORAL SCIENTIST, 
                        RAND CORPORATION

    Dr. Helmus. Thank you, Chairman.
    Good morning, Chairman Burr, Vice Chairman Warner and 
distinguished members of the Committee. Thank you for the 
invitation to testify at this important hearing.
    Russia is engaged in a worldwide propaganda campaign. One 
particular focus for this campaign is in Russia's own backyard, 
in the former Soviet states of Eastern Europe. In addition to a 
military and propaganda war in Ukraine, Russia is disseminating 
propaganda to Russian speakers in the Baltics and other nearby 
states.
    Their goal principally is to drive a wedge between these 
Russian speakers and their host nations, the North Atlantic 
Treaty Organization, and the European Union. To do this, Russia 
uses--Russia, of course, uses bot and troll social media 
accounts. They also synchronize such tools with their state-
funded television network, their online news portals, and an 
army of regional proxies that some call ``useful idiots.''
    The RAND study I will talk to you about today sought to 
better understand the nature and effectiveness of Russian--of 
pro-Russia outreach on social media. By focusing on the region 
that includes Estonia, Lithuania, Latvia, Ukraine, Moldova, 
Belarus, our research team sought to help advance NATO's 
defense of the Baltic states and shed light on how to combat 
this issue around the globe. My written testimony highlights 
the analytic methods and key findings from this--from our 
report, but for today's testimony, I'll focus on our five key 
recommendations.
    First, there's a need to further develop analytic methods 
to track and target Russian propaganda efforts. To take any 
action against Russian social media operations, it is critical 
to identify Russian bot and troll accounts and track their 
activity in real time. This will require continued analytic 
advancements so that computers can distinguish between 
authentic social media chatter and the adversarial information 
campaigns that are to come.
    Second, it is important to highlight and tag Russian 
propaganda. The approach by international organizations 
involves frequently websites or e-mail alerts which reach only 
fellow activists or members of the policy community. Instead, 
the research team argues that it is important to highlight 
Russian propaganda in ways that are much faster and target at-
risk audiences.
    One example is Google ads could potentially help improve 
the speed and targeting of counter-messaging. The approach uses 
videos and other content embedded in Google search results to 
educate people who search for Russian-born fake news on Google.
    Third, expand and improve access to local and original 
content. One challenge, particularly in the Baltics, is that 
Moscow-controlled media, especially TV, is a dominant source of 
information for many Russian speakers in the region. Policies 
should not so much counter the Russian narrative as to displace 
it with more entertaining and accurate content. The team argues 
for training Russian language journalists, increasing access to 
Russian language television programming such as Current Time, 
and highlighting the authentic voice of local influencers.
    Fourth, the U.S., NATO and the EU must do a better job of 
telling their story. They should, for example, offer a 
compelling argument for Russian-speaking populations to align 
with the West or individual nation-states to which they belong. 
NATO should also more effectively communicate the purpose and 
intent of its infantry battalions now stationed in the Baltics.
    Finally, there is a need to build resilience in target 
populations. This will include long-term effort to implement 
media literacy training and integrate such training into 
classrooms. A public information campaign that can immediately 
convey the concepts of media literacy and the risk of Russian 
propaganda may also be necessary.
    Thank you once again for inviting me, and I look forward to 
taking your questions.
    [The prepared statement of Dr. Helmus follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Chairman Burr. Thank you, Dr. Helmus.
    Ms. DiResta.

STATEMENT OF RENEE DiRESTA, DIRECTOR OF RESEARCH, NEW KNOWLEDGE

    Ms. DiResta. Thank you, Chairman, Vice Chairman and members 
of the Committee, for giving me the opportunity to address this 
body today. I'm Renee DiResta, Director of Research at New 
Knowledge, and I study computation propaganda.
    Disinformation, misinformation and social media hoaxes have 
evolved from a nuisance into a high-stakes information war. Our 
frameworks for dealing with them have not evolved. We discuss 
counter-messaging, treating this as a problem of false stories 
rather than as an attack on our information ecosystem.
    We're in the midst of an arms race, in which responsibility 
for the integrity of public discourse is largely in the hands 
of private social platforms, and determined adversaries 
continually find new ways to manipulate features and circumvent 
security measures. Computational propaganda and disinformation 
is not about arbitrating truth, nor is it a question of free 
speech. It's information warfare, it's a cybersecurity issue, 
and it must be addressed through collaboration between 
governments responsible for the safety of their citizens and 
private industry responsible for the integrity of their 
platforms.
    Malign narratives have existed for a very long time, but 
today's influence operations are materially different because 
the propaganda is shared by friends on popular social 
platforms. It's efficiently amplified by algorithms, so 
campaigns achieve unprecedented scale. Adversaries leverage the 
entire information ecosystem to manufacture the appearance of 
popular consensus. Content is created, tested and hosted on 
platforms such as YouTube, Reddit and Pinterest; it's pushed to 
Twitter and Facebook with their standing audiences in the 
hundreds of millions, and it's targeted at the most receptive.
    Trending algorithms are gamed to make content go viral. 
This often has the added benefit of mainstream media coverage 
on traditional channels, including television. And if an 
operation is successful and the content gets wide distribution, 
recommendation and search engines will continue to serve it up.
    We're here because the Internet Research Agency employed 
this playbook. Their operation began around 2013, continued 
throughout the 2016 election, and even increased on some 
platforms, such as Instagram and Twitter, in 2017. The 
operation reached hundreds of millions of users across 
Facebook, Twitter, Vine, YouTube, G+, Reddit, Tumblr, and 
Medium. Websites were created to push content about everything 
from social issues to concerns about war, the environment and 
GMOs.
    Twitter accounts masqueraded as local news stations, 
WhiteHouse.gov petitions were co-opted, Facebook events were 
promoted, and activists were contacted personally via Messenger 
to take the operation to the streets. Twitter accounts and 
Facebook accounts associated with the IRA remain active today.
    The focus of the IRA campaign was to exploit social and 
especially racial tension. Despite YouTube's claim that the 
content found on its platform was not targeted to any 
particular sector of the U.S. population, the majority was 
related to issues of importance to the black community, 
particularly officer-involved shootings. Hundreds of thousands 
of Americans liked Facebook pages with names like Blacktivist, 
Heart of Texas and Stop All Invaders.
    The amount of explicitly political content that mentioned 
the candidates in 2016 was small, but unified in its negativity 
towards the candidacy of Secretary Clinton. In content that 
targeted the left, this included messages aimed at depressing 
the turnout, particularly among black voters, or painting 
Secretary Clinton in a negative light compared to Jill Stein or 
Bernie Sanders.
    Only the social networks that hosted this campaign are 
currently in a position to gauge its impact.
    The IRA was not the only adversary to target American 
citizens online. The co-opting of social networks reached 
mainstream awareness in 2014, as ISIS established a virtual 
caliphate across all social platforms.
    The debate about what to do about that made it obvious that 
no one was in charge. That confusion continues even as the 
threat expands. The Wall Street Journal recently revealed that 
a private intelligence company, Psy-Group, marketed their 
ability to conduct similar types of influence operations to 
impact the 2016 election.
    Social platforms have begun to take steps to reduce the 
spread of disinformation and deserve credit for doing that. 
These steps, several of which were inspired by prior hearings 
in this chamber, are a good start, but as platform, tactics and 
protections change, determined adversaries will develop new 
tactics.
    We should anticipate an increase in the misuse of less 
resourced social platforms. We should anticipate an increase in 
the use of peer-to-peer encrypted messaging services. Future 
campaigns will likely be compounded by the use of witting or 
unwitting persons through whom state actors will filter their 
propaganda. We anticipate the incorporation of new 
technologies, such as video and audio produced by AI, to 
supplement these operations, making it increasingly difficult 
for people to trust what they see.
    This problem is one of the defining threats of our 
generation. Influence operations exploit divisions in our 
society using vulnerabilities in our information ecosystem. 
They take advantage of our commitment to freedom of speech and 
the free flow of ideas. The social media platforms cannot and 
should not be the sole defenders of democracy and public 
discourse.
    So, we recommend immediate action to identify and eliminate 
maligned influence campaigns and to educate the public in 
preparation for the 2018 elections. We recommend an updated 
global IO doctrine, including a clear delegation of 
responsibility within the U.S. government. We believe that 
private tech platforms must be held accountable to ensure that 
they're doing their utmost to mitigate the problem in our 
privately owned public squares, and oversight is key.
    Finally, we need structures and cooperation, information-
sharing between the public and private sectors. Formal 
partnerships between security companies, researchers and the 
government will be essential to defending our values, our 
democracy and our society.
    In closing, thank you for the opportunity to participate in 
this conversation.
    [The prepared statement of Ms. DiResta follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Chairman Burr. Thank you, Ms. DiResta.
    Dr. Kelly.

  STATEMENT OF JOHN W. KELLY, Ph.D., FOUNDER AND CEO, GRAPHIKA

    Dr. Kelly. Chairman Burr, Vice Chairman Warner, members of 
the Committee: Thank you for this opportunity to appear before 
you today to discuss the weaponization of our social media 
platforms and the resulting harm to our democracy.
    The data now available make it clear that Russian efforts 
are not directed against one election, one party, or even one 
country. We are facing a sustained campaign of organized 
manipulation, a coordinated attack on the trust we place in our 
institutions and in our media, both social and traditional.
    These attacks are sophisticated and complex, and the 
Committee's bipartisan work to untangle and expose them sets a 
great example for the country.
    I am a social scientist and the CEO of a marketing 
analytics firm that develops advanced techniques for 
understanding the flow of information online. My experience 
with Russian online communities began 10 years ago when I 
helped lead a research effort at Harvard's Berkman Klein Center 
for Internet & Society. In this work, we observed Russia's own 
online political discussion evolve from a vigorously free and 
open forum with a wide variety of organic voices and viewpoints 
to a network rife with automated accounts and organized pro-
government trolling. In short, for the past several years the 
Russian government has been doing to us what they first did at 
home and in Eastern Europe a decade ago.
    We know this because of indispensable work by a wide range 
of investigative journalists, academic researchers, NGOs, 
grassroots organizations, often conducted at great personal 
risk. For more than a decade, these groups have documented the 
playbook used by the Russian government to spread chaos and 
discord online. These techniques include crafting fictitious 
online personas to infiltrate communities, infiltrating radical 
political communities on both sides to enhance their mutual 
distrust, targeting both sides of a country's most divisive 
issues, mixing pop culture references and radical political 
discourse to influence young minds, using bots and trolls for 
inorganic amplification, launching cyberattacks in conjunction 
with information operations.
    Again, each one of these features of the Russian 
government's attack against the American public was first 
tested and deployed against their own people, and then refined 
to target their chosen enemies abroad.
    Thanks to the great work of this Committee and to the 
cooperation of social media platforms, data documenting the 
Internet Research Agency's U.S.-focused effort in 2016 has now 
been released to the public. Many dissertations will be written 
on this data, but today I want to highlight just three points.
    First, Russian manipulation did not stop in 2016. After 
Election Day, the Russian government stepped on the gas. 
Accounts operated by the IRA troll farm became more active 
after the election, confirming again that the assault on our 
democratic process is much bigger than the attack on a single 
election.
    Second, they are targeting both sides of our political 
spectrum simultaneously, both before the 2016 election and 
right now. We see from the IRA data how the same Russian 
organization will use sophisticated false personas and 
automated amplification on the left and the right in an attempt 
to exploit an already divided political landscape.
    Our current landscape is particularly vulnerable to these 
sorts of attacks. In our estimate, today the automated accounts 
at the far left and the far right extremes of the American 
political spectrum produce as many as 25 to 30 times the number 
of messages per day on average as genuine political accounts 
across the mainstream. The extremes are screaming while the 
majority whispers.
    Third, American media is also being targeted. The IRA 
persona ``Jenna Abrams,'' which had accounts on multiple 
platforms, was cited by over 40 U.S. journalists before being 
unmasked. The Russian activity seeks to turn the normal 
differences of opinion among Americans into headlines about 
unbridgeable political divisions. American journalism has a 
responsibility to harden itself to these manipulations.
    The platform's proactive transparency in these matters will 
be critical in keeping us ahead of the new efforts and tactics 
and to informing public debate on how to strengthen our 
democracy in the face of these threats. There are significant 
challenges ahead of us, and, unfortunately, knowing the other 
team's playbook does not mean you are going to win the game.
    The released data allow us to understand what the IRA did 
in retrospect. Detecting these efforts before they have already 
had their intended effect and agreeing on how to address them 
remains a formidable challenge.
    On the technological front, our field is making progress in 
discerning technical markers that distinguish true grassroots 
movements from fabricated campaigns. And research is yielding 
methods for detecting manipulations before they gain momentum. 
It is equally important to keep our values front and center in 
this work, notably our dedication to freedom of expression and 
to protecting user privacy.
    It will take skilled women and men professionally dedicated 
to this task and an investment in the development of tools and 
methods to first catch up and then stay ahead in our race to 
defend America's cyber social fabric from a new form of 21st-
century warfare.
    Civil society or media institutions in the technology 
sector can only do so much in the face of it. The 
responsibility also lies with government to ensure that any 
state actor eager to manipulate and harass faces consequences 
for their actions. It is not just bots that are attacking us 
and it's not just algorithms that must protect us.
    The efforts of this Committee represent a tremendous step 
forward in what will undoubtedly be a long and challenging 
process, and I commend its leadership, dedication, thoroughness 
and bipartisan spirit.
    Thank you again for the opportunity to participate today.
    [The prepared statement of Dr. Kelly follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Chairman Burr. Thank you, Dr. Kelly.
    Ms. Rosenberger.

STATEMENT OF LAURA ROSENBERGER, DIRECTOR, ALLIANCE FOR SECURING 
   DEMOCRACY AT THE GERMAN MARSHALL FUND OF THE UNITED STATES

    Ms. Rosenberger. Thank you, Chairman Burr, Vice Chairman 
Warner and distinguished members of the Committee. I submitted 
my full statement for the record, but let me highlight a few 
key points on the national security context of these activities 
and steps we need to take to address them.
    The health and strength of our democracy depends on 
Americans' ability to engage freely in political speech, to 
hold vibrant debates free from manipulation, and to obtain 
reliable information about the issues of the day.
    I come at this issue as a national security professional 
who has watched social media and online platforms be weaponized 
to attack these foundations of our democracy. I watched from 
inside the National Security Council when Russia test-drove 
these approaches in Ukraine and as our government struggled to 
understand them and respond. And I watched from the campaign 
trail in 2016 as our government was surprised that these tools 
were used against American democracy.
    The 9/11 Commission characterized the failures that 
preceded that attack as a failure of imagination. I believe the 
failure to detect and disrupt the Russian government's 
weaponization of online platforms to be a similar failure to 
imagine, not just by the government but also by those who ought 
to understand these tools best, their creators.
    Thanks in part to the bipartisan work of this Committee, we 
now know that Russian government-linked actors used a range of 
means to manipulate the online information space, using nearly 
every social media and online platform to amplify extreme 
content and promote polarization, manipulate search results, 
encourage action off-line, undermine faith in institutions, 
insinuate themselves to target audiences in order to influence 
public debates on geopolitics, and spread hacked information.
    And, it's not just the Internet Research Agency. We know 
Russian military intelligence officers used fake social media 
personas and websites, and the United States is not the only 
target.
    The Chinese government has also begun to use social media 
to manipulate conversation and public opinion outside of its 
borders. Our authoritarian adversaries are using these 
platforms because controlling the information space is a 
powerful means to undermine democratic institutions and 
alliances and advance their geopolitical goals. But meaningful 
actions to close off these vulnerabilities by both government 
and the private sector are lacking, and as we focus on the past 
we are missing what still is happening and what will happen 
again. What may have once been a failure to imagine is now a 
failure to act.
    Fundamentally, this is not a content problem. This is a 
deliberate manipulation of the information space by actors with 
malicious intent engaging in deceptive behavior. Transparency 
and exposure of manipulation is critical to reducing its 
effectiveness and deterring it, but tech companies have 
remained defensive and reluctant to share information. Their 
focus cannot be on public relations campaigns; it needs to be 
on detailing the nefarious activities these companies are 
seeing and curtailing it. Facebook's announcement yesterday is 
what we need more of.
    Transparency is also critical for accountability, and 
outside researchers need greater access to data in a manner 
that protects users' privacy. Users also need more context 
about the origin of information and why they see it, including 
disclosure of automated accounts while protecting anonymity.
    Identifying malicious actors and their patterns of activity 
requires new mechanisms for sharing data, both between the 
public and private sectors and among technology companies. 
Massive efforts along these lines are welcome, but need to be 
streamlined and institutionalized and protect privacy and 
speech.
    We also need to identify threats in new technology before 
they are exploited. AI presents new tools to both combat the 
problem as well as new ways to make it worse, such as deep 
fakes. Government and tech companies need to close off 
vulnerabilities that are being exploited, including by 
providing a legal framework such as the Honest Ads Act that 
applies the same standards to political ads online that apply 
off-line.
    Manipulation of social media is one part of a larger 
strategy to weaken our democracy. My bipartisan program 
recently released a policy blueprint for countering 
authoritarian interference in democracies endorsed by a 
bipartisan and trans-Atlantic group of former national security 
officials. Our recommendations include sending clear deterrent 
warnings to foreign actors about the consequences for such 
activity and identifying our own asymmetric advantages.
    Government also needs to expose foreign interference 
publicly, and legislating reporting requirements for the 
Executive Branch would ensure that politics are not a 
consideration.
    We also need to harden our electoral infrastructures 
through measures like the Secure Elections Act, as cyber 
attacks remain a core part of Moscow's arsenal. More broadly, 
the government needs a unified and integrated approach, 
including through a counter-foreign-interference coordinator at 
the National Security Council and a National Hybrid Threat 
Center.
    Finally, this is a transnational challenge and it is 
essential that we work more closely with allies and partners to 
share information about threats and collaborate on responses.
    Distinguished members, there are steps that we can take 
today to make our democracy more secure. We need to come 
together across party lines and between the public and private 
sector to address this challenge. Putin's strategy is to divide 
Americans from one another in order to weaken us as a country. 
In the face of this threat, standing together as Americans has 
never been more important.
    Thank you.
    [The prepared statement of Ms. Rosenberger follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Chairman Burr. Ms. Rosenberger, thank you.
    Dr. Howard.

 STATEMENT OF PHILIP HOWARD, Ph.D., DIRECTOR, OXFORD INTERNET 
                           INSTITUTE

    Dr. Howard. Thank you, Chairman Burr and Vice Chairman 
Warner, for the opportunity to testify on foreign influence 
operations and their use of social media platforms.
    My name is Phil Howard. I'm a professor at Oxford 
University and Director of the Oxford Internet Institute, a 
department at Oxford. My own area of expertise includes 
political communication and international affairs. And at the 
institute, I've been leading a project on computational 
propaganda, currently funded by the European Research Council, 
and something that--a research initiative that started with 
support from the National Science Foundation in this country.
    I began working on these questions in 2010, but the project 
really grew in the summer of 2014, when the Malaysian Airlines 
flight was shot down over Ukraine. And in Hungary, where I was 
based at the moment, at that time, many of my Hungarian friends 
got multiple ridiculous stories about what had happened. We 
knew these came from Russian or Russian sources.
    There was one story that democracy advocates had shot the 
plane down because they thought Putin was traveling on 
commercial from Amsterdam to Malaysia. There was another story 
that Americans had shot the plane down because the U.S. had 
stationed troops in Ukraine. And far and away my favorite was 
the story of a lost tank from World War II that had come out of 
the great forests of Ukraine that was confused and had shot the 
plane down.
    It was at that moment that we realized the thrust of 
Russian propaganda was not so much about creating one counter-
narrative and placing that story amongst a public, but creating 
multiple, sometimes equally ridiculous, stories and placing 
those stories in a public. What we did not expect is that 
Russia would turn this campaign strategy on America, on the 
other great democracies in the West.
    I'm going to say a little bit about what we've learned over 
the last few years about the form of these computational 
propaganda campaigns and give you a sense of what I expect for 
2018 and perhaps the years ahead.
    We coined the term ``computational propaganda'' because 
this kind of disinformation is unique. It makes use of 
automation; it makes use of the social media algorithms that 
technology firms themselves have built. And it makes use of 
those algorithms to distribute targeted propaganda. This 
propaganda includes falsely packaged news, misinformation, 
illegal data harvesting, hacking. There's a range of techniques 
that goes into backing computational propaganda.
    And there's three kinds of campaigns that tend to target 
voters. There are campaigns to polarize voters on particular 
issues. For example, known Russian social media accounts will 
simultaneously promote political action by groups like the 
United Muslims of America and the Army of Jesus, or encourage 
African-American political activity around Black Lives Matter 
and encourage others to support the Blue Lives Matter movement. 
The goal is to get groups of voters to confront each other 
angrily, not just over social media, but in the streets.
    Second, there are campaigns to promote or discredit 
particular senators, presidential candidates and other 
political figures. Foreign-backed rumormongering is not new, 
but it is strategically targeted in a way that is new.
    Third and perhaps most worrying for democracy is that 
campaigns, some of these campaigns, discourage voters from 
voting. Voter suppression is a common messaging technique aimed 
at voters whose support for a candidate a foreign government 
might find unpalatable. For example, voters are often told that 
voting day has been postponed, or that they can text message 
their vote in, or that the polling station has moved when it 
has not.
    In the case of the United States, these campaigns are 
ongoing. Months after the last major election in the U.S., our 
team demonstrated that disinformation about national security 
issues, including information from Russian sources, was being 
targeted at U.S. military personnel, veterans and their 
families.
    During the President's State of the Union Address, we 
demonstrated that junk news, some of which originates from 
foreign governments, is particularly appetizing for the far 
right, white supremacists, and President Trump's supporters, 
though notably not small ``c'' conservatives.
    Our team has completed recently a global inventory of the 
number of governments managing these campaigns and, while many 
of us talk about Russia, I would say that the original writ of 
our research was to track what the Russians and Chinese are 
doing in this domain. So far, we have not documented much 
Chinese activity. We know they spend time working on voters in 
Taiwan, they work on the Chinese diaspora. We believe they have 
capacity, but as of yet they haven't set American voters in 
their sights.
    We have found in this most recent inventory that there are 
48 countries in the world with large political parties or 
government agencies running misinformation campaigns either on 
their own voters or on voters in other countries. There are 
seven authoritarian governments, aside from Russia, that spend 
money in this domain.
    And overall, I would say it's time for democracies to 
develop their own cyber-security strategies. The time for 
industry self-regulation has probably passed. And I'm grateful 
for this opportunity to discuss the possibilities going 
forward.
    [The prepared statement of Dr. Howard follows:]
    [GRAPHICS NOT AVAILABLE IN TIFF FORMAT]
    
    Chairman Burr. Dr. Howard, thank you very much.
    I am reminded, after listening to all of the testimony, 
that the 1960s strategies of Russia were simple: If it's bad 
for America, it must be good for us. And it seems like this is 
rooted in the same foundational strategic vision that they had 
then.
    The Chair would recognize himself for five minutes. I'm 
going to ask all of you to follow my chart over there. I just 
want to get your comments relative to whether this is accurate 
or not.
    [The material referred to appears in the Supplemental 
Material on page 163.]
    Chairman Burr. The red line represents the Russian 
activities of the IRA Twitter activity relative to outside the 
United States. The blue line is U.S.-focused IRA Twitter 
activities. What that shows is a huge spike up in the 2014-2015 
timeframe, which was the invasion of the Ukraine.
    The next two jogs of the lineup are between 2015 and 2016, 
and that's the Crimea propaganda, and the regional politics in 
Belarus specifically.
    And then all of a sudden you see this spike in the blue 
line in the United States. I think the fascinating thing here 
is that the spike is in 2017 and 2018, which tells us--and 
correct me if I'm wrong--the effort in 2017 and 2018 was much 
more intense than the effort in 2015 and 2016 in the lead-up to 
an election. Am I misreading that?
    [No response.]
    So, Dr. Kelly, let me ask you this: Is it possible for the 
mainstream media today to run a story that was the creation of 
an effort by the IRA, that had no factual basis, but over the 
transition of how their strategies work, it gained enough 
coverage, belief that people had read it, that it got so big 
that it had to have been real? Is that possible?
    Dr. Kelly. I believe it is possible. I think the goal of 
these information operations over the long term is to condition 
the public and to weave the network, so to speak, that later 
you can use it to move any sort of story.
    Remember, a key feature of propaganda--you know, if you're 
running a propaganda outfit, most of what you publish is 
factual, so that you're taken seriously, and then you can slip 
in the wrong thing at exactly the right time. I believe that's 
what they've done, is cultivate a set of sources as 
authoritative with content that's often just about Kim 
Kardashian. And then those people become credible, they become 
cited in the mainstream media. And then at that point, they can 
start to move anything they want through it.
    Chairman Burr. And is it the individuals that contribute to 
that theme that's on a social media platform, in many cases 
Americans responding, that gives it credibility? And are they 
knowing or unknowing as to what they're participating in?
    Dr. Helmus, have you got a strategy on that?
    Dr. Helmus. Certainly I agree that there's no borders on 
social media. There's no borders on media today. So certainly 
content that's disseminated by one source could easily get 
picked up by another. It's our observation from looking at 
Eastern Europe that there's fundamental issues with journalism 
training and quality that can certainly lead to and exacerbate 
that type of issue of, you know, bringing viral content that is 
otherwise false or untrue into perceptions of reality.
    Chairman Burr. Ms. DiResta, you said, and correct me if I'm 
wrong, IRA pages stay active today.
    Ms. DiResta. Yes, sir, I believe that's true; and Twitter 
accounts that were associated with IRA botnets also appear to 
be dormant today with the potential to be able to be turned 
back on at some point.
    Chairman Burr. So with all the efforts by the Justice 
Department at targeting by the public acknowledgement and 
indictment of individuals, the IRA has not gone away?
    Ms. DiResta. No, sir.
    Chairman Burr. Their capabilities--and comment on it if you 
will--their capabilities relative to Facebook's latest 
disclosure may have gotten significantly better.
    Ms. DiResta. One thing that's a very big, significant 
challenge is attribution. So we can attribute this to the IRA, 
perhaps. I also read the same news that you read yesterday and 
don't have any inside information there. My understanding is 
they believe it was the IRA based on image similarities, 
tactical similarities.
    What they did change was they paid in, I believe, U.S. 
dollars and Canadian dollars. So they are no longer paying in 
rubles. They are probably no longer using IP addresses that are 
tied to Russia; slight increases in operational security that 
will make them more difficult to detect.
    The other thing that is going to go along with that, 
though, is as attribution is so difficult, particularly for 
outsiders who don't have access to that kind of account level, 
what we call metadata, is that other people will be able to run 
the same playbook, perhaps making it look like an IRA operation 
when it was conducted domestically.
    Chairman Burr. Individual or a nation state?
    Ms. DiResta. Individual or nation state, yes, sir.
    Chairman Burr. Great, thank you.
    Vice Chairman.
    Vice Chairman Warner. Thank you all for your testimony. I 
think a couple of things. One, we're still mostly just talking 
about the IRA activity, as opposed to what we don't know in 
terms of other Russian services' activities. And we do know the 
IRA, with the revelations of yesterday, has gotten better.
    And we're going to still need to figure out their 
tradecraft. And one of the things we need from expertise like 
you is I feel like even when the platform companies are moving 
in the right direction, they're only doing it looking at their 
own universe, their own platform, not the interrelationship.
    I think, Mr. Kelly, you said something that was maybe the 
single most stunning line of all the testimony, that in terms 
of the political content, particularly on the extremes, that 25 
to 30 times more of that content is being generated by bots and 
automated accounts rather than individuals. Is that correct?
    Dr. Kelly. Yes, Senator, that's correct. If you look at the 
American political spectrum and, say, array a set of 
politically oriented Twitter accounts along an axis where on 
one side you've got those that only talk to people of their 
own, you know, stripe, and on the other it's the other stripe, 
and most Americans are in between, connected to some on the 
right and the left, those on the either extreme of that network 
are shouting with automated amplification.
    Vice Chairman Warner. So with a lot of that automated.
    Let me state for the record, we had some of this--I've had 
conversations with you in the past. There are very appropriate 
and effective roles for automated accounts and bots in certain 
cases. But I guess what I would ask--I'll start with Ms. 
Rosenberger and Dr. Kelly on this: Shouldn't we as human beings 
have a right to know--maybe not make a judgment, but a right to 
know whether the content that we're receiving is coming from a 
human being versus an automated account; recognizing that there 
is good value in some of the automated accounts?
    Ms. Rosenberger. Yes, Senator. I believe that context about 
information is absolutely critical for consumers of that 
information to be able to evaluate it. When we talk about 
critical thinking in media literacy, this takes on wholly new 
characters when we talk about online content. And so having 
information about the origin of information, about whether or 
not that content is being served up through an automated 
process, why users are seeing that kind of information, I 
absolutely believe that's critical.
    One thing I do think is important in this conversation is 
that we ensure we protect the anonymity online, which is 
essential for democratic activists in authoritarian states. But 
I believe very deeply that there are ways to identify 
automation without compromising the ability for users, real 
users, to be anonymous.
    Vice Chairman Warner. Dr. Kelly, do you want to?
    Dr. Kelly. Well, we have to recognize that automation is 
performing a lot more functions online that simply supporting 
Russian propagandists. And the fact that it's doing so many 
different things, some of which are, you know, call them green 
things we like and some of which are red things we don't like, 
makes it extremely hard, without being able to know who's 
running that robot, to know who's using it for good or bad.
    Vice Chairman Warner. Dr. Howard, did you want to weigh in 
on this?
    Dr. Howard. No.
    Vice Chairman Warner. Could we analogize to the markets 
where, with the huge advances around HFT and high frequency 
traders--the markets, in terms of trying to make sure that 
things didn't get totally away, put certain speeds bumps in 
place. And if the market jolts one way or another, there are 
these speed bumps that then allow in a sense human activity.
    With the, again, 25 to 30 times automation, if there are 
stories that are trending at an enormously rapid rate, that 
might be trending because they've got this enormous amount of 
automation driving that story, you know, could there be some 
kind of time out so that you could, a company, or some entity, 
could evaluate whether this is actual, not actual? Something 
looks phony here, fishy here? Any of you on that comment?
    Ms. DiResta. I think that the parallel to HFT is spot on. I 
think that it's an issue of information integrity. And one of 
the challenges that the platforms have had is believing that 
they need to address the core of the narrative. And what we 
should be looking for is addressing the dissemination patterns 
that you're mentioning.
    Vice Chairman Warner. I think that's really--go ahead, Mr. 
Kelly.
    Dr. Kelly. Well, one thing to keep in mind is that, again, 
automation is running all kinds of things. So it's not just 
pushing Russian propaganda. It's pushing legitimate American 
political speech. It's also pushing pop music elements in, you 
know, marketing around music. So automation is doing a lot of 
things in different places.
    Vice Chairman Warner. And I'll make the comment that it 
doesn't come with good or bad attached. But I guess I just 
think as a human being, I ought to have that knowledge of 
whether that message is being promoted to me by a human being 
or by automation.
    And I know my time's up. I just want to come back, asking 
Ms. Rosenberger on the next round of, you know, could we deal 
with that protection of anonymity, but still put some geocoding 
so that if somebody says Richard Burr from North Carolina, but 
it's actually come from a different location?
    Thank you, Mr. Chair.
    Chairman Burr. Dr. Howard, did you have something you 
wanted to add to that?
    Dr. Howard. I just wanted to add that the other possibility 
is to have these accounts self-identify with B-O-T, bot, in the 
name. That kind of disclosure is what helps users separate the 
good content from the bad.
    Chairman Burr. Great.
    Senator Risch.
    Senator Risch. Well, thank all of you for coming here 
today.
    I think the takeaway from this, after listening to all of 
this, is something that's troubled me from the beginning and 
that is how difficult this is. We know the problem. We have bad 
actors putting out bad information. The difficulty is how do 
you segregate those people who are doing this from Americans 
who have the right to do this?
    I've looked at the stuff that--that, as everybody has, that 
is part of this. But yet, if you took one of those pieces, any 
one of them individually, and looked at it and said, we just 
discovered who's doing this, it's John Doe in East Overshoe, 
New Jersey, there's nothing illegal about it. It may be 
disgusting. It may be untrue. It may be with a bad motive. But 
there's nothing--indeed, it's protected by the First Amendment 
of the Constitution.
    So how do you separate that person from someone who is 
doing the same thing, but coming from Russia, but whose motives 
are to enhance Russia by pulling down America? How do you 
police that?
    And I think, probably, the question that Senator Warner 
asked about putting a speed bump in so that somebody can 
evaluate this. I mean, that kind of puts--I want to be the 
evaluator, and I think most everybody does, and that's the 
problem.
    And then you talked about protecting anonymity. How do 
you--how can you protect anonymity if you're going to actually 
do something against someone who is doing something that we 
don't want done?
    These are extremely difficult questions. And I appreciate 
all the kind things you've said about this is bipartisan, we 
all need to come together, et cetera, et cetera. We all agree 
with that, but how in the world do you do this? I mean, the 
takeaway here has got to be that this is just an enormous, if 
not an impossible, thing.
    Mr. Helmus, your thoughts?
    Dr. Helmus. Yes, I absolutely agree. I think that is the 
fundamental question.
    In our research, we identified upwards of 40,000 accounts 
centered around Ukraine that are putting out vociferously anti-
Ukraine content. And ultimately, the crux is are these bad 
actors that are doing this? Or is this a free--other actors 
practicing what might otherwise be their free speech?
    So, that's challenged our bot detectors. So, there are some 
ways, and I'll defer to others on the Committee who can speak 
to these, but there are bot detectors that are available that 
can detect some types of content that mimic the characteristics 
of bots. But it is an arms race. As developers develop ways to 
detect bots based on either inhuman levels of content, the 
timing of their tweets, or what have you, the producers of 
those bots will then identify other ways of circumventing that 
and staying covert. So, it's an arms race and I think it will 
just require constant research and evaluation to develop and 
update new techniques.
    Senator Risch. Ms. DiResta.
    Ms. DiResta. What you're describing is a significant 
problem for researchers as well. And we look at information 
operations, trying to gauge, again, attribution or whether this 
is organic or not.
    Senator Risch. But what do you do about it when you do get 
the attribution?
    Ms. DiResta. We try to look at the content. Has it appeared 
elsewhere? Is it affiliated with past IRA operations? Or is it 
coming from somewhere else? So, we look at the origin.
    We look at the voice; the actors that are pushing the 
content. Are they bots? Are they humans? Is there something off 
about the bio related to past tweets? There's a number of 
signatures there. And then, we look at the dissemination 
pattern. Does it look like it's been artificially amplified? Is 
it being run through accounts, or groups, or pages that seem a 
little bit dubious?
    We try to flag things for the social platforms as well. We 
believe firmly in transparent communication, where we're 
saying, this is what we're seeing, what are you seeing? They 
have access to metadata and to account information and to e-
mail addresses, phone numbers, things that people have 
registered their accounts with. That is also a significant part 
of the investigation of the operation.
    There is no easy answer to this question. This is the 
primary challenge and this is where we see even influence 
operations going towards laundering narratives, either through 
the unwitting or through participants. That's a hard problem.
    Senator Risch. The analysis that you're talking about is 
you're looking for all of these things. But you'll find, I 
assume, some actors that are, what we would consider, bad 
actors, but yet, some actors that we would consider good 
actors, whether it was a U.S. government operation or 
something.
    Who makes the determination as to who's a good actor and a 
bad actor? That's what I really, really struggle with.
    Ms. DiResta. And I think the----
    Senator Risch. Dr. Kelly, why don't you get your two cents 
worth in?
    Dr. Kelly. Thank you, Senator.
    So it's tractable to tell what's fake. It's harder, but 
doable, to figure out who is behind it. And then you need to 
understand who's behind it, tracking the landscape of threat 
actors. That's where somebody is making a determination who's 
against our interests and who doesn't matter. Then, once you 
have that, you know, it's up to government and other 
appropriate folks to figure out the response.
    I think to do that detection in the first place requires an 
enormous amount of data and sophisticated methods of analysis. 
And it's not just data from one platform, so, it can't happen 
only internally. It has to happen with data from multiple 
sources, which then gets to your, I think, extremely important 
questions about who makes these determinations and who has the 
right to see that private data.
    I think we have to look at a model that's like cyber-
security firms. So there are trusted industry partners that 
everybody trusts, that they know are going to be secure in the 
way they handle that data. We need some sort of a facility like 
that where these advanced----
    Senator Risch. Of course, this is different than cyber-
security, in that with cyber-security you don't want anybody 
entering a private space, whereas with this you want everybody 
entering. That to me differentiates the two.
    My time is up. Thank you, Mr. Chairman. Thank you.
    Chairman Burr. Senator Feinstein.
    Senator Feinstein. Thanks, Mr. Chairman.
    I want to thank Facebook for their move yesterday to delete 
32 pages and 290,000 accounts on the basis that Russia and 
other outside actors are continuing to weaponize social media 
platforms. I'm very pleased that Facebook took this action, and 
I hope that all social media platforms continue to actively 
counter Russia's foreign influence campaign. I have no question 
that it's going on, and I have no question that it is related 
to more than just election interference.
    Let me ask this question: Since the 2016 election ended, 
how many IRA accounts have any of you found that are still 
active?
    Dr. Kelly.
    Dr. Kelly. We've been doing some work on this. We went and 
looked--I mean, that list of accounts is extremely valuable. We 
looked for live accounts on other platforms using open source 
research tools and we found a great deal of accounts directly 
connected to the closed accounts, which were active across 
numerous platforms.
    Senator Feinstein. Can you put a number on it?
    Dr. Kelly. Of the sample we've looked at so far, it's 
roughly 28 percent of those accounts are connected to at least 
one live account on a different platform. We also know that 
those accounts were connected to numerous other Twitter 
accounts and where--we think of this as what we have here is 
the tentacle of an octopus, and we don't know how far out on 
the arm of that octopus that tentacle has gotten.
    Senator Feinstein. How about Russia's accounts?
    Dr. Kelly. The Russian accounts evident in this data?
    Senator Feinstein. Right.
    Dr. Kelly. Well, presumably these are IRA accounts too and 
presumably they have their own--you know, they've got a 
tentacle wagging in Russia as well and I don't know how much of 
their effort this represents.
    Senator Feinstein. Does anybody else on the panel have a 
comment on this subject matter?
    Yes. Please, doctor?
    Dr. Howard. Thank you, Senator. My comment would be that 
it's the social media firms who have that information. We do 
our best juggling probabilities and percentages to make best 
guesses about what kinds of account. Some of these accounts 
occasionally slip into Cyrillic and then slip back. There are 
some giveaways. But it's actually the social media firms that 
have the best data on this.
    Senator Feinstein. Well, let me ask you this question. 
Facebook has alleged that IRA activity on its platform alone 
reached 126 million people and that doesn't include Instagram 
or Twitter. What can you say about the extent to which the IRA 
activity reached real Americans?
    Dr. Howard. I can say that it was significant, yet also 
concentrated in swing states.
    Senator Feinstein. I'm sorry? Concentrated in?
    Dr. Howard. Swing states----
    Senator Feinstein. Swing states.
    Dr. Howard [continuing]. During the 2016 election. So 
particular states got more of this kind of content than other 
states.
    Senator Feinstein. And what was the time that you looked at 
that to draw that conclusion?
    Dr. Howard. It was from the beginning of the presidential 
debates until through to a few days after Election Day.
    Senator Feinstein. Have you looked at it now?
    Dr. Howard. Not in the last few months, no.
    Senator Feinstein. Can you estimate the number of Americans 
touched by Russian-linked activity in this area?
    Dr. Howard. No. That is very difficult to do.
    Senator Feinstein. Can anybody?
    Yes, please go ahead.
    Ms. Rosenberger. No, I just wanted actually to add a small 
data point to this, which is we spend a lot of time talking 
about Facebook and Twitter but as Renee highlighted and others 
have noted, this is a problem of the entire information 
ecosystem. This is cross-platform. Reddit confirmed hundreds of 
IRA-created accounts. Tumblr did it and in particular on 
Tumblr, that platform was used to target the African-American 
community particularly.
    So, I think this is why it's so really difficult to 
quantify in any meaningful way the reach of these activities, 
because this is across the entire ecosystem, not to mention, as 
others were highlighting, how this information gets picked up 
and then transmitted and amplified through mainstream media 
outlets.
    Senator Feinstein. Let me ask you, when information becomes 
a weapon, does anybody see any need to change the environment 
to prevent this from happening?
    Ms. DiResta. I believe that many of us were advocating 
doing that when it became clear that ISIS had turned the 
information ecosystem into a weapon. I believe that, 
unfortunately, the dialogue between the government, the 
platforms and researchers was not necessarily where it needed 
to be. There were a handful of convenings that tried. There was 
the Global Engagement Center that was established, that's now 
tied up in some funding morass and we're not really clear what 
the status of that is.
    The tech platforms, about two years after the extent of the 
ISIS operation became known, established the Global Internet 
Forum to Counter Terrorism. To the best of my knowledge, that's 
not staffed so much as it is a repository of hashed content so 
that platforms can participate in takedowns.
    To answer your earlier question with one other point, we 
did see in the public House data set, when the House released 
the ads, that the ads were both demographically and 
geographically targeted. The number of people who saw that 
content, only the platforms have access to that information, 
but we could also gauge the number of followers that did follow 
the Russia pages. And that was in the neighborhood of a couple 
hundred thousand on the largest pages.
    Senator Feinstein. Thanks. My time is up.
    Thanks, Mr. Chairman. Thank you.
    Chairman Burr. Thank you.
    Senator Collins.
    Senator Collins. Thank you, Mr. Chairman.
    Dr. Kelly, you have a very profound statement in your 
testimony. You said: Russian efforts are not directed against 
one election, one party, or even one country. What are Russia's 
ultimate goals? Is it to undermine the public's faith in 
Western democracies and so weaken the bonds that unite us, that 
there are opportunities for Russia?
    Dr. Kelly. Yes, Senator, I believe that's exactly correct. 
I think they have long-term strategic goals, which include 
weakening Western institutions and faith in democracy and 
traditional sources of information and authority. That's the 
strategic goal. And then they have a lot of near, short-term 
tactical goals, things like injecting hacked information to 
sway a particular event or election, and they're doing that 
activity all around their periphery and now here.
    Senator Collins. Ms. DiResta, this is a question for both 
you and Dr. Kelly. Both of you emphasized that Russian 
manipulation did not stop in 2016. In fact, you, Dr. Kelly, 
said that Russia stepped on the gas and increased its activity. 
And Ms. DiResta, you said that Russian efforts increased 
postelection to promote racial tensions in our country.
    We imposed sanctions on Russia. They seem to have done no 
good when it comes to this kind of activity. What can we do 
beyond educating the public to counter Russia more effectively?
    Ms. DiResta, I'll start with you.
    Ms. DiResta. I would say that one of the things that we 
need to do is to evaluate our information operations doctrine, 
JP 313. I believe Senator Warner alluded to this in his recent 
policy proposals. I think that addressing the scale and 
sophistication of information operations is something that as a 
government we've not really looked at that in quite some time 
and perhaps that would be a good place for us to start.
    Senator Collins. Thank you.
    Dr. Kelly.
    Dr. Kelly. I think there's a technical component, which is 
to be able to effectively detect and attribute this activity so 
you can authoritatively prove it's happening, and then you have 
a more traditional toolkit of foreign policy measures to take 
action.
    Senator Collins. Dr. Howard, I want to get to something you 
said, and that was you gave us several compelling examples from 
your Hungarian experience where they received clearly false 
stories that were intended to explain the downing of the 
Malaysian airline. And what's interesting to me is, based on 
Dr. Kelly's testimony, it isn't just the Hungarian press that 
is being manipulated or infiltrated or controlled, but we've 
seen evidence where America's media is also being targeted.
    Dr. Kelly pointed out that the Russian persona of Jenna 
Abrams, who had accounts on multiple platforms, was cited by 
more than 40 U.S. journalists before being unmasked. How can 
the media be more sensitive or more aware, more on guard to 
being manipulated in this way?
    Dr. Howard. Thank you, Senator. The United States actually 
has the most professionalized media in the world. It's learned 
certainly to evaluate their sources and no longer report tweets 
as given. So I would say that in this country, the most 
professional news outlets are already on the defense. They 
already have ways to ensure that the quality of the news 
product isn't shaped by these constant disinformation 
campaigns.
    I would say that the greater concern would be amongst the 
media institutions in our democratic allies. I believe that the 
Russians have moved from targeting us in particular to Brazil 
and India, other enormous democracies that will be running 
elections in the next few years. And while we still see 
significant Russian activity, those countries have the media 
institutions that need to learn, need to develop.
    Senator Collins. Ms. Rosenberger.
    Ms. Rosenberger. Thank you, Senator. I would just add that 
this is not a problem that we've overcome. We have one example, 
for instance, of an IRA-created Twitter account, the hash--
sorry, the handle was ``wokeluisa,'' that was tweeting in 
particular to African-Americans, focused on the NFL take-a-knee 
debate. There were IRA-created accounts tweeting on both sides 
of that debate. But that Twitter account in particular, which 
was active through earlier this year, appeared in more than two 
dozen news stories from outlets such as BBC, USA Today, Time, 
Wire, The Huffington Post, and BET.
    So, this was about four months ago. So, we really do need 
to make sure that this information is not getting laundered 
into the broader ecosystem, which is part of the strategy here.
    Senator Collins. And the issue there is when we read it in 
a credible source, we're likely to believe it.
    Ms. Rosenberger. That's exactly right. It gives it that 
much more credibility.
    Senator Collins. Thank you.
    Chairman Burr. Senator Wyden.
    Senator Wyden. Thank you very much, Mr. Chairman, and thank 
all of you.
    It seems to me for now and the foreseeable future, 
protecting America's private data is going to be a national 
security issue. Cambridge Analytica, like the Russians, 
exploited Facebook's lax protections to abuse Americans' 
information. I believe a significant part of the failure is the 
fact that the Federal Trade Commission doesn't have the 
authority or the resources to be a tougher cop on the beat. And 
I'm going to be rolling out a plan to fix that in the weeks 
ahead.
    Now let me go to questions. Ms. DiResta, your testimony 
referenced the Russian Facebook pages in 2016, targeting both 
the right and the left. But you noted it was the pages 
targeting the left that included not only content intended to 
appeal to its audience, but also content intended to suppress 
the vote and be critical of Secretary Clinton.
    In your view, does the apparent Russian content released 
yesterday by Facebook resemble the content the Russians used 
last time to attract an audience on the left and among racial 
minorities, which the Russians then used to suppress their 
vote?
    Ms. DiResta. Yes, sir, it does. There's a strong component 
of cultural posts that appear in communities and pages 
targeting minority voters: a lot of pride, pride-related 
content, less news, more memes and that reflects what we saw 
yesterday.
    Senator Wyden. I appreciate that, because content targeting 
I think is clearly going to be a big part of the challenge. The 
public has got to be aware of it, because not all Russian 
propaganda is going to get caught. And Americans are inevitably 
going to read some of it, particularly if it's consistent with 
what they already believe.
    So I gather what you're saying, Ms. DiResta, is the public 
has got to be alert to a repeat of the 2016 Russian playbook, 
which was to attract an audience on the left, discourage them 
from voting. And that could mean attacking Democratic 
candidates, pushing the line, in effect so that the Russians 
are trying to make it possible that our votes don't matter. Is 
that essentially your concern?
    Ms. DiResta. Yes, sir. There's a lot of efforts to push 
intraparty divisions on the left.
    Senator Wyden. Good.
    Let me ask you now, if I could, maybe for you, Ms. DiResta, 
Ms. Rosenberger, Dr. Kelly, about this concept known as down 
ranking. My interest here is that for the social media 
companies there's just a mismatch of incentives. The social 
media companies, they want users and clicks and impressions, 
and inflammatory and often false content creates that. So even 
when the companies can't or haven't decided to identify a 
certain account as either foreign or nefarious, they can still 
downgrade the posts to limit their exposure. This is an equal 
or worse problem with conspiracies and junk news as it is with 
foreign influence.
    So my question here would be for the three of you: Do you 
think these down-ranking programs are effective? Are they the 
kind of thing that ought to be considered part of the kind of 
toolbox as we look to deal with this problem, Ms. DiResta and 
the rest of you?
    Ms. DiResta. Sure. So I think that there's sort of three 
facets to the toolbox. There is remove, reduce, or inform. 
Inform means to add additional context to a post. This is 
Facebook's framework right now.
    Reduce would be to do something like down-rank it, per the 
question earlier about is it possible to inject just a little 
bit of friction? This is where down-ranking could potentially 
be used as a tool, as attribution and authenticity and 
integrity are established, to reduce the reach of content.
    And then remove is, of course, the more--the most extreme.
    Senator Wyden. Would any of you like to add anything? Yes?
    Ms. Rosenberger. I'd just like to note that we talk about 
down-ranking, but we forget that up-ranking is also part of the 
process. These platforms are not----
    Senator Wyden. You're being way too logical.
    Ms. Rosenberger. These platforms are not neutral pipes.
    Senator Wyden. Right.
    Ms. Rosenberger. Information is not being served up without 
some kind of algorithm deciding, for most of the platforms, 
without an algorithm basically deciding what is served up at 
the top. So when we talk about down-ranking, we have to start 
from the premise that up-ranking is baked into the cake. And so 
then the question becomes: are these platforms actually somehow 
prioritizing bad, malicious information, right? That, as we 
know and as others mentioned in their testimony, gaming these 
algorithms, whether that's on trying to get certain content to 
trend or, frankly, getting certain content to rise to the top 
of Google searches, something that we know that Sputnik and 
RT----
    Senator Wyden. I'm over my time. I just want to be clear, 
as the author of Section 230, the days when these pipes are 
considered neutral are over. Because the whole point of 230 was 
to have a shield and a sword. And the sword hadn't been used 
and these pipes are not neutral.
    Thank you, Mr. Chairman.
    Chairman Burr. Senator Blunt.
    Senator Blunt. Thank you, Mr. Chairman.
    So much of the activity we're looking at on the charts and 
today is largely of the IRA. What percentage of Russian-linked 
activity would you anticipate that the IRA represents? Is this 
half of everything they try to do, 90 percent, 10 percent? Who 
would have a sense of what we're not looking at when we're 
looking at the IRA activity?
    Dr. Kelly. We've looked at a number of different known 
disinformation campaigns and we think these are--the IRA folks 
are involved in a minority of them.
    Senator Blunt. In a minority of them. Do you think that 
would be the case here as well?
    Dr. Kelly. I do. The only thing--the thing we don't know, 
though, is how much of the IRA this is.
    Senator Blunt. Ms. Rosenberger, do you want to comment on 
that?
    Ms. Rosenberger. I would just only add that we know from 
Special Council Mueller's indictment actually of the GRU, there 
is one section of that that notes that GRU operatives utilize 
social media accounts and fake Web sites that they created in 
order to spread hacked information and other kinds of 
weaponized information.
    So we certainly know that there are other actors. GRU is 
probably better at hiding their tracks than the IRA is, and so 
I think that just speaks to again how this is probably just one 
tip of the iceberg of what we're looking at.
    Senator Blunt. So, the early discussion clearly has moved 
from what the Russians were paying for, which appears to be a 
very small fraction of the impact they were having. Does 
anybody disagree with that? That is clearly--and this IRA 
activity may--is some fraction of the Russian activity in 2016, 
2017 and into 2018. That would be--so I think the indictment, 
the Mueller indictment, said that there were probably at least 
80 IRA employees involved and millions of dollars involved in 
that effort.
    I don't know what--is that 5 millions of dollars or 
hundreds, hundred million dollars? What kind of--what amount of 
money do you think the Russians invested in this effort that 
was covered by the Mueller indictment? He uses the term 
``millions of dollars.'' That could mean a lot of different 
things. Any idea of the activity you've looked at, what kind of 
investment of money and how many people that may have been 
involved in this?
    Dr. Howard. We've done that audit globally. We believe that 
half a billion dollars have been spent by the 40 governments 
that we've studied since 2010. In the Russian case, we think 
it's around $200 million U.S. over this extended period for the 
full set of organizations behind the various campaigns.
    Senator Blunt. Dr. Howard, on that topic, in the other 
countries you've looked at, who should we be looking at after 
Russia that are likely impacting our daily conversation in the 
country, in some ranked order? Who would be the top three or 
four countries that you would believe would be most actively 
out there doing what Russia is also doing?
    Dr. Howard. Well, in our research we look at Turkey, China, 
Hungary and Iran.
    Senator Blunt. Dr. Kelly, have a thought on that?
    Dr. Kelly. We believe there's a growing black market for 
people skilled in the--who have these dark arts, and they're 
employing them in their own countries and they're also starting 
to get hired to work in other countries. So, this is a critical 
challenge, because the Russians may have been the first to 
effectively do this, but they're not the only players; and 
you'll have a black market of players who are mobile and can be 
hired by any actor.
    Senator Blunt. Well, just to be sure I understand, doctor, 
the 40 countries, are these 40 countries you've looked at for 
outside activity or 40 countries that are participating in this 
kind of activity?
    Dr. Howard. These are 40 countries that have organized 
disinformation campaigns in the sense of stable personnel with 
telephones and family benefits. These are formal organizations 
that do this work.
    Senator Blunt. And how many countries do you think they, 
those 40 countries, would be trying to influence activity in?
    Dr. Howard. Seven countries.
    Senator Blunt. Seven countries?
    Dr. Howard. There's seven authoritarian regimes that have 
dedicated budgets for disinformation campaigns targeting voters 
in other countries.
    Senator Blunt. And how many other countries, again?
    Dr. Howard. Our audit of government expenditures covers 40 
in total. It's usually the United States, Canada, Australia, 
the U.K. that are the--Germany--that are the targets.
    Senator Blunt. That are the targets.
    On Dr. Kelly's comment about determining the attribution, 
you know, we have--in our country, we are focused on defense. 
No administration has yet figured out what our offense should 
be, and I think one of those reasons is we have not figured out 
with certainty how we would determine where a cyber attack came 
from as opposed to even cyber misinformation, which is a 
different kind of cyber attack, but vulnerable infrastructure. 
What we're seeing here is a vulnerable social media 
infrastructure that may be every bit as critical infrastructure 
as any of the other infrastructure we're trying to protect.
    Ms. Rosenberger, I'm going to let you have the last answer 
to my questions.
    Ms. Rosenberger. Senator, I would just note on that, that 
Russia is playing to its asymmetric advantage. This is a low 
cost, high reward kind of tactic. We need to also evaluate: 
what are our own asymmetric advantages and sometimes that's not 
responding symmetrically or in the same domain.
    So, for instance when it comes to Russia, I think this is 
why imposing costs in the financial space in particular--we 
know that Putin cares most about his power and his power rests 
on his money. And I think that looking at ways that we can dry 
up the sources of funding both for these activities as well as 
for the regimes that are using them is incredibly important. 
When it comes to China, things like reputational costs are very 
important.
    So I think, this is why it's important that we put this 
conversation on the national security front in a broader 
strategic frame to identify our own asymmetric advantages so we 
can go on offense.
    Senator Blunt. Thank you, Chairman.
    Chairman Burr. Senator Heinrich.
    Senator Heinrich. Ms. Rosenberger, I believe it was you who 
said, and I may be paraphrasing here, but we've moved from a 
failure of imagination, to a failure to act. Do you find it 
troubling that, despite the current risk, despite the quickly 
approaching 2018 midterms, that concrete responses like the 
Secure Elections Act, like the Honest Ads Act, have not been 
scheduled for a vote in the United States Senate?
    Ms. Rosenberger. Yes, Senator. I do believe that, while 
this is a complex problem, there are some clear steps that we 
can take in particular on the defensive side, as well as on the 
deterrent side, that we need to be taking urgently.
    Senator Heinrich. I share that concern, because I think 
some of these things are sitting right in front of us and we 
just need to make it a priority.
    For Ms. DiResta and Dr. Kelly: The Committee's analysis 
shows that the Internet Research Agency's campaign focused 
heavily on socially divisive issues, but fanning racial 
division in particular was the single most targeted category of 
effort. Are Russian information warfare operations using 
unresolved racial tensions here as a weapon to weaken the 
United States?
    Ms. DiResta. Yes, I believe they are.
    Dr. Kelly. Absolutely.
    Senator Heinrich. Do you see that ongoing exploitation of 
racial tensions as a direct threat to our national security 
and, for that matter, our cohesiveness as a country?
    Dr. Kelly. You could think of this as a social cohesion 
attack to try and drive wedges into the American public where 
maybe a little wedge or a piece of history in our past is being 
exploited to make 21st century America look more like 1950s 
America than it ought to.
    Ms. DiResta. I would agree.
    Senator Heinrich. So, we now know much more about the 
Russians' 2016 campaign than we did before we started this 
investigation, and we know it was far broader than we 
originally thought. We know that it's highly active today, as 
many of you have testified to, and we know that no single 
entity by itself--not the government, the social media 
companies, not civil society--can effectively stop foreign 
influence operations on social media.
    But, Ms. Rosenberger, in your view have we as a Nation 
extracted the sort of price or penalty for this behavior that 
would defer--deter Vladimir Putin from acting in this way? Or 
has the Russian Federation simply gotten a pass so far in terms 
of the price that we have chosen and that this Administration 
has chosen to extract?
    Ms. Rosenberger. So, I think it's evident by the fact that 
this kind of activity continues, that we have not yet 
effectively deterred it. One thing I would note is that in 
classic deterrence theory, deterrence relies on two prongs: one 
is credibility and one is capability.
    And I think it's incredibly important, number one, that on 
the credibility front we have very clear, consistent messages 
from across the government, starting with our leadership and 
all the way down, that they're----
    Senator Heinrich. Including the White House?
    Ms. Rosenberger. Including the White House--that this 
behavior will not be tolerated and that there will be 
consequences for it going forward, and articulating what those 
consequences will be. And I think that there is a role for 
Congress to play here in terms of teeing up triggers that would 
be automatic, and I know there is consideration of such 
measures and I welcome that. But I think that it also has to 
start--the credibility piece has to be very, very clear.
    Vladimir Putin cannot see from one place that there is a 
potential for consequences, but then over here be getting a 
very different mixed message. We have to have consistency; that 
has to be credibility coupled with the capability to act.
    Senator Heinrich. I could not agree more.
    You mentioned financial cost as one of our asymmetric 
advantages. What would you foresee as a potential cost that we 
might extract for this kind of ongoing misbehavior?
    Ms. Rosenberger. I think there's two different ways of 
looking at it. One is, of course, very targeted sanctions and 
other kinds of designations; the other is thinking more broadly 
about how our financial system, the Western financial system, 
frankly, is used for Putin and his cronies to hide the money 
that they have stolen, by the way, from the Russian people.
    And just as we have vulnerabilities in our information 
domain, we have vulnerabilities in our financial system. I 
think steps like providing transparency around beneficial 
ownership, extending and legislating the geographic targeting 
orders that the Treasury Department has been using--there's a 
whole suite of steps that we outline in our report that I 
mentioned earlier, that I think----
    Senator Heinrich. I will read those in the report. I want 
to hit one last thing and then my time is up.
    You all mentioned the broader ecosystem. Can you just 
confirm so that people understand, this isn't just a couple of 
platforms? This is music apps, this is video games, this is 
meme sharing. It's much broader than Twitter and Google.
    Dr. Kelly. I would expect that they have people whose job 
it is to figure out how to exploit every small new platform 
that comes along.
    Senator Heinrich. Thank you all.
    Chairman Burr. Senator King.
    Senator King. Thank you, Mr. Chairman. And I want to 
thank--thank you, Mr. Chairman, for calling this, what I think 
is a very important hearing.
    And thank you all for all the information that you've 
shared. I've been listening and came up with a couple of 
conclusions. Tell me if I'm right. One is: there is a massive, 
sophisticated, persistent campaign on multiple fronts to 
misinform, divide and ultimately manipulate the American 
people. Is that accurate?
    Dr. Kelly. Yes.
    Senator King. I wanted to hear ``yes'' because nods don't 
go in the record.
    [Laughter.]
    Dr. Howard. Yes.
    Ms. Rosenberger. Yes.
    Senator King. Let the record show everybody nodded.
    Dr. Howard. Yes, Senator.
    Ms. Rosenberger. Yes.
    Senator King. I think that's incredibly important because 
in all of this whole Russia active measures thing, a lot of the 
space and energy has been going into campaigns and elections 
and collusion and those kinds of questions. This is an enormous 
part of what's going on, and it worries me that we've sort of 
lost sight of this.
    The second thing I've learned from you is, number one, it's 
still happening; is that correct?
    Dr. Kelly. Yes.
    Ms. DiResta. Yes.
    Senator King. Absolutely, still happening?
    Ms. Rosenberger. Yes.
    Senator King. It's way beyond elections.
    Ms. Rosenberger. Yes.
    Dr. Howard. Yes.
    Senator King. Secondly, it's more sophisticated than it was 
in 2016. They're learning to hide their tracks, not paid in 
rubles. I would have thought they would have figured that out 
before. But more sophisticated.
    And then finally, it seems to me what you've been 
suggesting is we're asymmetrically vulnerable because of the 
First Amendment and democracy. We believe--our whole system is 
based on information. And we have this principle of opening 
access to information. Thomas Jefferson said, ``We can tolerate 
error as long as truth is free to combat it.'' Thomas Jefferson 
never met Facebook, I might add.
    But would you agree that we are particularly vulnerable 
because of the nature of our society?
    Ms. Rosenberger. Yes.
    Senator King. Now, this one is for the record because I 
think it's a long answer. It seems to me there are three ways 
to combat this. And the first--and this is what I would hope 
you would supply for the record--technical solutions. Things 
that have been mentioned today that we could do, and that 
Facebook could do, or Google, or Reddit, or Twitter, whoever. 
Technical solutions: identifying bots, for example, those kind 
of things.
    Please give us some specificity and things that you think 
we might be able to do without violating the First Amendment. I 
shudder when I hear the words ``regulate the internet.'' I 
don't want to do that, but there may be things that we can do 
that could be helpful.
    The second thing, it seems to me--and, Doctor Helmus, you 
mentioned this in your testimony--we need to do a better job of 
media literacy. I had a meeting just before, in the fall of 
2016, with a group of people from Latvia, Lithuania and 
Estonia. And I said, ``What do you do about this problem with 
the Russians' propaganda? And you can't unplug the internet, 
you can't unplug your TV.''
    They had a very interesting answer. They said: ``The way it 
works over here is, everybody knows it's happening and 
therefore when something like this comes online, people say, 
`oh, it's just the Russians again.' '' We haven't gotten to 
that point.
    Doctor Helmus, is that what you mean by ``improve media 
literacy''?
    Dr. Helmus. Yes, precisely. To be able to recognize these 
instances when they appear, and to be able to process those in 
a way that can minimize the impact.
    Senator King. But that goes--it's deeper than just having a 
hearing. This has got to be--you know, our kids are growing up 
with these devices, but not necessarily being taught how they 
can be manipulated by their devices. I think there ought to be 
standardized courses in high school called ``digital 
literacy,'' and increasing the public's awareness that they are 
being conned, or that at least they're potentially being 
conned, and how to ask those kinds of questions.
    Ms. Rosenberger.
    Ms. Rosenberger. Senator, I think that that's right; it has 
to include online literacy as well as just your standard media 
literacy. But it also can't just be in the schools. One of the 
things we know from research is that, in fact, it may be that 
older populations who are not growing up with technology may, 
in some cases, be more vulnerable to manipulation by this kind 
of activity.
    Senator King. I would argue that's because they grew up 
with newspapers and they have this unspoken assumption about 
editors and fact checkers.
    Ms. Rosenberger. I think that's probably right, sir.
    Senator King. And if you do your website in Times New 
Roman, people will give it some credibility.
    Ms. Rosenberger. Especially if it's your friend sharing it, 
or somebody you believe to be your friend, someone----
    Senator King. And your friend may be sharing something 
which they got from somebody that they didn't know where it 
came from.
    Ms. Rosenberger. Absolutely, absolutely.
    Senator King. A final point, and I think you've touched on 
this, is deterrence. Ultimately, we cannot rely exclusively on 
defense. The problem thus far, it seems to me, is that the 
Russians in this case and others see us as a cheap date. We are 
an easy target with no results. Nothing happens.
    And I would--that would be something I hope you all again 
could take for the record because of a lack of time, to give us 
some thoughts about deterrence. And I think it's important. It 
doesn't have to be cyber. It could be deterrence in a number of 
areas, including sanctions, as we've discussed.
    But it has to be--there has to be some price to be paid. 
Otherwise, as we now know, it's going to continue.
    So give me some thoughts on deterrence for the record. I 
appreciate it.
    Thank you, Mr. Chairman.
    Chairman Burr. Thank you, Senator King.
    Senator Manchin.
    Senator Manchin. Thank you, Mr. Chairman.
    I want to thank all of you for coming today to help us. 
This is a critical topic, which I hope all Americans are 
watching.
    We, as an open voting society need to be informed. A 
properly informed voter population is the key to a sound 
democracy. Unfortunately, Russia is trying to undermine that 
foundation.
    A quick look back through American history shows that our 
allies and adversaries have changed over time. The Soviet 
Union, specifically Lenin and Stalin, openly criticized the 
capitalist West before World War II. During our mutual fight 
against Nazi Germany, President Roosevelt called Stalin ``Uncle 
Joe'' and the U.S. and USSR fought a mutual enemy. After the 
end of the war, we found ourselves in an adversarial 
relationship, known as the Cold War that lasted decades.
    We saw a brief thaw in relations during the 1990s. But now 
Russia, specifically Vladimir Putin, and the U.S. seem to be 
adversaries again.
    I would ask, I think Mr. Howard, in your written testimony 
you describe Russian computational propaganda aimed at 
everything that we've heard today, pulverizing voters, 
discrediting certain political candidates, discouraging 
citizens to vote.
    So I would ask, which country--we know Russia--poses the 
greatest threat to our democracy using social media platforms? 
And which countries are making strides to do the same?
    Dr. Howard. Thank you, Senator. I agree that Russia has 
been the most innovative in developing these kinds of 
techniques. Unfortunately, I think it's safe to say that 
dictators learn from each other. So as they see successful 
campaigns run in particular countries, they emulate. They sink 
their own resources into developing similar capacity. Some of 
these countries have re-tasked small military units to do 
entirely social media campaigning.
    So as I mentioned earlier, there are now seven different 
countries that are--who are, most would agree----
    Senator Manchin. Actively involved?
    Dr. Howard [continuing]. Authoritarian regimes that are 
actively developing these kinds of----
    Senator Manchin. Which ones do you think--which one has the 
greatest potential to do harm? Russia is unquestionably the 
absolute greatest violator.
    Dr. Howard. I believe China has the next best capacity in 
this----
    Senator Manchin. If they want to turn loose on us?
    Dr. Howard. If they want to.
    Senator Manchin. And you haven't seen that yet?
    Dr. Howard. Not directly in the U.S. sphere.
    Senator Manchin. I would ask this to any of you all. Is 
there any country that has been successful at deterring Russia 
or any other attackers from other countries?
    Dr. Kelly. Not that I'm aware of.
    Ms. Rosenberger. It's hard to know the counterfactual of 
what would have happened in different cases in some of these 
instances. There is some evidence that in the German and French 
elections, that deterrent messaging from the top, from the 
leadership there about the consequences for this kind of 
activity, may have reduced in some ways the kind of activity.
    Senator Manchin. How about Macron's election in France? We 
saw that he fought back. As soon as they saw the attacks being 
made by Russia, they were actively involved.
    Ms. Rosenberger. There are some interesting lessons that we 
may be able to learn from----
    Senator Manchin. Dr. Kelly. I'm so sorry----
    Ms. Rosenberger. No, please, absolutely.
    Senator Manchin. Our time is very limited.
    Dr. Kelly. No, I answered too quickly before. I think the 
Macron case is a perfect example of how being aware of it, that 
kind of situation awareness, as well as quick and decisive 
action to counter it in terms of public--you know, speech by 
the leadership--had an effect.
    Senator Manchin. And let me just ask--I've got one final 
question here. I have a little bit of time here, but I wanted 
to see your all's opinion. In West Virginia, you know, people 
are having a hard time deciding where to get the facts. And 
fake news seems to be the real news, depending on where they 
get it from, social media and sometimes on networks, if you 
will.
    Can I ask each one of you all, where do you receive your 
news that you believe is factual? Where do you go to? Where 
could I help a West Virginian find some real news and not have 
to rely on trying to decipher themselves was it fake or not? Is 
it made up, real or not?
    And I'll start Dr. Howard and go right down.
    Dr. Howard. I go to PBS, BBC, and the Canadian Broadcasting 
Company.
    Ms. Rosenberger. I'm old-fashioned and I tend to still like 
newspapers as my sort of major sources. I like having 
publishers involved and editors who are able to fact-check 
content.
    Dr. Kelly. I'm a New Yorker, and I'll go with the Old Gray 
Lady.
    Ms. DiResta. New York Times, Washington Post, Wall Street 
Journal.
    Dr. Helmus. Major newspapers.
    Senator Manchin. Not one of you mentioned social media. Not 
one of you all mentioned what we're here talking about as where 
you get you news or where you trust your news to come from. I 
think that speaks volumes of what we're dealing with today.
    I have no further questions after that. Thank you very 
much.
    Chairman Burr. Thank you, Senator Manchin.
    And I just might add to his comment about what happened in 
France. France also did some things that constitutionally we 
can't do. So let's recognize the fact that they had a very loud 
message and they had a very big stick that they used. And we 
might not get the same results, though that doesn't change for 
the loud voice.
    Senator Rubio.
    Senator Rubio. Thank you.
    No one mentioned TMZ. There is some good stuff on TMZ.
    [Laughter.]
    And I'm on as often as I can get on there.
    Anyway, so I want to talk about the terminology that we use 
because I think it's one of the things that's really impeding 
the way forward, and get your insight on all of this. The first 
is, I've had people come up to me and say: Well, everybody 
spies on everyone. But this is not really about espionage, 
certainly not in the traditional sense. This is not--I mean 
there may be elements that involve espionage, you know hacking 
a computer, getting into a system network and stealing e-mails 
and the like. But this is not really an espionage situation.
    The other term that's always thrown around is collusion. 
And there's ongoing efforts to answer all those questions. But 
this sort of thing doesn't really involve, or doesn't really 
require collusion. You don't need the cooperation of a 
political candidate or party to be able to do any of this.
    In fact, many of the ads that were pulled down yesterday 
have nothing to do with a candidate or a party in the short 
term. And it isn't even quite clear what the psychology behind 
it is, other than to get us to fight against each other.
    So if you can just put--if people would just put aside the 
whole espionage focus and put aside, you know, the collusion 
focus, and let that be dealt with the way it's being dealt 
with, we'd get left with the term ``interference.'' And that's 
become such a generic term that it's almost become benign. You 
know, ``interference'' sounds like everything from the 
leadership of another country had a preference about who won 
the election, to actually like actively engaged in helping 
somebody get elected. And I would hope--and, maybe you 
disagree--I hope you agree, this is more than that.
    This is really, no, nothing less than informational 
warfare. This is just another type of warfare to weaken an 
adversary. And that's how Vladimir Putin views the United 
States of America. So, for example, if he conducted a kinetic 
strike, a military strike to take out anti-air defenses, he 
would do so to weaken our air defenses. And if they conducted a 
cyber attack to knock out our command and control, he's there 
to weaken our communication systems or our electrical grid.
    And if you do this, you do it in order to weaken our 
society, our willingness and capacity to fight, to work 
together, to come together as a Nation. This is part of their 
broader doctrine on how to confront an adversary.
    And on the escalation scale, it costs very little money, 
you can do it with limited attribution, and it works because 
the fact of the matter is, with all of the things happening in 
the world today, the United States Senate Select Committee on 
Intelligence has spent an inordinate amount of time on this 
important topic and there are so many other issues we could be 
focused on. So, it's worked to some extent.
    Is this assessment of it right? Isn't this--this is not 
interference. This is information warfare designed to sow 
division and conflict and doubts about--because whether it 
involves changing voter registration databases in the future at 
some point, potentially, or the stuff we're seeing now, all of 
that is designed to sow chaos, instability, and, basically, to 
get us to fight against each other.
    We're already fighting against each other in this country. 
All this does is just, sort of, stir that up even more. Is that 
an accurate assessment? Is this informational warfare?
    Ms. Rosenberger. Yes.
    Dr. Kelly. I agree 100 percent.
    Ms. DiResta. Yes.
    Dr. Howard. Yes.
    Senator Rubio. So to the extent that it is--and I think 
everybody's already asked you this question--but wouldn't one 
of the best things that could happen is that--we can focus all 
day on Facebook and Twitter, and Instagram. These are 
ultimately platforms who are being used for informational 
warfare. I don't believe they invited them in and there are 
things they can do to improve their processes, and I wish their 
disclosures were a little faster, but by and large, they're a 
platform that's being used. It would be like blaming the road 
builders because some enemy used that road that they built to 
put their tanks into your country.
    So there are things these folks can be doing to improve the 
way they operate, no doubt about it. But ultimately, we really 
should be focused on what's being done and not only who they're 
using to do it.
    And so my question is, why wouldn't these social media 
pages be in a position to potentially alert all of their users? 
Not just a public disclosure like they did yesterday in their 
press conference but actively send out to all of its user's 
alerts about every time they remove something, so that people 
can become conditioned to the sort of messages that are being 
driven by these informational warfare operations?
    Ms. DiResta. I believe they can. I believe Senator 
Blumenthal requested that they do so in response to the--back 
in September after the first set of hearings. They did push 
notifications to people saying that they had seen content, they 
had liked a page, they had engaged. I believe Twitter sent out 
e-mails to users who were affected.
    That kind of disclosure is absolutely necessary, because 
one thing that it does is it comes from a platform that is at 
least seen as somewhat trustworthy, whereas if they hear it 
from the media you see these polarized echo chambers where some 
people don't even believe this is happening.
    Ms. Rosenberger. Senator, I would just add that one of the 
things we know from looking at both the history of active 
measures as well as their use across Eastern Europe and Central 
Europe is that sunlight is one of the most effective antidotes. 
Transparency, exposure of this activity, is critical for both 
building resiliency and deterring it going forward. And so, I 
absolutely concur that the more information and the more 
transparency that the platforms can be providing to their 
consumers, to the users of information about these activities 
is absolutely critical.
    Senator Rubio. I don't have a question, Mr. Chairman. I 
just want to say that it's great that Facebook put this stuff 
out there and that we're having this hearing. I promise you, 
the vast majority of people that I know back home will never 
see a single one of these images because there's a lot going on 
in the news every day, constantly, by the hour.
    Chairman Burr. Senator Harris.
    Senator Harris. Thank you.
    Mr. Chairman, I'd like to put what I believe is a context 
in which we should be thinking about what happened in 2016. 
First, I think we're all clear that Russia attacked our country 
during the 2016 election and that they are continuing to attack 
us today. Russia not only attacked one of our most sacred 
democratic values, which is a free and fair election, but also 
I believe our very American identity.
    I often say that we, as Americans, no matter our race, 
religion, or region, have so much more in common than what 
separates us. And among what we have in common is a love of 
country and a belief that we as Americans should solely be 
responsible for the choosing of our elected leaders and the 
fate of our democracy and who will be the President of the 
United States.
    And I think of us then as being a large and diverse family, 
the American family. And like any family, we have issues and 
fissures that are legitimate and run deep and provoke potent 
reactions. We have a history of slavery in this country. We 
have a history of Jim Crow, of lynchings, of segregation, and 
discrimination. And, indeed, we have a lot to do to repair and 
to recover from the harm of the past and some harm that 
continues today.
    But let's be clear. Someone else came into our house, into 
the house of this country, the family of who we are as 
Americans, and they manipulated us; and they are an adversary, 
and they provoked us and they tried to turn us against each 
other. The Russian government came into the house of the 
American family and manipulated us.
    And we must take this seriously in that context and 
understand that when we debate, as we did in 2016, one of the 
most important debates that we have, which is who will be 
leader of our country, the Russians exploited our Nation's 
discourse to play into our deepest fear.
    And as leaders I believe then it is incumbent on us to 
speak to the American people about how we can solve this urgent 
national security threat. I believe, first, we must act 
urgently to bolster our country's defenses like our election 
infrastructure and cybersecurity, a bipartisan issue that we 
have been working on in a bipartisan way--I thank Senator 
Lankford and many of our colleagues--throughout the work that 
we've been doing on the Secure Elections Act.
    But second, I believe we need to make sure that the 
American public recognizes who is trying to sow hate and 
division among us, so that the American public can rightly 
identify and see it for what it is: an attempt to exploit our 
vulnerabilities for the purpose of weakening our country and 
our democracy.
    And with that, I'd like to ask, Ms. DiResta, in your 
written testimony you say that the Russian Internet Research 
Agency, IRA, efforts targeting the right-leaning, quote, 
``right-leaning and left-leaning Americans was unified in its 
negativity towards the candidacy of Secretary Clinton''; and 
that, quote, ``in pages targeting the left, this included 
content intended to depress voter turnout among black voters.''
    This seems to corroborate the intelligence community's 
finding that Russia was trying to hurt the campaign of one 
candidate in the 2016 United States election and help the 
other. Can you tell us more about what your research has found 
regarding the nature of the political content that the Russian 
IRA was pushing toward Americans on social media during the 
2016 campaign?
    Ms. DiResta. It was unified on both sides in negativity 
toward Secretary Clinton. It was not unified in being pro-
President Trump. So the pages targeting the left were still 
anti-candidate at the time Trump.
    On the right, we did see an evolution in which evidence of 
support for candidate Trump continued during the primaries. 
There was some anti-Senator Rubio, anti-Senator Cruz content 
that appeared. And there was a substantial amount of anti-
Secretary Clinton content on both the right and the left.
    On the left, that included narratives that either African 
Americans should not vote, should vote for Jill Stein, which 
was not a wasted vote, and during the primary there was support 
for candidate Sanders.
    Senator Harris. And then quickly, Ms. Rosenberger, you 
recently published a report policy blueprint for countering 
authoritarian interference in democracies. You described an 
event on May 21 of 2016 where two groups were protesting in 
Houston, Texas, and one was called the Heart of Texas that 
opposed the purported Islamification of Texas. On the other 
side, the United Muslims of America, who were rallying to 
purportedly save Islamic knowledge, and these protests were 
confrontational.
    Can you tell me, at the time were law enforcement or the 
protesters aware of who had manufactured the conflict?
    Ms. Rosenberger. No, our understanding is that they were 
not. One thing we do know is that, fortunately, law enforcement 
was present at the demonstrations and therefore was able to 
keep them separate. But one of the things that we believe may 
have been part of the intent of organizing simultaneous 
rallies--same day, same place, opposite sides of the street--
was probably to attempt to provoke violence.
    Senator Harris. And then just quickly, if we can follow up 
in any writing with the Committee, but I'd be interested in 
knowing what your recommendations are for how we can inform law 
enforcement, because obviously this is a matter that is about 
public safety and frankly also officer safety. As we know, many 
of these disruptions end up resulting in violence and harm to 
many individuals.
    Ms. Rosenberger. Absolutely. I would just point very 
quickly to the announcement from Facebook yesterday, which 
actually seems like it may have been something intended to be 
along similar lines with a protest attempting to gin up very 
high emotions.
    Chairman Burr. Senator Lankford.
    Senator Lankford. Thank you, Mr. Chairman.
    To all of you, in your research and the data that you're 
putting together to be able to help us in this and be able to 
expose some of the issues, thank you. You all have done a lot 
of hours at a computer and running a lot of data to be able to 
get to this point. And we appreciate that very much.
    Ms. Rosenberger, I want to ask you about some of the 
recommendations that your team has made and to follow up on one 
of the questions that Senator Blunt had started. You made some 
very specific recommendations that, when we discover 
attribution, which is not easy to do, but when we discover it 
and see it as a foreign actor, three main sets of responses you 
seem to have recommended: sanctions; making sure there's a 
reputational cost for the country that's doing it; and 
considering offensive cyber operations. I want to take those in 
reverse order.
    What would you consider an offensive cyber operation that 
would be effective in this means?
    Ms. Rosenberger. Well, Senator, as you know, the use of 
offensive cyber operations is itself a very complex problem.
    Senator Lankford. Right.
    Ms. Rosenberger. So I'm just going to kind of boil it down 
to be specific within this context.
    What I would say is, I think that there are instances in 
which when we are able to--when the U.S. government is able to 
identify-- for instance, the servers that are being used to 
carry out these operations, based on a variety of potential 
damage assessments, et cetera, I do think that there are 
instances in which that might be an appropriate course of 
action.
    Again, as we know in offensive cyber, this can often lead 
to a challenge of whack-a-mole. You set up a new server, et 
cetera. It does impose a cost. Of course, one of the things 
that we know that creates challenges is sometimes for these 
transnational operations they may, for instance, be using a 
server in the United States, or in the country--or in the 
domain of one of our allies. So that introduces complications.
    So it's not a super-simplistic answer. But I do think that 
there are instances in which we should consider it.
    Senator Lankford. So you also mentioned reputational costs. 
I'm not sure there's anyone left on the planet that doesn't 
understand that Russia does propaganda on their own people and 
does offensive propaganda against everyone else.
    What kind of reputational cost could you put on Russia, 
trying to expose their activities?
    Ms. Rosenberger. Senator, the reputational cost 
recommendation is a little bit more specifically aimed at 
China, where I think that, as others have alluded to, China has 
the capabilities and we're seeing them test these things in 
their neighborhood. China has a longer-term strategic interest 
that's much more about generating affinity toward it and its 
model. So I think that reputational costs would be more 
effective with China.
    I concur with you that, when it comes to Russia, 
reputational costs are difficult, although I do believe that it 
is important for the American people to hear clear and 
consistent messages from our leadership that Russia and 
Vladimir Putin are an adversary and a threat to our Nation.
    Senator Lankford. It was one of the areas that I was 
pleased with Facebook's announcement yesterday that this 
Committee had talked to Facebook about multiple times. It's one 
thing to be able to say that they are being used by an 
adversary; it's another thing to actually show the images.
    Ms. Rosenberger. Yes.
    Senator Lankford. Yesterday Facebook was rapid to not only 
say there's an outside entity, we're not saying it's Russia, it 
looks like it is, but here's the images they're putting out, 
here are the events they're putting out. And they put out a 
tremendous amount of data yesterday. That's much improved from 
where we were two years ago, where they were still saying, 
``We're not sure if they used us or didn't use us.'' Now 
they're being very forward-facing on that. That's helpful to be 
able to get information around faster.
    Traditional media multiplied that message by putting it out 
as well. That helps us to be able to get the message out. 
That's one of the things that we heard on this Committee 
multiple times: European allies have faced from Russia those 
attacks, that they've been able to get that and have that 
pushback immediately. So that was helpful to be able to see it 
yesterday.
    I have one other question to relate to this as well. You 
had mentioned a comment here in one of your recommendations on 
making sure that there is transparency, passing legislation 
that ensures Americans know the source of online political ads. 
Much of what happened with this was not an ad. It was just a 
profile that was set up that they did a tremendous amount to be 
able to develop it.
    How do you separate out being aware where an ad is coming 
from and just a profile that's a free profile, that's developed 
quite a following?
    Ms. Rosenberger. I completely concur that the political 
advertising piece of this effort was a small one. My own view, 
coming from a national security perspective, is when we 
identify a vulnerability we should close it off. And so even if 
it was not the most significant avenue that was utilized, I 
absolutely believe that applying the same standards to 
political advertising online that apply offline is absolutely 
essential. That being said, that will not solve the problem and 
we can't be in any way convinced that it will.
    And so that's why we also recommend a number of 
transparency measures about providing greater context for 
users, about the origin of information, about whether 
automation is involved, about requiring some kind of 
authenticity confirmation while protecting anonymity. I think 
these are the kinds of steps that can help mitigate some of 
these broader concerns that you're raising.
    Senator Lankford. I look forward to that conversation. We 
also need to have a conversation on is there a level of 
cooperation needed between the internet service providers, cell 
phone companies, and others that have a different level of 
information about where that information is coming from, and 
their cooperation with some of the providers of content.
    Right now we're leaning mostly on providers of content to 
say, help us with the data and help police yourself on it. But 
there's another whole level of information coming from the ISPs 
and from the cell phone companies and such, as well, of where 
that data is actually originating from.
    Ms. Rosenberger. Absolutely. And when you combine that with 
information that the intelligence community can provide, I 
think that that is how we begin to put together different 
pieces of this puzzle to create better identification 
processes.
    Senator Lankford. I look forward to that.
    Thank you, Mr. Chairman.
    Chairman Burr. Senator Reed.
    Senator Reed. Well, thank you very much, Mr. Chairman, and 
thank you all for your excellent testimony.
    We all here appreciate what Facebook did yesterday. I think 
it was a very appropriate and timely response. But there was a 
comment that you made, Dr. Howard, that I think is very 
important and bears repeating, which is basically that 
companies are beyond self-regulation. Could you elaborate on 
that, and then I'll ask the panel if they concur?
    Dr. Howard. I think much of what we've discussed today has 
come from evidence that has been released very slowly over a 
two-year period, often after prodding from you, multiple kinds 
of Committee investigations and multiple governments. When I 
say that I think the social media industry is past the point of 
self-regulation, I mean mostly that the more public, open data 
there is about public life, the faster we can catch these 
moments of manipulation.
    For the most part, we've been speaking about American 
citizens and us as individuals and the impact on our--on our 
democracy, but democracies have civil society groups, faith-
based charities, civic groups, prominent hospitals and 
universities that are also under attack. And these are also 
distinct to democracy and these are part of--these are the 
organizations that I think can help defend us.
    Senator Reed. But I think, again, we have and we have 
gotten--and Chairman and the Vice Chairman have done a 
remarkable job. We've gotten, as you say, slowly and surely 
we've gotten a little bit more response. But I think the time 
is running out, frankly, and I think we have to move 
legislatively to set in motion a framework of disclosure.
    Someone mentioned, you know, options to remove information, 
reduce information, or inform the participant. I don't think 
that will happen voluntarily. It's the prisoner's dilemma. I'm 
sure they would all love to do it, but unless everyone does it 
it's not cost-effective or it's not culturally consistent with 
their corporation.
    So, let me just go with Ms. Rosenberger and down the line 
about this comment about do we have to move very quickly to set 
up the framework, consistent with the First Amendment 
obviously, that allows us to deal with this issue?
    Ms. Rosenberger. Senator, one thing I would note is that, 
while the United States has not taken any steps like this, 
other countries or international institutions have. So the 
European Union has been moving out, not just with GDPR but 
other conversations about regulation of social media and online 
platforms. China is using its market access as enormous 
leverage over these companies in order to basically set the 
terms of the debates.
    By being absent from this conversation and not taking steps 
to figure out some of these very thorny issues, but right now 
what's happening is other countries, other governments, are 
setting the rules for this space. And that is in many cases not 
in the interest of the United States. I think some of the ideas 
that Senator Warner put forward in his paper earlier this week 
are absolutely worth very, very serious conversations and the 
kind of things we need to be doing.
    Senator Reed. I think one of the ironies, as you point out, 
is that we could be disadvantaged because not only don't we get 
to make the rules, but our companies, our international 
companies, will follow the rules in China, follow the rules in 
Europe, and not follow the rules here, leaving us much more 
vulnerable.
    Dr. Kelly.
    Dr. Kelly. I believe that it's critical to have access to 
data from all the platforms in order to detect this kind of 
activity. And that is a sophisticated analytic capability that 
needs to be created, and it's going to be a lot of time and 
effort from a lot of smart people.
    Where does that data then sit? Who is it that gets to look 
at it? And I think that our concerns about privacy and the 
First Amendment lead us to at least suggest we ought to think 
about industry-oriented consortiums or things that allow a kind 
of--without moving it too far from industry--let them at least 
have the first crack at the detection piece.
    Senator Reed. Well again, I think your instincts are very 
consistent with the views of most Americans. But this now has 
been several years, and we are still waiting for the kind of 
robust response. Perhaps the Facebook example yesterday is a 
good sort of sign that the industry is coming around, but----
    Dr. Kelly. Yes, Senator. I think that the proactive 
transparency we saw yesterday from Facebook shows real 
leadership in the field. And I think we need more of that.
    Senator Reed. We do, and my concern is that, again, there 
are other incentives, disincentives, profit, culture, et 
cetera, that could inhibit that.
    My time is expired, but ma'am, please.
    Ms. DiResta. I think the key is to have oversight. We spoke 
about finance a little bit earlier, high frequency trading in 
particular. There were two sets of regulators. There were self-
regulatory bodies that stepped in, there were the exchanges. 
There're some parallels there, where the exchanges are able to 
see what's happening and immediately, before the regulatory 
process happens, step in and say: Not on our platform.
    I think that that's actually an interesting model; this 
combination of regulatory, self-regulatory, the exchanges 
acting independently, and an oversight body looking to make 
sure the entire ecosystem remains healthy.
    Senator Reed. You're talking about the security exchanges?
    Ms. DiResta. Correct.
    Senator Reed. Yes.
    Doctor, comment?
    Dr. Helmus. I'll just say our research certainly shows the 
importance of tagging this information so that audiences can 
know the source of it. The appropriate legislative mechanism 
for that I can't speak to.
    Senator Reed. Thank you.
    Thank you, Mr. Chairman.
    Chairman Burr. Senator Cornyn.
    Senator Cornyn. I can't help but recall the words of H.L. 
Mencken, who said that for every complex problem, there's a 
solution that is clear, simple, and wrong. And so I think we 
need to be a little bit--demonstrate a little humility when we 
begin to approach this from a public policy perspective, what 
our response should be.
    But I also want to ask you about my impression, which is, 
it would be a mistake to think this is just about elections. 
And one of the reasons I say that, I came across an article 
recently entitled ``When A Stranger Decides To Destroy Your 
Life,'' where somebody used a fabricated story about a woman 
and posted it online on a website called ``She's A 
Homewrecker,'' and basically ruined this woman's life, or at 
least challenged it in a dramatic way.
    And then I thought, well, this is a tool that could also be 
used by somebody who wants to tank a stock price by disparaging 
the reputation of a company and then perhaps sell it short and 
reap a significant reward. Or, if you're a Chinese telecom that 
wants to get rid of some of the competition, particularly when 
it comes to developing 5G technology or some other cutting edge 
technology, this is also a pretty useful tool, using this 
information warfare.
    So all of this leads me to wonder if by focusing solely on 
the election, which is dramatic and of tremendous concern--and 
I share the concerns of all of you and all the Committee--that 
if we just focus on that and not the rest of the picture, 
whether we are missing the right picture.
    Ms. DiResta, do you have any observations?
    Ms. DiResta. Yes sir. We look at--at New Knowledge, we do 
look at misinformation and disinformation targeting corporates. 
On the state actor front, we have seen evidence of campaigns 
targeting agriculture and energy as two industries of interest 
to foreign powers. On energy, we've seen anti-fracking 
narratives, anti-fracking bots, by countries affiliated--
countries with strong oil interests. In agriculture, that's 
taken the form of spreading fear about GMO's.
    Senator Cornyn. Yes, Ms. Rosenberger.
    Ms. Rosenberger. I'd also note that in the case of Russia, 
we know that they use these operations to try to shape our 
conversations and views on geopolitical issues, especially 
those of interest to Russia.
    So for instance, one IRA-sponsored post on the fake--the 
inauthentic account ``Blacktivist'' asked, how would we feel if 
another country bombed us for the poisoned water in Flint and 
for police brutality? That was posted in the immediate 
aftermath of the Trump Administration's strikes on Syria after 
the chemical attack in March of 2017. So a clear instance of 
that account actually criticizing an action by the Trump 
administration, using emotional issues like the Flint water 
crisis and police brutality as an avenue in, to try to shape 
views on a geopolitical issue of interest to Russia.
    Senator Cornyn. Dr. Kelly.
    Dr. Kelly. So I completely agree that there's a commercial 
dimension of this which is underreported, and there's a lot 
more going on in the commercial space in terms of these attacks 
than is reported. Renee discussed some of them.
    We've seen others with our customers. And sometimes they're 
tied, these political attacks and the attacks on corporations, 
where corporations will be basically punished with falsely 
amplified boycott campaigns and similar measures for doing 
something which is politically not what Russia would like to 
see.
    Senator Cornyn. Dr. Helmus, the psychologist Jonathan Haidt 
gave a speech I saw online recently called ``The Age of 
Outrage'' at the Manhattan Institute, where he basically 
describes a narrative where there's a lot of things conspiring 
to manipulate us and invoke outrage for whatever is going on, 
whether it's cable news, social media or the like.
    What can regular Americans do to protect themselves against 
those, whether they be state actors, whether they be 
individuals, whether with malicious intent? What can they do to 
protect themselves? There's one thing for the government to do 
what we can do from a policy standpoint, but what can average 
individuals, consumers of social media online, do to protect 
themselves from being manipulated by fake information or 
misinformation?
    Dr. Helmus. You know, our work, our work in Eastern Europe, 
as was mentioned earlier, suggests that people in those areas 
are very well aware of Russia's intentions. Russia lurks very 
closely to those nations, and people know what's going on.
    I think obviously the way to apply that to the United 
States is understanding the need to know the sources of your 
information, be able to adjudicate and assess the truthfulness 
of that information, the potential biases of that information, 
and then try to make your own decisions on that. Ultimately, 
it's about being a careful consumer of information.
    Senator Cornyn. Thank you.
    Chairman Burr. Thank you, Senator Cornyn.
    The Chair's going to recognize himself for just a question, 
and then I'm going to recognize the Vice Chairman. We'll see if 
we've got any members that return after that second vote 
starts. But it's my intention to try to wrap up as close to 
noon as we can.
    You know, I've heard a lot of phrases to describe what went 
on just in the last few minutes--disinformation campaign, 
misinformation campaign, societal chaos campaign. Dr. Howard, I 
think you used one that struck me earlier--computational 
propaganda. And my suggestion is that we not come up with a 
single one, because we're dealing with a generational issue. 
And I think somebody alluded to it earlier, that it's much 
easier to take a generation that grew up with these devices and 
accomplished some type of change than it is for somebody that 
struggled, like me, to learn how to use the device and found 
the most useful TV ad, when somebody defriended somebody they 
took their picture off the wall, if you remember that Post-It 
note? That struck home to me.
    So I think it's important that we speak to as many 
languages on this, because the task that we've gotten before us 
is to penetrate the entire population. And it's not limited to 
the United States. As you have described today--and, you know, 
I hope if there's a takeaway for the media--this is going on 
everywhere. It's not limited to politics. It's much more 
intrusive in the economic, global economic picture today, than 
it is in the political landscape.
    It's just we like to write about politics. And so, I want 
to point you to this chart I've got over here. It looks like 
something that would be used at the psychiatrist's office, to 
have you describe what it was. And I'm going to ask you, Dr. 
Kelly. In our analysis, we went through and we tried to connect 
the dots: Who generates it, where does it go, does it go to the 
right, does it go to the left? And what my staff determined--
and I'm looking for your agreement or disagreement--is that in 
a lot of cases, at least in the '16 cycle, the same person 
sitting somewhere in the world generating, initiating this 
propaganda, both initiated the part on the right and the part 
on the left, that it wasn't two different individuals. 
Therefore, this was a very well-orchestrated, very 
choreographed plan that they carried out.
    What's your comment on that?
    Dr. Kelly. Well, this is very interesting and it tells a 
deeper part of the story that the Clemson--recent Clemson paper 
tells, which is that you don't just have, you know, one room 
full of people who are running right-wing trolls and another 
room full of people running left-wing trolls. It's actually the 
same people at the same computers. So, I think that is a real 
lesson in how we need to worry about the way they're trying to 
play us like marionettes, right and left.
    Chairman Burr. And is it safe to say that it's so easy that 
Russia uses existing views inside of American society; all they 
do is try to make the gap bigger between the two by inflaming 
both sides?
    Dr. Kelly. I agree. I think that they're not creating these 
divisions. They're not--you know, and they're doing the same 
thing in Europe and elsewhere. They find in a society what are 
the vulnerabilities, what are the groups that oppose each 
other, and they're basically arming them. It's kind of like 
arming two sides in a civil war so you can kind of get them to 
fight themselves before you go and have to worry about them.
    Chairman Burr. So, Ms. Rosenberger, is this any different 
than really what we faced in the 1960s in the campaigns by the 
Soviet Union against their adversaries in the world of 
propaganda?
    Ms. Rosenberger. It is and it isn't. I think the playbook 
in some way is the same, but the tools that they can use to run 
those plays are very different. And what we have seen is that 
digital platforms have supercharged the ability to take that 
playbook and to really reach a much broader audience more 
quickly and in a much more targeted kind of way than what we 
would have seen in the 1960s.
    There's a difference between hand-cranking out leaflets in 
a basement and passing them around under covert means than 
there is from putting information online using automated 
techniques, inauthentic personas, to watch it go viral.
    Chairman Burr. I will say that the Vice Chairman has been 
one of the most outspoken about how technology allows this plan 
to be on steroids. Words like bots, and he comes up with some 
new ones every day, that many on the Committee and most in the 
country either didn't understand at the beginning of this or 
still don't understand.
    So I'm not sure that we can emphasize enough the intent, 
but, more importantly, the capability, and he deserves a 
tremendous amount of credit for raising this to the level that 
it is.
    I recognize the Vice Chair.
    Vice Chairman Warner. Thank you, Mr. Chairman. That's the 
nicest thing you've said about me and you said it with no 
members here.
    [Laughter.]
    Chairman Burr. I can repair the record.
    Vice Chairman Warner. You can repair it.
    Well, I want to start with what Senator Cornyn and you just 
said. I think the political piece of this is really going to be 
relatively small compared to the overall threat. And I think 
one of the things we've not talked about yet today is the 
marrying of cyber attacks with misinformation and 
disinformation.
    So, if somebody goes out, and let's say, for example that 
the Equifax hack was actually done by a foreign actor, and it's 
got personal information on 146 million Americans, then that 
actor contacts you with your personal financial information, 
you're going to open that, open that message. And then, if 
behind that messages comes a live-stream video of what appears 
to be Mark Zuckerberg or Jay Powell, the Chairman of the 
Federal Reserve, the ability to wreak havoc in the markets, it 
really almost overwhelms what we've seen on the political 
front. So this cyber-misinformation combination is one that's 
important.
    I appreciate when we were talking earlier and recognize the 
rest of you--you really helped me recently--that even something 
that seems so obvious as should we have the right to know 
whether we're being contacted by a human being or a bot has 
layers of complexity to it. But I think we ought to continue to 
explore that.
    Ms. Rosenberger, I've got two points I want to make. One 
is: you have rightfully said we want to make sure that we 
protect anonymity, particularly, you know, the foreign 
journalists in Egypt or the female journalists in Egypt, and 
the ability to hide sourcing gets easier and easier with the 
use of virtual private networks.
    Even with those challenges, shouldn't we have some ability, 
though, to say if--should an American have some ability to put 
some kind of geocoding location so that if somebody says 
they're posting a message from Michigan or North Carolina and 
it's originating in Macedonia or Russia, you ought to at least 
have that information? Again, you can still--we don't have to 
get to content, but we can just know that there ought to be a 
second look, because the origin of that post may not be what is 
described in the post.
    Is that a possible tool?
    Ms. Rosenberger. I think that there are ways that can be--
that's one thing can be investigated. I think there are a 
variety of ways to require authenticity without requiring 
disclosure, sort of frontally, right? So a platform--in fact, 
some of them actually do require confirmation of authenticity.
    Some of them require--some of them include a verified check 
that then sort of puts another label of--another level of sort 
of authenticity on top of that.
    But I think that there are ways that authenticity can be 
confirmed or at least we can do a lot better to try to confirm 
it, while still ensuring that we do have anonymity protected 
and--sorry.
    Vice Chairman Warner. Let me follow up on that, because 
we've heard today some members talk about Section 230. We've 
heard some members talk about GPDR and the whole privacy 
bucket. You know, I've raised some issues about humans versus 
bots. We're talking here about geocoding.
    One of the areas that we haven't talked so much about--and 
I'll appreciate the Chairman giving me this extra time--but are 
there market forces that could help regulate if we ensured more 
competition? For example, I was an old telecom guy and it used 
to be really hard to move from one telco to another until we 
implemented requirements of number portability.
    You know, the Facebooks, the Googles, the Twitters dominate 
the markets. There may be, as people increasingly have concerns 
about the safety of their data, the ownership of their data, 
fake accounts being used--and this doesn't completely work as 
an analogy; let me state that up front. But the notion of data 
portability, the notion that would say: if you want to take all 
of your content off of Facebook, including your cat videos, 
they have to make it in a user-friendly form to move to NewCo, 
because NewCo as part of their business model is going to have 
much higher levels of authentication.
    I mean, is this--is that a possible avenue to look at, as 
well? And I'll take anybody on the panel. Now, and when you get 
into data portability, you've also got to get into 
interoperability issues, which makes it again not a perfect 
analogy. But is there a nub of an idea there? Anybody?
    Dr. Kelly. I don't have an answer on that exactly, but I 
think as you're thinking about that it's important to think 
that, in these kinds of disinformation campaigns, two of the 
most powerful things are a combination of anonymity and 
atomization. You know, those two things together allow you to 
run very large bot armies, so to speak, that are able to effect 
your objectives. It's important--so those two pieces are 
something you have to think about, how that concern weaves 
through this.
    The other thing to realize, though, about that is that the 
bots are only part of the army, so to speak. So by solving that 
problem, even if you force them to identify, you've basically 
forced a medieval army to, you know, put a flashing light on 
the archers. There's a lot of other folks out there that are 
playing more direct roles that you still have to worry about. 
And I think that those more high-value assets in this kind of 
cyber social battle are a little bit harder to find. And 
they're the ones that, you know, you can't just fire up 
another--another hundred of them if you shut--if you shut the 
first one----
    Vice Chairman Warner. We've done a lot of recognition of 
Facebook today. I think we should also recognize Twitter, which 
in the last two months has, you know, even counter to their 
business model, has taken down lots of fake accounts, lots of 
fake bots.
    But is there any, you know--is there any possibility here 
about trying to add more competition into the marketplace as a 
way to help us sort through this? Not so much just a regulatory 
approach, but a competitive approach?
    Ms. DiResta. I'd say one of the challenges is if you 
fragment the platforms and fragment where people are, then 
there are more platforms to watch, since this is a systems 
problem and it does touch everything. That's not to say that 
that's not an appropriate course of action, because one of the 
reasons why this is so effective is there is this mass 
consolidation of audiences as the internet, which was 
originally much more decentralized, kind of came to have mass 
standing audiences on a very small handful of platforms.
    The challenge there is also, though, that people like that 
consolidation. They like having a lot of--you know, all of 
their friends on one platform. So this is a--it's kind of a 
chicken-egg problem to think about it in those terms, but happy 
to continue the conversation.
    Vice Chairman Warner. I would just--if anybody wants to 
add, my last comment would be: I think one of the earlier 
statements that were made was that each of these platforms, 
even as large as they are, really only look after their own 
content or their own usages. So that ability to see across the 
whole ecosystem is mostly lacking.
    And I think the Chairman and I--and we spent a lot of time 
trying to learn up on this--feel like the U.S. government is 
trying to get a handle on this, but has got a lot of work to 
do, as well.
    So I really want to thank all of you. And one of the things 
that we might be able to find consensus on, you know, is there 
more ability for us to urge, force, nudge the platforms in an 
anonymous way to share more data with independent researchers? 
Because you guys actually can give us that system-wide view 
that, for all the size Facebook has, Facebook can't give us the 
complete picture.
    Ms. Rosenberger. Senator, I think that that's exactly 
right. I think we need two different kinds of information 
sharing, and ideally, they can somehow be combined. One is 
greater data-sharing between the public sector and the private 
sector, bringing together the capabilities of the U.S. 
government and the intelligence community, with the 
capabilities and what the platforms are able to see happening 
in their own ecosystem. Of course, that needs to be with 
privacy and speech protected, but I think there are mechanisms 
to do that, number one.
    Number two is cross-platform information sharing. So I 
would think about this as both a vertical and a horizontal 
challenge. And then you have the question of outside 
researchers, which is absolutely critical. I think that Renee 
mentioned earlier the Global Internet Forum to Counter 
Terrorism. I think that's one model to look at in this space.
    There's other models, including from the financial 
integrity world as well as from the cyber security world, where 
you have been able to bring together different parts of 
industry, academics and the government to ensure that the full 
picture is put together to best go at this problem.
    Vice Chairman Warner. Well I just want to again thank all 
of you, but I also particularly want to thank the Chairman, his 
notional idea. He did get this beyond taking the Post-It note 
off the refrigerator. But he has been a great ally, has moved 
this Committee forward on a whole host of technology issues.
    This is one where there is no Democratic or Republican 
answer, since clearly the goal of our adversaries was not to 
favor one party over the other. It was to wreak havoc and split 
divisions. And I think this Committee, under your leadership, 
is trying to take this issue on in an appropriate way.
    Thank you, Mr. Chairman.
    Chairman Burr. And I thank the Vice Chairman.
    You know, I was just sitting here thinking a lot of good 
has happened since we started this drumbeat over a year ago. A 
lot of changes have happened that I think 12 months ago at some 
of the companies we would have said ``Never do.''
    A big ship is not turned around overnight. It takes a 
while. But I think that they have now given us an opportunity 
to work with them. And I hope that in a month, when we have at 
least three of the platforms in, that we will see a willingness 
to collaborate with us, to come up with a solution that fits 
both legislatively and from a standpoint of their corporate 
responsibilities.
    So I'm optimistic that we're headed--that we're started on 
that pathway to a solution. You know, I remind people that it 
was this Committee that took on legislation for cyber security 
when everybody said it couldn't happen. Is it perfect? No. Was 
it a good first step? Yes.
    And part of the challenge, because we're the filter for 
technologies changes in the world--there's no Committee of 
Technology in Congress, there's no Agency of Technology in 
Washington. It all sort of dumps in our lap, and we have a 
perspective that nobody else has. And technology will drive, 
for the next 10 years, the way we do things, the way we 
communicate, where we go, how we do it. Everything in life is 
going to be driven by technological change.
    So, this is very appropriate that we would be talking about 
a new architecture, not necessarily a new architecture for 
social media, but a new architecture for the relationship 
between government and the private sector.
    And I hope that if there's a takeaway from today's hearing, 
it's that this is the last time we're going to associate the 
propaganda effort that we see, with an election cycle. There's 
been no interruption since 2016. There was no interruption from 
2014. This was planned out well before we knew who two 
candidates were, we knew the differences between two parties, 
or where the American people's hot button was. It's flexible 
enough and it's nimble enough that it's going to attack 
whatever the hot button is at a given time that they want to 
initiate.
    I can't thank all of you enough for your candid and 
insightful testimony. You've given us a lot to think about as 
we wrestle with how to counteract the problems of foreign 
influence and its use on social media.
    I want to summarize what we've heard today for the American 
people. The Russians conducted a structured influence campaign 
using U.S.-based social media platforms and others to target 
the American people, using divisive issues such as race, 
immigration and sexual orientation. That campaign is still 
active today. They didn't do it because they have political 
leanings to the right or to the left, but because they--or 
because they care about our elections--but rather because a 
weak America is good for Russia.
    Some feel that we as a society are sitting in a burning 
room, calmly drinking a cup of coffee, telling ourselves this 
is fine. That's not fine, and that's not the case.
    We should no longer be talking about if the Russians 
attempted to interfere with American society. They've been 
doing it since the days of the Soviet Union and they're still 
doing it today. The pertinent question now is: what are we 
going to do about it? And it won't be an easy answer. The 
problem requires all of us--government, private sector, civil 
society, the public--to come together and leverage our distinct 
strengths and resources to develop a multi-pronged strategy to 
counteract foreign attacks.
    We've heard about the problem today and have considered 
some potential recommendations and solutions. The next step is 
to hear from the leaders of social media companies themselves. 
And I'm certain that they, too, learned a fair amount today 
while watching this hearing, and I look forward to their 
responses. They owe it to the American people to communicate 
clearly and transparently what they view their role to be, and 
what they're doing to combat these foreign influence 
operations.
    As I mentioned previously, this issue goes far beyond 
elections. We're fighting for the integrity of our society. And 
we need to enlist every person we can.
    With that, I want to thank you for your time today. I think 
I've hit within about a minute of what I told you our target 
would be. This hearing is adjourned.
    [Whereupon, at 11:58 a.m., the hearing was adjourned.]

                         Supplemental Material
[GRAPHICS NOT AVAILABLE IN TIFF FORMAT]

  

                                  [all]