
Day 1. Friday, 13 September

At the start of the two-day seminar, held at the Frontline Club, London, Julia Taranova (Russian Readings) and Ilya Yablokov (University of Leeds) welcomed the speakers and guests to the third and final event in the series of seminars dedicated to media and journalism. Taking the audience through the programme of an event that brought together social scientists, media experts, and journalists, the curator of the series, Dr Ilya Yablokov, pointed out that the seminar aimed not only to address the issues directly relating to real versus fake news in the narrow sense but also to examine the perspectives and opportunities that the digital age and big data offer to modern journalism.

The seminar opened with the keynote talk by Prof. Steven Livingston from George Washington University. Prof. Livingston is also the Director of the recently founded Institute for Data, Democracy, and Politics, which among other things aims to investigate the nature of disinformation campaigns and the ways they travel globally across different platforms and various kinds of social media.

Prof. Livingston took up the challenge of inviting the audience to consider the positive effects that modern technology and the proliferation of available data can bring about. He noted that over the past few decades, attitudes towards technology and the changes it brings to society have shifted from elation to gloom and pessimism. Prof. Livingston pointed out, however, that technology is, after all, only a tool that can be employed to achieve various outcomes. What matters, he suggested, is not the technology itself but the intention of those who use it. Assessing the role of technology in modern journalism, Prof. Livingston proposed to consider what he referred to as the affordance of the tool in question, i.e. the variability of the ways the tool can be used and the set of potentials it offers.

His view was that while there is nothing inherent or deterministic about technology itself, there is a habit of thinking about it in dystopian ways. Indeed, the rapidly growing availability of CCTV cameras and of other ways of gathering information (which can then be bought and sold) has led to the collection of detailed information on all of us, thus enabling officialdom to keep track of virtually anything. This type of technological affordance carries with it serious implications for privacy and human rights. At the same time, he emphasised that we should be aware that these same technologies enable journalists and human rights organisations to conduct investigations that would otherwise be impossible. Thus, the availability of sensors and all the data they collect produces opportunities as well as concerns. While the US and Europe are making efforts to regulate the negative consequences of sensors and data that violate privacy, regulators tend to see only part of a larger picture, and the resulting regulation may therefore do more harm than good.

Prof. Livingston started his analysis of the role of technology in journalistic investigations by considering the transnational advocacy networks model developed by Margaret Keck and Kathryn Sikkink in the 1980s–1990s. The model suggests that when human rights organisations visit locations with reported human rights abuses, communicate with other organisations, and record the instances of abuse, they create pressure on states that do not comply with human rights norms. Those organisations thus have the capacity to change the behaviour of repressive regimes. This model, however, Prof. Livingston stated, fails to recognise that it is next to impossible to get to many places where investigations of human rights abuses and war crimes are most needed. Secondly, the model was developed in the pre-digital age and therefore does not take modern technology into account. Prof. Livingston suggested adapting the model to the reality of the 21st century by allowing for the role of sensors and collected data in putting pressure on regimes trying to conceal crimes, whilst at the same time admitting that it can be very hard for human investigators to get to many of the zones of suspected crimes.

The speaker suggested that, if the advocacy model is correct in that investigations of abuses can alter the situation, then using data-collecting technology, including Internet-based devices, earth observation satellites, CCTV cameras and location-tracking devices, should help record and make known, and possibly prevent, human rights violations and war crimes even in remote areas.

Prof. Livingston stressed that the available data is growing exponentially. The same raw material can thus be employed for suppression and surveillance as well as for the verification of reported events, the challenging of disinformation, and the creation of an accurate understanding of the world. Analysing images relating to the Russian-Ukrainian conflict zone, as well as those taken in Sudan and Burundi, the speaker demonstrated how data collected by mobile phone and satellite can reveal tensions between official claims and the objective data from the images. Prof. Livingston pointed out that advances in technology and the proliferation of data have led to the emergence of a new kind of organisation (like Bellingcat, Digital Forensics Research Lab, the White Helmets, Syria Archives, Open Source Investigations), whose purpose is to utilise this data in order to identify and report on events. Many of these organisations exist online and conduct their investigations in collaboration with one another. Thus, the video investigating the alleged use of chemical weapons in a Syrian village in 2018, presented by the New York Times Video Investigations Unit as largely theirs, is in fact an example of a collaborative undertaking that utilises various kinds of data collected by various devices and analysed by various specialists. This sort of investigation creates a hybrid system, or assemblage, made possible by the involvement of a number of organisations as well as the use of satellite imagery, pictures taken by phones, and the expertise of physicists and photogrammetrists. Thus, modern journalistic and human rights investigations, while still performed by journalists, are supported by a number of organisations working collectively to tap into the available data in order to report on events in a scientifically grounded way.
Modern investigative journalism in this area has developed into a hybrid system that engages human rights organisations, satellite companies, people sharing pictures, and technical specialists. In Prof. Livingston’s view, different kinds of organisations and specialists will have varying degrees of prominence at various stages of the investigation. The presentation of the final report, itself drawn from various sources and produced with the participation of various organisations, might bring a robust public institution or newspaper to the fore, as such entities are less vulnerable to possible disinformation campaigns launched in response to the investigation. In the end, it is technology that allows the data to be collected and shared so as to provide truthful reporting on events.

Concluding his talk, Prof. Livingston reiterated his concern that policy-makers tend to see the negative outcomes of the use of technology and fail to consider that the same platforms and the same social media enable investigators to verify things. If platforms (like Facebook and YouTube) are forced to remove pictures and videos algorithmically, they will get rid not only of hate speech and disinformation but also of the evidence of that very behaviour. Thus, instead of fixing the problem, they will damage the means of identifying, investigating and resolving those crimes. The challenge, as Prof. Livingston stated it, is therefore to understand the problem more fully before formulating sound policy responses.

The first panel session of the day considered the ways in which technology has changed journalism, and was moderated by Sumit Paul-Choudhury, former Editor in Chief of New Scientist. He started the session by inviting the four panellists – Aleksey Amyotov (the Village), Ivan Sigal (Global Voices), Turi Munthe (North Base Media, Demotix), and Gregory Asmolov (King’s College London) – to share their views regarding the issues raised in the keynote talk earlier that day.

Ivan Sigal suggested that the volume and ubiquity of media presence, brought about by technological progress over the past fifteen years, have changed the way we actually perceive the world and the way we approach narrative. The Internet allows us to see everything at once. Past, present, and future can exist there simultaneously, and geography is compressed into the network space. As a result, the way in which stories are told has changed fundamentally, and the traditional linear narrative presented in a single voice from the perspective of a writer has given way to the pointillist view of the world enabled by the Internet. At the same time, as Ivan Sigal reminded us, human attention, as well as the capacity of an individual to absorb information, does not change. He alerted the audience to the fact that in the Internet age, where content is cheap and abundant and audiences are highly segmented and have control, it is the attention of the audience that is becoming a matter of contest. Ivan Sigal shared his concern that the media is being increasingly pressured into commodification, which in turn is leading to more simplistic ways of storytelling and is pushing audiences back towards the passivity of the viewing experience. Interactivity is being replaced by a presentation of narratives that more and more resembles 21st-century television. Ivan Sigal referred to the notion of “affordances” discussed in the morning session and stressed that, in the end, we would need to decide how we are going to shape and use our technology.

The next speaker, Turi Munthe, commented that in the past decade Internet technology has led to an information surplus, with broken models of content. He quoted James Bridle’s book New Dark Age, in which Bridle proposes that in a media ecosystem defined by an information surplus, the response to that surplus is either apathy or conspiracy, both being key drivers of populist politics. Commenting on media formats, Turi Munthe suggested that while Virtual Reality has so far failed to deliver, it has huge potential for storytelling that would allow the user multiple perspectives and would be profoundly interactive.

Gregory Asmolov expanded on the idea of the hybridity of modern journalism. While recognising the new opportunities that hybrid organisations provide, Dr Asmolov also shared his concern regarding the increasingly digitally mediated relationship between a journalist and the event. He stressed that as fewer and fewer journalists are actually present in conflict zones, relying instead on activists and sensors, the distance between a reporter and the event affects the way the event is perceived. Speaking about the way journalism develops in the context of technological change, Dr Asmolov suggested considering three points that he saw as crucial for modern journalism: (1) affordances that involve new types of actors, organisations, business models, and non-human actors; (2) new skills, both hard and soft, including adaptability and collaboration, that correspond to the new affordances; and (3) self-reflexivity on the role of journalism, without which skills and affordances lose their meaning.

Aleksey Amyotov agreed with the previous speaker that the trend of basing journalistic reports on accounts submitted by others does not always work well. Taking lifestyle magazines and city guides as an example, Aleksey Amyotov stressed the importance of personal and emotional involvement in reporting, which requires first-hand experience and actual presence on site. The same holds true for interviews, which turn out to be much more successful when conducted by a magazine’s own reporter rather than through an intermediary.

Concluding the session, the panellists suggested that among the challenges posed by technology are the attention economy (Gregory Asmolov) and the necessity of developing a new business model (Aleksey Amyotov). On the other hand, the high adaptability of modern hybrid systems provides significant advantages. In Ivan Sigal’s view, what matters most is the contest of ideas regarding what we need technology for and, consequently, the way technology will be developed and employed.

The second panel was dedicated to legislation and regulation initiatives in response to the problem of fake news and misinformation. Chairing the session, Martin Moore (King’s College London) reminded the audience that the positive perception of technology and technological platforms brought about by the events of the Arab Spring in 2011, when technology was seen as capable of undermining authoritarian regimes across the world, was followed by a marked shift in public attitudes. By 2016, the spread of misinformation and fake news facilitated by modern technology was widely considered to be a problem and a threat to democracy.

Martin Moore started the discussion by analysing the Online Harms White Paper, the UK government’s ‘plan for a world-leading package of measures to keep UK users safe online’. The legislation is pioneering and is intended to create a model to be copied by other countries. In Dr Moore’s view, the initiative employs a useful analogy between technological platforms and real-life public venues, rather than publishers. The analogy implies that whoever has control or ownership of the venue has a duty of care towards those who use it. However, applying the concept of duty of care to technological platforms raises a number of questions as regards the definition of online harm, the interpretation of duty of care, the ways in which duty of care can be enforced, and the types of companies and services it will apply to. Dr Moore concluded that the very broad definitions used in the legislation in its current form might lead to significant interventions and to censorship of legal content.

Prof. Kalina Bontcheva (University of Sheffield) also focuses her research on the UK, and she shared observations and concerns developed in the course of analysing the involvement of social media in UK general election campaigns and the UK EU membership referendum. She noted that now that political debates take place online, there is a greater risk of losing data of strong public interest relating to key historical events. Comments deleted by users can no longer be retrieved; the same applies to hate speech, which is removed automatically. Prof. Bontcheva suggested that in addition to regulations on what the platforms should delete in order to protect the rights of users, there should be another set of regulations protecting the right of researchers to study records of historical significance. Her strong view was that such data should be preserved in libraries and made available for research. Prof. Bontcheva stressed the potential of engaging researchers in journalistic investigations as well as in collaborative work aiming to develop uniform regulation criteria to be applied by all the platforms in the UK and beyond. The speaker also drew attention to the problem of regulatory algorithms that may reflect certain biases. She emphasised the importance of determining to what degree these biases can be identified and controlled and of assessing the impact of these built-in biases on freedom of speech.

The workshop at the end of the first day of the seminar was dedicated to investigative journalism using big data and open-source information. The workshop started with a presentation by Olesya Shmagun, who represented the Organised Crime and Corruption Reporting Project (OCCRP), an organisation that brings together investigative journalists conducting document-based investigations in a number of countries, predominantly in Eastern Europe. The talk focused on document-based journalism and demonstrated that the distinction commonly made between document-based journalism and ‘journalism of people and ideas’ is not clear-cut and in certain circumstances not relevant. Indeed, official records document life events, and some of those life events are of public significance. The talk was followed by a practical exercise in which the participants were invited to conduct their own investigations following the algorithms and using the resources and tips mentioned in the presentation.

Day 2. Saturday, 14 September

The session dedicated to the question of how governments and affiliated actors use digital technologies in political battles was chaired by Gregory Asmolov (King’s College London) and featured presentations by Denis Teyssou (MediaLab, Agence France Presse) and James Ball (Bureau of Investigative Journalism).

Denis Teyssou spoke about governmental initiatives that aim to employ technology against disinformation. Mr Teyssou, who is an innovation manager in the EU-funded InVID and WeVerify projects, which work on tackling disinformation, focused in his talk on the ways in which disinformation campaigns use video content, as well as on the tools that help verify images and debunk false information. The speaker demonstrated how the free InVID verification plugin can help trace the source of a fake video and identify the original images used to produce it. The tool works with a number of image search engines and is already in use in 174 countries. In the near future, as Mr Teyssou mentioned, the WeVerify project team hopes to employ the tool for monitoring social networks and intends to build a database of non-fakes.

The second speaker of the session, James Ball, chose to talk about bots, as this topic, he claimed, inevitably comes up whenever we talk about the ways state actors and those attached to them use the Internet. Mr Ball argued that bots, or rather inauthentic, semi-automated accounts run by real people, are not nearly as dangerous as popular belief paints them, and the threat they pose is much exaggerated. Their behaviour is well researched and predictable, and they are easy to spot. Besides, bots appear to be largely talking to themselves: most bot tweets get only around 7-8 retweets, mostly from other accounts in the same bot network, and 80-90% of them are seen by people working in the same bot network, including the same person operating other bot accounts. James Ball surmised that, in terms of the information ecosystem, our fear of bots is more dangerous than the bots themselves, as this fear undermines our trust in what we read. Bots can only have real impact when their postings resonate with a message that is already present in society. The speaker argued that the main vector for state information operations is still the mainstream media.

The two presentations of the session led to a lively discussion that centred on the necessity, possibility, and strategies of regulating social media (Gregory Asmolov, James Ball, Denis Teyssou, Ivan Sigal) as well as the challenges presented by deep fakes (Steven Livingston, Denis Teyssou). Commenting on the latter issue, Denis Teyssou noted that the main challenge is in fact presented by fake audio, as it is easier to fake a voice than an image, while the consequences are equally harmful.

The third panel discussion of the seminar was moderated by Kamilla Nigmatullina (St Petersburg State University), who asked the two panellists – Sergey Sanovich (Princeton University) and Tanya Lokot (Dublin City University) – to address questions relating to political campaigning, technology and the media.

Sergey Sanovich further developed the discussion of inauthentic social media accounts and their impact on media and society begun in the previous session. His analysis was based on a comprehensive study of inauthentic social media accounts, their activity, and their role in the cycle of disinformation in Russia in recent years. The speaker offered a detailed typology of such accounts as well as quantitative and qualitative analysis of their activity in relation to events of social and political relevance. Dr Sanovich also discussed the goals these accounts are set to achieve and the strategies they employ to maximise the result. The study revealed that bots in Russia are used by both the government and the opposition (as well as for pro-Kyiv postings in the context of the ongoing conflict between Russia and Ukraine). At certain times, their activity reaches up to 75% of all messages on Russian politics. Although the situation cannot be ignored and requires regulation, policy proposals have to be made with care: perpetrators of misinformation have proved to be highly adaptable and can turn to their advantage the very proposals that aim to stop them.

Tanya Lokot spoke on the involvement of journalists and social media in political campaigns, focusing in her talk on the analysis of the 2019 election campaigns in Ukraine. The speaker discussed new platforms and tools that were used for campaign communication and reporting and pointed to the new challenges posed by new actors entering the Ukrainian political scene with no previous experience in politics. Analysing the involvement of the parties in online debates and the postings and arguments in social media supporting different candidates, the speaker highlighted the need for regulatory support as well as accountability and transparency mechanisms to aid reporting on campaigns. She also suggested that, in view of the new strategies employed in modern politics, the role of journalists in political campaigns should be redefined.

The programme of the seminar was rounded off with a workshop on fact-checking in the digital age. The workshop was led by James Ball from the Bureau of Investigative Journalism, who shared with those present some practical recommendations for fact-checking and hoax-spotting, drawing on his own experience and mainly using real-world examples. James Ball claimed that in the past decade hoaxes have become part of the breaking news cycle. The fact that much of the reporting of big breaking news events is done remotely using social media crowdsourcing, while people on the Internet often have little time to reflect upon what they see and read there, has greatly contributed to the abundance of unverified facts and hoax news items on the Internet. In fact, hoaxes of various types are now so widespread that investigative journalists have to select which kinds of fake news are likely to affect their audience and therefore have to be investigated and refuted.

The presenter suggested that one can distinguish between two types of fact-checking, one relating to breaking news and the coverage of live protests and live events, the other relating to politics and politicians. At the same time, as the speaker pointed out, the two types of investigation often merge. Indeed, apart from fake facts, journalists have to deal with false interpretations of events, and these interpretations often reveal a political agenda behind them. Moreover, real-life events get quick official responses, which can add to their political relevance. At times, fact-checking can lead to fully fledged investigations. The introductory talk was followed by a practical exercise in which the participants of the workshop were invited to analyse a fictitious report on mass street protests in a hypothetical medium-sized city and suggest ways in which the information given in the report could be checked and verified. Concluding the session, James Ball listed five general principles of fact-checking in ascending order of difficulty. To verify information, a journalist has to: 1. analyse the context; 2. study facts in the public domain; 3. conduct checks for hoaxes; 4. engage in remote reporting and open-source intelligence; and 5. carry out on-the-ground reporting, which costs money and takes time but is always the best way to distinguish fact from fiction.

Programme

Friday, 13 September 2019

Registration

Tea and coffee served
09:30-10:15

Welcome and Introduction

Dr Ilya Yablokov (University of Leeds), Julia Taranova (Russian Readings)
10:20-10:30

Keynote Talk

Professor Steven Livingston (George Washington University), Moderator: Professor Svetlana Bodrunova (St Petersburg State University)
10:30-12:00

Lunch

12:00-13:00

Panel Discussion 1: How technology has changed journalism – positives and negatives

Aleksey Amyotov (the Village), Ivan Sigal (Global Voices), Turi Munthe (North Base Media; Demotix), Gregory Asmolov (King’s College London)
13:00-14:30

Break

14:30-15:00

Panel Discussion 2: Fake News and Misinformation: Legislation and Regulation

Professor Kalina Bontcheva (University of Sheffield), Martin Moore (King’s College London)
15:00-16:30

Break

16:30-17:00

Workshop 1: Investigative journalism using large amounts of data and open source information

Olesya Shmagun and OCCRP – Organised Crime & Corruption Reporting Project
17:00-19:00

Dinner Reception

20:00

Saturday, 14 September 2019

New Technology, Politics and Journalism

Registration

Tea and coffee served
10:30-11:00

Talk and Discussion: How governments and affiliated actors use digital technologies in political battles

Moderator: Gregory Asmolov (King’s College London), Denis Teyssou (MediaLab, Agence France Presse), James Ball (Bureau of Investigative Journalism)
11:00-12:30

Lunch

12:30-13:30

Panel discussion 3: Political campaigning, technology and the media

Moderator: Dr Kamilla Nigmatullina, Dr Sergey Sanovich (Stanford University), Abdulrahman Al Shayyal (Al-Araby Al-Jadeed), Dr Tanya Lokot (Dublin City University)
13:30-15:00

Break

15:30-16:00

Workshop - Fact Checking in the Digital Age

Leader: James Ball, (Bureau of Investigative Journalism)
16:00-18:00
