The Tony Blair Institute on what social media regulation could physically look like

From Twitter and free speech to Google vs Publishers, much has been written in recent weeks about media tech regulation. The trouble is that, particularly when it comes to social media, while many of us may talk a good game, very few can conceptualise what regulation might physically look like on the ground.

Cue Max Beverton-Palmer. He is Head of Internet Policy at the Tony Blair Institute for Global Change; before that he was Head of Digital Policy at Sky, and before that he spent six years in key policy roles at Ofcom, the UK communications regulator. We therefore figured he would be ideally placed to explain some of the logistics to us. In this exclusive interview for FIPP, he walks us through everything from digital passports and online abuse to the role of AI versus human intervention, and on to international cooperation in the area.

“Taking it back a step, when we think about what identity actually means, you’ve got the kind of old-world model, which is related to identity cards,” says Beverton-Palmer. “There you essentially have a card or a piece of paper, which has all of your information stored and allows you to undertake certain actions, like getting into the pub if you’re over 18, for example.”

“But the new world – the new digital model of identity – is actually quite different. In an ideal world, we as individuals own the bits of information about us, and we can give this information out for certain purposes, when we want to or when it’s required of us. You don’t have to give out all of your identity. Nobody needs to know my height, for example, if I’m trying to apply for a mortgage. But they might want to know it if I’m going for a doctor’s appointment.”

“And if we think about that in a world of social media, there are certain characteristics we want to share and some we don’t. Age is the obvious one. You want to make sure, if you’re running a social network, that you definitely don’t have people under 13, and that any users under 18 aren’t at risk from other users for various reasons. So what a digital ID system could look like is the ability to share information, or prove somebody’s identity, using different tokens, in order to give them access to certain services or certain permissions within them.”
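In software terms, the model Beverton-Palmer sketches resembles selective disclosure: a trusted issuer vouches for individual attributes, and the user presents only the attribute a given service needs. The Python sketch below is purely illustrative and all the names in it (`issue_token`, `verify_token`, the `over_18` claim) are our own; a real digital-ID scheme would use public-key credentials rather than the shared-secret HMAC used here for brevity.

```python
# A minimal, hypothetical sketch of attribute tokens: an issuer signs a
# single claim (e.g. over_18) so a user can prove that one fact to a
# service without revealing anything else about themselves.
import hmac
import hashlib
import json

ISSUER_KEY = b"demo-secret"  # placeholder; a real issuer would use asymmetric keys

def issue_token(subject_id: str, claim: str, value: bool) -> dict:
    """Issuer signs one attribute claim for a subject."""
    payload = json.dumps({"sub": subject_id, claim: value}, sort_keys=True)
    sig = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_token(token: dict, claim: str) -> bool:
    """Service checks the signature and reads only the claim it needs."""
    expected = hmac.new(ISSUER_KEY, token["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False
    return json.loads(token["payload"]).get(claim) is True

# The user holds a separate token per attribute and shares only one:
age_token = issue_token("user-123", "over_18", True)
print(verify_token(age_token, "over_18"))  # True; height, name etc. are never shared
```

The point of the design is the granularity: the user could hold one token per characteristic and reveal only the one a given service demands, which is exactly the “share some characteristics, not others” idea described above.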

The problem is not necessarily the anonymity, but the permission that people feel to say whatever they want because they think that they can hide behind a fake identity or something like that.

One area of contention surrounding social media regulation is the extent to which we should all be baring our online identities for the purposes of accountability and the policing of online abuse, versus the obvious privacy questions this raises. Beverton-Palmer believes that, implemented in the right way, the right balance can be struck.

“Now when we talk about abuse online, for example, the debate often goes to: ‘Well, the problem is anonymity, and that we have created this environment in which everybody can say whatever they want.’ But the problem is not necessarily the anonymity, but the permission that people feel to say whatever they want because they think that they can hide behind a fake identity or something like that.”

“So perhaps what we need to think about is a way to prove to one another that we have a real-world identity, but not necessarily have to disclose that full identity. That type of approach would change that incentive structure for people when they’re typing into their smartphone or on their keyboard something potentially abusive.”

Another thing that, as I mention in the interview, I just cannot get my head around is how Facebook can detect – and block – a one-minute television clip that first aired a quarter of a century ago, on the grounds that it recognises a copyrighted track, yet still seems to be at a loss when it comes to stopping the spread of fake news. Beverton-Palmer doesn’t pull any punches in his response.

“Yeah, that’s a really good question, and the answer is: one, record labels and copyright holders have deep pockets and care about the protection of their own content – and that doesn’t necessarily mean Facebook cares more about them than it does about antivax material. But what it means is that those companies have uploaded all of their content onto the cloud, and that enables Facebook to search through it and match it against people’s profiles and posts using the technology they have. A lot of effort goes into that, and you need to employ a lot of people specifically to do it.”
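Mechanically, what he is describing is reference matching: rights holders supply their catalogue, the platform builds an index of fingerprints from it, and every new upload is checked against that index. The sketch below is a hypothetical, much-simplified illustration; real systems use perceptual audio and video fingerprints that survive re-encoding and clipping, whereas the plain hash here only matches byte-identical content.

```python
# A minimal, hypothetical sketch of reference matching: rights holders
# supply reference content, the platform indexes fingerprints, and each
# new upload is looked up in the index. SHA-256 stands in for a real
# perceptual fingerprint and is purely illustrative.
import hashlib

def fingerprint(data: bytes) -> str:
    """Stand-in fingerprint; real matchers hash perceptual features."""
    return hashlib.sha256(data).hexdigest()

class ReferenceIndex:
    def __init__(self) -> None:
        self._index: dict[str, str] = {}  # fingerprint -> rights holder

    def add_reference(self, content: bytes, rights_holder: str) -> None:
        self._index[fingerprint(content)] = rights_holder

    def match(self, upload: bytes) -> str | None:
        """Return the rights holder whose reference matches, if any."""
        return self._index.get(fingerprint(upload))

index = ReferenceIndex()
index.add_reference(b"<label-supplied audio track>", "Example Records")
print(index.match(b"<label-supplied audio track>"))  # "Example Records"
print(index.match(b"a novel fake-news post"))        # None: nothing to match against
```

The asymmetry with misinformation falls out of the last line: copyright matching works because there is a label-supplied reference library to match against, whereas a novel piece of fake news has no such library, so people – fact-checkers, moderators – have to supply the “reference” first.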

We need to think more about government regulation enabling those connections between the people who know what’s harmful and what’s bad within society, and then the people who can take action.

“But that’s not to say that Facebook aren’t starting to do that with antivax content, and so the most prolific antivax content will be taken down. But you need the people, as well as the technology, to find it. There are some really great organisations that do fact-checking. NewsGuard, for example, is well worth a look: they specifically assess the authority of different news sites, and their credentials, to check that they are good sources of news. The WHO looks at this kind of content as well.”

“But you need to make sure that there are good connections between those people who look at content and know that it is bad, and the people in companies like Facebook who can take action. We really haven’t, as a society, designed those proper frameworks to do that yet; we haven’t built those links in a more formal way, because we’ve been afraid of government regulation. But we need to think more about government regulation enabling those connections between the people who know what’s harmful and what’s bad within society, and then the people who can take action.”

Tackling the issue of global tech is really difficult and so you need global policy solutions.

Finally, in what was, as mentioned, a wide-ranging interview, another highlight that jumped out was the importance of international cooperation. After all, how can nation states effectively police a digital technology that knows nothing of physical borders?

“I think the important thing is to break it down into manageable pieces. Clearly, the international cooperation, frameworks and agreements aren’t quite working for tech and the internet at the moment. There are many reasons, including the fact that not everybody is represented in those discussions and forums. There are a lot of countries around the world that just don’t have that kind of resource, but they still use Facebook. We’ve seen in Ethiopia and Myanmar, for example, the incredible role that Facebook and social networks play in those countries. Yet, until recently, there has been limited investment from the platforms there.”

“Tackling the issue of global tech is really difficult and so you need global policy solutions. I know, for example, that the UK in its presidency of the G7 this year is focussing on internet standards as one of a long list of policy priorities. And that’s really important, because some of the biggest decisions about the way the internet operates in the future – and some of the most important questions in society about how we are all going to interact and communicate with each other – are decided in these internet standards discussions.”

“We need to break all of this down into manageable layers: the UN at one level, trade and frameworks at another, and then there’s a whole other level of discussion we need to have just to raise the profile of some of these issues and make people realise how important they are.”
