Host: Benjamin Thompson
Welcome back to the Nature Podcast. This week, a surveillance special…
[Jingle]
Host: Benjamin Thompson
Nature has a special focus on facial recognition technologies and their role in surveillance systems this week, and we’re following suit here on the podcast. In this show, we’ll be exploring how these technologies are being used, the ethical issues to consider when doing research on them, and what academics in the field think about their use. One of the architects of this special is Richard Van Noorden, features editor here at Nature, and he’s co-hosting the Nature Podcast with me this week. Richard, hi.
Host: Richard Van Noorden
Hello, Ben.
Host: Benjamin Thompson
Well, Richard, first up, I suppose we have to ask what is facial recognition technology? I mean, I imagine that the clue is in the name, but can you give me an overview of how it works?
Host: Richard Van Noorden
Sure, so, essentially, this technology takes the geometry of your face to create a face print, which you can think of as a fingerprint for your face. It's something that uniquely identifies you, so it's a biometric identifier, and it can then be checked against other face prints. It could be used to verify that you are who you say you are: that's how you unlock your smartphone, or how you travel through a passport gate, because you match your stored photo. Or it could be used in what's called a one-to-many comparison, scanning a face print against hundreds or millions of others, and this is how police might scan a crowd to try to find someone on their watchlist, for example.
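To make that one-to-one versus one-to-many distinction concrete, here is a minimal sketch in Python. It assumes each face print is an embedding vector produced by some face-recognition model; the cosine-similarity measure and the 0.7 threshold are illustrative assumptions, not any particular deployed system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, stored: np.ndarray, threshold: float = 0.7) -> bool:
    """One-to-one check: does a probe face match a single stored face print?
    This is the smartphone-unlock / passport-gate scenario."""
    return cosine_similarity(probe, stored) >= threshold

def identify(probe: np.ndarray, watchlist: dict, threshold: float = 0.7):
    """One-to-many search: compare a probe against every face print on a
    watchlist and return the best match above threshold, or None.
    This is the scan-a-crowd scenario."""
    best_id, best_score = None, threshold
    for person_id, embedding in watchlist.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id

# Toy usage: random vectors stand in for real embeddings.
rng = np.random.default_rng(seed=1)
alice = rng.normal(size=128)
print(verify(alice + rng.normal(scale=0.05, size=128), alice))          # True
print(identify(alice, {"alice": alice, "bob": rng.normal(size=128)}))   # "alice"
```

Note that the same face print supports both modes; the only difference is whether it is compared against one stored template or against millions.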
Host: Benjamin Thompson
It’s not a new technology, Richard. Why is Nature looking to cover it now?
Host: Richard Van Noorden
Yeah, so over the last few years, I think this technology has really made its way into many more places around the world, and resistance to it is also growing. This year, we've seen cities in America banning public agencies from using facial recognition because they're worried about inaccuracies and bias, and we're seeing proposals for regulation in Europe, the United States and many other places. And so, we're taking a look at how researchers are essentially quite disturbed by the rise of this technology. I worked with my colleague Davide Castelvecchi, a reporter here, and with freelancer Toni Roussi, and we examined this in three features. This year it's been particularly timely because COVID has struck, and we've seen places using facial recognition technology to help trace contacts and enforce quarantine. People worry that the use of this live surveillance technology may linger on after the pandemic, and that COVID is giving some cities and states the opportunity to experiment with and entrench these technologies.
Host: Benjamin Thompson
Well, the rise of facial recognition in cities around the world is the subject of our first story this week. We’ve been investigating the concerns about this technology’s spread and efforts to push back against it. Our journey begins in Serbia.
Interviewee: Danilo Krivokapić
What we know is that the whole city will be covered with these cameras.
Interviewer: Noah Baker
This is Danilo Krivokapić, director of the human-rights organisation the SHARE Foundation based in Belgrade, Serbia.
Interviewee: Danilo Krivokapić
At this moment, we have been able to map, together with the citizens of Belgrade, more than 1,000 cameras at more than 460 locations in Belgrade.
Interviewer: Noah Baker
The spread of surveillance cameras across Serbia is concerning Danilo. But not simply because of their prevalence. These cameras are also equipped with facial recognition technology deployed in the name of crime prevention to create smart cities.
Interviewee: Danilo Krivokapić
Which means that if you have one city that is completely covered with cameras and you have facial recognition software and you also have a database of ID cards with biometric pictures, you can basically track everybody the whole time.
Interviewer: Noah Baker
And Danilo is warning that any system which helps governments to track people is open to abuse.
Interviewee: Danilo Krivokapić
If you have a system that is basically covering the whole city, these freedoms are in danger. We are also living in a country, I would say, politely, with fragile democracy, so when our government has such a powerful tool as this, this can lead to many, many problems.
Interviewer: Noah Baker
The expansion of this facial recognition technology in Belgrade is only the latest in a long line of examples across the world.
Interviewee: Steven Feldstein
I think the conventional wisdom with this kind of technology was that it was mostly centred around countries that were technologically sophisticated and also those with sort of poor human rights records.
Interviewer: Noah Baker
This is Steven Feldstein. He’s been analysing the spread of AI surveillance technologies for an international think tank in Washington DC.
Interviewee: Steven Feldstein
I think there was maybe a general sense that democracies were using this type of technology, these techniques, less, but that countries like China, Russia and Iran, with sophisticated capabilities and poor records on freedoms, were maybe the leading users of this tech. And what I found was actually that democracies are just as prevalent in their use of these technologies as autocratic governments.
Interviewer: Noah Baker
For example, right here in London, facial recognition technology was used between 2016 and 2018 just up the road from Nature's offices in King's Cross Boulevard, a busy street in central London. That particular use of the tech has since been stopped, after something of an outcry when the public became aware of it. But police have also been using facial recognition across the UK since 2016, and police in several US states are making similar use of the technology.
Interviewee: Steven Feldstein
In general, it was just more widespread than I anticipated. Many countries with less sophisticated capabilities – places like Kenya, for example, countries in Central Asia, Serbia – are all deploying advanced facial recognition surveillance, social media monitoring and so forth.
Interviewer: Noah Baker
According to Steven's research, the spread of this surveillance tech seems, in part, to be driven by so-called 'soft loans' from Chinese companies – a cheap form of finance that has enabled governments to purchase the technology – and that raises the concern that Chinese authorities may have undue influence in countries buying their systems. But Steven warns that, to understand how these surveillance tools are spreading, looking only at China is too simplistic. After all, Chinese companies aren't the only providers of surveillance tech.
Interviewee: Steven Feldstein
China is a big player, but it's not the only player – whether it's democracies providing this technology, like the US, UK, France, Japan, Israel and many others, or autocracies like Russia that also provide these types of capabilities. So, from what I've seen, the marketplace is crowded. There are a lot of different competitors from different countries seeking to provide this type of equipment.
Interviewer: Noah Baker
In the past few months, several companies, such as IBM and Microsoft, have distanced themselves from this kind of surveillance technology after receiving significant pushback, but it's not really clear how many companies are involved in developing this technology, and some may not be as sensitive to public pressure. Take, for example, the US company Sandvine, which provided technology to the government of Belarus that allowed it to selectively censor websites related to unrest following the disputed elections.
Interviewee: Steven Feldstein
Now, after a huge amount of political pressure in the United States, Sandvine has pulled its provision of these tools from Belarus. But that only happened because of a specific incident of political problems that arose in the US. Otherwise, Sandvine was perfectly happy to continue providing these tools to the Belarus government and, in fact, it continues to provide these same types of intrusive tools to other authoritarian governments around the world, like Egypt. And so, in that sense, it almost seems to be a little bit reactive.
Interviewer: Noah Baker
So, if the spread of surveillance tech like facial recognition is mediated to some extent by the public response, that raises the question: what do the public think? To give us some insight, here's Carly Kind, director of the Ada Lovelace Institute in the UK.
Interviewee: Carly Kind
We surveyed 4,000 members of the public in a broadly representative sample, and we had some really interesting findings. We found that people, on the whole, were more comfortable with police using facial recognition technology than with its use in the private sector. Around 69–70% of people broadly said they'd be comfortable with police using facial recognition technology in specific circumstances, whereas far fewer thought that facial recognition should be used in schools or workplaces, for example.
Interviewer: Noah Baker
Whilst the majority of respondents were content with certain uses of the technology, 29% of people were not comfortable with it at all.
Interviewee: Carly Kind
Of those who said they weren't comfortable, the main reason they gave is that they felt it would normalise surveillance, and I think that is a real concern for lots of people, and one probably felt more acutely by people from certain communities who already feel that they are over-surveilled or over-policed, including communities from Black or minority ethnic backgrounds. That is a real concern.
Interviewer: Noah Baker
This is also reflected in the responses of survey participants who said they were broadly content with the use of surveillance technology, but with caveats. And so, to get a more nuanced understanding, the Ada Lovelace Institute followed the survey up with several citizens' biometrics councils – something like focus groups.
Interviewee: Carly Kind
We had a session specifically with people from Black or minority ethnic backgrounds, a session with people living with disabilities and a session with people from the LGBTQ community as well. Through those sessions we found a range of different anxieties and concerns with the use of facial recognition and other biometric technologies. A thread that ran throughout the citizens’ biometrics councils was that it would be really important that the technology doesn’t discriminate against certain groups of people. So, there’s a big concern on the part of the public that this technology discriminates, and they want to see some kind of verification or assurance that the technology is not discriminatory before it’s rolled out.
Interviewer: Noah Baker
These findings, along with several other concerns, have led the Ada Lovelace Institute to support a moratorium on this technology, at least until the regulatory issues are hammered out. None of the sources in this podcast are opposed to surveillance technology altogether, but quite often they are concerned that the tech is overtaking the regulation. Carly thinks that regulatory bodies need to put the people affected by facial recognition technologies front and centre when working out how best to use them in the future.
Interviewee: Carly Kind
We'll be reporting the outcomes of the citizens' biometrics council hopefully before the end of the year, and they've made a range of recommendations that we'd like to see now brought into policy. In parallel to that, we've also commissioned an independent review into the regulatory framework for biometric data in the UK. We appointed Matthew Ryder QC and a team of lawyers and technology experts to analyse the existing legal framework and make a range of recommendations as to how the law can be changed to better protect people's lives and to enable the use of biometric technologies in a safe and trustworthy manner.
Interviewer: Noah Baker
That was Carly Kind from the Ada Lovelace Institute, based in the UK. You also heard from Danilo Krivokapić of the SHARE Foundation in Serbia and Steven Feldstein at the Carnegie Endowment for International Peace, based in the US. This package was produced by Nick Howe, with narration by me, Noah Baker.
Host: Benjamin Thompson
Well, Richard, it seems then we’ve just heard that these cameras really are cropping up everywhere.
Host: Richard Van Noorden
Yeah, it's extremely worrying, and what's particularly worrying is the lack of transparency, the lack of understanding of where the data is going. People feel that anywhere they go they might be watched by cameras, or that their image might be looked at afterwards by police using facial recognition technology, and that potentially changes how you go about your life. You might not go to a rally or a protest or even a football match if you think facial recognition technology might be used. It's also a tool that's extremely well suited to setting up an architecture of control, and often the communities on whom it's being used say they don't necessarily feel safer, even though they're being told they ought to be because the cameras are there. So, a lot of the local resistance we're seeing is led by people who say, 'This isn't helping our community, despite what we're being told.'
Host: Benjamin Thompson
And I imagine that there are disproportionately different effects of the use of this technology in different communities.
Host: Richard Van Noorden
Exactly. There was an example earlier this year of the arrest of a man called Robert Williams in Michigan, after a facial recognition system misidentified him as someone who had stolen a watch. Police had blurry surveillance footage of a Black man and matched it to Williams's driving licence – he's also Black – but it wasn't him. They arrested him at his home and, according to Williams, who had a complaint filed on his behalf by the American Civil Liberties Union, a detective literally showed him these photos; Williams picked them up, held them next to his face and said, 'I hope you don't think all Black people look alike. This is not me.' And apparently the detective replied, 'Well, the computer says it's you.' So, worst nightmare for anyone. He was released, but he had been held for more than a day. And people in the region say it's not just that a mistake was made – in this case, there wasn't enough checking of whether this supposed match was actually a match. It also plays into the feeling that police in that region are over-policing the Black community, and might use this tool to concentrate even more on the people they're already arresting too much anyway. Phil Mayor of the ACLU, for instance, says this tool doesn't work well enough yet, and that even when it does work, it's too dangerous a tool. So, that's an example people have cited to argue that this technology doesn't work well enough and could be used to further entrench biased policing.
Host: Benjamin Thompson
Well, let's talk about the technology. Facial recognition is a product of research, and the algorithms that underlie the technology need to be developed and trained on faces, and this isn't without its issues.
Host: Richard Van Noorden
Yeah, this has been called the 'dirty little secret' of facial recognition technology: these algorithms need training on millions and millions of faces in order to get good at extracting the most relevant geometry from each face, and to do this, researchers need datasets of lots of faces. This has blown up in the past few years because, like much of computer science, facial recognition research has involved grabbing lots of public images off the internet, often under licences that even permit research use, and there are some very large datasets of millions of faces, released by companies or academic groups, used to train, test and benchmark facial recognition algorithms. That was just the way it worked. Then an artist called Adam Harvey pointed out that this means your face could be sitting in a dataset that has been shared and subsequently cited and used by firms creating these systems for the military. Some of the people who discovered they were in these photos were pretty unhappy about it, and that was something computer science researchers hadn't really thought about – the data was public, so what's the harm? It turned out that the harm was to the autonomy and the dignity of the people in these photos, who, when they heard about it, didn't want their photos to be used in this way. And so, people are having to rethink: 'Hmm, should we not be collecting datasets of photos in this way? Should we actually be asking people first, and how would we do that if we needed millions of photos?' And there are other ethical questions around the research here as well.
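For a sense of what 'benchmarking' an algorithm on such a dataset means in practice, here is a hedged Python sketch. It assumes a model has already scored pairs of photos for similarity; a benchmark then reports the true-accept rate (genuine pairs correctly matched) and the false-accept rate (impostor pairs wrongly matched) at a chosen threshold. The score distributions below are toy data, not drawn from any real dataset.

```python
import numpy as np

def benchmark(genuine_scores: np.ndarray, impostor_scores: np.ndarray,
              threshold: float) -> tuple:
    """Score a face-verification system at one operating threshold.
    genuine_scores: similarities for photo pairs of the SAME person.
    impostor_scores: similarities for pairs of DIFFERENT people."""
    tar = float(np.mean(genuine_scores >= threshold))   # true-accept rate
    far = float(np.mean(impostor_scores >= threshold))  # false-accept rate
    return tar, far

# Toy example: synthetic score distributions stand in for model output.
rng = np.random.default_rng(seed=0)
genuine = rng.normal(loc=0.8, scale=0.1, size=10_000)
impostor = rng.normal(loc=0.3, scale=0.1, size=10_000)
print(benchmark(genuine, impostor, threshold=0.6))  # roughly (0.98, 0.001)
```

The differential accuracy concerns discussed later in the show amount to these two rates differing across demographic groups when computed separately for each group.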
Host: Benjamin Thompson
Well, there are clearly ethical concerns around consent and how the technology is used, but also around a very controversial area of research that uses these datasets, called facial analysis, where attempts are made to infer someone's characteristics by analysing their face. And yet, even though these issues exist, people are still working on facial recognition research, which raises the question: can this sort of research be justified? Well, to get a sense of some of the discussions going on in academia, I reached out to Karen Levy from Cornell University in the US, who researches the social, legal and ethical dimensions of new technologies.
Interviewee: Karen Levy
I think knowledge of history is almost always essential in doing this sort of work and thinking about the social implications of what one is doing. Many folks have compared modern face recognition to some of these older pseudoscientific attempts to discern criminality from the shape of the face, right, things that were quite deeply rooted in racism and eugenics. I think those are quite apt comparisons with some of the modern research that tries to make similar inferences based on the face – things like sexual orientation or, again, criminality or other personal characteristics. And you still see papers, you still see attempts to make such inferences, but I do think that, for the most part, more researchers are aware of the real limitations of doing so and of the social and ethical consequences that might attach to that, and I know that some publication venues or conference venues have started to take much more seriously concerns about research that tries to do things like that.
Host: Benjamin Thompson
In your research, did you ever get a sense that people fall back on the argument that technology is morally neutral, and that it's how it's used that's the problem?
Interviewee: Karen Levy
Yeah, I mean, people always like to make those assertions, right, that they're just engineers, and folks sometimes disclaim responsibility for the tools that are actually built and deployed into the world based on the science that scientists do. I think with face recognition, for some applications, it goes even deeper than that: the questions are actually about the integrity of the science itself, which researchers ought to be fundamentally concerned with. But even on top of that, I think there is increasing, though still far from universal, awareness of the degree to which we might ascribe moral or ethical responsibility to researchers for the work that they do and put out into the world. So, the 'good' uses of face recognition, or the ones that are often rhetoricised, are almost always finding missing people, missing children especially, or fighting terrorism – things that I think everybody, for the most part, agrees are positive use cases. The difficulty is that it's almost impossible to deploy a technology only for those use cases that everybody agrees are beneficial. It's really, really difficult to prevent the creep of these tools into lots and lots of uses about whose ethical implications we are much more ambivalent.
Host: Benjamin Thompson
So, if I come up with an algorithm that’s very, very good at recognising faces in a medical context, there’s obviously the potential that it could be used in a different sort of context, and that’s what researchers need to be cognisant of.
Interviewee: Karen Levy
Potentially, yeah, and I don't think it necessarily means a hard stop. I'm not trying to assert that we shouldn't do any research that might have some negative social implications, because those are almost impossible to foresee. We should be reflective about them, and we should think carefully about not disclaiming, 'Well, I just do the science, and therefore whatever happens next doesn't concern me' – that strikes me as a pretty fallacious argument. But there isn't a bright line, I think, one way or the other. I don't think the counterpoint to that is, 'Well, you should only do work about which you feel 100% confident that nobody will ever misuse it' – that would be a really difficult line to draw and would impede a lot of science. So, I think the answer is probably somewhere in the middle, right, and there are some institutional safeguards we can put in place to try to ensure beneficent use, but we're never going to be 100% confident that that will hold.
Host: Benjamin Thompson
You say putting safeguards in place – I mean, is it possible to make an ethical dataset, and if so, is it possible to use it in ethical facial recognition research?
Interviewee: Karen Levy
I think there are certain ethical minima that we might think about. Certainly, thinking about consent is a good initial threshold: you would want to ensure that you're using a dataset about which you feel pretty confident that the folks in it would assent to their inclusion. That already is a pretty difficult bar, and it's not something that I think has had a lot of traction in computational research. Other folks think a lot about the relationship between academia and industry, right – people have very different takes on the degree to which they want to productise their research versus doing basic research in this area. Beyond that, people think a lot about inclusion, and this is also, I think, a really interesting ethical quagmire. A lot of folks are quite legitimately concerned about differential accuracy rates for different types of faces – the faces of people of colour or the faces of women – and how face recognition algorithms tend to be less accurate for those groups. So, there's a really interesting debate within the community about whether what we should want is more inclusive datasets, with more accuracy for those groups, or whether we might say that, actually, the use of these datasets is itself the problem – we don't necessarily want more inclusive datasets that will then be used to over-police Black communities. I think recognising the moral ambiguity there is an initial step, but there is, I think, significant and legitimate disagreement about what we ought to do in that case.
Host: Benjamin Thompson
Karen, you obviously work in this sphere. What are some of the things that have happened, and some of the changes that you've seen, that have made an impression on you in this area of research?
Interviewee: Karen Levy
Oftentimes in technical research, there's a push to do things that are technically novel or that improve the state of the art in some way, and I think the academic reward tends to come from doing something cool that nobody was able to do before. I think face recognition is a really great example of a context where technical advances may be socially and ethically consequential in ways that are only now becoming more widely appreciated. It's really difficult to effect broad culture change across the academic enterprise and to start to say, 'Well, maybe we should also reward careful thinking about these social consequences,' or, 'Maybe something that's technically really impressive isn't something that ought to be trumpeted and widely published.' And we're starting to see the community reckon with that, and I think it's a difficult conversation to have because nobody wants to be accused of doing something unethical. The norms, I think, are changing fairly quickly. Some of the major machine learning conferences have just started requiring more attention from researchers to the ethical consequences of their work, in ways that they didn't before. It'll be really interesting to watch that develop over the next few years, because it's new to the community, right – a community that hasn't previously reckoned with these things in an explicit way. And it will be interesting to see, five years from now, whether it's typical for researchers to think quite seriously about this, or to include meaningful ethical statements in their papers, or whether it's a fad that will fade or won't be effective. Those, I think, will be interesting developments to watch.
Host: Benjamin Thompson
Karen Levy from Cornell University there. Well, we heard Karen say, Richard, that conferences, in some cases, are beginning to put safeguards in place, but she also mentioned that papers are still coming out based on some fairly controversial research, to say the very least. And this is something that you've been looking at for a feature this week.
Host: Richard Van Noorden
Yeah, there are a number of papers that have come out from China looking at using facial recognition to try to distinguish the faces of Uyghurs – a minority ethnic group, mostly Muslim, in western China – and it seems kind of remarkable that academic journals would publish this research, essentially giving it a tacit seal of, 'Yes, this is okay science.' There's been a lot of debate this year about what should be done about those papers in that context. It's not just that work, though; there are other papers that try to infer other characteristics of people from their face. Nonetheless, amazingly, some of this research is still being done and still being published, and it's quite difficult to see how it passes ethical review. There's essentially a growing coalition of people saying, 'This research is really hard to do ethically. Why is anyone doing it? Why is the scientific community saying this is okay?' Of course, companies and governments will do what they will do, and they should be sanctioned by society and the law, and there's not much researchers can do about that other than speak out against it. But in their own scientific studies, researchers need to think a lot more carefully about what exactly they're doing with their facial recognition work.
Host: Benjamin Thompson
And to get a sense of what researchers actually think about what they’re doing, Richard, you asked them.
Host: Richard Van Noorden
Yeah, so, earlier this year, just before COVID really took off, we asked about 500 researchers in facial recognition, AI and computer vision what they thought about these ethical concerns, and also what they thought about how the technology is used as a whole. On the ethical concerns, broadly speaking, people were pretty uncomfortable and worried, which I think is positive. More than 70% of respondents, for instance, thought that research on vulnerable people – such as the Uyghurs I mentioned – basically was unethical, or could be unethical, even with consent. On the question of whether facial recognition researchers should get the consent of the people in the photos they train their systems with, opinion was much more split: only 40% thought that scientists should get informed consent. Still, that's higher than I might have expected, given that, after all, a lot of the datasets we're talking about here are public photos.
Host: Benjamin Thompson
So, still quite a divided field then in this case.
Host: Richard Van Noorden
Yeah, still divided, because if these online photos have licences saying they're free to use, why not use them? A lot of researchers still think that, and it's also very difficult to see how you would reach out to these people: you don't necessarily know who they are just because you have a number of pictures of their face, unless there's a label. So, I think researchers are still really working this idea through, to be honest. They are a bit clearer on what should be done about studies that try to pick out sensitive, personal characteristics of people from their photos. A lot of them feel that informed consent is needed here, or that there should be discussions with the affected groups beforehand. And when asked what the scientific community should do about some of these ethically questionable studies, a lot of people thought that there should be ethical peer review at conferences and journals, as we talked about, and essentially that human-subjects research ethics should form part of the peer-review process.
Host: Benjamin Thompson
But is that maybe shifting the responsibility from the researchers themselves to – and I use the word loosely – the gatekeepers who publish the papers and who run the conferences?
Host: Richard Van Noorden
Well, I think it's saying that the community as a whole needs to instil more ethical review in the way it works, because the people who run the conferences are usually researchers, and the people who run the journals – in terms of who accepts the papers – are almost always researchers as well, so bringing in those ethical review processes is something that I think a lot of people want. People are also quite worried about institutional review boards, the ethics bodies that consider research on human subjects, which we're very familiar with for medical research but less so for computer science. A lot of people thought that these facial recognition studies should basically go through these institutional review boards, but many felt the boards are not well equipped to judge the ethical issues around this kind of research, because the harms we're talking about could be harms to a whole group, not harms to an individual as such. If you're trying to create something to spot someone's sexuality from their photo, the fact of that existing is probably more likely to harm a whole group of people who can now be singled out, and even if the technology doesn't work, the idea that it might is potentially harmful as well. So, these are more subtle harms. These are not direct medical harms to a participant in a research study; they are subtle harms in society, and people were not sure that IRBs are well equipped to consider them, so you might get a study passing ethical review as a bit of a tick-box exercise rather than with real reflection.
Host: Benjamin Thompson
At the start of the show, we heard about the Ada Lovelace Institute here in the UK, how they were talking to the public about their views on facial recognition technology and how it's used, and that there was maybe a feeling that, in some cases, the pace of this technology is outstripping the regulation. Do you think that's part of the problem here – that things are moving so fast that ethical review boards, or whatever you have, are struggling to keep up?
Host: Richard Van Noorden
Oh, completely. I mean, it's moved so quickly in the last few years that I don't think it's necessarily anyone's fault; it's just an appreciation that a lot more attention needs to be paid to this. In our survey, we also just asked the researchers, 'What do you think about the ways that facial recognition is used? What's comfortable and what's uncomfortable?' And I was very interested to see that the answers actually mimicked pretty closely what the Ada Lovelace Institute found when they did a much larger, nationally representative survey of 4,000 British people. Both our researchers and the people who answered the Ada Lovelace Institute's survey were fairly comfortable with the idea of police using this to find suspects of a crime that has been committed – very much couched in the assumption that there's extra regulation in place, of course. They were also fairly comfortable with it being used in airports to check identities, and with smartphones, but very uncomfortable with the idea that it could be used at work to check your attendance, or even in interviews to see whether you have characteristics suitable for the company, which is happening. Very uncomfortable with it being used in schools. Essentially, very uncomfortable whenever they couldn't really see what the public benefit of using the technology was. If the benefit just accrues to the private company that controls the tech, people are not very happy about that.
Host: Benjamin Thompson
Well, Richard, we've heard a lot today about facial recognition technologies, and I think it's fair to say they're not likely to go away anytime soon. So, where does it go from here? How can we make sure it's used 'for good', I suppose?
Host: Richard Van Noorden
So, we are seeing lots of pushback, and no doubt concerns about privacy and ethics and so on are going to grow. I think we're going to see a lot more detailed, case-by-case dissection of when facial recognition is useful and when it is not, because talking in generalities doesn't usually get you down to the fine details of what is okay and what isn't. And it's all going to come down to having people really engage with the technology and really understand how and why it's used. I'm very encouraged by the work the Ada Lovelace Institute is doing, because they're using these citizens' biometrics councils to ask people, in a very detailed way, what they think about how facial recognition technology is used. Because ultimately, this technology is going to change our society. It's going to change the way our societies can be policed and the way states can monitor what is going on. It's going to affect people's privacy, and ultimately we need to ask everyone whether they feel comfortable about their society changing in this way. That's the kind of debate we're going to need in order to use these technologies safely and constructively. And if we don't have that debate, we could end up in a situation that many societies don't really want, simply by getting there by default. So, we need to stop and consider this, and change regulations and legislation, to make sure that this technology is working for all of us rather than just for the people behind the cameras.
Host: Benjamin Thompson
Well, Richard, let’s leave it there then. Thank you so much for joining me today. Where can listeners find out more about facial recognition technology and the results of the survey, for example?
Host: Richard Van Noorden
Yeah, our survey results and the articles on facial recognition technology are all online, explaining how it works and where it’s used and researchers’ ethical concerns, and you can find them all at nature.com/news.
Host: Benjamin Thompson
And one last thing from us. Obviously, no Coronapod in today’s show, but keep an eye on the podcast feed later in the week for the latest episode of that, where Noah will be talking to Nature’s Heidi Ledford about the drop in death rates and touching on the latest vaccine news. If you’d like to get in contact with us, you can reach out to us on Twitter – @NaturePodcast – or on email – we’re podcast@nature.com. I’m Benjamin Thompson.
Host: Richard Van Noorden
And I’m Richard Van Noorden. Thanks for listening.