No, the EU parliament is not calling for a ban on facial recognition. They are proposing to forbid its indiscriminate use in public places and to forbid the use of any AI/ML in predictive policing.
Only if face images are also acquired through a legitimate investigation (ideally after a prior criminal conviction). Police should not be able to observe a face for years, then get a warrant and suddenly have years of data; that is wrong.
What if someone is known to have committed a crime and police look at surveillance footage from nearby red-light cameras?
Or if that person is thought to have visited that location in the past, could they use that footage?
Which brings another question, what about facial recognition performed by a human? Investigators can presumably scour video or surveillance data by hand, but I guess just not by machine? Can they use face detection to simplify picking out faces? Zoom in and enhance?
It's very tricky when passing legislation to narrow in on the pernicious aspects of "facial recognition".
If only there were some system in place where police request that the judiciary evaluate their evidence and reasoning to decide whether violating a suspect's privacy is approved and warranted.
Well, the legislation is actually pretty specific about which uses of facial recognition are allowed, and the scenario you are describing is listed and explicitly allowed.
The text is publicly available, so you can see exactly how they have scoped it.
Well, I don't trust the government to collect data on every person's movements but only use it with a warrant... I have an issue with red-light cameras for exactly this reason: we are not London, and we should not have constant government surveillance. I am also significantly concerned by Ring doorbells and the like, but I'm unsure what solution we need. Maybe as a first step we say they may not record anything outside the owner's own property line.
I'm not really sure why we've normalized the collection of biometrics during police investigations that are then kept forever. I know deleting them would defeat the purpose of the fingerprint/DNA databases, but if we're going to do this, either collect and keep everyone's biometrics or no one's, because creating two classes of people, those who have had run-ins with the police and those who haven't, is gross.
It's not only people who have had run-ins with the police: when I took an entry-level job at a bank I had to go and submit my prints to the local government (via an expensive third party, of course) as part of my onboarding requirements (along with a background check). These are now stored forever in some police database; I would guess there are other jobs like that, maybe government employees as well.
Why not? Concrete use case: many property crimes have a minimum cutoff (a felony only if more than $X is stolen or damaged). So to convict someone who knows the letter of the law and regularly engages in some mischief under the cutoff amount, you will need to gather video evidence over a long time period - a period during which the person technically hadn't hit the felony threshold yet.
Because a free society is a finely tuned balance between state power and individual freedom, and the pendulum is on the far side of state power since 9/11 anyways.
The same thing that can be used to find perpetrators of minor crimes can also be used to trace political movements that any given government happens to dislike.
IMO there are certain uncertainties that any free society has to accept. One of those is that not every perpetrator will be punished. Not because there wouldn't be a way, but because a society in which that way was implemented wouldn't be free anymore.
If a shopkeeper submits footage of a crime to the police, the police should be able to keep it as evidence. What's problematic is proactive gathering and aggregation of footage by police or by corporations.
Sure. But we are talking about a global facial recognition technology, not about someone's private surveillance video.
There is a slight difference between the state having facial-recognition AI cameras and a private citizen submitting surveillance footage of their own shop after a crime has been committed, don't you think?
When private citizens use services explicitly connected to state police services, is the difference only a matter of latency? For now Ring looks oriented around geography, but it's probably just a ToS change away from adding a face-print dimension for lookup.
What is the threshold of scale when a private citizen has multiple homes? Can that individual collect biometrics from their visitors and do whatever they like, privately, with that data for as long as they wish? What if that same individual owned one or more private companies? Can they collect biometrics for internal, private use then? Even if that private individual travels internationally, with access to that data at will, perhaps with specialist staff analyzing it?
Why is that problematic? Individuals are allowed to capture photos and footage in public spaces, and there isn't an expectation of privacy there. I don't see why capturing footage is a problem - without that the cost to actually apprehend a criminal is so high that we might as well not have laws for many crimes.
> without that the cost to actually apprehend a criminal is so high that we might as well not have laws for many crimes.
Which crimes are those? Perhaps it's a sign that they shouldn't be considered crimes after all?
Whatever those crimes are, we've apparently successfully lived with them prior to the advent of modern mass surveillance technology so it can't be that bad.
People keep using this phrase as if it's binary. I 100% expect that when I go out in public, there isn't someone following me with a notebook, keeping a log of everything I do. I also expect that someone won't walk right up to my face and take a picture of me without asking.
But why won't anyone think of the poor paparazzi that need to be able to take awful photos of people in public so they can destroy people's reputation?
It might be nice if we gave public figures a bit more privacy through this whole process.
Any crime that is not worth having humans investigate is not worth disrupting a suspected criminal's life over. If it isn't worth the cost to society, then society should leave it alone. Due process is owed to every single citizen, and disrupting their lives should have to be considered worth the cost.
They should ban facial recognition, though, in each and every context where the owner of the face has not given explicit consent. Nothing good can come of it.
It's pretty hard to ban - it is not that hard a problem to crack with modern processing and an intermediate understanding of computer vision. The issues related to skin-tone bias are practically old news now; given the stress the international press placed on the issue, the problem areas and their fixes are well documented. I'd go as far as saying FR is at stage 3 of 5 of the technology life cycle: early majority. It is already ubiquitous, inside commodity mobile camera chipsets and accessible with little more than a page of Python anywhere OpenCV can be installed.
Shouldn't this already be covered by the GDPR, though? While we usually associate it with the Internet, tracking, and cookies, from what I understand it is much more general and should include facial recognition (but of course IANAL etc.)
There may be space for a good law somewhere in this area. But you should be really leery of outlawing a particular kind of computation, unless you're cool with thoughtcrime, because that's what it will by default become over time, just as inflation made laws with hardcoded dollar figures become quite different without ever changing the law.
I prefer to deal with that problem when we arrive there. Most neurologists tell me there is still some way to go despite significant advancements. Same with AI: today's reality is that machines are still completely incapable of thought.
But to anticipate it: I would still think there is a difference, because constant surveillance would still tie up your time.
Sorry to be cynical, but this will not stop the use of the technology. No universal ban like this will happen. What will happen is that the government(s) will end up being the sole user of it via a plea to the public for some sort of safety or security that "benefits" them.
> forbid the use of any AI/ML in predictive policing
I don't get what this means. If you use basic statistical methods and realize that murders are more likely to occur in this location between these hours and adjust your police force accordingly, is this bad? Shouldn't police use basic statistical methods in determining effective policies?
> If you use basic statistical methods and realize that murders are more likely to occur in this location between these hours and adjust your police force accordingly, is this bad?
It depends - these models are often non-transparent in how they calculate, so what if the AI/model decides that a higher percentage of minorities in an area is a strong predictor of crime, and thus indirectly creates a policy under which, for instance, areas with high levels of ethnic minorities receive heavier policing or harsher restrictions (i.e. curfews, more CCTV, etc.)?
If it's just a basic 'well murders happen between these hours on this street so we will put a police officer there' then of course that should happen, but that's not really policing based on AI/ML, it's just based on basic statistics.
I am still not clear on why that's a problem. If the goal is to reduce or deter crime, and there's an effective way to do that, why does it matter what correlation it has to outside demographic factors? I would argue that not containing crime in a known high-crime area just because of correlation to 'high levels of ethnic minorities' is itself racist, since that is itself an act of making race a factor in decision making. It also doesn't fit what minorities actually want. As an example, an overwhelming majority of Black Americans want policing to stay the same or increase (https://news.gallup.com/poll/316571/black-americans-police-r...), rather than decrease, despite what (predominantly-white) activists have been pushing for in the last year.
> It depends - these models are often non-transparent in how they calculate, so what if the AI/model decides that a higher percentage of minorities in an area is a strong predictor of crime, and thus indirectly creates a policy under which, for instance, areas with high levels of ethnic minorities receive heavier policing or harsher restrictions (i.e. curfews, more CCTV, etc.)?
It's pretty simple to make the models transparent. Just select the inputs you want the model to have access to. You shouldn't be feeding your model the percentage of minorities in an area.
What's not transparent is how humans make decisions. You can't tell a human to consider only these characteristics but not those other ones.
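A minimal sketch of what "select the inputs" could look like in practice - every feature name, weight, and number here is invented for illustration - where the model literally cannot see anything outside an audited whitelist:

```python
# Hypothetical sketch: input transparency via a feature whitelist.
# All feature names and weights are invented for illustration.

ALLOWED_FEATURES = {"hour_of_day", "reported_incidents_last_30d", "foot_traffic"}

def restrict(record: dict) -> dict:
    """Drop every feature not on the audited whitelist before the model sees it."""
    return {k: v for k, v in record.items() if k in ALLOWED_FEATURES}

# A toy "model": a hand-auditable linear score over the allowed inputs only.
WEIGHTS = {"hour_of_day": 0.02, "reported_incidents_last_30d": 0.5, "foot_traffic": 0.1}

def risk_score(record: dict) -> float:
    clean = restrict(record)
    return sum(WEIGHTS[k] * v for k, v in clean.items())

raw = {"hour_of_day": 23, "reported_incidents_last_30d": 4,
       "foot_traffic": 1.5, "minority_share": 0.6}  # last field never reaches the model
print(round(risk_score(raw), 2))  # 2.61
```

The point isn't that a linear score is a good crime model; it's that the set of inputs is a policy decision you can audit independently of how the model itself works.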
> If it's just a basic 'well murders happen between these hours on this street so we will put a police officer there' then of course that should happen, but that's not really policing based on AI/ML, it's just based on basic statistics.
I don't see how you make a meaningful separation between the two. Either you can use predictive models or not. It's hard to carve out "use predictive models but not too predictive". The more likely scenario is just not using any model and distributing police resources uniformly regardless of need.
> It's pretty simple to make the models transparent. Just select the inputs you want the model to have access to. You shouldn't be feeding your model the percentage of minorities in an area.
That doesn't make the model transparent, it just makes the input and output transparent, but it's still a black box.
It's nice to say that everyone will ensure the model isn't fed information that allows it to make decisions biased on sex/gender/race/income, but in reality accidental racial bias has already shown up multiple times in applications where it should be easier to detect than in an invisible 'crime prediction' algorithm that the general public won't have access to, but that public policy will be implemented on the basis of.
> I don't see how you make a meaningful separation between the two. Either you can use predictive models or not. It's hard to carve out "use predictive models but not too predictive". The more likely scenario is just not using any model and distributing police resources uniformly regardless of need.
I make a meaningful separation because one has clear logic which can be inspected and seen to be unbiased, and the other has a black box doing the calculating.
Again, what's the alternative? Human beings making these decisions. Human beings with their own biases and life experiences. I'd sooner trust a model whose inputs I can at least control, which is auditable, and which was created by intelligent, thoughtful people, than some police chief.
> accidental racial bias has already shown up multiple times in applications where it should be easier to detect than an invisible 'crime prediction' algorithm that the general public won't have access to but public policy will be implemented on the basis of.
The primary victims of violence are often of the same race as the "accidental racial bias" you're talking about. You're not doing anyone any favors by misallocating police resources to be more "equitable" without considering actual need. Only 7% of Americans want less policing in their neighborhoods. I imagine people who live in high-crime areas would prefer more policing.
When you can no longer transparently explain the reasoning behind a decision made based on the statistics, and instead have to just shrug and say “because the model told us so”. Accountability is very important in this issue.
Apart from neural networks, most models are somewhat transparent in their reasoning. Neural networks would be the wrong model to use for this anyway.
For instance, a decision tree can tell us that if the population density is within this range and the percentage of single-family residences is within this range, then the probability of a package being stolen is X.
Or a regression could tell us that the square of the population density is positively correlated with this crime.
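A toy example of the kind of human-readable tree described above (thresholds and probabilities invented for illustration):

```python
# Illustrative only: every branch is a rule a human can read and audit.
# All thresholds and probabilities are made up.

def package_theft_probability(pop_density: float, pct_single_family: float) -> float:
    if 5000 <= pop_density <= 15000:       # people per km^2
        if 20 <= pct_single_family <= 40:  # percent of residences
            return 0.12
        return 0.07
    return 0.02

print(package_theft_probability(8000, 30))  # 0.12
```

Contrast that with a deep network producing the same number: here you can point at the exact rule that fired and argue about whether it's fair.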
This law isn’t talking about banning decision trees or banning regression analysis though, it’s talking about ML being used in specific listed applications.
Exactly. We don’t disagree, I don’t think. Decision trees and regression analysis should count as statistics for the purpose of this issue, as they are transparent and accountable. Neural networks are another beast entirely, in my opinion.
I think they are two different things - statistical modelling to me is about understanding relationships between data (i.e. we are highly confident that X and Y are correlated, and can infer a reason and thus put a policy around that based on open reasoning), and ML is about making predictions in new situations based on historical training data from different situations (but we can't necessarily infer the why).
> “…murders are more likely to occur in this location between these hours…”
you’re on the wrong track if murders (and the like) are the crimes you’re targeting. murders, while horrific, are a tiny and very localized danger. the crimes we need to be policing more actively happen around money and power (mass surveillance, corruption, etc.), which are far too common and have broad, highly externalized, and surreptitious effects.
i’d heartily support (heavy-handedly, even) applying any and all such techniques on (so-called) white-collar crimes. in contrast, social norming in communities (like not murdering) should be convivial rather than adversarial, as that’s much more effective (as evidenced by how little murder actually happens).
the amount of policing has basically no effect. numerous studies show that potential punishments don't deter murder, either pre-meditated or murders of opportunity. further, police rarely ever stop murders, and only solve about 55% of murder cases. lastly, murders are rare: 0.005%, or 16k/yr, in the US. that's because of the severe social, not legal, repercussions. people feeling connected to each other lowers the murder rate, not policing.
I wonder what would happen if murder stopped being policed. Would the murder rate remain unchanged, as "numerous studies" suggest? Or would an unchanged base rate of murder without official retribution lead to blood-feud-style relationships between people and increase secondary, tertiary, … n-ary murder rates?
If police leave 45% of murders unaddressed, I wonder how many current murders are secondary in nature, i.e. would not have occurred without an unretributed precipitating crime, be it murder, rape, assault, etc.
It’s an interesting idea to think about murder not being policed or have legal repercussions. Would it be comforting to know if you kill somebody drunk driving there wouldn’t be a big problem? I guess somebody like Dallas’ Dr. Death would still have to deal with pesky civil lawsuits.
the extreme limit (anarchy) isn't a practical possibility, since no government, however small, would give up its coercive power that way. it's more fruitful to talk about varying levels of policing and varying legal consequences, and the prevailing conclusion there is that murder rates aren't particularly sensitive to these variables in the practical bands of interest. but note that "policing" need not be done by a police force--in tribal groups for instance, tribal members generally police each other.
we'd be better served by having many more detectives, and far fewer gun-toting beat cops, if we were interested in solving more murder cases though. that's the kind of policy discussion we should be having, not the politically-driven narratives that currently prevail around the topic.
For instance, I lived in an area where there were a lot of bike thefts. Everyone knew it and police got a lot of bike-theft reports (it showed up in the statistics). I would like to see more police enforcement to deter bike thefts. I think it's unethical to ignore the needs of a neighborhood when allocating police resources.
But if you then over-intensify the bike theft enforcement in that area, you will continue to have an over-reporting of bike thefts in that area, leading to more intense bike theft enforcement, eventually leading to an unjust and unethical police presence across the municipality.
> Shouldn't police use basic statistical methods in determining effective policies?
No. For one, police are not statisticians. Here's a military example from WWII:
During World War II, the statistician Abraham Wald took survivorship bias into his calculations when considering how to minimize bomber losses to enemy fire. The Statistical Research Group (SRG) at Columbia University, which Wald was a part of, examined the damage done to aircraft that had returned from missions and recommended adding armor to the areas that showed the least damage. This contradicted the US military's conclusion that the most-hit areas of the plane needed additional armor. Wald noted that the military only considered the aircraft that had survived their missions – ignoring any bombers that had been shot down or otherwise lost, and thus also been rendered unavailable for assessment. The bullet holes in the returning aircraft represented areas where a bomber could take damage and still fly well enough to return safely to base. Therefore, Wald proposed that the Navy reinforce areas where the returning aircraft were unscathed, inferring that planes hit in those areas were the ones most likely to be lost. His work is considered seminal in the then-nascent discipline of operational research.
The latter bit is eminently reasonable; there are limits to how much political bodies should be capable of restraining private action, but the rules by which the police discharge their duties are completely under political control.
Why are opacity and complexity so often invoked as a reason here? I don’t want to be mass-profiling populations with totally clear and simple technologies, either.
Opacity (of which complexity is a form) is an impediment to accountability. I agree there are many reasons I don't want mass surveillance and profiling, but I think a system's lack of accountability is a very large force-multiplier for the malignant effects it ends up having.
Opacity because, if you are detained by the police, they should be able to tell you why. Many, if not all, AI solutions based on training sets will have a hard time telling you exactly why you were flagged.
Also, even if they can tell you that the AI believes the combination of your ears and nose makes you look like some wanted person, what are you supposed to do about it?
Maybe we can develop AI that decides who has a criminal look based on the shape of the skull? /s
As someone living in Germany, all these AI approaches have a distinct fascist flavour to them and remind me of my Nazi grandfather, who claimed he "could just tell" our Bosnian neighbours had "something criminal" in their blood.
My guess is that "opacity" is referring to implementations, while "complexity" is referring to the effects. "Totally clear and simple technologies" is referring to just the implementation.
We don't want to get involved with AI because of the complex issues it opens, not because the current implementations are complex.
Even if the below commenter is correct about the lack of accountability - which I assume is important in a democracy - the more insidious danger is the obvious moral and ethical issues.
Could you be a little clearer on what the “obvious moral and ethical issues” are? Personally I’m not sure what you’re referencing, and I also don’t think the many of the moral and ethical issues are obvious. If they were, then I suspect countries like the UK wouldn’t already be trialing many of these technologies.
AI today is, at best, on the level of an illiterate toddler with a talent for statistics. Using it to classify people is unethical because it puts suspicion on them without evidence or sense.
>>AI, by its complexity and opacity should not be used to profile people. Just because it could be so easily manipulated.
True, but I'd add in a touch of Hanlon's Razor — don't attribute to malice what can be attributed to mere incompetence — far too much of the time, the technology is just bad and wrong. It has been over-promised and under-delivered.
It is so often wrong that there are numerous lawsuits, including one today about Uber being sued because its so-called "AI facial recognition" system cannot even recognize Uber's own employees [1]. It's a great idea to verify everyone automatically, but if the technology sucks, it is worse than nothing.
As if falsely losing your employment weren't bad enough, the consequences of broad adoption of technology that is not fit for purpose go far beyond merely wasting the money and time of law enforcement, courts, etc. - it will literally steal people's freedom on false grounds, while letting actual criminals run loose.
The EU is wise to call a halt to this over-hyped tech.
(And frankly, calling it "AI" is ridiculous - it is just a fancy pattern-match search. This is what the industry gets for over-selling its new stuff before it is actually ready - and deservedly so. It'd be an entirely different story if they'd fully developed and tested it until it was six-plus-sigma reliable.)
> don't attribute to malice what can be attributed to mere incompetence
Police use of this nonsense is either malice or _negligence_; it's not mere incompetence. If you're the enforcement arm of the state, you largely lose the right to plead incompetence; if you don't do the research, that's negligent.
Hmm, considering the seriously deficient state of these systems, I'd have to agree with rsynnott that using them without a very serious overlay of human checking before taking any action - probably enough to outweigh the benefits of the "AI" itself - would constitute negligence.
The PDs either already know that the "AI" FR systems are not fit for purpose and are using them anyway without proper checking, or they don't know because they couldn't be bothered to do their due diligence.
I want to catch the crooks as much as anyone, but catching the wrong people does no one any good, and has a hugely corrosive effect on society.
I agree with the principle, but the problem here is the term "AI" or "ML": what we are talking about is really just fancy statistics. Are we forbidding statistical modeling of any kind? How would we even describe or enforce that? The ML courses I took started with just running a linear regression on a large data set, and that was considered a form of ML.
This is tricky. And the problem is not just the techniques, but whether we can trust our institutions and practitioners - what kinds of systemic and unconscious biases are driving the misuses or potential misuses of ML?
"Okay, we will send all our video surveillance to a third-party intelligence partner in which we own stock, and they will do facial recognition and send us a list of suspects." - Most police and government entities.
If the ban doesn't prevent 3rd party data access the above will likely happen.
The problem is not whether it's private or government.
The problems are multiple, but the main two, found in an experiment with public facial recognition in an Italian city (Como, and now in Udine too), are:
* Municipalities should not have the power to conduct an investigation; that is a clear breach of the separation of powers. At least in Italy, every access to information for an investigation must be logged, from why it was collected to who requested access to it. Most of the initial deployments of facial recognition are way too broad.
* Our laws require consent before gathering information on people, and biometric data has even stricter rules. Just going around in public is hardly giving consent.
Do you need consent to be photographed or filmed in public spaces? At least in the US, there is no expectation of privacy in public spaces and if someone wants to film you - whether an amateur photographer, a journalist, or a police officer - they are permitted to, simply because it is their constitutional right to do so.
That's not true. At least in Germany you can take pictures of people in public places without consent (with certain restrictions). Publishing these pictures is where the "copyright" on your face part comes into play.
You can have a government owned database and not send anything to a third party. The way it works with fingerprints and DNA data, at least in my country.
I’d be in favor of banning facial recognition as evidence in trials while still allowing it to aid investigations, i.e. let it help narrow down suspects when faced with grainy footage, but it can’t be used to actually secure a conviction.
Seems way too easy to abuse. At least in the US, police already use junk science to "place" people near the scene of a crime, and then prosecutors use circumstantial evidence to get convictions. Or worse yet, police in the US will probably use this to get plea deals. I guess many EU countries don't have plea bargains, though?
I have mixed feelings over this.
As a recent victim of crime (burglary and robbery), I know there is nothing the police can do about it, even though the robber was caught on tape. A lot of crime goes unpunished, which heavily damages the morale of the honest part of the population and their belief in the system. Why work when some thugs can take your stuff for free and won't even be bothered by police over it?
I mean, if I am spending the better part of my life working and giving away a huge % of what I make to the government, the least I could expect is said government having the resources to catch the bad guys.
I am in the lucky position that the crime won't affect me much. It's a significant sum of money and a lot of days spent fixing stuff, worrying about security, etc. I can't begin to imagine how demoralizing it must be for someone who loses years of savings in a burglary - and that's not even a stretch to imagine if they go for a few nice things you have (like, say, a nice mountain bike or skiing equipment).
I find the attitude of ignoring "petty crime" very worrying. I think its cost is underestimated. If we can use technology to catch the bad guys, let's try it. As an honest, working taxpayer, I figure the government has all my data anyway and can get phone records to pinpoint my location most of the time. I don't care that much anymore. I do care about being safe, though.
As long as the cameras are in public places and not on private property, it's not such a big hit to privacy anyway.
> I find the attitude of ignoring "petty crime" very worrying. I think the cost of it is underestimated. If we can use technology to catch the bad guys let's try it.
For the sake of argument, we’ve caught the bad guy. What should we do now? I think this is a seriously hard to answer question, and we haven’t really got a coherent answer.
This assumes that we actually get the right person as well, and facial recognition is honestly really bad about getting people caught in the dragnet. If we have an attitude where “computer says no” means you’re now subjected to criminal penalties, that’s a nasty world.
For some crimes you'd probably be able to drastically lower penalties. If there's a 100% chance a shoplifter is going to get caught when they leave the store, then just making them return the stolen goods would be enough. Harsh penalties are partly the result of an inability to apprehend people (using the fear of harsh penalties to compensate for the low chance of being caught).
It also seems likely that more surveillance will result in fewer innocent people convicted, not more. Anecdotally, in every case I've read about where video surveillance existed and innocent people were arrested, the video surveillance ended up being what exonerated them.
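The "lower penalties" argument is just expected-value arithmetic. Assuming (purely for illustration) that deterrence scales with expected cost = P(caught) × penalty, a higher catch rate lets the penalty shrink while deterrence stays constant:

```python
# Back-of-the-envelope deterrence sketch; all numbers invented.

def penalty_for_equal_deterrence(p_caught: float, target_expected_cost: float) -> float:
    """Penalty needed so that P(caught) * penalty stays at the target."""
    return target_expected_cost / p_caught

# Same expected cost (10 arbitrary "penalty units") at different catch rates:
for p in (0.1, 0.5, 1.0):
    print(p, round(penalty_for_equal_deterrence(p, 10), 1))
# 0.1 -> 100.0, 0.5 -> 20.0, 1.0 -> 10.0
```

Whether deterrence really scales linearly like this is an empirical question; the sketch only shows why near-certain apprehension and harsh penalties are substitutes in this simple model.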
> If we have an attitude where “computer says no” means you’re now subjected to criminal penalties
You've made quite a leap here - he's talking about using facial recognition software to automate the comparison of pictures from a surveillance video against known criminals (something police have been doing, manually, for decades) and you've jumped straight to "an algorithm is judge, jury and executioner".
An argument here is that it disincentivizes others from doing similar crimes. If the technology is so spot on that 99% of the time you commit the crime you'll get caught, then others will likely not take the risk.
Granted, there's an easy counterargument (all the more pertinent these days): they could simply wear a mask while committing the crime.
There's a similar (less discussed) trend happening these days: law enforcement is taking DNA evidence from crime scenes, passing it through (sometimes private) DNA databases, and getting matches.
Let's say those databases continue to grow - what are the odds that someone involved in a crime will leave some DNA behind, and that either they or a relative will have DNA in a database? The chances of getting away with a crime converge to zero. And if you know you have high odds of getting caught, you might be less likely to commit the crime in the first place.
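The "converges to zero" claim can be made concrete. If each offence independently has probability p of producing a database match, the chance of never being matched over n offences is (1 − p)^n, which shrinks geometrically (the p used here is an assumed figure, not a real-world one):

```python
def chance_never_caught(p_match_per_crime: float, n_crimes: int) -> float:
    """Probability of zero database matches across n independent offences.
    p_match_per_crime is an illustrative assumption, not real data."""
    return (1 - p_match_per_crime) ** n_crimes

for n in (1, 10, 50):
    print(n, round(chance_never_caught(0.10, n), 4))
# 1 -> 0.9, 10 -> 0.3487, 50 -> 0.0052
```

Even a modest 10% per-crime match rate makes a repeat offender's chance of staying unmatched negligible over time, which is the deterrence argument in a nutshell.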
> If the technology is so spot on that 99% of the time you commit the crime you'll get caught
And 1% of the time[1], you are an innocent who had nothing to do with the crime. Now you have to spend your time getting arrested, meeting with lawyers, and getting people to corroborate your alibi (<- in the best case! Otherwise you're f'ed).
If you want to make a case for facial recognition, criminal justice is one of the worst possible cases you could make for it.
[1] I don't accept your claim of 99% accuracy when this is applied at massive scale. Maybe I'm wrong, so let's go with this number.
False positives always exist in these systems, along with the false negatives. There's no avoiding every case of false conviction if you rely on imprecise methods. That's admittedly already a problem without facial recognition, but it remains a problem when using it, too.
> The 1% there is that “the criminal won’t get caught”.
So the 99% is for the facial recognition system to identify criminals?
What error rate does the best facial identification system have? If it's not zero, then my original point still stands: you are throwing a lot of innocent people into the judicial system, or at best having them pestered by police for no good reason.
> Also what you described can happen to you today as well. It’s the imperfect system we live in.
That's correct. So instead of improving it, you suggest automating it? That sounds insane to me. Why would you automate something you know is defective? Look at the customer support of all the top tech companies that use "AI" to automate things. Now look at the worst of the worst customer support among the top tech companies. There's an overlap. And if you want to expand that (*censored*) tech-support model to the criminal justice system, people like me are going to get upset.
You brought up the fact that FR could identify criminals. I brought up the fact it could also identify innocent people as criminals. When innocent people are accused of crimes they didn't do, they usually get upset.
I didn't say I personally was upset with your message. I said they would get upset. I'm not a "people" person, but this should be obvious. And I think I would be upset too if I got accused of something I didn't do.
You still didn't tell me how you would resolve an innocent person being misidentified by a facial recognition system. Can they sue the company or person who developed it?
A claim of 99% sensitivity sounds good, and is often achievable. But any real system will also have false positives, so let's say we have a specificity (true negative rate) of 99% as well. This is probably unrealistically good; most systems will false-positive more often. On its face, this sounds great.
However, Bayes’ theorem paints a very different picture.
If the prevalence of wanted criminals in the population is say 1/10000 (this is hard to guess), what are the odds that a person that is flagged is a wanted criminal?
The unintuitive answer is that less than 1% of the time (~0.98%) will the flagged person actually be a wanted criminal.
By far the most important term is the prevalence of the thing you are testing for in the population, in this case criminality. Any dragnet facial recognition is invariably going to get more innocent people caught in its web than true criminals.
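The arithmetic above can be checked directly. A minimal sketch, using the rates assumed in the comment (99% sensitivity, 99% specificity, and a wanted-criminal prevalence of 1 in 10,000):

```python
# Base-rate fallacy, with the assumed numbers from the comment above.
sensitivity = 0.99       # P(flagged | wanted criminal)
specificity = 0.99       # P(not flagged | innocent)
prevalence = 1 / 10_000  # fraction of the population that is a wanted criminal

# Bayes' theorem: P(wanted criminal | flagged)
p_flagged = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_flagged

print(f"P(criminal | flagged) = {ppv:.4%}")  # ~0.98%: over 99% of flags hit innocent people
```

Even with these generous accuracy assumptions, the low prevalence dominates. The only way to meaningfully improve the positive predictive value is to raise the prevalence in the searched population, i.e. restrict the search to a small, already-suspected group, which is exactly what targeted, warrant-based use does.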
I agree, it does disincentivise people from committing similar crimes to a certain extent, but I think what it often does is shift people to different kinds of crime. In the pandemic, burglaries dropped significantly, presumably because people were home and the perpetrators didn’t like the idea of being caught. However, scams have risen sharply.
>I find the attitude of ignoring "petty crime" very worrying
Ironically I think petty crimes are much more impactful personally than say political corruption/bribery.
If you find out that your politicians took bribes or are corrupt, you don't feel it personally or immediately. It's not as if your tax bill suddenly increases to cover the corruption.
Petty crimes, on the other hand, are much more direct in impact: loss of personal property or money, that feeling of "why me", terror, etc.
That works until the day you get charged with robbery because the lousy FR system matched your face with a fuzzy picture of a robber in another state. Trading freedom of other people for personal gain (feeling of safety) is IMHO a much bigger crime than robbery.
Edit: Innocent people have spent their lives behind bars for crimes they didn't commit. With FR it's just going to happen more often. Your weak "but this is not me" will be countered by the prosecutor's "all criminals say this, and this certified FR system is more reliable than humans; your honor, I ask for 10 years for this gentleman", and the strong incentive to close the case, paired with the political desire not to undermine belief in FR, will convince the judge to make the "right" decision. Important people will be exempt from FR, obviously.
I agree with your stance. However, I don't think the fix is ever going to come from police, government officials, or mass surveillance. Even if their intentions are altruistic, they just can't be everywhere at once to help people, and their hands are tied when it comes to enforcement.
I think laws that prioritize the right to self defense or defense of one's property are what's best. Castle doctrine laws, stand your ground etc. Yeah they're going to result in a minute amount of cases where crazy people go off on some innocent person but I think that's a better alternative than giving criminals more rights than law abiding citizens.
right - your real and legitimate feelings of individual vulnerability are a basic ingredient in a society-wide "sea change" in public life.
First off, feelings are real and you have spoken your truth in public; yes to that. However, compliance !== safety. What if you are being tracked, everyone else is too, and then someone breaks into your flat while you are out dining, and nothing happens to help you, for whatever reason? Public life is altered forever by tech, yet your problem is not solved. There have been surveillance cameras in Britain for decades; is everything OK now? Or do you want to use modern Chinese society as the goalpost?
I totally get that you want to catch the thief that stole your stuff - I get a bit angry hearing about it from here.
But what I really would not want to happen is some poor soul to get collared and have to defend himself because his face (mis-)matched some random street camera.
Despite all the hype about "Artificial Intelligence", this technology is nowhere near intelligent, nor even reliable. Sure, being able to make a lot of matches sometimes, on white male faces, can seem impressive, and perhaps even be genuinely useful in some situations. But the databases on which these 'tools' have been trained are known to be full of garbage (remember "GIGO") and bias, so badly that the bad results are already producing lawsuits, e.g. [1].
The EU is wise to start down the road of banning it, and frankly, this is just deserts for the industry that over-hyped this stuff. We already had one 'AI Winter' because of this, and apparently no one learned the lesson.
I haven't been able to read the whole resolution, which has no legal value as-is anyway, but my impression is that police using software on existing video to extract an image, or even to compare against an existing database, would be fine.
Having a CCTV pointed at a random square identifying everyone in it and tracking their movements for indefinite amounts of time, less fine
> my impression is that police using software on existing video to extract an image or even compare with an existing database would be fine
Like DNA and fingerprint evidence, I'm sure that an indiscriminate database search will absolutely have no false positives and no innocent people will ever get in trouble.
My car got beaten up not too long ago. Costly, yes, but nothing compared to constant surveillance; it doesn't even come close. Of course surveillance would have a chilling effect on behavior everywhere. And if crime really went down, more security companies should be going belly up, yet we don't see that happening.
> I find the attitude of ignoring "petty crime" very worrying
This is nothing to do with facial recognition, though. It's a question of how the police allocate their time and effort, and how effective the processes that they work within are.
Any discussion of regulating the use of facial recognition by government misses the point. Private industry has just as much if not more power to misuse the technology. I don’t know what a global opt out system would look like, but that’s the only serious answer to preventing misuse.
> In a resolution adopted overwhelmingly in favor, MEPs also asked for a ban on private facial recognition databases, like the ones used by the controversial company Clearview AI
I would sharply disagree with this. The ability to bring proper force into play is not with corporations. Looking at the 20th century, state actors are the ones that could and did kill millions.
I have no idea why we can't focus on misuse by both governments and corporations. Your argument seems to be that even though harms can occur from misuse by two types of organizations (governments and industry), there is no point to addressing the harms from one type of organization (governments) because the other type of organization (industry) is more likely to commit harms. With that logic, you'd never get anything done.
Private entities and individuals come and go at a much higher frequency than governments. You still raise a valid point, but there is a reason the focus is on government. I'd also go as far as to say that, at least on HN, there is a lot more discussion of anti-privacy and invasive practices in the private sector (Facebook/Google/Amazon/messenger platforms/etc.) than in government.
"predictive policing, a controversial practice that involves using AI tools in hopes of profiling potential criminals before a crime is even committed."
Ah, yes, the European Parliament. They have a great track record regulating technology, not at all like a bunch of out of touch clueless boomers. I'm sure something as complex as facial recognition is well within their capabilities to comprehend.
I assume there are many proposals that are not adopted in the US too. But if you want to feel superior, read about the robocall problem in the US: I've read comments here from people forced either to answer 10+ scam calls a day or to block unknown numbers and risk missing important calls. So the US free market plus the US legislature does not seem to have done a good job either.
Now let's talk surveillance. Most people know there are false positives, and most competent tech people know there is also bias in this technology. If you really want surveillance, it's better to implant people with chips; at least you won't execute innocent people because the SV startup that implemented the AI made a mistake and will not admit it until 15 years later, after a lengthy trial.
Criticism of the EU on this point ≠ the US is better at it. It's not a competition of "my politicians are better than yours".
Almost _all_ of the EU's legislation on anything tech related has been playing to the voters by beating the privacy drum, without actually addressing the underlying issues.
Meanwhile, the EU slides farther and farther into irrelevancy when it comes to being anything other than consumers of tech. All the EU seems capable of doing is creating additional bureaucracy while hamstringing Europeans trying to actually realize Europe's potential in the field.
> Meanwhile, the EU slides farther and farther into irrelevancy when it comes to being anything other than consumers of tech. All the EU seems capable of doing is creating additional bureaucracy while hamstringing Europeans trying to actually realize Europe's potential in the field.
> For investors, the European market is less overheated and so more attractive to participate in than the U.S. market and less "tough" than China, Russia or Brazil. But that's not thanks to the EU. Systemic problems, like a lack of harmonization across the Continent, mean EU startups have to spend a lot of energy on learning and complying with varying rules, like hiring non-EU recruits or rewarding employees with stock options … Patrick Collison, of fintech giant Stripe, advocates for a "more streamlined common market, fewer impractical and ineffective regulations, better legal treatment of stock options and easier access to visas for highly skilled individuals." … "The startup community has been saying the same things for over 10 years," said Simon Schaefer, president of Startup Portugal. "We always said: this is alert signal code red. If we don't harmonize Europe, we'll never be able to catch up with the U.S. and China."
I assume a different branch is working on visas and taxes, but do you realize it is not easy to get every EU country on the same page about visas and taxes? Some would say that pushing this could cause a new "Brexit".
But does the EU need to make it easy for giants like Google, Apple, Facebook, or TikTok to appear? Are you sure the barriers those startups complain about are not actually local issues?
Not the person you are addressing, but I believe he is referring to the somewhat common belief (in North America) that it is foolish to try to start a tech company in Europe due to the regulatory and business environment.
The results bear this out, there just aren’t many European tech companies that reach global relevancy.
Many of the relevant startups get bought by US giants (you know those protein-folding results? That Google division was bought from the UK), but US folks are still proud, because a US bag of money was involved in the results.
The reality is that the US has a ton of money in the tech sector; they burn it on possible unicorns, but also on buying talent and companies from the EU. I don't want to speculate why the US has so much money in tech.
Sure, the shit companies are trying to screw me over, but I click the "Configure" button, make sure all 100+ toggles are off, and then accept only the required cookies.
But if you love advertising you can just click "Accept All" when the website is forced to show you the popup. If it weren't forced, you couldn't even opt out, or even get informed about the 100 "partners".
I can tell you who is at fault for bad GDPR UX: the developers, or whoever forces them to install 10 tracking/analytics scripts and other third-party junk. And the browsers could propose a standard like "Do Not Track" that I could enable or disable globally or per website. In the EU, browsers could default to "Always Ask", and the ad-loving folks could click "always track me, do not ask again".
I am also happy with how the EU handled mobile phone interoperability: porting numbers, carrier locking, standardizing the charger so I can charge most of my devices with the same cable, etc.
This will no doubt take years to go anywhere, for the EU moves exceedingly slowly. But generally the end result is quite carefully thought out. I'm sure you realise this and are just being obtuse, but any actual law would be written by civil servants with expert input, not the MEPs themselves.
Consider the scenario which is now legal in the EU:
A group of police personnel watching CCTV recordings and live feeds, trying to find a suspect. They do it with their own eyes: state-of-the-art neural networks, trained on just a blurry photo and a verbal description of the suspect. And it takes a lot of time.
The benefits of automating this step are so large, that the ban if it were to happen will be temporary. Facial recognition is already here, and it becomes more accessible every year.
I think instead of banning facial recognition we should find ways to prevent it from being abused. Something that would prevent democratic countries turning into another China or Russia of surveillance.
The honey is too sweet to be left alone. Better for the EU to accept reality and regulate it pragmatically, so as not to lose the modern arms race with the other powers, especially China. Wealth and peace have made the West "security dumb".
I don't understand why so many cities and other jurisdictions are trying to ban facial recognition. It's a tool, like anything else. We make policing easier by letting the police department have access to plumbed city water, electricity, computers, cars, and so on. Facial recognition is a way for them to work more efficiently and effectively.
In Seattle, we've had rampant property crime to the point where people don't even bother reporting a lot of it anymore because the insurance deductibles and lack of consequences for thieves makes the effort a depressing waste of time. Unfortunately, without deterrents, most people expect crime will continue and likely get worse. This isn't the fault of the police department - Seattle has less than half the number of active officers per capita compared to other large US cities. So they can't afford to investigate property crimes, let alone locate suspects and bring them to justice.
Having facial recognition as an available tool would help with that. It means the police can be dispatched to the location of a suspect and bring them to justice, instead of letting them continue consequence-free, or relying on dumb luck to stumble upon a suspect. False positive matches can be mitigated trivially by having a human in the loop. Does that mean the overall matching process is perfect? No, but it never can be, and perfect is the enemy of good. We need deterrents to crime and real consequences if we want to ensure a safe society, which is the most basic expectation taxpayers have. If facial recognition is regulated, I hope they don't go overboard, like requiring evidence of a crime (through surveillance footage) to run a search, or something to that effect.
How did the EU become such a bunch of luddites? Or does it only seem that way from outside?
I get that there are legit concerns about face recognition (as there are benefits), but the concerns _could_ be regulated. Granted, regulation is a much more work intensive process – you have to sit down with a large number of people, get educated on the subject etc. Is it laziness?
Or is this a negotiation tactic? Start asking for a permanent ban, and work your way back to regulation?
It is just a matter of time before stores and whatnot track you via CCTV, spam you with ads, and adjust your credit rating based on opaque metrics. I'd rather have a ban before it happens.
But this can be disrupted. Here's a startup idea; probably Google and Amazon are already working on it:
1. Offer all shops free surveillance hardware; in exchange, you send them notifications when a suspect enters the shop. You need some PR material with big numbers, claiming how many of your partners/clients are flagging suspects who committed theft and how many billions are saved with this AI tech.
2. Collect all the videos, do face recognition, connect the people in the images to real identities, and infer more personal data from each person's facial expression, posture, clothing, and the things they look at.
3. Profit!
4. Collaborate with the NSA and FBI to make sure the government won't have an easy time shutting you down.
Bonus: you can extend to clubs, schools, parks, bars. You can rent the hardware to private owners, with a discount if they mount at least one camera pointed at the street.
Why do you think so? Because of bad PR, Google and Amazon will not do it? But will some other company do it?
It could be done with "google glasses" like devices too, why do you think it will not happen? Because of existing laws or because there is no money for a big corporation to extract?
Yea I think:
- Google and Amazon etc won’t touch it cause it is bad PR
- stores won’t buy it cause it’s bad PR; fear of boycotting etc.
- physical retail is already on the decline; why spend money in a dying industry
- the whole project would have one big flaw - people wearing masks
So if masks are here forever, then these guys complaining that they can't use face recognition in public places anymore either don't know this, or they are betting that masks will be gone soon.
1. Some people complain that the EU will block innovation if face recognition is not allowed in public places or without consent. These people want to "disrupt/innovate".
2. We have the theory that people will wear masks, so facial recognition is impossible.
1 and 2 can't both hold: if the folks from 1 want to innovate, they need masks to be gone; and if 2 happens, the folks from 1 are complaining for nothing, because the virus already killed their disruptive ideas.
This is a first step toward regulation. No PRC-style indiscriminate/"predictive" profiling, as a first step. Now let's come up with a way to use this tool to catch actual criminals when investigating actual specific crimes, with judicial oversight that protects our civil liberties without making the tool useless (by creating onerous obstacles to its use, or delaying it unacceptably).
It's a terrible headline, but this is regulation, not a ban. This would ban most police use, and would ban private mass surveillance, but would not ban, say, FaceID.