Hacker News

It required tricking a trusted human to work. It's a human (social) vulnerability, not a software or hardware one. Human vulnerabilities can sometimes be mitigated with software or hardware (or other things), but that doesn't change the fact that they're based on human mistakes, not software or hardware mistakes.


> It required tricking a trusted human to work.

That's what I thought. Until I thought a bit more.

Firstly, he didn't fool the trusted human into doing anything they weren't already doing anyway. I don't see anyone being tricked.

Secondly, a lot of exploits depend on someone doing, unprompted, something they really ought not to do. I mean, if you rule out as an exploit anything that simply depends on people not taking active measures against an attack they didn't know about, there's not a lot left.

I don't think Google should have paid out, though; there's no vulnerability shown in any Google software, whether a product or an internal tool. It looks to me like a pip vuln that $TRUSTED_HUMAN could and should have evaded.


The vulnerability is the use of pip with internal tools.
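For readers unfamiliar with the failure mode being alluded to (often called dependency confusion): when pip is pointed at both a private index and the public PyPI (e.g. via --extra-index-url), it pools candidates from all indexes and installs the highest version of a name, so an attacker who registers an internal-only package name on PyPI with a huge version number can win the install. A toy sketch of that resolution rule, with hypothetical package and index names; this is not pip's actual code:

```python
# Toy model of pip's index merging with --extra-index-url: candidates from
# every configured index are pooled and the highest version wins, regardless
# of which index it came from. Illustration only, NOT pip's implementation.

def resolve(package, indexes):
    """Pick the highest version of `package` across all indexes.

    `indexes` maps index name -> {package name: [version strings]}.
    Returns (version, index_name).
    """
    candidates = [
        (version, index_name)
        for index_name, listing in indexes.items()
        for version in listing.get(package, [])
    ]
    if not candidates:
        raise LookupError(f"{package} not found in any index")
    # Compare versions numerically, e.g. (99, 0) > (1, 2, 3).
    return max(candidates, key=lambda c: tuple(map(int, c[0].split("."))))

private_index = {"acme-internal-utils": ["1.2.3"]}  # the real internal package
public_index = {"acme-internal-utils": ["99.0"]}    # attacker's PyPI upload

version, source = resolve(
    "acme-internal-utils",
    {"private": private_index, "public": public_index},
)
print(version, source)  # the attacker's higher-versioned public copy wins
```

The usual mitigations follow directly from the rule: consult only the private index (--index-url with no public fallback) and pin exact versions, so the public copy never enters the candidate pool.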


I'm having a hard time accepting a definition of social engineering that includes exploits where you do not in any way influence the behavior of others.



