HackerOne appears to be completely broken and I wouldn't recommend it to anyone.
Disagreements are to be expected on a bug bounty platform, but these days they just stop responding altogether and don't pay. It borders on outright fraud.
I've been trying to report a Squid RCE (CVE-2020-8450) since October. The Squid maintainers seemed unprepared to deal with the report: they kept being unresponsive, and it took 2 months to merge my patch. They may be volunteers, so I can't blame them. On January 20th I reported it to the bug bounty [1], which promises high rewards, and apart from triaging it there has been radio silence since, despite my having invoked HackerOne mediation. I have more Squid memory bugs, and I'd rather rm -rf them than go through this process again.
HackerOne used to be decent but this appears to be a structural problem now [2].
I worked as a contractor for a company that's a household name in the US. I am now convinced that HackerOne only exists for CISOs to say "look, I'm doing something" during the 2-3 years they stay at a company.
The cybersecurity team had a backlog of roughly 30 critical issues discovered internally before starting HackerOne. We were unable to fix those issues, or the ones reported to us, because we had no visibility into the source code, there were 12 different development teams (most of them outsourced), and all the project managers cared about was covering their asses.
The HackerOne deployment was invite-only, but the few hackers in it did fantastic work. I kept being told to find excuses to reduce the amount we'd pay for the critical issues they'd find and we'd fail to fix. At least we triaged faster than Paypal.
Ohh, you're very right. The sales team is very focused on "selling" to the CISO (rightly so, I suppose). I was part of a team that got the big sales pitch.
Light on technical details, high on "let us handle this for you, we know hackers / We'll throw a big Defcon party for anyone you want."
Hi, if you’re at all interested in discussing this (including if you’d prefer your name not be disclosed) please email David.morris@fortune.com. Thanks.
HackerOne’s community team also seems trained to gaslight ethical reporters who try to follow responsible disclosure practices.
I submitted a vulnerability to a vendor on H1 along with a typical “I plan on publicly disclosing this vulnerability on X date” note, and started getting emails directly from H1 telling me that this undermined vendors’ confidence in the platform and that doing what I was doing might make it so I can’t use HackerOne any more. In the same correspondence they said that my approach made sense—but they continued to threaten that “it would be a shame if you weren’t able to participate any more”.
In my case, the vendor verified the vulnerability quickly, but kept dodging my follow-ups by replying without answering my questions. When the vendor refused to assign a CVE after I asked four times, I contacted the HackerOne CNA directly to get an assignment. They replied within 48 hours asking if there was any public information already, I said no and that I was planning on disclosing on X date, and then they just stopped replying for a month until after the deadline passed.
At a glance, H1’s disclosure guideline appears fairly reasonable: 30 days by default, an upper bound of 180 days. In actuality, those times only start once a vendor closes a ticket, and can be extended indefinitely. Reporters aren’t allowed to speak publicly about anything they send to the platform until the ticket is closed and the vendor agrees to allow it, even in the public programs.
As far as I can tell, HackerOne’s primary purpose now is to act as a shield for bad vendors to hide their security defects from the public by using network effects to bully reporters into keeping quiet. The community team claim this isn’t what they’re doing and that they always ask “why should this be private?”, but their marketing material to vendors tells a different story[0], their actions with me tell a different story, and the vendor I reported to had over 100 closed reports, going back years, and none of them were publicly disclosed.
Unless you must pay your bills with security bounties, or don’t actually care and just want to dump a report and forget about it, I unequivocally recommend against using HackerOne to report a vulnerability.
I reported to one program, which ignored the report and effectively stalled until the startup had pivoted to a different idea. HackerOne didn't remove them from the platform and did not make it possible for me to publish the report through their platform (and publishing it otherwise would have likely violated some ToS).
I reported a second issue to Cloudflare. It was acknowledged as a known issue in less than an hour, but it still wasn't fixed months later, and again I was unable to publish it.
Despite waiting for months and requesting disclosure repeatedly, none of these reports have been disclosed yet.
In the future, if I find a vulnerability and the only reporting path the company provides is HackerOne, I will apply full disclosure instead.
>HackerOne’s community team also seems trained to gaslight ethical reporters who try to follow responsible disclosure practices.
I would say to step back and even question the concept of 'responsible' disclosure. For starters, even the very name seems to be manipulative by setting the tone of the conversation in a way that, in most other settings, doesn't pass the smell test.
It seems like a short-term optimization with long-term costs. Releasing a vulnerability into the wild will likely be followed by bad actors exploiting it, but by sitting on it until the company fixes it we create an environment where companies are given a grace period whenever vulnerabilities are found. This in turn factors into their decision-making about how much to prioritize security.
The name "Responsible Disclosure" is in fact Orwellian and was designed that way, and people should avoid using it (or, really, the word "responsible" in discussions like this). The preferred term is "Coordinated Disclosure".
I always assumed the responsible part had multiple non-conflicting meanings, that 1) The researcher would not disclose it to the public until the vendor has a reasonable amount of time to fix it and 2) the vendor is assumed to want to do the right and responsible thing in fixing the flaw.
It presumes a definition of "responsible" that suits the interests of vendors and treats the safety of end-users as an externality, in such a way that anyone operating in good faith and responding to different legitimate incentives is by definition "not" disclosing "responsibly". It's a linguistic ploy, and not one that should be dignified.
In 2020, non-ironic use of the term "responsible disclosure" has become somewhat of a "tell" that the person speaking isn't super connected to vulnerability research.
You’re right that I am a software engineer, not a vulnerability researcher. I keep up with vulnerability research only insofar as I need to be aware of new classes of exploit so that I can write secure code (and, hey, it can be interesting!).
So, what is the correct term that is supposed to be applied to the approach of disclosing to a vendor first, giving them a hard deadline, and then doing a public disclosure? As far as I know, it’s not “coordinated disclosure”, since “coordinated disclosure” normally means the vendor controls the timeline.
Even when the disclosure is not actually coordinated, in the common sense of the word, because the vendor doesn’t agree to the deadline and/or isn’t given any option to pick a longer deadline?
Edit: The Google Project Zero FAQ[0] explicitly states its approach is not coordinated disclosure:
> Prior to Project Zero our researchers had tried a number of different disclosure policies, such as coordinated vulnerability disclosure. Coordinated vulnerability disclosure is premised on the idea that any public disclosure prior to a fix being released unnecessarily exposes users to malicious attacks, and so the vendor should always set the time frame for disclosure.
It seems to me that if “responsible disclosure” is problematic for the reasons you’ve mentioned, “coordinated disclosure” is too. Actually, it’s maybe even worse, since “the researcher refused to coordinate with us on the deadline” is objectively true, whereas “the researcher didn’t disclose this vulnerability responsibly” is totally subjective.
As I said in https://news.ycombinator.com/item?id=22407821 I don’t like the phrase “responsible disclosure”, especially given its history, but “coordinated disclosure” doesn’t seem to do any better at being a phrase that can’t be weaponised against researchers. It also has the downside of meaning different things to different people within infosec which makes it unreasonably hard to communicate effectively and concisely.
So, you know, anyone reading this with high stature in infosec, please coin something unambiguously unique (“time-gated disclosure”?) so less time can be spent talking about semantics and more time can be spent on how to improve software security for everyone. :-)
I don't know what else to tell you. "Responsible Disclosure" was literally a coercive marketing strategy cooked up by vendors; it isn't a term we arrived at organically. Don't use that term. Use any other term you like, but the convention in the field is "coordinated disclosure".
To counter an Orwellian term, consider “informed disclosure”?
The vendor is informed before disclosure. The security researcher is an informed expert disclosing to end users. Well-behaved vendors can, in turn, inform researchers about the challenges that drive their need for alternative timing.
On the timing dimension, consider “cadenced disclosure”?
"Coordinated" would suggest the bug discoverer and the vendor coordinate on a time. (As for whether this is what actually happens in practice, no idea.)
That sort of makes sense, but what are examples of other legitimate incentives that might compel a researcher to disclose the presence of a vulnerability before the vendor has a fix?
For instance, the vulnerability is being actively exploited already, or is trivial to find.
In reality, it's not incumbent on researchers to wait for patches at all. You can straightforwardly argue that you're obliged to give users enough of a head start to stop using the product if the risk is intolerable to them, and then disclose ready-or-not.
Google Project Zero have been unequivocal about how their forced disclosures have caused vendors to release security patches earlier and more frequently[0], which is a win for everybody.
Otherwise, research suggests that the chances of a vulnerability being independently rediscovered within three months may be as high as 1 in 5 for certain types of defects[1]. This means that even if you don’t know a particular vulnerability is being actively exploited, if you sit on enough of them you’ll eventually be sitting on one that’s being quietly exploited by someone. Since you don’t know which one it’ll be, early disclosure at least gives end users the opportunity to apply mitigations and hopefully burns a 0-day being used by an internet bad guy.
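To make the rediscovery argument concrete, here is a rough back-of-the-envelope sketch. It assumes (purely for illustration) that the cited 1-in-5 three-month rediscovery rate applies independently to each undisclosed bug; the function name and numbers are mine, not from any study.

```python
# Back-of-the-envelope: if each undisclosed vulnerability has an
# (assumed independent) 20% chance of being independently rediscovered
# within three months, how likely is it that at least one of N bugs
# is already quietly known to someone else?

def p_at_least_one_rediscovered(n_bugs: int, p_rediscovery: float = 0.20) -> float:
    """Probability that at least one of n_bugs is independently rediscovered."""
    return 1.0 - (1.0 - p_rediscovery) ** n_bugs

for n in (1, 3, 5, 10):
    print(f"{n:2d} undisclosed bugs -> {p_at_least_one_rediscovered(n):.0%} chance")
```

Under these assumptions, sitting on ten bugs means there is roughly a 90% chance that at least one is already in someone else's hands, which is the intuition behind "you don't know which one it'll be".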
I don’t like the name either but I don’t have a megaphone big enough to coin a new term of art and it is the only phrase I know that is used for “disclose to the vendor first, then after a fixed amount of time, full disclosure”.
There’s no question that the name is manipulative since it was first coined to refer to what’s now called “coordinated disclosure”[0], in an essay which referred to full disclosure as “information anarchy”[1].
In this case with HackerOne, the issue is not with terminology, nor is it with time-gated full disclosure versus immediate full disclosure. Rather, the issue is that HackerOne go out of their way to serve the interests of vendors who don’t want to fix defects quickly, at the expense of reporters, end users, and software security in general.
HackerOne could take the approach of saying “we are a neutral platform for connecting researchers and vendors, it is not within our purview to try to stop reporters from disclosing vulnerabilities if they feel it’s necessary”, but this is not how they operate today, and people should know this.
> HackerOne appears to be completely broken and I wouldn't recommend it to anyone.
Completely disagree with this.
I launched a HackerOne program for my company last month (for free, not using their “managed” service).
Of the many reports people submitted, we triaged 30-40 valid reports (most very minor, one or two moderate). We paid out a few thousand dollars in rewards.
At the same time, we also did a more traditional 2-week penetration test with Cobalt (https://cobalt.io/) that cost over $10,000, and HackerOne was the clear winner when it came to the number of high-quality security reports worth fixing. And H1 was 2-3x cheaper, even after paying out the bounties.
I’m sure HackerOne isn’t great for all companies, but just posting this to refute the blanket statement that HackerOne is “completely broken” across the board.
This is not funny; many of us live in countries where we need that money. We're underpaid, but we try. I'm glad that we have high-quality researchers on these platforms, but punishing us goes too far.
> And H1 was 2-3x cheaper after paying out the bounties.
Perpetuating a system wherein security researchers are massively underpaid for their services because of a terrible abusive platform doesn't seem like a very nice way to do business.
The platform has nothing to do with the rates; plenty of companies run bounty programs without H1, and there is a general standard range for findings across all platforms.
None of these findings appear to have been worth much of anything.
I've never liked these rent-seeking bugbounty platforms which are inserting themselves as middle-men and mediators, but then take away the real value that comes from building direct client relationships.
it's ok for people who start out and only want to work on vulns and not bother with "sales" (building long term client relationships). severely limiting though in the long run!
much better to spend time pitching your service directly and building a name for yourself this way. most customers I had came back and rewarded me with more work. on those bounty platforms however you're constantly competing with drive-by pen-testers who undercut your price, and you have no say in the whole negotiation and bargaining phase. your previous reputation also tends to stay locked into these platforms.
a better long term approach is to build connections, set up a ltd (LLC) and make sure you have a good lawyer who can advise you (not just when things go down). ideally build a collective with other like-minded people (e.g. like a consulting or law practice where you don't always have to share clients but you can if you want to complement each other's skills).
this is imo the best way to escape the "scope-prison" and the best way to learn about clients additional (and actual) weak points (points that they haven't themselves even thought about).
does anyone here do it this way or with a similar approach?
We've had bug bounty programs in the past. The biggest time sink is filtering the bullshit. You need someone with more than amateur-level technical chops to do it (which means someone who will have less time to do other things).
I've been that person before as both the 'do it yourself' bug bounty program as well as the 'filtered by hacker one' approach and I'll take the latter every time.
Outsourcing to HackerOne helps cut down the bullshit; that's where their value-add is (and, to a lesser extent, the reputation system; if someone is reporting on HackerOne I'll give them the benefit of the doubt). Anything else on top of that is just upsell.
> I've never liked these rent-seeking bugbounty platforms which are inserting themselves as middle-men and mediators, but then take away the real value that comes from building direct client relationships.
They can add value for companies that don't have a reputation and want to have their security problems discovered. But they have to follow through on behalf of researchers and threaten to remove companies that don't pay bounties and/or don't investigate and remediate issues.
Squid is vastly under-equipped to deal with the security hygiene needed for a project this important.
That's the tragedy of the open source world: mission-critical for everyone, but no actor willing to maintain it properly. It's Heartbleed all over again.
I keep thinking we need some sort of new license for open source that limits which entities can use the software based on their net worth or the net worth of their shareholders. That way, large companies like Google would automatically fund the long tail of projects without burdening casual hackers or startups with unnecessary costs.
The Business Source License allows you to restrict use of the open source product based on thresholds of your choosing. Entities using it above the threshold need to pay for a license.
After a certain amount of time passes, the software reverts to an open source license of the author’s choosing.
The clock protects users from lazy / out of business vendors. If few enough improvements have been made recently for it to make sense, the customers simply fork an old version of the project, and deploy that instead of paying for ongoing “development.”
(The business source license is not “open source”, but I think it is close enough to be a good compromise in practice)
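For a concrete sense of how the threshold and the clock fit together: a BSL 1.1 grant is parameterized by a short cover page of fields. The values below are entirely hypothetical, sketched only to show where the revenue threshold ("Additional Use Grant") and the reversion clock ("Change Date" / "Change License") live.

```
Business Source License 1.1

Licensor:             Example Corp            (hypothetical)
Licensed Work:        ExampleProxy 2.0        (hypothetical)
Additional Use Grant: Free production use by entities with less than
                      US$10,000,000 in total annual revenue
Change Date:          Four years from each version's release date
Change License:       Apache License, Version 2.0
```

Everyone under the threshold uses the software freely from day one; everyone else buys a commercial license until the Change Date, after which that version becomes plain open source for all.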
> I keep thinking we need some sort of new license for open source that limits which entities can use the software based on their net worth or the networth of their shareholders.
That might be a new license, but it is by definition not open source. And, no, companies like Google won't “automatically” buy commercial software with that style of license; from their perspective it's worse than regular commercial software since it has all the downsides of traditional commercial software plus gives a competitive advantage to upstart competitors.
EDIT: How about instead a “new” license that, if you feel the software isn't maintained adequately for the needs of your organization, allows you to hire whoever you want to maintain it to your requirements, instead of impotently raging that other people aren't supporting it?
Hey, try not insulting people that are trying to have a reasonable conversation.
> EDIT: How about instead a “new” license that, if you feel the software isn't maintained adequately for the needs of your organization, allows you to hire whoever you want to maintain it to your requirements, instead of impotently raging that other people aren't supporting it?
It makes sense for the people that are getting the most profit from a piece of software to be the ones paying for basic maintenance/cleanup/improvements.
If you want customizations or new features, that's when it makes the most sense to 100% self-fund.
> it has all the downsides of traditional commercial software plus gives a competitive advantage to upstart competitors
On average, I'd expect it to still be a lot cheaper than commercial closed-source software.
And what exactly do you mean by competitive advantage here?
> It makes sense for the people that are getting the most profit from a piece of software to be the ones paying for basic maintenance/cleanup/improvements.
It often does make sense for them, and you can see lots of cases where this is done with actual open source software, without resorting to a commercial license that discriminates on scale. If it doesn't make sense for the people you want to pay, making a license that's free for everyone else isn't going to convince them that it does; it's just going to convince them that they're better served elsewhere.
> If you want customizations or new features, that's when it makes the most sense to 100% self-fund.
I would argue that it makes sense to 100% self-fund the additional work whenever you want something more than the open-source offering provides and doing so is less expensive than commercial (off-the-shelf or bespoke) solutions, whether that additional work is basic maintenance or something else. And if you are using an open source solution that isn't maintained adequately for your needs, the responsibility for addressing that isn't on the other users you'd like to have subsidize your use.
I can only guess what you might think is wrong, but if you think it is the description of the proposed license as not-open-source, I would direct you to paragraphs 5 & 6 of the Open Source Definition.
That is a recipe for an ambitious PM in the corp to just make their own version, tailored to needs that maybe ten companies in the world share. Once you start getting that big, build starts becoming cheaper than buy for critical infra.
Upvoted, but not sure it's a tragedy. Much like that quote about democracy: it's a bad system, except all the others are worse. Would be nice to have something better, though.
I think we need more experimentation with solutions to the (open-source) public goods problem before we can say that the others are worse. Ditto with experimentation on variants of democracy. Significantly harder to experiment with that than with open source funding though.
Open source is pretty successful if you include the plethora of knowledge it provides to new developers. It is difficult to quantify, and it is symptomatic that there is little prestige in committing to any open source project.
I would think experimenting with variants of democracy would be a more frequent event. Consider the small scale class president or family voting for a movie.
H1 used to have a mechanism where researchers could push to make raised issues public if a ticket was ignored or marked as wontfix.
That was a good way to keep companies honest, an implementation of responsible disclosure.
So H1 could implement that again. It doesn't get them a bounty but it does stop companies pretending reports don't exist, if that's what has happened here.
H1 is somewhat unlikely to ban someone who holds a real RCE in Squid for months and then publishes it, because H1 needs those people on its platform. Most H1 bounty people are just running scanners to find DKIM quirks.
I think the conversation about whether H1 is problematic or not is a fine thing to have at the top of the thread. I can see people going either way on that question (bear in mind that it has as much to do with idiosyncrasies of each of H1's customers as it does with H1 themselves).
HackerOne is a complete fraud. They've got a super duper simple carrot-on-a-stick business model that has thousands of kids beating up web apps for free. A valuable service for their Fortune 100 clientele; for the people actually doing the work for them, not so much.
[1] https://hackerone.com/ibb-squid-cache [2] https://twitter.com/DevinStokes/status/1228014268567547905