(Disclaimer: I used to work on Chrome. I'm more than a year out at this point but I still talk to Chrome developers.)
Chrome wins some tests but loses the most on the start-up benchmarks, which is a real bummer. There was a time when we'd spend literally days figuring out why startup had regressed by a handful of milliseconds, and this benchmark shows an end-to-end start taking nearly six seconds.
One idea I saw mentioned is that there may be a bug related to how Chrome reads the system proxy settings (something about an API change between Windows 7 and Windows 8). I think most of the Chrome developers are still on Windows 7; the bots that track startup performance are definitely still on Windows 7 as well. So that number may reflect a single bug rather than a systemic problem.
All of the above is not intended to diminish Firefox's impressive results -- just thought I'd provide some background! What these sorts of tests show most is that competition pushes browsers ever faster. If the proxy theory above is right, then maybe this test will inspire Google to invest more into Windows 8 testing, which ultimately benefits users.
(Edit: Firefox won other tests too, I also didn't mean to say this was the only factor.)
Chrome is my main browser and I use it all day at work. The browser window opens instantly for me, but it only becomes usable a minute or so later, after it has finished grinding on the hard drive. From the looks of it, I'd say loading all the history, settings, and extensions is what takes so long.
I definitely know it's not the norm, and should not be the norm. It's a really bad habit I have. I run across something interesting and for some reason just hate to close it, and so I end up with tons of browser windows open.
Eventually Chrome freaks out and starts eating all my CPU time and I have to restart it and close a bunch of browser windows.
I don't believe it's the norm, but it is what I choose my browser for. I generally have about 4 windows running with anywhere from 5-30 tabs apiece, in some rare cases 50+. Chrome simply can't handle that gracefully. Firefox, meanwhile, has built-in tab grouping that deals with a hefty number of tabs with no issue.
When Firefox was slow to start, it was bashed for it a lot.
Now they have put a lot of effort into that, making it one of the fastest browsers to start. And yet it's bashed because they put their effort into "something" that doesn't matter to many people.
Agreed. Not really surprising that the Chrome crowd is finding something else to fixate on now; once that gets better, it will be something else... etc., etc.
Honestly, the main reason I used FF over Chrome historically was the huge and active addon selection. Chrome has caught up a lot now, but I am trying to wean myself off all Google products, so I'm never going to use Chrome by default, simply because it is a Google product and not because it may or may not be "better" than FF.
Maybe not the same people, but the same type of person.
In the business world, they are the guy that will adopt your product if you add this one feature. And when you add it, it's perfect! Well, except it's missing this one feature, and if you add that...
On the web, it's fanboys. Product X is better than Product Y because of this! Oh, Product Y is now better than Product X in that area? Well... Product X has this! And who cares that Product Y has that other feature: "no one ever needs that."
Or when reading reviews. They'll tell you how great the product is, how much they use it, but give it failing marks because they also want it to have a new feature. Or: this book is excellent, but I had to wait longer to get it in e-book format, so I'm going to give it a 1-star review before the actual book is even released.
One could argue these people all make valid points. You can say they balance out the people that never complain or are always happy.
But over time, I've learned to take what these types of people say with a grain of salt. They'll always poke holes. They'll always find reasons. They'll never be happy.
Obviously, this is all my own experience. Make of it what you will.
I switched from Firefox to Chrome because Firefox took ungodly amounts of time to start up. But I'm happy with Chrome, so I don't check every month to see if Firefox has fixed it yet. Maybe I'll try it again.
That has definitely changed: not only does Firefox start up much faster than before, but Chrome starts up much slower than it did before. (Of course, I just mean on my machines, have no idea if it would be true on your system.) If start up time matters to you, you probably want to switch back to Firefox.
Weird. I just tried it from a cold restart. Chrome is instant on my computer, Firefox is near instant. Maybe because I have an SSD now I don't notice it.
Why do you close it? I also use Windows, never have more tabs open than I can read the titles of, and never close Firefox (except when some JS crashes it, like browserstack.com).
Firefox is my browsing (err) browser (I like tag bookmarks too much).
I use Chrome to develop with, as I like their tools much better than Firebug, which, to me, is very slow in comparison. Chrome I close and relaunch semi-frequently (ctrl-w spam, oops), but otherwise keep it open, because why not?
I close what I'm working on when I'm done working on it. I don't like clutter. Simple as that.
Of course, I also work in an operating environment that generally doesn't leave desktop programs running after I close the last window. So, it just happens naturally when I close a window.
Mozilla's user stats show that 96% of Firefox Beta users restart their browsers daily and 98% within 2 days. I imagine non-beta Firefox users restart even more often, e.g. they open Firefox to check their Yahoo mail and quit. Most people do not live in their browsers.
The Beta channel updates more frequently and is less stable than the Release channel, so it stands to reason that Beta users would need to restart their browsers more often regardless of their preferences, just as Aurora users would need to restart even more frequently than Beta users.
From a non-power user's perspective: whenever I'm fixing my family's or non-technical friends' computers, their browsers are regularly left open with an ungodly number of tabs, often with multiple instances of their webmail client of choice. I'm sure some of them have no idea how to close the browser even if they wanted to, short of rebooting. I know my girlfriend likes her tab count to closely match her unread email count (alright, that's an exaggeration), but I often joke to her that she has a hoarding problem and get her to throw away some of her tabs/unread emails.
a) Some plugins can now be installed without restarting
b) Certainly, a completely restartless plugin system would reduce that number, but most people probably settle into a set of plugins and don't go changing them all that often, so I'm not sure what sort of impact that actually has on these 96-98% numbers.
Agreed. A few years ago there were a lot of complaints about Firefox starting very slowly, especially on Windows machines [1]. For those of us that obsessively close apps that aren't being used, the startup time was a big turn off.
Startup time matters about as little to me as it possibly could. Chrome only gets restarted when the computer gets restarted, which is only when OS X releases updates, roughly every 3 months. As long as the answer isn't '5 minutes', I don't care about it at all.
Same deal for the other test that seems to be sinking chrome in the overall score, the open 40 tabs thing. I don't think I've ever opened 40 tabs at once, and if I did, I can't see myself caring that they all finished in some amount of time. I'd be far more concerned that the browser was smart enough to prioritize the one I'm actually looking at.
The structure of the overall score in this is pretty broken, in my opinion. The final tally is a geometric mean of Wait Times, JavaScript/DOM, HTML5/CSS3, and Hardware Acceleration. Chrome won all but the wait times test, and even within the wait times test it still won the single-page test. It lost purely due to 'cold start' and 'open 40 tabs and time all of them', which are the least relevant tests to reality in the entire suite.
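To make the scoring concern concrete, here's a quick sketch of how a geometric mean lets one badly-losing category drag down an otherwise winning overall score (the numbers are made up purely for illustration):

```python
import math

def geometric_mean(scores):
    """Geometric mean: the nth root of the product of n scores."""
    return math.prod(scores) ** (1.0 / len(scores))

# Hypothetical per-category scores, higher is better.
wins_everything = geometric_mean([100, 100, 100, 100])  # ~ 100
loses_one_badly = geometric_mean([100, 100, 100, 10])   # ~ 56.2
```

So losing a single category by 10x costs nearly half the overall score even when you win everywhere else, which is exactly the dynamic being described.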
Surprisingly, IE10 handles all of these perfectly. Of course, IE10 has totally broken canvas clipping (non-rectangular clipping regions are impossible; they worked fine in IE9).
Firefox does render large text much better than Chrome, which is why I used it when taking screenshots for my book. But scaling the text (as opposed to setting a larger font) is a disaster.
This is just an example. I could go on for days about canvas bugs. I wish there was a bigger push to fix those instead of eking out a performance advantage.
To Firefox's huge credit, I've submitted a lot of bugs to Chromium and the FF team, and the FF team consistently gets back to me within a week and usually fixes the bug within a month. The bug reporting experience with Chrome on the other hand is rather disenchanting.
For a cross-platform bug example, the context's miterLimit is just plain broken by default in Chrome. I reported this (with examples) back in April and have yet to receive any kind of reply. Thank god it's an easy workaround.
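For anyone who hasn't worked with it: miterLimit caps how long a pointed miter join may grow relative to the line width before the renderer is supposed to fall back to a bevel. Here's a sketch of the spec's rule in Python, as my own illustration of the geometry rather than anything from Chrome's code:

```python
import math

def miter_ratio(theta_deg):
    """Miter length divided by line width for a join whose two
    segments meet at the given interior angle (degrees).
    Per the canvas 2D spec this works out to 1 / sin(theta / 2)."""
    return 1.0 / math.sin(math.radians(theta_deg) / 2.0)

def join_style(theta_deg, miter_limit=10.0):
    """The spec's rule: draw a miter join unless the ratio exceeds
    miterLimit (default 10), in which case fall back to a bevel."""
    return "miter" if miter_ratio(theta_deg) <= miter_limit else "bevel"

# A right-angle corner is a short miter; a 10-degree spike exceeds
# the default limit and should render as a bevel instead.
```

The easy workaround alluded to above is presumably along these lines: set miterLimit (or lineJoin) explicitly on the context rather than relying on the broken default.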
If you want people to use your browser and develop with it in mind, working elements are more important than being slightly faster than the others. (Good developer tools come in close second I think, at least for this crowd).
My experience with reporting bugs in FF/Chrome is a bit different. I reported a bug in canvas text implementation, which occurs in both of those browsers, on Feb 20. I provided all the necessary info and even a minimal test case. The bug has been confirmed in FF quite quickly (within days) and then... nothing. It's quite serious, basically you can't do smooth animation (movement) of text. Similar to the text scaling bug.
Reporting to Chrome has been even worse. No word, no nothing. Not even a confirmation. (Although the submitting process itself has been simpler AFAIR).
That's really uncool, after a few of those you lose interest in submitting any more since the feedback feels like "we don't care". So you end up using the time to find workarounds instead.
Now obviously someone will point out everyone can go ahead and submit a patch for the bug itself since it's open source. That's all cool and dandy, I love open source as much as the next guy, but please, let's get serious. Who has the time to dive into a massive codebase like that and fix the bug? Not everyone has plans to become a browser developer.
Well, no, you're not seriously expected to submit a patch for a code base like that. If you can, that's awesome, of course :)
As for your Chromium bug (http://crbug.com/180300) I'd suspect it fell through the cracks with the whole Blink transition. I've tried to narrow down the tags for it, hopefully that will get somebody's attention.
The kerning differences you're seeing only show up on Windows when not using hardware acceleration (Direct2D). When not using hardware acceleration, Firefox draws text with GDI. GDI doesn't deal well with arbitrary scales applied during text drawing and this explains why the same problems show up in Chrome and Firefox. When Firefox is using DirectWrite/Direct2D (i.e., is using hardware acceleration)—like IE 10—these problems don't show up. Short of heroics, there's not much that can be done to improve text drawing at arbitrary scales with GDI.
This is not a comment to you in particular, but when authors find bugs in browsers, there are several things they can do to ensure that the bug is fixed as quickly as possible:
1. Verify that the bug occurs in the latest available (possibly unstable/nightly) build.
2. Submit the bug to the relevant browser/engine bug tracking system (see below). If you aren't confident about how to write a good bug report, read [1] first.
3. Write a testcase. If at all possible this should be minimal (i.e. not using the whole of jQuery/Angular/whatever) and should certainly be clear (e.g. no minimised code unless it is actually needed to reproduce the issue).
4. If the bug is a conformance bug rather than a QoI issue, submit your testcase to the W3C [2] so that it can be incorporated into the standard testsuite and automatically run by all browser vendors. To do this the test needs to be written in a standard format [3] and submitted using the process at [4]. At the moment the documentation is a bit sucky, but there is a big revamp in the works [5], so that should improve in the near future.
That last one may sound like a lot of effort, but testing is the only way that we will end up with a web platform that is both technologically competitive and open both in spirit and in practice i.e. with multiple interoperable implementations. People put a huge amount of time into working around, and complaining about, cross browser issues. Devoting a small fraction of that time to submitting regression tests instead would dramatically decrease the number of problems in the future. For example submitting a test for canvas clipping both makes it likely that IE will be fixed in the near future, and also ensures that it won't then just regress again one version later.
It seems that it would be in the self-interest of big projects like jQuery to commit to creating a test each time they have to work around some browser bug, so they can expedite fixing the issue, track which browser versions have the bug, and remove the workaround — and thus reduce their code complexity — as soon as the test passes in all the browsers they are targeting. However, this is not just something that matters to big projects. If you have run across a bug that is making it difficult to implement something, you are likely the best person in the world to write a testcase for that issue.
If you are trying to submit bug reports and tests and need help, please ask on #whatwg on Freenode irc or #testing on irc.w3.org [6].
I appreciate the time you spent to make that informative post, and as a professional software developer I also appreciate the value of good bug reports. However, IME this simply isn't true:
People put a huge amount of time into working around, and complaining about, cross browser issues. Devoting a small fraction of that time to submitting regression tests instead would dramatically decrease the number of problems in the future.
The effort required for a regular user to file a bug report in the major browsers is insane. The bug trackers involved are among the most user-hostile and over-complicated interfaces I have ever encountered.
Developers move to six-weekly releases with at least three different flavours in the works at any given time, and then expect people running the regular browser to install a completely different version of the software to test things before filing bug reports. This is insane.
In Firefox's case in particular, users are often asked to create a new profile, which AFAIK still requires command-line hackery and, from direct personal experience, carries a significant risk of corrupting or destroying your regular profile even if you follow the instructions to the letter. This is insane.
And even if users do make the Herculean effort required to create a minimal test case, install the latest development versions of the browser, create a clean profile if necessary, confirm that the bug still happens, and fight through the bug tracker to file a report, there is still no guarantee that anything will actually be done about it. Quality of implementation is obviously not the priority for several of the major browser developers, certainly not Mozilla and Google, and major bugs and site-breaking regressions are frequently ignored for very long periods while shiny new features no-one is actually going to use in production for years and another 0.3642% performance boost in JavaScript are prioritised.
As I said, I'm a professional software developer, and as such I appreciate the value of good bug reports and feedback from users, and I hate it when users make blanket criticisms. But Mozilla and Google are among the most user-hostile organisations on the software landscape in this respect, and until they fundamentally change their approach, I don't see this situation improving. Meanwhile, their users, and those of us trying to develop web sites and applications to support those users against a constantly moving and frequently breaking target, are the ones who lose out.
For the record, I debated whether to post this. Ranting on public forums is rarely constructive. But in this case, I think the presumption that it takes much less time to file good bug reports than to find workarounds must be challenged, or the situation isn't going to improve. For someone who isn't a regular contributor, following the guidelines to file a bug report in the way that Mozilla and co would like isn't the five minute job it should be, it's closer to a five hour job, and I think sometimes the developers/regular contributors/bleeding edge alpha testers don't even realise that.
I agree with you. I work in software development and have followed Mozilla's bug tracker for over 5 years, and i STILL don't really understand how to use Bugzilla.
On the dozen or two occasions that i've filed a ticket, probably 1/3 of them ended up being duplicates (which i always search for, but never find, because i don't get how to use Bugzilla's search function), and another 1/3 end up going un-acknowledged. The remaining 1/3 get acknowledged, but i've yet to see one actually get fixed (which i understand is a function of resources and blah blah, i'm not bitter about it or anything).
On one occasion, i was able to locate the source of the bug i had, and, even though i had absolutely zero experience with 'lower-level' languages like C++ at the time, i decided i would fix it. I pulled trunk and managed to build it, which was an enormous affair, let me tell you, and in the end i had a fix. I submitted the patch, but was told that i would need to put it through their testing system. I was given a link to a wiki, but this entire process was just absolutely beyond me at the time and i bailed on it. I just wanted to help them fix a bug. The bug is still there to this day.
To Mozilla's credit, they did (do?) have a user-feedback extension with the beta/Aurora builds that allows regular users to report bugs in a simple manner directly from the browser. You'd pick 'Firefox made me sad' and then just type what the problem was and click submit. However, i have no idea where these bugs go, how closely they're looked at, or how successful they've been in reducing bugs.
Speaking as an individual (not representing my employer, yadda, yadda)
> The bug trackers involved are among the most user-hostile and over-complicated interfaces I have ever encountered.
Please provide detailed feedback why that is. If you work with any given system for a while, you develop blind spots for its flaws, and so the devs on the respective projects are likely not even aware of the hideousness. (Except "oh, it was always that way")
> Developers[...] with at least three different flavours in the works [...]
If you have a repro case and a version number, that's more than enough. Yes, testing with latest is appreciated (and at least Chromium allows Canary side-by-side with your stable), but the repro case is what really matters.
If you have that, it is infinitely easier to actually react to a bug than anything filed that's just a vague description.
> Quality of implementation is obviously not the priority
It is a very high priority, in my perception. File a bug, please. The devs might not get to it directly - but that's due to the fact there's a lot of stuff that needs to be done.
> Ranting on public forums is rarely constructive
You posted more than a rant - you mentioned several existing pain points. It's important for all browser devs to hear about them, so thank you for doing so. And the more info you give on what exactly goes wrong, the more likely it is to actually get addressed.
I think I should start with an apology to the Chrome team, because I realised after posting that I was actually thinking of the Chromium issue tracker and its Bugzilla counterpart, not the Chrome reporting mechanism under "Tools" and then "Report an issue...". The latter actually seems to be a very good starting point for reporting a problem.
Regarding the more challenging bug trackers, I think it might be instructive to consider those systems in the same light we would consider the kinds of web start-up we often discuss here on HN. We talk about having clear calls to action, and not forcing someone to sign up with their e-mail address before they can even see what they're getting themselves into, and the psychology of giving users clear choices and avoiding analysis paralysis. This is all good stuff, and it's just as applicable to writing a user-friendly bug reporting system as any other web app.
Take a look at https://bugzilla.mozilla.org, which is the top hit for the search term "Report a Firefox bug". On that page, there are four big options, any of which might be what I should do first if I want to help. There's also a text box, which has a label similar to one of the big options, a bunch of separate related links next to that box that again seem to overlap with the big buttons and additionally suggest that I might need some sort of plug-in to do something. There's another search box in the menu across the top of the screen, along with about 10 other links. There's another row of links below all of that lot.
Now go and look at http://www.google.com, which is what I used to find that page. Except that I didn't really even do that, because the one relevant box was right there in my browser already.
Suppose I don't immediately give up, and I decide that "File a bug" is the most promising option on the Bugzilla page. Clicking through there, it seems that before I can find out whether I'm in the right place, I need to deal with a page prompting me to log in. At least, I think I need to log in, because there are form fields that look like the ones I should use, on top of the 4 other text boxes, 4 buttons that look like buttons, 1 other button that doesn't look like a button, 2 checkboxes, and 20 links also on the page. However, as a new user, what I actually need is to create an account, which is accessible via a link set in tiny, low-contrast text in the small print several lines below the login form.
OK, so I've spotted that link, and I click through to create an account. The next page does have a lot of nice, large, clear text. In fact, it has a whole load of links that I can use if I need help on one side of the page and a load of things for me to read on the other side. This is a classic analysis paralysis situation: if I'm here to file a bug, do I want help or am I trying to help? At the bottom of the right-hand column is, in smaller text, a place I can put in my e-mail address to create an account, under scary-sounding wording about alternative e-mail addresses (which have something funny about them that I need to follow a link to understand, apparently) and about confirming things work before I can get on with doing anything useful.
At the bottom of the left-hand column of that page, under the generic heading "Feedback", is what actually might be the most promising link: if you click that and then click "Firefox makes me sad", and you ignore repeated attempts to divert you elsewhere, you get to what seems a useful page: a box to type in a problem and a place to add a link to the page where it happened.
Obviously I'm deliberately describing the system critically here, and I'm picking on Mozilla rather unfairly since Bugzilla is hardly the only bug tracker with usability problems. However, I hope that by being specific, I am making the point about how difficult it would really be for an average user to report a problem. I've taken more than half an hour to write this message, looking through numerous pages on Bugzilla to get this far as if I were a new user, and I haven't even got to the main bug reporting page yet.
Just to finish the story, the actual page itself has far too many options for any non-expert user to be confident with it as well, even for professional web developers who use the system somewhat frequently, but this doesn't matter all that much if most people never get that far anyway.
Perhaps the most unfortunate thing of all is that despite being willing to help, the first time I ever said all of that was in response to an HN post where someone seemed genuinely interested in why I thought there was a problem. I honestly wouldn't know where else to send it, or whether someone somewhere might care enough to do something about it if they knew. Maybe if it's here it'll filter back in some useful way at least.
Just on a sidenote: similar to Chrome, Firefox does have a user-friendly (almost to the point of belittlement) feedback process readily available via Help - Submit feedback... It leads to https://input.mozilla.org/en-US/feedback.
I was a little surprised that the submitted feedback is publicly visible at https://input.mozilla.org
I agree with the previous comment; optimization is fun for programmers but in practice, all the modern browsers are more than fast enough already.
The biggest problem I have is with browsers sometimes failing to wordwrap text. That should be trivial to fix, compared to the effort that's being spent on esoterica (wordwrap is already implemented after all, it must be just something in web pages messing it up). I did file a bug report for this with Chrome, a year or two ago, but to no avail thus far. Is there any way to get it into the pipeline for being fixed?
>In Firefox's case in particular, users are often asked to create a new profile, which AFAIK still requires command-line hackery and, from direct personal experience, carries a significant risk of corrupting or destroying your regular profile even if you follow the instructions to the letter. This is insane.
I have reported a fair amount of bugs to Mozilla over the years, and have never been asked to do this.
You seem to confuse what is mandatory with what will simply be helpful. All the things like creating clean profiles, producing test cases, are things that expedite progress on a bug. If it's a real, reproducible bug, it'll be looked into regardless of those things. And if it's not easily reproducible, exactly how much time should the developers spend before they move on to another bug that is?
> As I said, I'm a professional software developer, and as such I appreciate the value of good bug reports and feedback from users, and I hate it when users make blanket criticisms. But Mozilla and Google are among the most user-hostile organisations on the software landscape in this respect, and until they fundamentally change their approach, I don't see this situation improving.
This seems like a strange comment.
What other large-scale software do you use on a day-to-day basis where you can even find and report bugs, instead of just sending them into some company's /dev/null and hoping they eventually get fixed?
It's not only the difficulty in reporting bugs that is the problem, it's the rapid release cycle and the frequent regressions and bugs it seems to introduce. I don't know which of the other "large scale software" I use would have the same difficulty reporting bugs, because nothing else I use breaks basic functionality every few weeks.
However, for smaller-scale things, I've recently been in touch with a couple of on-line services I use and the manufacturer of a hardware device I bought about various problems. I interacted one-to-one or even one-to-many with some of their technical folks, with very satisfactory service in each case. No doubt there is a bug tracker somewhere on the other side and some new records got created in it along the way, but all I had to do was send a simple message describing my problem and then answer their rapidly-provided follow-up questions. I was happy to make the effort to do that, because they made it easy for me to help and gave me a feeling that my effort was doing something useful.
I would be just as happy to contribute usefully to supporting browsers if it was just as easy to do it. I want to help, and for a long time I really tried to do so. But I live in the real world, and I literally can't afford to spend half a day when no client is paying me just to do someone else's testing for them every time I find a bug in a major browser. Once in a while I could cope with, in the spirit of helping out. Several times a month just isn't practical.
This level of insanity is pervasive. And I suspect it could be mitigated by better reporting tools.
One of the things that Microsoft did really well was to enable their system to capture enough information to reproduce a bug based on what the customer was currently running. Seems like browsers could benefit from that too.
Agreed; perlbug is another great example. It would take the heat off the user and make developers more productive: click "send config" and blam, everything needed to reproduce the exact environment is in the message, without any typos etc.
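The perlbug idea generalises: most of a "send config" button is just serialising environment facts the machine already knows. A toy sketch in Python (the field names here are my own invention, not any browser's actual report format):

```python
import json
import platform
import sys

def capture_environment():
    """Collect the reproducibility basics a bug report usually needs.
    A browser's hypothetical 'send config' button would gather the
    analogous data (version, channel, OS, extensions) automatically."""
    return {
        "os": platform.system(),
        "os_release": platform.release(),
        "machine": platform.machine(),
        "runtime": sys.version.split()[0],
    }

# Emit it as JSON, ready to paste into (or attach to) a bug report.
print(json.dumps(capture_environment(), indent=2))
```

The point is that none of this requires any effort or accuracy from the reporter; the tool supplies the environment, and the human supplies only the description and the steps to reproduce.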
So I agree that there is a great deal of accidental complexity associated with filing bugs, and browser developers could certainly do better here. I also see that there is a certain disconnect between the perceptions of a browser developer/QA, for whom "stable" version means "outdated version" and those of a normal user. On the other hand, if you do web development work professionally (and I appreciate that you personally might not), you should seriously consider running at least the beta versions of browsers on a day-to-day basis, and consider testing in something even newer. If you don't, it's much more likely that a browser change/bug will regress your site for tens or hundreds of millions of users.
Having said that, I would much rather have a bug report based on an outdated — sorry, stable — version of the browser than no bug report at all.
As for the rapid release cycle, my feeling is that process dramatically decreases the bug count. It forces everyone to keep the code in a roughly shippable condition all the time, and provides early feedback if a problem is introduced. I don't have data at hand to backup this hypothesis, but I wouldn't be surprised if someone at Mozilla/Google did.
It is, however, unequivocally false that no one cares about the quality of browsers or that all the focus is on features. Browser vendors have suites of hundreds of thousands of automated tests that every commit must pass. They require tests to accompany code changes. There is a big emphasis on correctness in the initial implementation and on maintaining that quality through subsequent releases. However, as you note, this doesn't always go to plan.

Submitting tests to the W3C is one way to help make things better. Not only will these automatically get picked up by vendors for their automated test systems; as time goes on, we expect it to become more visible who is passing which tests, so there will be pressure to compete on correctness as well as performance.

Of course, much like performance benchmarks, if test conformance rates become a big visible thing, there is a temptation to start submitting large numbers of tests for narrow areas where you know that you do well and the competition does badly. Also, not all tests are equally important. So some care is needed in the way conformance data is presented, to prevent people jumping to conclusions from the fact that Firefox 132 scores 89% on the DOM MindControl API but Chrome 140 gets only 87%. Nevertheless, especially for professional web developers, submitting test cases for browser bugs is one of the most effective things you can do both to get the bug fixed and to advance the state of the web platform.
>On the other hand, if you do web development work professionally (and I appreciate that you personally might not), you should seriously consider running at least the beta versions of browsers on a day-to-day basis, and consider testing in something even newer. If you don't, it's much more likely that a browser change/bug will regress your site for tens or hundreds of millions of users.
Again, I appreciate what you're saying, and it seems a reasonable request. However, I think it's unrealistic for a lot of developers.
For example, I work on multiple web-based projects in various capacities. Right now, just to test changes on one of my sites before releasing them, I run manual tests on about half a dozen desktop browsers and several mobile devices. (Automated testing is great, but there is only so far you can get with scripting, particularly when it comes to things like rendering glitches and responsiveness.)
If I put a change live on one of my sites, or worse, hand it over to a client or ship a web app embedded in network-enabled hardware, I shouldn't have to keep checking for blatant regressions in basic functionality every few weeks. In fact, I didn't have to, until the teams behind various popular browsers and plug-ins decided that the major/minor/point releases the entire industry had used successfully for decades weren't sufficient, and that pushing out whatever was ready at an arbitrary six-week interval was a better idea.
>As for the rapid release cycle, my feeling is that process dramatically decreases the bug count. It forces everyone to keep the code in a roughly shippable condition all the time, and provides early feedback if a problem is introduced.
That's an admirable intent, but it clearly doesn't always work in practice. For a vast and complicated piece of software like a modern web browser, slipping an occasional regression into a corner case is all but inevitable, but your-browser-doesn't-work-at-all scale howlers have also been pushed out on occasion.
I suspect what this really exposes is an over-reliance on automated tests and a false sense of security if all of the tests pass. With development at this kind of pace, and with the distributed and partly volunteer-led development model for these browsers, perhaps there just isn't time for more comprehensive reviews and high-level planning/architectural work and so on, but there is a reason those things have value.
As for decreasing the bug count, I fear what is actually happening is that there are so many bugs now, and the target moves so fast, that many of us have simply given up on reporting them. Certainly in our project bug trackers here, Chrome used to be several times worse for browser bug issues, but Firefox spiked after moving to rapid releases and has held the dubious honour of being the worst major browser for quality more often than not since then. Several of these projects use modern but mainstream features from HTML5 and CSS3; I'm not talking about bleeding-edge or obscure features here.
>It is, however, unequivocally false that no one cares about the quality of browsers or that all the focus is on features.
Please notice that I was very careful not to claim that. I do, however, feel that as a matter of priorities, quality of implementation often takes a back seat to ticking feature boxes and to performance at the moment.
Again, this is based directly on our own bug trackers. I could have described canvas and SVG issues not far from the kind of things simonsarris mentioned earlier in this thread, for example, and others in CSS3 backgrounds and borders, CSS3 animation, HTML5 media elements, and so on. In contrast, while the pace of development for IE is slower, it has far fewer blunder-type issues in basic functionality these days, in our (obviously anecdotal) experience.
As a final comment, I'm interested in these W3C tests you've mentioned a couple of times. I had no knowledge of anything like that before this thread. If you can share some sort of authoritative/definitive link to how that works, I'd appreciate that, as it sounds like I should learn about it.
That's odd, considering Firefox clones the antialiasing algorithm used by the host OS when rendering type. Could it be that the OS font rendering settings are to blame, and not necessarily Firefox?
In-canvas kerning seems to be affected by the system rendering. On Arch Linux, using Firefox 22 and the Infinality patches, I get http://imgur.com/eRVru67 . IE's kerning is better when scaled up, but Firefox looks just as good or better at regular or scaled-down sizes.
Just checking in to say that your kerning example looks fine on Safari on OS X (latest stable versions). Which is not to say it's perfect, but there's nothing egregious (no overlaps, no large gaps).
Despite the name, this program doesn't in fact use WebGL; it runs using three.js's 2D canvas renderer (just check the source).
Additionally, even the results they got there are suspicious: I got the completely opposite result on my system (Windows 7), with Chrome faster than Firefox by a large margin:
They got 777 on Firefox vs 437 on Chrome, I got 290 on Firefox vs 441 on Chrome (this is fully in line with my everyday experience, doing browser graphics).
It also shows the rendering artefacts of the 2D canvas renderer; the console log confirms it runs CanvasRenderer, and the performance numbers are not consistent with WebGLRenderer (which should manage on the order of 5,000-7,000 cubes at 60 fps).
Disclaimer: I did write most of WebGLRenderer's code.
Oops, I was looking at the fill rate benchmark, but I see that the cubes benchmark does in fact use the CanvasRenderer, and this is the benchmark Tom's Hardware reports to have used. Sorry for the noise.
I just switched from Chrome to FF. The performance is close enough, and good enough for both, that it's a secondary consideration at this point.
I switched because I got tired of hearing Chrome constantly accessing my hard drive. I wound up going through the list of Chrome switches here (http://src.chromium.org/svn/trunk/src/chrome/common/chrome_s...) to try to alleviate it. Some things helped but not to an acceptable level. I use W7. Using procmon I could see Chrome constantly re-reading keys from the registry and writing to temp files even though caching and pre-fetching were disabled.
I was also concerned about Chrome wearing out my laptop's SSD. Even though I couldn't hear it, I could see the drive's lifetime allotment of reads and writes being used up.
On a similar note, my PC's hard drive starts making crazy noises as soon as Windows 7 goes into screensaver mode. I've turned off every disk-related option I can find and it's still doing... something. Maddeningly it stops as soon as the screensaver is interrupted.
I'm developing some compute-heavy JS apps, and Chrome still pulls ahead quite significantly (by as much as 5x) in many cases. I've filed JS perf bugs with Mozilla that have been accepted but seem to be quite low on the priority list. Artificial benchmarks don't always tell the full story :(
In case any Mozillian JS dev is lacking stuff to work on ;)
Chrome also phones home all the time, which is why I stopped using it. I don't notice much performance difference between Firefox and Chrome anyway, and always use Noscript, Adblock+ and Disconnect.
Nope, it's a common issue with Firefox, although it has been greatly reduced in the last year or so.
The project to fix all this is called e10s: moving content, UI, etc. into separate processes, along with OMTx ("Off Main Thread Everything"), where the Firefox developers try to do everything in an async style and move it off the main thread.
Progress has been slow, but at least they are working on it.
matthiasv, I don't know what your secret is (only 1 tab open in 1 window at a time, JS disabled, or what), but this is a very real issue. I have Firefox installed on all 4 of my machines: work desktop, home desktop, laptop, and a netbook, 2 of which are fresh installs. On all 4, when I visit multiple HTML5-heavy sites, FF's UI becomes unresponsive. Chrome and newer versions of IE do not suffer from this, and I'm pretty sure it comes from FF running in a single process.
Chrome and IE have their issues too, like thrashing memory a lot sooner than FF, which is why FF is still the default browser on my laptop and netbook despite its unresponsive UI under heavy load.
Are you visiting websites containing the Flash Player plugin? If so, try killing the plugin-container process. Flash continually gets less stable, and Firefox can only do so much to compensate when the plugin hangs (plugins are intentionally allowed to insert themselves into the message loop, which makes it possible for them to hang the browser).
Do you have any add-ons installed? I'm not shifting the blame from Mozilla to the add-on developer, but it might pinpoint the problem you found. If add-ons are using slow APIs, then Mozilla can do something about it. For example, some synchronous add-on APIs have been replaced with async versions.
I gave up on Firefox a year or two back because of performance concerns - not raw speed but ram consumption (leading to thrashing when tabbing between ram-intensive processes) and its poor single-threaded freezing problems. Nice to hear that FF is getting better and I'll be able to go back to it - its extension community is far better than Chrome's, and Firebug is without peer.
Frankly I think Firefox performs a lot better than Chrome for memory usage now. After a few days of having Chrome open it's usually using between 3 and 6GB of RAM (Usually Facebook is using a gigabyte so killing that tab frees up a lot) and this is with <50 tabs open. Friends who use Firefox end up with ~1GB of RAM usage in a similar timeframe.
It's funny, I was using Chrome for about a month (when they finally released a Linux version) but was forced back to Firefox mainly because of massive memory consumption (because I keep a lot of open tabs). The other problem I had with it was lack of AdBlock Plus back then - without it the web is unusable.
I'm really surprised that no one in these comments or in the thread on the actual story[1] (instead of this... whatever the equivalent of blogspam is, but for Slashdot) has mentioned the horrible averaging methodology of this benchmark suite.
The individual tests aren't even a problem (though I would maybe pick some more and/or different ones, especially their somewhat odd benchmark choices in performance and graphics), but the averaging makes no sense at all.
Averaging time-based benchmarks is problematic enough (it feels more right, but it still isn't a good idea), but how on earth do you convert a "number of times we had to refresh a page" result into a number that you can then average with a "standards compliance" count and a measure of memory efficiency? Even if you normalize (which it doesn't look like they did, judging by the output numbers, but maybe they did), the numbers still aren't comparable, because no account is taken of the relative magnitude of their effects.
e.g. if you have a test of "does the browser have a konami code easter egg?", it doesn't matter if the geometric mean is less sensitive to outliers, because it still doesn't make any sense to take an average of that with "the playback framerate of an HD Trololo video" and then pretend that the average provides any insight. And it's even worse if you then compare that average to other crazy averages!
At best you can look at relative ranking, which they actually mention, but then they proceed to give exact numbers for that ranking. There's no information about "betterness" in there, though, except if you divide the numbers up again to say "these points came from the win in test A, and these from test B"... at which point you just have the original tests again. Better to count wins/no-wins and use that as your final result. Then at least it's obvious that if some tests are much less trivial than others, you'll have to assign arbitrary weights for the final result, as opposed to having the arbitrary weights implicit in the tests themselves.
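The win-counting idea above is simple to sketch. Here is a minimal version (the test names and numbers are made up for illustration, not taken from the actual benchmark):

```javascript
// Each test records both scores plus whether lower is better (time-like
// and memory-like metrics) or higher is better (throughput-like metrics).
const tests = [
  { name: 'startup (s)',   lowerIsBetter: true,  firefox: 5.8,  chrome: 6.0  },
  { name: 'BrowsingBench', lowerIsBetter: false, firefox: 6646, chrome: 6700 },
  { name: 'memory (MB)',   lowerIsBetter: true,  firefox: 900,  chrome: 1400 },
];

// Tally outright wins per browser instead of averaging incomparable units.
const wins = { firefox: 0, chrome: 0 };
for (const t of tests) {
  const ffBetter = t.lowerIsBetter ? t.firefox < t.chrome
                                   : t.firefox > t.chrome;
  wins[ffBetter ? 'firefox' : 'chrome'] += 1;
}
console.log(wins); // { firefox: 2, chrome: 1 }
```

The point is that the weighting question becomes explicit: a win in a trivial test counts the same as a win in a hard one unless you deliberately weight them.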
Sorry for the rant :) This is a good recognition of Mozilla's hard work, though notably they've been winning many of these tests for a while now (especially the memory ones), but it would be nice if tomshardware could drop the basically meaningless overall scores (or we could just collectively ignore them).
I agree the need to produce a single overall number is nuts. We need a better setup for comparing things, somehow...
That said, if you _do_ need to produce one in a situation like this, then the geometric mean is the least bad way to do it, because it does not depend on the normalization of the individual tests at all: if you have k tests and change the workload for one of them by a factor of N, then the average changes by a factor of N^{1/k} no matter what the actual numbers are. This is why it's used so commonly in benchmarks that want to produce a single number.
Of course this very property is what makes geometric-mean-based benchmarks so gameable: a 20% speedup of an already-fast operation (say eliminating a single instruction) is worth as much for purposes of the benchmark as a 20% speedup in something that's really slow. The result is that browsers keep micro-optimizing already-fast things to win the benchmark game....
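The scale-invariance property described above is easy to check numerically. A quick sketch (the per-test scores below are invented for illustration):

```javascript
// Geometric mean of an array of positive scores.
function geoMean(xs) {
  return Math.exp(xs.reduce((s, x) => s + Math.log(x), 0) / xs.length);
}

// Hypothetical scores for k = 4 tests, in wildly different units.
const scores = [5.8, 6646, 0.89, 120];
const k = scores.length;

// Rescale one test's workload by a factor of N = 2...
const rescaled = scores.map((x, i) => (i === 1 ? x * 2 : x));

// ...and the geometric mean changes by exactly N^(1/k), regardless of
// the magnitudes of the other tests.
const ratio = geoMean(rescaled) / geoMean(scores);
console.log(ratio, Math.pow(2, 1 / k)); // the two values agree
```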
So much of any comparison like this is about performance, but I really don't think that's as important as it used to be, and that's coming from someone who generally uses slow computers and optimizes software until they run fast.
I'm typing this on a bit of an exceptional example, a 2.6 GHz Northwood Pentium 4 with 1 GB of single-channel DDR-400 RAM. The one saving grace is that it has an SSD, but I put the swap file on the spinning drive (which is modern). It's running Linux Mint 14, Xfce edition, with a handful of minor OS-level optimizations. Firefox 21, with a fairly standard configuration, flawlessly handles a dozen or more tabs on a daily basis. It's even pretty snappy, more limited by my internet connection (1.5 Mb DSL) than by the hardware it's running on.
If this sorry excuse for a computer does that well, are the relatively minor differences in performance between browsers going to be a big deal on modern hardware? There will be edge cases, such as the people who have hundreds of tabs open at a time, but for the average user I'm having trouble envisioning that.
The things that make a difference anymore are very tough to quantify in tests like Tom's Hardware did. I will always have Firefox around because I think Mozilla actually cares about privacy. I use Presto on my phone because it's the only one I've found that renders things how I want. Many people are tied to a browser because of extensions. Standards support doesn't matter until you find a page where a browser doesn't work, and those pages will be different for different people. Browsers can be rock solid on one computer and worthlessly crashy on another.
I don't think a round of benchmarks has meant anything to me in browser selection for a long time, and when it did I did them myself so as to account for the computer they were running on. I choose by trying to use a variety of them for a while, and a winner always emerges quite quickly.
I wish they (FF) would also spend some time on the small annoyances:
* Lack of a restart button. Now when I upgrade FF, I have to "kill -9" it from my terminal to get it to restore windows upon restart.
* Memory leaks. Leave yourself logged in to Facebook for a few days and watch it take up 2GB of memory.
* No way to easily filter sites with cookies like you can in the next tab over, where you manage sites with saved passwords. Why is this? Does FF secretly want you to not muck with cookies?
Now, I'm sure there are addons and plugins for the above. But I should _not_ need addons for basic UX. Save the addons for fancier stuff.
* For the restart after upgrade, either it is a bug, or you have not seen the little "restore tabs from last time" button in the lower right part of the browser.
* I have not experienced that, I don't use Facebook.
* For the cookie stuff, you'll be pleased to learn about the about:permissions feature (type it in your address bar).
Interesting, thanks. But while it's open, the list of sites keeps growing for me; apparently various shady/malware sites get added at the bottom continuously. I haven't managed to find out why this happens, but I'm guessing it may have something to do with the "safe browsing" feature: perhaps all sites in the blacklist get added to about:permissions for some obscure reason. Any ideas?
>Leave yourself logged in to Facebook for a few days, and watch it take up 2GB of memory.
It's entirely possible for a website to have a memory leak. (Or, at any rate, consume an ever increasing amount of memory.) If closing the tab and opening it again frees the memory, then there might not be anything FF can do about it. (It's also possible you've got an addon causing the problem.)
Facebook indeed has a long history of really dumb memory leaks that browsers can't do anything about, combined with memory leaks that are exacerbated by browser bugs and require improvements to garbage collectors and stuff. It's rough.
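A page-side leak of the kind described is often nothing more exotic than an ever-growing cache the site never trims. A minimal sketch (the names here are invented, not Facebook's actual code):

```javascript
// Each simulated refresh appends newly fetched data to a cache that is
// never trimmed, so the page's heap grows for as long as the tab is open.
// The browser can't reclaim any of it: every entry is still reachable.
const cache = [];
function onFeedRefresh(items) {
  cache.push(items); // old entries are never evicted
}

// Simulate the feed refreshing once a minute for a day.
for (let tick = 0; tick < 1440; tick++) {
  onFeedRefresh(new Array(100).fill('feed item'));
}
console.log(cache.length); // 1440 batches retained, none collectable
```

Since the page itself holds live references, a garbage collector is powerless here; closing the tab is the only thing that frees the memory.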
You can also just go into the settings, and set "When Firefox starts:" to "Show my windows and tabs from last time".
And I don't see why a "normal" user should want a default button in the UI to restart the browser.
>Memory leaks
The situation is much better now, there still are some leaks here and there, but nothing huge, at least with my usage (I usually restart my browser every 1-2 days to update to the latest Nightly, and have about 50 tabs open).
I'm not entirely sure what the third point is referring to (I just let the browser handle cookies), but in the Privacy tab of Options, you can view and remove individual cookies and search by site. So maybe that helps with what you want?
I think Facebook itself has the memory leak: I never leave it open for more than about an hour at a time, but I've heard from people that leave it open for days that it can take up between 1 and 2 GB; some of these people use Firefox, some Chrome and some Safari.
In "general" under "preferences", you can select whether the browser restores tabs from the last time it was exited. Maybe that somehow got set to something besides "Show my windows and tabs from last time"?
I just learned something my math teachers never told me from a tech blog:
Geometric mean is useful for comparing when the expected range or units of values is different. For example, startup time is measured in seconds, but BrowsingBench numbers are things like the unitless 6646. The arithmetic mean would fail to "normalize" these values and give disproportionate weight to some over others; the geometric mean is one way of trying to account for this.
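To see concretely why the arithmetic mean fails here while the geometric mean copes, consider two hypothetical browsers scored on startup time in seconds and a unitless benchmark number (all figures invented):

```javascript
function arithMean(xs) { return xs.reduce((a, b) => a + b, 0) / xs.length; }
function geoMean(xs) {
  return Math.exp(xs.reduce((s, x) => s + Math.log(x), 0) / xs.length);
}

// [startup seconds, unitless BrowsingBench-style score]
const browserA = [6.0, 6646];
const browserB = [3.0, 6500]; // twice as fast to start, slightly lower score

// The arithmetic mean is dominated by the large unitless number, so halving
// the startup time barely registers:
console.log(arithMean(browserA), arithMean(browserB)); // 3326 vs 3251.5

// The geometric mean responds to the *ratio* within each test, so the 2x
// startup win shows up clearly:
console.log(geoMean(browserA), geoMean(browserB)); // ~199.7 vs ~139.6
```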
I suspect this was "by a nose". But that's good: I hope to see both browsers trading blows in this little war, leap frogging back and forth in the lead.
To be fair, this Slashdot story covering the original (presumably submitted even though the original story was already discussed here yesterday) mentions the "performance crown", which was indeed "by a nose". You're right about the overall test, though.
I did not find this objective: the test scope was limited, and the browsers tested were not all the latest versions. Not that I use IE, but why test IE10 when IE11 is available? If you are doing a browser-vs-browser performance benchmark, you will not be taken seriously if your test is not objective. In this case I find the sample set skewed.
In all fairness, the sample set included all the "release" builds of the browsers. The vast majority of users only run release builds, so it makes sense to test them when they get released. It also matters because a release build includes all the performance fixes that will actually ship in the product.
These types of headlines are so meaningless. On what benchmark on what platform?
On OS X Safari outperforms both for many tasks (again, YMMV depending on what you're doing.) But that's meaningless for the billions of Windows and Linux users out there.
I wish our industry would move past these kinds of silly headlines. When it first started when Chrome came out there was a massive difference between Chrome and the other browsers, now all of them are very capable and competitive for pretty much anything you want to do.
Tom's Hardware's browser benchmarks are pretty comprehensive. What annoys me about this test and this headline, however, is that Chrome 28, with its new, faster Blink engine, is literally a week away from release, which means Firefox will only get its 15 seconds of fame (or rather, a week).
Why did they make the test immediately after Firefox was out? Or do they repeat the test immediately after each browser version comes out? In that case I'm looking forward to the test with Chrome 28 included.
I'm not saying Chrome 28 will necessarily win in the next one. I just find it a little strange that they didn't wait a bit longer for the next Chrome version before writing a headline like that. It reminds me of those polls that turn out a certain way depending on how you ask the question.
The gap between the FF release and the next Chrome release is 2 weeks at most. The gap between the Chrome release and the next Firefox is 4 weeks. But whatever, I'm not too hung up on this. Good for Firefox, if they can maintain that lead.
But that's my point; the answer is different depending on the platform and the task. The headline is invalid because of that fact. Firefox is not faster at everything, and Chrome is not faster at everything.
In my daily use, I find Chrome to be overall faster and easier on the computer in general. I have recently run into a case during my development where Chrome actually has trouble compared to Firefox ANNNNNNND IE. It involves a large unordered list with several different divs, buttons, and links in it. There is some serious jumpy scrolling compared to both of the others, which surprisingly render smoothly.
This is the only instance I've ever run into where it didn't quite measure up.
The reason I use Firefox is because it has a set of extensions I find useful, and because I like Firefox Sync (end-to-end encryption of bookmarks/history etc.).
Whether the browsers shave a few microseconds off JavaScript performance is neither here nor there for me: but perhaps I'm a weirdo.
Performance is nice, but not enough of a reason for me to switch browser.
I just want to know one thing: when I have only two add-ons in Firefox, why does it take up nearly 200 MB of memory with no pages displayed? IE is taking 38 MB with this page displayed; FF has moved to 208 MB. Firefox is so damn quick to eat memory that it causes my laptop to start swapping, which decreases performance and eats battery.
Doing comparative tests of browsers is OK up to a point but in reality the performance problems on the web are due to the way we build and deploy sites.
I switched to Chrome to see what the speed was like and I came to the conclusion it was a slicker browser for daily use.
But then I reminded myself that some things are more important than raw speed and responsiveness.
Google's interest is tracking and targeting. Chrome actively worked against me in this regard. Firefox, despite its financial ties to Google, put control and freedom onto my desktop.
A bit of occasional sluggishness is a worthy price to pay.
The beer is free at all the browser bars and while I was drinking it, I remembered I liked free as in speech.
Advertising is annoying and rude. Flashing away trying to steal my attention from the activity I am trying to pursue.
"don't do that, come and do this instead"
"skip this ad in 5 seconds" - no thanks
The interesting thing about the New Albion was that it was so completely modern in spirit. There was hardly a soul in the firm who was not perfectly well aware that publicity--advertising--is the dirtiest ramp that capitalism has yet produced. In the red lead firm there had still lingered certain notions of commercial honour and usefulness. But such things would have been laughed at in the New Albion. Most of the employees were the hard-boiled, Americanized, go-getting type to whom nothing in the world is sacred, except money. They had their cynical code worked out. The public are swine; advertising is the rattling of a stick inside a swill-bucket. And yet beneath their cynicism there was the final naivete, the blind worship of the money-god. Gordon studied them unobtrusively. As before, he did his work passably well and his fellow-employees looked down on him. Nothing had changed in his inner mind. He still despised and repudiated the money-code. Somehow, sooner or later, he was going to escape from it; even now, after his first fiasco, he still plotted to escape. He was IN the money world, but not OF it. As for the types about him, the little bowler-hatted worms who never turned, and the go-getters, the American business-college gutter-crawlers, they rather amused him than not. He liked studying their slavish keep-your-job mentality. He was the chiel amang them takin' notes.