Hacker News | new | past | comments | ask | show | jobs | submit | 2010-01-22 | login
Stories from January 22, 2010
1.I'm on a deserted island. How can I tell which plants are poisonous? (straightdope.com)
104 points by yanowitz on Jan 22, 2010 | 62 comments
2.Keeping computers from ending science's reproducibility (arstechnica.com)
101 points by yummyfajitas on Jan 22, 2010 | 62 comments
3.Stop restricting my password - Help these sites get better security. (weakpasswords.org)
84 points by dinkumator on Jan 22, 2010 | 60 comments
4.Rails and Merb Merge Update: Rails Core (engineyard.com)
84 points by wifelette on Jan 22, 2010 | 3 comments
5.WSJ Jumps the Shark (ritholtz.com)
81 points by tortilla on Jan 22, 2010 | 67 comments
6.API Status (api-status.com)
76 points by jmonegro on Jan 22, 2010 | 8 comments
7.PDFs in Pure Ruby (majesticseacreature.com)
67 points by jmonegro on Jan 22, 2010 | 46 comments
8.The jQuery Project launched (jquery.org)
60 points by ronnier on Jan 22, 2010 | 14 comments
9.Hustle (how to learn new stuff) (ihumanable.com)
60 points by ihumanable on Jan 22, 2010 | 10 comments

I am often told that I should keep my nose out of other science domains' business because they know more than I do. However, I think that when they start building their science on top of computers, I start getting a say again. Here's what concerns me about this increasing use of computers:

It seems like the vast bulk of these simulations are iterative, and therefore subject to mathematical chaos. How many of these researchers have any clue what door they are walking through? How many of them know what a strange attractor is? I'm sure the answer is non-zero; I'm equally sure the answer is nowhere near 100%.
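To make the sensitivity concrete, here's a minimal sketch (my own illustration, not from the article): the logistic map at r = 4 is a textbook chaotic system, and two starting points that agree to twelve decimal places decorrelate within a few dozen iterations.

```python
# Sensitive dependence on initial conditions in the logistic map
# x_{n+1} = r * x_n * (1 - x_n), chaotic at r = 4.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-12   # "identical" to 12 decimal places
max_gap = 0.0
for step in range(60):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# The gap roughly doubles each step, so after ~40 iterations the
# initial 1e-12 difference saturates and the two trajectories are
# effectively unrelated.
print(max_gap)
```

Nothing exotic is required to hit this: any iterative model with exponential error growth will amplify a rounding-level difference into a macroscopic one.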

Small errors cascade even if you consider a non-chaotic classical model. (That is, not that there is such a thing as an iterative model that is not potentially subject to chaos, but rather that even if you don't understand chaos you can see that small errors can cascade. Chaos just makes it worse, and weirder.) A simulation will have bugs like any other large program. A non-programmer approaches bugs by banging on the program until it seems to generate expected results. (About 50-80% of programmers do that too.) Therefore, many of these simulations are simply reflections of the simulator's expected result, due to the researcher's selection mechanism running on the results of the simulations they run. How do we verify that this is not the primary factor in the result of the simulation? This need not be conscious. It need not be ideological, either; I can easily envision a simulation that "should" return a boring or trivial result being monkeyed with until it produces something "interesting", because the simulators think the boring result should not obtain.

A lot of algorithms you can use in these simulations are fundamentally unstable when used iteratively; some exacerbated by floating point errors, some mathematically unstable even with perfect real numbers. How many of these simulations use something unstable without even realizing it, given that it could take a professional mathematician to work out whether that's the case? Even algorithms thought to be stable and reliable can fall apart under pathological situations, and one of the odd things about mathematics is just how often you end up hitting those pathological situations when programming; far more often than it seems like should be the case.
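As one concrete instance (mine, not from the article): the one-pass variance formula E[x²] − E[x]² is algebraically exact but numerically unstable, because it subtracts two nearly equal large numbers; the mathematically equivalent two-pass formula is stable.

```python
# Catastrophic cancellation: two algebraically identical variance
# formulas, one numerically unstable, one stable.
data = [1e8, 1e8 + 1, 1e8 + 2]   # true population variance is 2/3
n = len(data)
mean = sum(data) / n

# Naive one-pass formula: E[x^2] - E[x]^2. Both terms are ~1e16, so
# nearly all significant digits cancel in the subtraction.
naive = sum(x * x for x in data) / n - mean * mean

# Two-pass formula: square the (small) deviations from the mean first.
two_pass = sum((x - mean) ** 2 for x in data) / n

print(naive, two_pass)   # the naive result is wildly wrong
```

The point is that both versions look equally correct on paper; only a numerical-analysis eye catches that one of them throws away precision.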

In information theory terms, a simulation can not contain more information than the sum total of the input data and the content of the simulation algorithm. How many simulators understand the full implications of that statement? I sure don't understand the full implications of that, but what I do understand makes me pause a bit. Very simple simulations with rules that can be verified and initial data that is very solid I can deal with; for instance, I like the cellular-automata-based social theories that show the spread of information or political views or something, especially when it is clear the researchers understand that it's only an approximation. But as the initial data starts getting sketchy or the simulation grows enormous, I start getting nervous about the actual information content of the output. Just because the output appears to be information doesn't prove that it is. It is vitally necessary to be able to check the simulation against real data. For instance, physical simulations of, say, cars crashing can be verified. How many simulations can actually be verified, though? Frequently the reason computers were reached for in the first place is the inability to do the real experiment. Any simulation that can't be verified should be presumed worthless by default. How often does that happen? (It's 20-f'ing-10 and "the computer said it, it must be right" still runs rampant through our culture....)

And of course there's the whole reproducibility issue, where the absolute bare minimum for science would be to publish the full simulation program, all data, the necessary invocation and compile instructions to bring the two together, and all necessary information to understand the input and the output. Clearly, this is not something that fits in a journal paper, but how often does this happen at all?

No, I am not referring to any specific discipline here and in particular I'm not actually referring to climate science. I'm nervous about the whole movement towards simulations in general.

Note that I'm not reflexively against the idea. Meet these bars and I'm happy: give me enough data for reproducibility, and verify that your simulation actually corresponds to reality. (Many physical simulations fit in here.) But as more disciplines jump in I am concerned that these bars are not well understood, and I'm seeing ever more press releases about simulations that can't possibly meet these bars.

11.How to sue your employer and win (unposto.wordpress.com)
58 points by tallyh00 on Jan 22, 2010 | 22 comments
12.SF Mayor on why open source is the new software policy in San Francisco (mashable.com)
55 points by anigbrowl on Jan 22, 2010 | 27 comments

I am often told that I should keep my nose out of other science domains' business because they know more than I do.

I've only heard such statements coming out of a few fields: math education, labor economics, climate science and psychometrics of race/gender. You should ignore such statements; they are nothing more than an attempt to bully you into accepting received wisdom from activists with a PhD.

As an actual scientist (rather than a political activist with a PhD), I strongly encourage you to stick your nose into any or all of my fields (quantum mechanics, PDEs, medical imaging, complex analysis, prediction markets). If you come up with dumb ideas, I'll even explain why they are dumb, rather than just demanding that you leave things to the experts.

14.Google Chrome's H.264 support not true "free" software (ianweller.org)
54 points by cpearce on Jan 22, 2010 | 82 comments

My all-time favorite answer to this question is from a book I read in Brazil. It was a jungle-survival guide from the Brazilian military:

First of all, find a monkey. Follow the monkey, and eat everything the monkey eats. If possible, eat the monkey too.

16.OfficePod - Tiny, Minimalist Office Space (officepod.co.uk)
48 points by rgrieselhuber on Jan 22, 2010 | 53 comments
17.Lisp Quotes (paulgraham.com)
47 points by alrex021 on Jan 22, 2010 | 50 comments
18.Vijual Graph Layout Library For Clojure (lisperati.com)
46 points by swannodette on Jan 22, 2010 | 6 comments
19.Steve Jobs Is Building AppleWorld - And Google's Running Scared (seekingalpha.com)
46 points by profquail on Jan 22, 2010 | 76 comments
20.The Chess Master and the Computer (by Garry Kasparov) (nybooks.com)
44 points by michael_nielsen on Jan 22, 2010 | 5 comments
21.Ban on unscientific "bomb detector" after $85M sales (bbc.co.uk)
44 points by jodrellblank on Jan 22, 2010 | 28 comments
22.Committing Location Based Service Suicide (andrewhy.de)
44 points by davidhoffman on Jan 22, 2010 | 28 comments
23.JavaScript grid editor: I want to be Excel. Updated (open-lab.com)
42 points by alake on Jan 22, 2010 | 7 comments

Apple quickly realized that apps would one day overtake .coms. They knew that mobile devices would overtake PCs.

Indeed. Which is why they spent a full year denying that the iPhone needed native apps and trying to shut down all the jailbreak work before finally caving and shipping an SDK with the next rev of the hardware.

Apple got lucky with the iPhone; it was the right product at the right time, and the App Store was a once-a-decade gold rush. But it certainly wasn't in Jobs' master plan for world domination.

I kept looking for insight in this article and not finding any.

25.Data Mining competition for predicting drug reactions (orwik.com)
40 points by bucanrabi on Jan 22, 2010 | 4 comments
26.Explore GitHub (trending/featured repos and podcasts) (github.com/explore)
39 points by pjhyett on Jan 22, 2010 | 10 comments

It's a complicated issue, so here's a little background (I have a Master's in Urban Planning, so I've read a lot).

Streetcar lines (and subways in some places) were profitable businesses, just like railroad lines. But there were a few features that we don't have today.

First, it was a new mobility technology, so it opened up land that had been too far away to be developed. There is no such land now in metro areas because highways and cars have made all areas equally accessible.

Second, they were a real estate play as much as a transportation play. Because they opened up new land, the lines tended to go to greenfields where the streetcar companies and their allies owned or could buy land. Take a look at the Brown line in Chicago and watch how it winds - that was a land acquisition issue. This wouldn't work now because a rail line doesn't increase the value of land enough since so much is accessible by car.

Third, people rode trains a lot more then than people ride them even now. These trains were extensions off of a very dense, centralized city. Technology and social changes reduced the number of daily rides. For instance, refrigerators meant that women didn't have to ride to the market every day. Worker benefits (like the six- and then five-day work week) meant that workers didn't ride as often. As shopping and employment decentralized, people didn't have to ride to the city as often. And when people got cars, they had an alternative to the train.

So what can we learn from history and contemporary transit to make transit more valuable today?

First, there must be attractions at both ends so the fixed costs in tracks and cars can make money both ways. Early streetcar lines often had amusement parks at the terminus to promote two-way travel. The Las Vegas monorail is a decent modern version of this - there's something at every stop. Transit lines that end in the suburbs at a big parking lot will be underutilized by definition.

Second, land use matters. All of the streetcars and subways were built before zoning, so the market built what the market could bear by transit, and buildings could be razed and rebuilt bigger if demand grew. Housing in transit-rich cities, and near light rail in cities with new transit systems, is more expensive because zoning restricts how much can be built. In addition to maximum height, massing, and lot utilization, there are also minimum parking requirements that make every house/condo much more expensive and unaffordable to the people who would use transit the most. Take a look at the area around the transit stops in Arlington, VA for an example of transit zoning done right - extremely dense development within 1/2 mile of transit stops. It has the lowest car ownership and usage in Northern VA and generates 50% of the county's property tax in 5% of its land area.

Third, quality of service matters. Buses in the US suck and are slow because fare collection happens one rider at a time while the bus is stopped. Curitiba, Brazil (look it up, it's the world leader in bus transit) has bus stops where you pay to enter and everyone boards at once. The city has one of the highest rates of car ownership in Brazil and the highest transit utilization in Brazil. On their main bus routes they have 1-3 minute headways, so there's no such thing as looking at a schedule. Other things help too: priority lanes for buses at stoplights, tech that lets a bus hold a green light to make it through, etc. Bogotá, Colombia is the other leading bus tech center, and both cities deliver something like 50x the miles of service per dollar compared with building and operating a subway.

Fourth, if there's lots of free parking at the destination it's almost always easier to drive. Point-to-point means the trip is faster, and free parking means it costs less. The places in the US with the highest transit usage (Boston, New York, the Chicago Loop, SF) are places where parking sucks or is expensive. Even LA traffic doesn't keep people from driving because a) the buses are stuck in it too, and b) it's free to park when you get there.

Basically, any city that's building a light rail or subway line and not dramatically increasing the zoning around it is throwing money away. For instance, the 2nd Ave subway in NYC probably won't change much for the $5 billion because there's no way to dramatically increase the number of people that live in the Upper East Side or Harlem. Without the proper land use, there's not enough population to drive demand, without demand there's not enough incentive to provide good levels of service, and without good levels of service people will find it faster to drive.

28.Tell HN: I released my open source iPhone AppStore Sales Graphing Tool (maxklein.github.com)
38 points by maxklein on Jan 22, 2010 | 14 comments
29.Microsoft Reveals the Science Behind Project Natal for Xbox 360 (scientificamerican.com)
38 points by stakent on Jan 22, 2010 | 23 comments
30.Passive solar glass home: watching the sun move (faircompanies.com)
37 points by kirstendirksen on Jan 22, 2010 | 5 comments
