Hi there, code author here. I've been lurking here for years, but couldn't resist an invitation like this.
I've always enjoyed Fabien's analyses so it's great fun to see what he makes of my own code. Anyway, a few clarifications from what I remember of this. (I wrote this in '09, and I don't have my old notes in front of me right now):
* The `n` value returned by the trace function, `T()`, is the surface normal. It returns this whether it hit the plane or a sphere.
* The `r` vector in `S()` is the reflection vector off whatever was hit.
* The mystery `c` point in the `main()` function is the offset from the eye point (ignoring the lens perturbation `t`) to the corner of the focal plane. Note below that we're nominally tracing rays for pixel (x,y) in the direction of ax+by+c. At the image midpoint, this should be `g`.
* "although I suspect this to be a side effect: This is not how soft-shadows are done." True. There are true soft shadows in this, but this isn't where they're computed. That's the job of the randomization where `l` is computed in `S()`.
Anyway, please feel free to ask any questions about it.
Scratchapixel was new to me, so thanks for that link. PBRT (which you already have) is definitely a favorite. You also cited Graphics Gems IV but I enjoy dipping into all of them. Other suggestions:
* http://kesen.realtimerendering.com/ (Especially the sections for "Symposium on Interactive Ray Tracing" and its successor, "High Performance Graphics")
If I had to pick one, though, it'd be PBRT since it's a unified work that starts from the ground up.
Twenty years ago we did a real-time raytracer with 2 spheres, one of which you could move around, written in C on a cluster of DEC Alphas. Funny to see this on my laptop in Javascript (!) now.
Actually, Z-up coordinate systems can also be called right or left handed depending on which way Y points. :-)
But really, it was just personal preference. Way back when I was learning 3D graphics, I found it easiest to visualize Z as elevation above the XY plane. I'd also used some software that followed that convention (like trueSpace and later 3D Studio Max).
These days, I mainly work with software that tends to favor a Y-up convention by default.
In non-programming math contexts, it's pretty standard for Z to point up. I have no idea why, as it seems to me like the typical way it works in graphics programming is the more obvious extension of 2D plotting into 3D.
As far as I know, in the CAD world Z is typically up/down. It makes sense if you look at the scene from above, the way an architect views a floor plan: world-space X and Y map to screen-space X and Y, and Z is then the depth. In a 3D virtual world, on the other hand, the perspective is usually that of a person embedded in the world, in which case "Y is up" is more intuitive.
Sort of, kind of, but it's a stretch. "Up and down" on a page are not oriented according to gravity, but according to the top and bottom of the page. If you say "Well, we'll make the z-axis vertical because vertical is perpendicular to the plane of the desk, and that's depth", and then you put the z-axis toward the top of the page, you're using two completely different definitions of "vertical" in the same sentence.
I always use Z for up/down even though I'm a programmer. For some reason I just find it easier to reason about. In fairness, my first forays into 3D were extending a 2D top-down game with an "altitude".
I'm a game programmer, and I have a reason for my preference of Z being up:
I want positive values going into the screen (away from the viewer) but I also want a right handed coordinate system (mostly because physics become more intuitive - especially wrt torque).
X is almost always to the right, so this gives me Y and Z to work with. To satisfy both the positive values into the screen and right handedness I need either Z forward and Y down or Y forward and Z up. Since most of the time you draw a graph with an axis going up rather than down, that settles it.
It's entirely arbitrary in the end, a single matrix will fix any dispute you have amongst peers.
Yes, it's true that even more could have been shaved off, and I did explore some #defines. As I recall, there were also some other changes I could have made to shave off yet more. Ultimately, I decided against them for two main reasons. First, they'd need to occupy lines by themselves, which I felt created a bigger gap at the top and ruined the aesthetics somewhat. I wanted something that would look nice and roughly fill the 1.75 aspect ratio of a typical US business card. Second, as noted in my reply to matthew-wegner below, I'd noticed the file size that I had achieved.
#defines have to be one to a line, though. In this case, where you're optimising for layout rather than bytes, I'm not sure it would be a net win. Counting, there are 5 instances of "operator", so that's a saving of 36 chars versus the 40 taken up by the #define (including the whitespace at the end). There are 10 "return"s, so saving 40 chars there just about draws you even.
I reformatted it so it would justify right like the original but for some reason Pastebin likes to reflow things.
It turns out the single-character tokens 'u' and 'w' were not being used, so repurposing them for the #defines leads to a saving of 46 characters at the cost of an extra line. There are a few other recurring tokens, but you hit a point of diminishing returns.
One should not, however, lose sight of what an awesomely clever hack the whole thing is. Reminds me of some of the text-flow layouts in old illuminated manuscripts, or the more modern variations here (like the one with the light-bulb): http://www.smashingmagazine.com/2008/02/11/award-winning-new...