Wednesday, 30 November 2011

Transformation pipeline and reverse



Dear readers,

In my current project I needed a ray from the camera through the point where the mouse was on the screen. The problem is: how do we get a three-dimensional ray from a two-dimensional point?

To understand how this is done, you need to know a little about the transformation pipeline: a series of matrices that together convert a vertex position in 3D to a point on the screen.
The transformation pipeline.


To specify viewing, modeling, and projection transformations, you construct a 4x4 matrix M, which is then multiplied by the coordinates of each vertex v in the scene to accomplish the transformation. The viewing and modeling transformations are combined into the modelview matrix, which is applied to the incoming object coordinates to yield eye coordinates. Next, if you have specified additional clipping planes to remove certain objects from the scene or to provide cutaway views of objects, these clipping planes are applied. After that, the projection matrix is applied to yield clip coordinates. This transformation defines a viewing volume: objects outside this volume are clipped so that they are not drawn in the final scene. Then perspective division is performed to produce normalized device coordinates. Finally, the transformed coordinates are converted to window coordinates by applying the viewport transformation. The dimensions of the viewport can be manipulated to cause the final image to be enlarged, shrunk, or stretched.
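To make the forward direction concrete, here is a minimal sketch of those steps in C++ using GLM. The post does not mention a math library, so GLM, the 800x600 window, and the camera setup below are my own assumptions; the sketch only shows object coordinates passing through the modelview matrix, the projection matrix, the perspective division, and the viewport transformation.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main() {
    // Modelview: camera at (0,0,5) looking at the origin (model matrix assumed identity).
    glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 5.0f),
                                 glm::vec3(0.0f),
                                 glm::vec3(0.0f, 1.0f, 0.0f));
    glm::mat4 projection = glm::perspective(glm::radians(45.0f), 800.0f / 600.0f, 0.1f, 100.0f);

    glm::vec4 object(1.0f, 1.0f, 0.0f, 1.0f);   // object coordinates
    glm::vec4 eye  = view * object;             // eye coordinates
    glm::vec4 clip = projection * eye;          // clip coordinates
    glm::vec3 ndc  = glm::vec3(clip) / clip.w;  // perspective division -> normalized device coordinates

    // Viewport transformation: map NDC [-1, 1] onto an 800x600 window.
    float sx = (ndc.x * 0.5f + 0.5f) * 800.0f;
    float sy = (ndc.y * 0.5f + 0.5f) * 600.0f;
    std::printf("window coordinates: (%f, %f)\n", sx, sy);
    return 0;
}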

Now, when we want to get a three-dimensional ray from a two-dimensional point, we walk this pipeline the other way around, back past the projection. What we actually get is a three-dimensional point located on the near plane of the viewing volume, but from that we can construct a ray from the camera.
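Here is a sketch of that reverse direction, again assuming GLM and an 800x600 window; the function name rayFromMouse and its parameters are placeholders of mine, not code from the project. For simplicity it inverts the combined projection and view matrices in one go, which gives the near-plane point in world coordinates rather than eye coordinates, but the resulting ray is the same. The mouse y coordinate is flipped on the assumption that window coordinates run top-down.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

struct Ray { glm::vec3 origin, direction; };

Ray rayFromMouse(float mouseX, float mouseY,
                 const glm::mat4& view, const glm::mat4& projection,
                 float width, float height, const glm::vec3& cameraPos) {
    // Window coordinates -> normalized device coordinates in [-1, 1].
    float ndcX = (mouseX / width) * 2.0f - 1.0f;
    float ndcY = 1.0f - (mouseY / height) * 2.0f;  // flip y: window y runs top-down

    // A point on the near plane (z = -1 in OpenGL's normalized device coordinates).
    glm::vec4 nearClip(ndcX, ndcY, -1.0f, 1.0f);

    // Undo the projection and view transformations, then the perspective division.
    glm::vec4 world = glm::inverse(projection * view) * nearClip;
    glm::vec3 nearPoint = glm::vec3(world) / world.w;

    // The ray starts at the camera and goes through the near-plane point.
    return Ray{ cameraPos, glm::normalize(nearPoint - cameraPos) };
}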

Point (sx, sy) on the screen, mapped to point (wx, wy, wz) and a ray from the camera.


-- Stijn
