WebGL Essentials

Transformations

The triangle created by the demo app from the previous article occupies the entire canvas plane and does not change its position even if the z values of its vertices are set to values other than 0.0. To create a real 3D scene, a web application must apply a series of transformations instructing WebGL how to place models in three-dimensional space.

Vertex coordinates undergo the following changes:

  • first, a model matrix positions a geometric object and orients it in world space; the modeling transformation can take the form of a translation, rotation or scaling;
  • then a view matrix sets the viewpoint for the scene; the viewing transformation involves translations and rotations producing the eye coordinates of each vertex;
  • a projection matrix establishes a viewing volume that defines how the object is projected onto the canvas; this transformation yields clip coordinates;
  • the next step is the perspective division: clip coordinates are divided by the w component;
  • finally, the viewport transformation defines a pixel rectangle on the canvas into which the 3D object is mapped.
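
The five steps above can be sketched in plain JavaScript. The helpers below (transform, toClip, toNDC, toWindow) are hypothetical illustrations, not part of the demo app; matrices are 16-element arrays stored column by column, the order WebGL itself uses:

```javascript
// Multiply a 4x4 column-major matrix by a 4-component vector.
function transform(m, v) {
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      out[row] += m[col * 4 + row] * v[col];
    }
  }
  return out;
}

// Steps 1-3: model, view and projection matrices yield clip coordinates.
function toClip(projection, view, model, vertex) {
  return transform(projection, transform(view, transform(model, vertex)));
}

// Step 4: perspective division produces normalized device coordinates (NDC).
function toNDC(clip) {
  return [clip[0] / clip[3], clip[1] / clip[3], clip[2] / clip[3]];
}

// Step 5: the viewport transformation maps NDC to a pixel rectangle.
function toWindow(ndc, x, y, width, height) {
  return [x + ((ndc[0] + 1) / 2) * width, y + ((ndc[1] + 1) / 2) * height];
}
```

With identity matrices and a 500 × 500 canvas, a vertex keeps its coordinates through the first four steps and the viewport transformation alone decides which pixel it occupies.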

Transformation Matrices

A transformation matrix is a 4 × 4 array of 16 values arranged in rows and columns. To refer to an individual matrix entry, we will use notation conforming to the DOMMatrix and CSS Transforms specifications: the first element is m11 (column 1, row 1 of the matrix), m12 sits in column 1, row 2, and so on.

The identity matrix is a special data structure that can be used to clear the current matrix for upcoming transformation commands:

1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1

A translation matrix stores x, y and z values of a translation vector in the last column:

1 0 0 x
0 1 0 y
0 0 1 z
0 0 0 1
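
A quick way to check the translation matrix is to multiply it by a point written as (x, y, z, 1). The multiply() and translationMatrix() functions below are hypothetical helpers written row by row, matching the printed notation (WebGL itself stores matrices column by column):

```javascript
// Multiply a 4x4 matrix (an array of rows) by a column vector.
function multiply(m, v) {
  return m.map((row) => row.reduce((sum, e, i) => sum + e * v[i], 0));
}

// The translation matrix exactly as printed above.
function translationMatrix(x, y, z) {
  return [
    [1, 0, 0, x],
    [0, 1, 0, y],
    [0, 0, 1, z],
    [0, 0, 0, 1],
  ];
}
```

Multiplying this matrix by the point (2, 0, -1, 1) with the translation vector (1, 5, 3) simply adds the vector to the point, giving (3, 5, 2, 1).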

Rotation about the ray from the origin through the point (1, 0, 0) (the x axis) corresponds to the following matrix:

1 0 0 0
0 cos(angle) -sin(angle) 0
0 sin(angle) cos(angle) 0
0 0 0 1

Rotation about the ray going through the point (0, 1, 0) (the y axis) is described by the matrix with the modified m11, m13, m31 and m33 entries:

cos(angle) 0 sin(angle) 0
0 1 0 0
-sin(angle) 0 cos(angle) 0
0 0 0 1

Rotation about the ray going in the direction of the point (0, 0, 1) (the z axis) depends on the values of the m11, m12, m21 and m22 entries:

cos(angle) -sin(angle) 0 0
sin(angle) cos(angle) 0 0
0 0 1 0
0 0 0 1
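
The z-axis rotation matrix can be checked with a short sketch; rotateZ() is a hypothetical helper whose two expressions are the first two rows of the printed matrix, with the angle given in radians:

```javascript
// Rotate a point about the z axis; rows of the printed matrix
// become the expressions for the new x and y coordinates.
function rotateZ(angle, point) {
  const [x, y, z] = point;
  return [
    Math.cos(angle) * x - Math.sin(angle) * y,
    Math.sin(angle) * x + Math.cos(angle) * y,
    z,
  ];
}
```

Rotating the point (1, 0, 0) by 90° yields (0, 1, 0), confirming the counterclockwise direction of the rotation.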

Scaling factors are stored in the m11, m22 and m33 matrix entries:

x 0 0 0
0 y 0 0
0 0 z 0
0 0 0 1
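
Applied to a point, the scaling matrix amounts to a per-component multiplication; scale() below is a hypothetical helper:

```javascript
// Scale a point by the factors stored in m11, m22 and m33.
function scale(point, sx, sy, sz) {
  return [point[0] * sx, point[1] * sy, point[2] * sz];
}
```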

Matrices in GLSL

Early OpenGL relied on a range of predefined commands for changing vertex coordinates: functions such as glTranslate(), glRotate() and glScale() were employed for model/view transformations, while glFrustum() and glOrtho() were used to define a perspective or orthographic viewing volume. Today fixed-function transformations of vertex attributes are considered obsolete: matrix-related commands were removed from the list of standard OpenGL routines in 2009. They remain available only if an application is based on the compatibility profile of OpenGL.

The modern approach presumes that all transformations are specified in the vertex shader: matrix entries are either spelled out directly in the GLSL code or passed from JavaScript and stored in uniform variables. In this article we'll demonstrate the first scenario: the modified vertex shader from the previous example contains all the data needed for the model, view and projection matrices.

<script id="vertex-shader" type="x-shader/x-vertex">
 attribute vec3 position;
 attribute vec3 color;
 varying highp vec3 colour;

 mat4 projectionMatrix;
 mat4 viewMatrix;
 mat4 modelMatrix;

 // function declarations
 mat4 computeProjectionMatrix();
 mat4 computeViewMatrix();
 mat4 computeModelMatrix();

 void main() {
  projectionMatrix = computeProjectionMatrix();
  viewMatrix = computeViewMatrix();
  modelMatrix = computeModelMatrix();
  gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);
  colour = color;
 }

 . . . function definitions . . .

</script>

The shader code above contains several new GLSL constructs. The first is the mat4 data type, used to declare matrices with 4 rows and 4 columns. Another new feature is GLSL functions: like C, the OpenGL Shading Language distinguishes between function declarations and definitions. computeProjectionMatrix(), computeViewMatrix() and computeModelMatrix() are function prototypes; their definitions follow main().

The projection matrix creates six clipping planes from six values: the left (-1.0), right (1.0), bottom (-1.0), top (1.0), near (1.0) and far (6.0) parameters of the viewing volume. The lower left and upper right corners of the near clipping plane are the points (left, bottom, -near) and (right, top, -near):

mat4 computeProjectionMatrix() {
 float l, r, b, t, n, f;
 float m11, m12, m13, m14, m21, m22, m23, m24, m31, m32, m33, m34, m41, m42, m43, m44;

 l = -1.0;
 r = 1.0;
 b = -1.0;
 t = 1.0;
 n = 1.0;
 f = 6.0;

 m11 = (2.0 * n) / (r - l);
 m12 = 0.0;
 m13 = 0.0;
 m14 = 0.0;

 m21 = 0.0;
 m22 = (2.0 * n) / (t - b);
 m23 = 0.0;
 m24 = 0.0;

 m31 = (r + l) / (r - l);
 m32 = (t + b) / (t - b);
 m33 = - (f + n) / (f - n);
 m34 = -1.0;

 m41 = 0.0;
 m42 = 0.0;
 m43 = - (2.0 * f * n) / (f - n);
 m44 = 0.0;

 return mat4(
  vec4(m11, m12, m13, m14),
  vec4(m21, m22, m23, m24),
  vec4(m31, m32, m33, m34),
  vec4(m41, m42, m43, m44)
 );
}
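
To sanity-check these entries, the same frustum matrix can be rebuilt in JavaScript; frustum() and applyAndDivide() are hypothetical helpers, not part of the app, and the matrix is stored column by column, matching the mat4 constructor above:

```javascript
// The frustum matrix with the same entries as in the shader,
// stored column by column in a flat 16-element array.
function frustum(l, r, b, t, n, f) {
  return [
    (2 * n) / (r - l), 0, 0, 0,
    0, (2 * n) / (t - b), 0, 0,
    (r + l) / (r - l), (t + b) / (t - b), -(f + n) / (f - n), -1,
    0, 0, -(2 * f * n) / (f - n), 0,
  ];
}

// Multiply the matrix by a vertex and perform the perspective division.
function applyAndDivide(m, v) {
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      out[row] += m[col * 4 + row] * v[col];
    }
  }
  return [out[0] / out[3], out[1] / out[3], out[2] / out[3]];
}
```

With the values used in the shader, the near-plane corner (left, bottom, -near) = (-1, -1, -1) maps to the corner (-1, -1, -1) of the clip cube, as expected.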

The view matrix pushes the viewpoint backwards: this operation is equivalent to moving the 3D object away, i.e. translating it along the negative z axis. Our example puts 3 units of distance between the triangle and the viewpoint:

mat4 computeViewMatrix() {
 float x, y, z;
 float m11, m12, m13, m14, m21, m22, m23, m24, m31, m32, m33, m34, m41, m42, m43, m44;

 x = 0.0;
 y = 0.0;
 z = -3.0;

 m11 = 1.0;
 m12 = 0.0;
 m13 = 0.0;
 m14 = 0.0;

 m21 = 0.0;
 m22 = 1.0;
 m23 = 0.0;
 m24 = 0.0;

 m31 = 0.0;
 m32 = 0.0;
 m33 = 1.0;
 m34 = 0.0;

 m41 = x;
 m42 = y;
 m43 = z;
 m44 = 1.0;

 return mat4(
  vec4(m11, m12, m13, m14),
  vec4(m21, m22, m23, m24),
  vec4(m31, m32, m33, m34),
  vec4(m41, m42, m43, m44)
 );
}

The model matrix rotates the triangle about the z axis:

mat4 computeModelMatrix() {
 float angle;
 float m11, m12, m13, m14, m21, m22, m23, m24, m31, m32, m33, m34, m41, m42, m43, m44;

 angle = 45.0;

 m11 = cos(radians(angle));
 m12 = sin(radians(angle));
 m13 = 0.0;
 m14 = 0.0;

 m21 = -sin(radians(angle));
 m22 = cos(radians(angle));
 m23 = 0.0;
 m24 = 0.0;

 m31 = 0.0;
 m32 = 0.0;
 m33 = 1.0;
 m34 = 0.0;

 m41 = 0.0;
 m42 = 0.0;
 m43 = 0.0;
 m44 = 1.0;

 return mat4(
  vec4(m11, m12, m13, m14),
  vec4(m21, m22, m23, m24),
  vec4(m31, m32, m33, m34),
  vec4(m41, m42, m43, m44)
 );
}

The rendered triangle now looks smaller, since it has been moved away from the origin, and it is rotated counterclockwise:

// previous article:
gl_Position = vec4(position, 1.0);

// with transformations applied:
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);

Viewport

Establishing the viewport of the scene is the last but not least stage in the series of WebGL transformations: by changing the drawing region of the canvas we can alter the shape and position of the rendered triangle. Unlike the transformation matrices, the viewport is defined in JavaScript. Let's modify the renderScene() function from the previous article:

// command parameters are x, y, width, height
gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);
gl.drawArrays(gl.TRIANGLES, 0, 3);

gl.viewport(0, 0, 125, 125);
gl.drawArrays(gl.TRIANGLES, 0, 3);

gl.viewport(375, 0, 125, 125);
gl.drawArrays(gl.TRIANGLES, 0, 3);

gl.viewport(0, 375, 125, 125);
gl.drawArrays(gl.TRIANGLES, 0, 3);

gl.viewport(375, 375, 125, 125);
gl.drawArrays(gl.TRIANGLES, 0, 3);

The viewport() method of the WebGLRenderingContext takes four parameters: x and y define the lower left corner of the viewport, while width and height set the size of the viewport rectangle. If our canvas is 500 × 500 pixels, the code above creates a split-screen effect, picking out five different canvas regions for drawing.
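
The mapping performed by each gl.viewport() call can be written out explicitly; ndcToWindow below is a hypothetical helper applying the standard formula, where the NDC x and y values in [-1, 1] are stretched into the rectangle set by the viewport:

```javascript
// Map normalized device coordinates into the viewport rectangle.
function ndcToWindow(ndcX, ndcY, x, y, width, height) {
  return [
    x + ((ndcX + 1) / 2) * width,
    y + ((ndcY + 1) / 2) * height,
  ];
}
```

For example, a point at the NDC origin drawn into the gl.viewport(375, 375, 125, 125) region lands at pixel (437.5, 437.5), the center of that 125 × 125 square.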