WebGL 02: "Movement and Color"

In this tutorial, you will learn how to use fragment shaders to color shapes, and how to use vertex shader inputs to move them around the canvas.

The new topics we'll cover:

  1. Uniform values, parameters that can be changed between draw calls for the same geometry
  2. Fragment shader inputs, or varyings, which are like per-fragment blended attributes
  3. Vertex Array Objects (VAOs), which hold collections of vertex attribute bindings

The full source code for this tutorial is on GitHub.
A live demo is available on indigocode.dev.

"Movement and Color" demo

Through this tutorial, we'll write the code for this little animation:

Most of what's happening here isn't actually new - I've highlighted the new ideas in bold:

  1. Prepare a vertex shader (with uniform inputs)
  2. Prepare a fragment shader (with varying inputs)
  3. Prepare geometry with position and color data
  4. Draw a whole bunch of shapes in a loop

As part of setup, I'll also be introducing you to a more modern approach to setting up vertex attributes - Vertex Array Objects (VAOs).

Modern versions of OpenGL (3.3+) actually require using VAOs, and more modern APIs (Vulkan, DirectX12) use comparable constructs. Technically you never have to use VAOs when using WebGL, but they allow graphics drivers to behave more efficiently, make learning new APIs easier, and simplify your code quite a bit.

I think it's worth using VAOs in most situations unless you are severely allergic.

Fragment shader inputs

Let's work a bit backwards by starting with the fragment shader - how do we handle color gradients in a shader?

#version 300 es
precision mediump float;

in vec3 fragmentColor;

out vec4 outputColor;

// FRAGMENT SHADER
void main() {
  outputColor = vec4(fragmentColor, 1.0);
}

There are two main differences from the last tutorial:

  1. Now there's an input variable fragmentColor.
  2. The outputColor output is built from the fragmentColor input (plus a hard-coded alpha of 1.0).

Where does this input variable come from? Recall from the last tutorial that there's a pipeline stage called the rasterizer which identifies which pixels in the output are part of an input triangle.

The rasterizer can read data output by the vertex shader and interpolate it across all of the pixels it marks as part of the triangle it's working on. Technically, these aren't "pixels" yet, so in OpenGL parlance they are called "pixel fragments", or more simply "fragments".

Fragment shader inputs MUST match vertex shader outputs - WebGL checks this at the WebGLProgram linking stage. If there's a mismatch, the program will fail to link, and gl.getProgramInfoLog() will report the problem.

In our case, we can pick a color for each vertex, and that color will be smoothly blended across drawn triangles - so in the vertex shader, we can input a second vertexColor attribute that we can then pass through the rasterizer into the new fragment shader input:

#version 300 es
precision mediump float;

in vec2 vertexPosition;
in vec3 vertexColor; // New vertex attribute!

out vec3 fragmentColor; // Send to fragment shader

// VERTEX SHADER (WIP - not finished)
void main() {
  fragmentColor = vertexColor;
  gl_Position = vec4(vertexPosition, 0.0, 1.0);
}

If you're ever interested in learning how a rasterizer works (or making a toy one of your own!), there are a couple of classic algorithms to get started with.

Scratchapixel is a phenomenal website for learning computer graphics as well, I highly recommend checking it out!

Uniform values (per-shape parameters)

So fragment shader inputs can be used for the "color" part of "movement and color", but what of movement? In order to move around shapes using only what I covered in the last tutorial, you'd have to upload a new vertex buffer every frame containing new clip space vertices. For a triangle with only 3 vertices that's not awful, but imagine doing that for a 3D player model with thousands of vertices!

This is where another shader input called a uniform value comes into play. Uniforms are also inputs, but are the same for every vertex and fragment in a draw call. They can be set by an application developer, and remain the same for all draw calls until they are set with a new value.
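
Here's a rough sketch of what setting a uniform looks like from the JavaScript side - we'll do this for real later in the tutorial, and both the program handle and the shapeLocation uniform are assumed to already exist:

// Look up the uniform's handle once, after the program has been linked.
const shapeLocationUniform = gl.getUniformLocation(program, 'shapeLocation');

gl.useProgram(program);

// Same geometry, two different locations - only the uniform changes between draws.
gl.uniform2f(shapeLocationUniform, 50, 50);
gl.drawArrays(gl.TRIANGLES, 0, 3);

gl.uniform2f(shapeLocationUniform, 200, 120);
gl.drawArrays(gl.TRIANGLES, 0, 3);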

Here's an updated vertex shader that uses three new inputs to decide where a vertex should be placed:

#version 300 es
precision mediump float;

in vec2 vertexPosition;
in vec3 vertexColor;

out vec3 fragmentColor;

uniform vec2 canvasSize;
uniform vec2 shapeLocation;
uniform float shapeSize;

// VERTEX SHADER
void main() {
  fragmentColor = vertexColor;

  vec2 worldPosition = vertexPosition * shapeSize + shapeLocation;
  vec2 clipPosition = (worldPosition / canvasSize) * 2.0 - 1.0;

  gl_Position = vec4(clipPosition, 0.0, 1.0);
}

For this tutorial, shapes will be defined to fill up a bounding box that goes from [-1, 1] in both the X and Y dimensions - in clip space, this would fill the entire render surface.

To calculate the vertex position with the new uniform values:

  1. Multiply the input by the size of the shape, and add the position
  2. Divide by the size (in pixels) of the canvas to get coordinates in the 0-100% range
  3. Multiply this percent coordinate by 2 and subtract 1 to convert to clip space

Starting in the next tutorial, we'll use the concept of linear algebra spaces a lot. It's helpful to think of coordinates as being defined according to some point of view:

  • Model space (the point of view of the triangle geometry)
  • World space (the location within a simulation)
  • Clip space (the coordinates OpenGL uses for surface positions)

Each line of our math converts from one space to the next, ending in the clip space that gl_Position hands to the rasterizer. That's a pretty common pattern for graphics programming in general.

I'll get into more of the details below, but here you can see how the new vertex shader inputs affect the triangle from the last demo:

vec2 shapeLocation = [150, 150];

float shapeSize = 75;

vec2 canvasSize = [300, 300];

Vertex | worldPosition | clipPosition
0      | [150, 225]    | [0.00, 0.50]
1      | [75, 75]      | [-0.50, -0.50]
2      | [225, 75]     | [0.50, -0.50]

Small change: I did change the vertex positions to go from -1 to 1 instead of -0.5 to 0.5 like they were in the last demo.
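
To double-check the table, here's the same math for vertex 0 written out in plain TypeScript (the triangle's top vertex, at [0, 1]):

const shapeLocation = [150, 150];
const shapeSize = 75;
const canvasSize = [300, 300];
const vertexPosition = [0, 1]; // vertex 0 of the triangle

// worldPosition = vertexPosition * shapeSize + shapeLocation = [150, 225]
const worldPosition = [
  vertexPosition[0] * shapeSize + shapeLocation[0],
  vertexPosition[1] * shapeSize + shapeLocation[1],
];

// clipPosition = (worldPosition / canvasSize) * 2 - 1 = [0.00, 0.50]
const clipPosition = [
  (worldPosition[0] / canvasSize[0]) * 2 - 1,
  (worldPosition[1] / canvasSize[1]) * 2 - 1,
];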

Vertex Array Objects (VAOs)

A VAO is a collection of input assembler state - it holds the list of enabled vertex attributes and the buffer bindings for each attribute slot.

Take the following code which might set up attributes for two shapes: a red triangle and a blue square:

// (output merger, rasterizer, and program are already set up)
gl.enableVertexAttribArray(positionAttribLocation);
gl.enableVertexAttribArray(colorAttribLocation);

// Draw red triangle
gl.bindBuffer(gl.ARRAY_BUFFER, trianglePositions);
gl.vertexAttribPointer(
  positionAttribLocation, 2, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, redTriangleColors);
gl.vertexAttribPointer(
  colorAttribLocation, 3, gl.FLOAT, true, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
gl.drawArrays(gl.TRIANGLES, 0, 3);

// Draw blue square
gl.bindBuffer(gl.ARRAY_BUFFER, squarePositions);
gl.vertexAttribPointer(
  positionAttribLocation, 2, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, blueSquareColors);
gl.vertexAttribPointer(
  colorAttribLocation, 3, gl.FLOAT, true, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
gl.drawArrays(gl.TRIANGLES, 0, 6);

Both position and color attributes are set up for the red triangle, and then a draw call is dispatched to draw it. The same process is repeated for a blue square.

Here's the code to create a VAO for both the red triangle and blue square:

// Red triangle
const redTriangleVao = gl.createVertexArray();
gl.bindVertexArray(redTriangleVao);
gl.enableVertexAttribArray(positionAttribLocation);
gl.enableVertexAttribArray(colorAttribLocation);
gl.bindBuffer(gl.ARRAY_BUFFER, trianglePositions);
gl.vertexAttribPointer(
  positionAttribLocation, 2, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, redTriangleColors);
gl.vertexAttribPointer(
  colorAttribLocation, 3, gl.FLOAT, true, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
gl.bindVertexArray(null);

// Blue square
const blueSquareVao = gl.createVertexArray();
gl.bindVertexArray(blueSquareVao);
gl.enableVertexAttribArray(positionAttribLocation);
gl.enableVertexAttribArray(colorAttribLocation);
gl.bindBuffer(gl.ARRAY_BUFFER, squarePositions);
gl.vertexAttribPointer(
  positionAttribLocation, 2, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, blueSquareColors);
gl.vertexAttribPointer(
  colorAttribLocation, 3, gl.FLOAT, true, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, null);
gl.bindVertexArray(null);

The same bindBuffer and vertexAttribPointer calls are made here, but during setup code instead of during rendering code.

I want you to notice a couple other things too:

First, as soon as I'm done with a VAO, I immediately un-bind it by calling gl.bindVertexArray(null). This is technically unnecessary, since I'm immediately binding and using another VAO, but it's a good habit. The active VAO captures and will re-play any and all input assembler state changes! Forgetting to un-bind a VAO is a great way to accidentally introduce extremely hard-to-debug problems in a WebGL app.

Second, I'm calling enableVertexAttribArray and vertexAttribPointer for both VAOs. VAOs capture both the enabled-attribute state and the attribute pointer state - you can (and should) think of each one as a totally fresh start for input assembler state.

VAO setup code is pretty nasty, but it makes the rendering code significantly simpler!

gl.useProgram(colorShapeProgram);
gl.bindVertexArray(redTriangleVao);
gl.drawArrays(gl.TRIANGLES, 0, 3);

gl.bindVertexArray(blueSquareVao);
gl.drawArrays(gl.TRIANGLES, 0, 6);

gl.bindVertexArray(null);

While I was writing this tutorial, I had two bugs around leaking VAO bindings that took me an entire day to debug. It happens to everyone! WebGL has a lot of global state, and it's extremely easy to mess it up.

In my case, the render loop for the demo at the top of the page was leaking a VAO binding, which was screwed up by the "uniform" demo later overwriting the position attribute to be a triangle for every shape type (when this tutorial was originally written, all demos shared a single WebGL context).

The moral of the story: WebGL state is just like any other global state and very error-prone. Do what you can as an application developer to limit the scope of global mutations wherever possible, by un-binding buffers, VAOs, programs, etc. that you're no longer using!

The Render Loop

In the last tutorial, we generated a single image of a triangle and finished the program.

From here on out, we'll be creating animations that generate new images ("frames") as fast as possible to show to the user to create the illusion of motion.

All JavaScript graphics applications will have a structure more or less like this:

function runApp() {
  loadStuff();
  doSetupNonsense();

  function renderFrame() {
    handleUserInput();
    updateAppState();
    renderThings();

    requestAnimationFrame(renderFrame);
  }
  renderFrame();
}

The special function requestAnimationFrame is a browser built-in function that calls the given function as soon as the user's device is ready to draw another image - this usually happens 30 or 60 times per second, but gaming-focused monitors may go much higher.

Expensive one-time setup code happens first - this includes setting up GPU buffers, VAOs, compiling WebGL programs, getting references to attribute locations and uniform locations, etc.

In this tutorial, we won't do anything for handleUserInput, since the demo is more or less a Windows 95 screensaver.

Application logic is updated next - in our case, we'll keep a list of shapes that need to be drawn, and update their positions at this point.

Once our shapes all have the correct calculated positions, we'll render each individual shape - most stages will remain the same frame-to-frame, so the majority of work here will be setting the correct uniform values and VAOs for each shape.

Once the app is updated and rendered, requestAnimationFrame is invoked again to ask the browser to repeat the same process as soon as the user's monitor is ready to display another frame, on and on... forever (or at least until the user navigates away).

TypeScript

I'll be using TypeScript for the rest of this tutorial (and future WebGL tutorials). You can follow along in JavaScript with pretty minor changes, or install TypeScript by following the instructions at typescriptlang.org.

Setup

I'm going to use the same index.html file from the last tutorial, but I'll change the name of the script to index.ts (notice the TypeScript ts extension). I'm mostly doing that because I didn't feel like fighting Sandpack (the code editor + playground I use for these tutorials) - the GitHub version is slightly different here.

There's also a new tsconfig.json and package.json file here, these are pretty basic since I'm not using a lot of fancy TypeScript / WebPack / whatever nonsense (... yet!)

function runDemo() {
  const canvas = document.getElementById("demo-canvas");
  if (!canvas) {
    console.error("Cannot get demo-canvas reference. Check for typos.");
    return;
  }

  const gl = canvas.getContext("webgl2");
  if (!gl) {
    console.error("Cannot get WebGL context. Try a different device or browser.");
    return;
  }

  const triangleVertices = [
    // Top middle
    0.0, 0.5,
    // Bottom left
    -0.5, -0.5,
    // Bottom right
    0.5, -0.5,
  ];

  const triangleGeoCpuBuffer = new Float32Array(triangleVertices);

  const triangleGeoBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, triangleGeoBuffer);
  gl.bufferData(gl.ARRAY_BUFFER, triangleGeoCpuBuffer, gl.STATIC_DRAW);
  gl.bindBuffer(gl.ARRAY_BUFFER, null);

  const vertexShaderSourceCode = `#version 300 es
  precision mediump float;

  in vec2 vertexPosition;

  void main() {
    gl_Position = vec4(vertexPosition, 0.0, 1.0);
  }`;

  const vertexShader = gl.createShader(gl.VERTEX_SHADER);
  gl.shaderSource(vertexShader, vertexShaderSourceCode);
  gl.compileShader(vertexShader);
  if (!gl.getShaderParameter(vertexShader, gl.COMPILE_STATUS)) {
    const errorMessage = gl.getShaderInfoLog(vertexShader);
    console.error(`Failed to compile vertex shader: ${errorMessage}`);
    return;
  }

  const fragmentShaderSourceCode = `#version 300 es
  precision mediump float;

  out vec4 outputColor;

  void main() {
    outputColor = vec4(0.294, 0.0, 0.51, 1.0);
  }`;

  const fragmentShader = gl.createShader(gl.FRAGMENT_SHADER);
  gl.shaderSource(fragmentShader, fragmentShaderSourceCode);
  gl.compileShader(fragmentShader);
  if (!gl.getShaderParameter(fragmentShader, gl.COMPILE_STATUS)) {
    const errorMessage = gl.getShaderInfoLog(fragmentShader);
    console.error(`Failed to compile fragment shader: ${errorMessage}`);
    return;
  }

  const helloTriangleProgram = gl.createProgram();
  gl.attachShader(helloTriangleProgram, vertexShader);
  gl.attachShader(helloTriangleProgram, fragmentShader);
  gl.linkProgram(helloTriangleProgram);
  if (!gl.getProgramParameter(helloTriangleProgram, gl.LINK_STATUS)) {
    const errorMessage = gl.getProgramInfoLog(helloTriangleProgram);
    console.error(`Failed to link GPU program: ${errorMessage}`);
    return;
  }

  const vertexPositionAttributeLocation = gl.getAttribLocation(
    helloTriangleProgram,
    "vertexPosition"
  );
  if (vertexPositionAttributeLocation < 0) {
    console.error(`Failed to get attribute location for vertexPosition`);
    return;
  }

  // Loading finished! Print a message indicating that.
  console.log('WebGL resources successfully initialized! Ready for render 😁');

  //
  // RENDER FRAME
  canvas.width = canvas.clientWidth;   // * window.devicePixelRatio if you want
  canvas.height = canvas.clientHeight; // * window.devicePixelRatio if you want
  gl.viewport(0, 0, canvas.width, canvas.height);

  gl.clearColor(0.08, 0.08, 0.08, 1.0);
  gl.clear(gl.COLOR_BUFFER_BIT);

  gl.useProgram(helloTriangleProgram);

  gl.enableVertexAttribArray(vertexPositionAttributeLocation);
  gl.bindBuffer(gl.ARRAY_BUFFER, triangleGeoBuffer);
  gl.vertexAttribPointer(
    /* index: vertex attrib location */
    vertexPositionAttributeLocation,
    /* size: number of components in the attribute */
    2,
    /* type: type of data in the GPU buffer for this attribute */
    gl.FLOAT,
    /* normalized: if the buffer holds integer data feeding a float vec(n) input, should WebGL normalize the integers first? */
    false,
    /* stride: bytes between starting byte of attribute for a vertex and the same attrib for the next vertex */
    2 * Float32Array.BYTES_PER_ELEMENT,
    /* offset: bytes between the start of the buffer and the first byte of the attribute */
    0
  );

  gl.drawArrays(gl.TRIANGLES, 0, 3);
}

runDemo();

WebGL helper functions

There are a few WebGL tasks that we'll be doing several times in this app - copying and pasting code once or twice to avoid spaghetti is all well and good, but constantly copy/pasting huge chunks gets messy fast. Here are some helper functions we can use - define these at the top of your TypeScript file. I'll be re-using these functions in later tutorials, so hold on to them!

Basically every WebGL app you'll write in this tutorial series will want the same error checking for cases where WebGL isn't supported - this'll save a bit of boilerplate.

/** Get a WebGL context reference, or print an error and return null */
function getContext(canvas: HTMLCanvasElement) {
  const gl = canvas.getContext('webgl2');
  if (!gl) {
    const isWebGl1Supported = !!(document.createElement('canvas')).getContext('webgl');
    if (isWebGl1Supported) {
      console.error('WebGL 1 is supported, but not v2 - try using a different device or browser');
    } else {
      console.error('WebGL is not supported on this device');
    }
    return null;
  }

  return gl;
}

Next, let's pull out vertex buffer creation code - this is another super common thing that we can better communicate with a named function createStaticVertexBuffer while shortening our code and adding in a bit of error messaging. Definitely worth adding.

FYI - there are optimizations that involve multiple pieces of geometry sharing different regions of a vertex buffer, or multiple attributes sharing a vertex buffer. Those get somewhat hard to follow pretty quickly, so I will not be covering those techniques.

/** Generate a WebGLBuffer for a piece of STATIC (non-changing) geometry */
function createStaticVertexBuffer(
    gl: WebGL2RenderingContext,
    data: ArrayBuffer | Float32Array | Uint8Array) {
  const buffer = gl.createBuffer();
  if (!buffer) {
    console.error('Failed to create buffer');
    return null;
  }
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW);
  gl.bindBuffer(gl.ARRAY_BUFFER, null);

  return buffer;
}

Finally, creating a WebGL program uses a LOT of boilerplate. There's something to be said for designing an API differently so that programs can share vertex shaders (which can be the same between several effects) but we won't be doing that in this tutorial.

/** Create a WebGLProgram from a vertex + fragment shader source */
function createProgram(
    gl: WebGL2RenderingContext,
    vertexShaderSource: string,
    fragmentShaderSource: string) {
  const vertexShader = gl.createShader(gl.VERTEX_SHADER);
  if (!vertexShader) {
    console.error('Failed to create vertex shader');
    return null;
  }
  gl.shaderSource(vertexShader, vertexShaderSource);
  gl.compileShader(vertexShader);
  if (!gl.getShaderParameter(vertexShader, gl.COMPILE_STATUS)) {
    console.error(`Failed to compile vertex shader - ${gl.getShaderInfoLog(vertexShader)}`);
    return null;
  }

  const fragmentShader = gl.createShader(gl.FRAGMENT_SHADER);
  if (!fragmentShader) {
    console.error('Failed to create fragment shader');
    return null;
  }
  gl.shaderSource(fragmentShader, fragmentShaderSource);
  gl.compileShader(fragmentShader);
  if (!gl.getShaderParameter(fragmentShader, gl.COMPILE_STATUS)) {
    console.error(`Failed to compile fragment shader - ${gl.getShaderInfoLog(fragmentShader)}`);
    return null;
  }

  const program = gl.createProgram();
  if (!program) {
    console.error('Failed to create program');
    return null;
  }
  gl.attachShader(program, vertexShader);
  gl.attachShader(program, fragmentShader);
  gl.linkProgram(program);
  if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    console.error(`Failed to link program - ${gl.getProgramInfoLog(program)}`);
    return null;
  }

  return program;
}

That's all the WebGL boilerplate out of the way, on to the code for this tutorial!

Random Value Helpers

First off, we'll be doing a lot of random stuff for this demo - the position, size, and movement of each shape is randomly generated.

function getRandomInRange(min: number, max: number) {
  return Math.random() * (max - min) + min;
}

function getNewSpawnerLocation(canvasWidth: number, canvasHeight: number) {
  return [
    getRandomInRange(150, canvasWidth - 150),
    getRandomInRange(150, canvasHeight - 150)
  ];
}

Demo Constants

This demo will create a bunch of shapes of random type and size, all originating from a point on the screen that changes every few seconds, and all flying out in random directions. The shapes will also accelerate in random directions, to give some curved paths and a bit more visual appeal.

Putting configuration variables like minimum/maximum shape size, speed, and force in one place is a good idea. This way, if you want to make visual tweaks, you just have to change one number at the top of the file instead of hunting around through logic. Neat!

Here's the constants I use in this demo:

/** Demo constants */
const SPAWNER_CHANGE_TIME = 5;
const CIRCLE_SEGMENT_COUNT = 12;
const SPAWN_RATE = 0.08;
const MIN_SHAPE_TIME = 0.25;
const MAX_SHAPE_TIME = 6;
const MIN_SHAPE_SPEED = 125;
const MAX_SHAPE_SPEED = 350;
const MIN_SHAPE_FORCE = 150;
const MAX_SHAPE_FORCE = 750;
const MIN_SHAPE_SIZE = 2;
const MAX_SHAPE_SIZE = 50;
const MAX_SHAPE_COUNT = 250;

Times are in seconds, sizes are in pixels, speeds are in pixels/second, and "forces" (really accelerations) are in pixels/(second^2).

Notice I've also included an extra MAX_SHAPE_COUNT constant - this is nice to have to prevent accidentally creating a demo that spawns waaaaaaaaay too many shapes and slows down a user's computer.

Soapbox time - if you have an array that you regularly add things to without limit, you absolutely need to be intentional about also removing elements, or else it's dang easy to get memory leaks.

Generating Circle geometry

One of the shapes flying around in this demo is a blue-white circle, and you may have noticed the CIRCLE_SEGMENT_COUNT constant. Circles are relatively easy to generate by arranging triangles like slices of a pizza with CIRCLE_SEGMENT_COUNT slices.

A couple math things that'll be helpful here:

  1. Each "slice" of a circle will represent an angle calculated by 2π/CIRCLE_SEGMENT_COUNT.
  2. The X coordinate of any spot on a circle with radius 1 is the cosine of the angle of that spot
  3. The Y coordinate of any spot on a circle with radius 1 is the sine of the angle of that spot

For this shape, it'll be easy to put positions and colors right next to each other in memory, so each vertex will be structured like this: [X, Y, R, G, B] for [X, Y] coordinates and [R, G, B] colors.

It looks and sounds like a lot of math, and it is.

But the implementation isn't too bad once you understand what that math is doing:

function buildCircleVertexBufferData() {
  const vertexData = [];

  // Append the vertices for each of the N triangle segments
  for (let i = 0; i < CIRCLE_SEGMENT_COUNT; i++) {
    const vertex1Angle = i * Math.PI * 2 / CIRCLE_SEGMENT_COUNT;
    const vertex2Angle = (i + 1) * Math.PI * 2 / CIRCLE_SEGMENT_COUNT;
    const x1 = Math.cos(vertex1Angle);
    const y1 = Math.sin(vertex1Angle);
    const x2 = Math.cos(vertex2Angle);
    const y2 = Math.sin(vertex2Angle);

    // Center vertex is a light blue color and in the middle of the shape
    vertexData.push(
      // Position (x, y)
      0, 0,
      // Color (r, g, b)
      0.678, 0.851, 0.957
    );
    // The other two vertices are along the edges of the circle, and a darker blue color
    vertexData.push(
      x1, y1,
      0.251, 0.353, 0.856
    );
    vertexData.push(
      x2, y2,
      0.251, 0.353, 0.856
    );
  }

  return new Float32Array(vertexData);
}

For fun, you can play with the number of segments in this little demo:

Notice that the edges get hard to see after about 40 or so slices - human eyes are pretty bad at detecting very gradual edges, especially in the presence of color information that looks smooth. Brightness and color information is much louder than actual shapes to our brains. Remember that - it'll come up later, in a tutorial about 3D lighting models!

Triangle and Square vertices

I've just hard-coded vertex data for triangles and squares, but I am going to use 8-bit unsigned integers (0-255) for color data instead of floats. It's a bit smaller, it shows a way to store vertex data in separate buffers, and it showcases the tricky normalized parameter in vertexAttribPointer calls.

Remember - when an integer attribute is normalized, the inputs are converted into fractions of the maximum integer value for the type (so for 8-bit unsigned integers, the 0-255 range maps to 0.0-1.0).

Below is geometry data for a triangle (3 vertices) and a square (2 triangles, 6 vertices). For the triangle, there are two color buffers - an RGB gradient and a bright "firey" palette. For the square, there's an indigo color gradient and a solid gray.

These are constants that can go anywhere before our demo function.

In the full demo sandbox at the bottom of the page, mess around with these colors to see how changing values affects the color of the final product!

const trianglePositions = new Float32Array([ 0, 1, -1, -1, 1, -1 ]);
const squarePositions = new Float32Array([ -1, 1, -1, -1, 1, -1, -1, 1, 1, -1, 1, 1 ]);
const rgbTriangleColors = new Uint8Array([
  255, 0, 0,
  0, 255, 0,
  0, 0, 255
]);
const fireyTriangleColors = new Uint8Array([
  // Chili red - E52F0F
  229, 47, 15,
  // Jonquil - F6CE1D
  246, 206, 29,
  // Gamboge - E99A1A
  233, 154, 26
]);
const indigoGradientSquareColors = new Uint8Array([
  // Top: "Tropical Indigo" - A799FF
  167, 153, 255,
  // Bottom: "Eminence" - 583E7A
  88, 62, 122,
  88, 62, 122,
  167, 153, 255,
  88, 62, 122,
  167, 153, 255
]);
const graySquareColors = new Uint8Array([
  45, 45, 45,
  45, 45, 45,
  45, 45, 45,
  45, 45, 45,
  45, 45, 45,
  45, 45, 45
]);

MovingShape class (app logic)

I've created a MovingShape class to keep track of an individual shape. Nothing fancy - we'll keep a list of these, and every frame we'll use the velocity of each shape to update the position, and the "force" to update the velocity.

I'm not going to spend a lot of time covering this because it's not really graphics related, other than holding a reference to the VAO of the shape that will be drawn with this one.

class MovingShape {
  constructor(
    public position: [number, number],
    public velocity: [number, number],
    public size: number,
    public forceDirection: [number, number],
    public timeRemaining: number,
    public vao: WebGLVertexArrayObject,
    public numVertices: number) {}

  isAlive() {
    return this.timeRemaining > 0;
  }

  update(dt: number) {
    this.velocity[0] += this.forceDirection[0] * dt;
    this.velocity[1] += this.forceDirection[1] * dt;

    this.position[0] += this.velocity[0] * dt;
    this.position[1] += this.velocity[1] * dt;

    this.timeRemaining -= dt;
  }
}

Notice that "update" takes a dt parameter - this is the amount of time that has passed since the last frame, in seconds. Multiplying velocity (in pixels/second) by this term gives the number of pixels that a given shape should move.

One mistake many people make when they first start writing simulations is they instead move things by some amount that "looks right" - but then they go from a device getting 60FPS to one getting 30FPS and suddenly everything looks EXTRA laggy!
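
To put some (made-up) numbers to that:

// Moving by a fixed "looks right" amount per frame ties speed to frame rate:
//   2 px/frame at 60 FPS = 120 px/second
//   2 px/frame at 30 FPS =  60 px/second - everything suddenly moves at half speed!
// Scaling by dt instead keeps the speed constant at any frame rate:
//   position += 120 /* px per second */ * dt; // 120 px/second at 30 FPS or 60 FPS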

I've also included an isAlive member - for this demo, it only cares that the shape has some amount of time left before disappearing. If you're feeling bold, you can add another condition to this method that marks any off-screen shape as no longer alive (one possible way is sketched below).
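
Here's a rough sketch of what that could look like - this assumes you pass the canvas size into isAlive, and it treats each shape as a single point (its size is ignored):

  // Hypothetical variant of MovingShape.isAlive that also culls off-screen shapes.
  // Callers would need to pass the canvas size in: shape.isAlive(canvas.width, canvas.height)
  isAlive(canvasWidth: number, canvasHeight: number) {
    const onScreen =
      this.position[0] >= 0 && this.position[0] <= canvasWidth &&
      this.position[1] >= 0 && this.position[1] <= canvasHeight;
    return this.timeRemaining > 0 && onScreen;
  }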

Building VAOs

Once again, there's some repeated logic here, so I've added in two more helper functions.

These functions are specific to this demo! All the other helper functions we wrote earlier are things we'll be able to carry between demos - these ones are not.

function buildCircleVao(
    gl: WebGL2RenderingContext, buffer: WebGLBuffer,
    posAttrib: number, colorAttrib: number) {
  const vao = gl.createVertexArray();
  if (!vao) {
    console.error('Failed to create interleaved VertexArrayObject');
    return null;
  }
  gl.bindVertexArray(vao);
  gl.enableVertexAttribArray(posAttrib);
  gl.enableVertexAttribArray(colorAttrib);
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.vertexAttribPointer(
    posAttrib,
    2, gl.FLOAT, false,
    5 * Float32Array.BYTES_PER_ELEMENT, 0);
  gl.vertexAttribPointer(
    colorAttrib,
    3, gl.FLOAT, false,
    5 * Float32Array.BYTES_PER_ELEMENT,
    2 * Float32Array.BYTES_PER_ELEMENT);
  gl.bindBuffer(gl.ARRAY_BUFFER, null);
  gl.bindVertexArray(null);

  return vao;
}

This looks like the VAO sample code above - create a VAO, enable the attributes used in it, bind the buffer containing the data, attach vertex attributes, return completed VAO.

I want you to pay special attention to this line, especially the last two parameters (stride and offset, respectively):

gl.vertexAttribPointer(
  colorAttrib,
  3, gl.FLOAT, false,
  5 * Float32Array.BYTES_PER_ELEMENT,
  2 * Float32Array.BYTES_PER_ELEMENT);

Interleaving vertex attributes works great, and is the reason for the parameters "stride" and "offset" in a vertexAttribPointer call.

Stride describes the total size of a vertex in this buffer - in this case, 5 floats (2 for position and 3 for color).

Offset describes at which byte in a vertex the attribute in question begins. In this case, "position" takes up 2 floats (8 bytes) before color, so skip that much space.
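
Spelled out byte-by-byte, one interleaved circle vertex looks like this:

// One interleaved circle vertex (stride = 5 floats = 20 bytes):
//   bytes  0..7   position (x, y)  - 2 four-byte floats, offset 0
//   bytes  8..19  color (r, g, b)  - 3 four-byte floats, offset 8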

The other VAOs will include two separate buffers, one for position and one for color, so constructing them will be a bit different:

function buildVaoFromTwoBuffers(
    gl: WebGL2RenderingContext,
    positionBuffer: WebGLBuffer, colorBuffer: WebGLBuffer,
    posAttrib: number, colorAttrib: number) {
  const vao = gl.createVertexArray();
  if (!vao) {
    console.error('Failed to create parallel VertexArrayObject');
    return null;
  }

  gl.bindVertexArray(vao);
  gl.enableVertexAttribArray(posAttrib);
  gl.enableVertexAttribArray(colorAttrib);
  gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
  gl.vertexAttribPointer(
    posAttrib, 2, gl.FLOAT, false, 0, 0);
  gl.bindBuffer(gl.ARRAY_BUFFER, colorBuffer);
  gl.vertexAttribPointer(
    colorAttrib, 3, gl.UNSIGNED_BYTE, true, 0, 0);

  gl.bindBuffer(gl.ARRAY_BUFFER, null);
  gl.bindVertexArray(null);

  return vao;
}

Similar sort of thing - except this time each attribute pulls data from its own buffer, which requires re-binding buffers in between vertexAttribPointer calls.

I want to call attention to the color attribute - remember how it defines data with unsigned 8-bit integers but still reads into a regular float vec3 color attribute? Let's look at the vertexAttribPointer call for that in more detail:

gl.vertexAttribPointer(
  colorAttrib, 3, gl.UNSIGNED_BYTE, true, 0, 0);

Notice the type parameter is now gl.UNSIGNED_BYTE. This specifies that the source data is in an unsigned 8-bit (byte) format - NOT that the attribute itself is that type! The attribute itself is still a floating-point vec3 in the vertex shader.

The normalized parameter is now true, and this is where the magic happens - the bytes are interpreted as percentages across the range of possible bytes. The color bytes [255, 255, 255] would be interpreted in the shader as [1.0, 1.0, 1.0] for full values across all three channels. The color bytes [0, 127, 255] would be interpreted as [0, 0.5, 1.0] representing a color of 0%, 50%, and 100% across the red, green, and blue channels, respectively.
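
If it helps to see that arithmetic spelled out, here's a tiny sketch of the conversion - this is just the math the GPU does for you, not a WebGL call:

// normalized=true with gl.UNSIGNED_BYTE means every byte gets divided by 255
// before the shader sees it as a float.
const colorBytes = new Uint8Array([0, 127, 255]);
const asFloats = Array.from(colorBytes, (b) => b / 255);
// asFloats is approximately [0.0, 0.498, 1.0]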

Graphics programming uses quite a few values that make the most sense expressed as numbers between 0 and 1, so storing those values into an 8- or 16-bit integer instead of a full 32-bit float can save you memory.

Not terribly useful here where we're going from 576 bytes to 144, but for complex geometry with many thousands of vertices containing several properties (like animation data)... you could be talking about saving many megabytes of memory, which means more space to make things beautiful!

Shaders

One more thing before we get into the demo - I'll put the shader code in this huge messy top section too, to try to keep the actual demo function as short as possible.

I talk about everything in these shaders above - putting it all together looks like this:

const vertexShaderSource = `#version 300 es
precision mediump float;

in vec2 vertexPosition;
in vec3 vertexColor;

out vec3 fragmentColor;

uniform vec2 canvasSize;
uniform vec2 shapeLocation;
uniform float shapeSize;

void main() {
  fragmentColor = vertexColor;

  vec2 worldPosition = vertexPosition * shapeSize + shapeLocation;
  vec2 clipPosition = (worldPosition / canvasSize) * 2.0 - 1.0;

  gl_Position = vec4(clipPosition, 0.0, 1.0);
}`;

const fragmentShaderSource = `#version 300 es
precision mediump float;

in vec3 fragmentColor;

out vec4 outputColor;

void main() {
  outputColor = vec4(fragmentColor, 1.0);
}`;

Demo Code

Okay! Now that we have a bunch of helpers, a class to help move around individual shapes, definitions for all of our vertex data, and shader code prepared, we can write the actual dang demo!

Let's get down the basic boilerplate, like mentioned above, but with a bit of extra logic to help with tracking time between frames:

interface Geometry {
  vao: WebGLVertexArrayObject;
  numVertices: number;
}

function movementAndColor() {
  // SETUP HERE

  let lastFrameTime = performance.now();
  function frame() {
    const thisFrameTime = performance.now();
    const timeElapsed = (thisFrameTime - lastFrameTime) / 1000;
    lastFrameTime = thisFrameTime;

    // UPDATE HERE

    // RENDER HERE

    requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
}

try {
  movementAndColor();
} catch (e) {
  console.error(`Uncaught JavaScript exception: ${e}`);
}

The function performance.now() in JavaScript returns the number of milliseconds that have passed since... some point in time, depending on where you're running your code. Thankfully we don't care when the timer begins, we only care about how much time passed since last time we checked it.

SETUP section

With all the helper functions above, the setup is relatively painless:

First, get a WebGL context:

const canvas = document.getElementById('demo-canvas');
if (!canvas || !(canvas instanceof HTMLCanvasElement)) {
  console.error('Could not find HTML canvas element - check for typos, or loading JavaScript file too early');
  return;
}
const gl = getContext(canvas);
if (!gl) {
  return;
}

Next, use that WebGL context to create WebGL buffers for all the vertex data we defined above:

// Create geometry buffers
const circleInterleavedBuffer =
    createStaticVertexBuffer(gl, buildCircleVertexBufferData());
const trianglePositionsBuffer =
    createStaticVertexBuffer(gl, trianglePositions);
const squarePositionsBuffer =
    createStaticVertexBuffer(gl, squarePositions);
const rgbTriangleColorsBuffer =
    createStaticVertexBuffer(gl, rgbTriangleColors);
const fireyTriangleColorsBuffer =
    createStaticVertexBuffer(gl, fireyTriangleColors);
const indigoGradientSquareColorsBuffer =
    createStaticVertexBuffer(gl, indigoGradientSquareColors);
const graySquareColorsBuffer =
    createStaticVertexBuffer(gl, graySquareColors);

if (!circleInterleavedBuffer || !trianglePositionsBuffer || !squarePositionsBuffer
    || !rgbTriangleColorsBuffer || !fireyTriangleColorsBuffer
    || !indigoGradientSquareColorsBuffer || !graySquareColorsBuffer) {
  console.error('Failed to build vertex buffers!');
  return;
}

Once that's done, create the motion and color demo WebGL program, and get the attrib/uniform locations that we'll need at render time:

// Create effect and get attribute+uniform handles
const motionAndColorProgram =
    createProgram(gl, vertexShaderSource, fragmentShaderSource);
if (!motionAndColorProgram) return;

const positionAttribLocation =
    gl.getAttribLocation(motionAndColorProgram, 'vertexPosition');
const colorAttribLocation =
    gl.getAttribLocation(motionAndColorProgram, 'vertexColor');
const canvasSizeUniformLocation =
    gl.getUniformLocation(motionAndColorProgram, 'canvasSize');
const shapeLocationUniformLocation =
    gl.getUniformLocation(motionAndColorProgram, 'shapeLocation');
const shapeSizeUniformLocation =
    gl.getUniformLocation(motionAndColorProgram, 'shapeSize');

if (positionAttribLocation < 0 || colorAttribLocation < 0) {
  console.error(`Failed to get attributes - position=${positionAttribLocation}, color=${colorAttribLocation}`);
  return;
}
if (!canvasSizeUniformLocation || !shapeLocationUniformLocation || !shapeSizeUniformLocation) {
  console.error(
    `Failed to get uniform locations - canvasSize=${!!canvasSizeUniformLocation} shapeLocation=${!!shapeLocationUniformLocation} shapeSize=${!!shapeSizeUniformLocation}`);
  return;
}

Notice that the API for getting uniform locations is very similar to the API for attribute locations. But, unlike attribute locations, uniform locations are an opaque WebGLUniformLocation handle (like WebGLProgram) and not just a number.

If a uniform location lookup fails, gl.getUniformLocation will return null. This will happen if you have a typo, or (generally) if the uniform is never actually used in a shader. It's up to you whether or not you want to consider this an error. I generally do consider it an error - both typos and unused variables are worth knowing about and fixing in my opinion.

Now, with the WebGL program created and vertex buffers ready, we have everything we need to create VAOs:

// Create Vertex Array Objects (VAOs) - input assembler states for each piece of geometry
const circleVao = buildCircleVao(
  gl, circleInterleavedBuffer, positionAttribLocation, colorAttribLocation);
const rgbTriangleVao = buildVaoFromTwoBuffers(
  gl, trianglePositionsBuffer, rgbTriangleColorsBuffer,
  positionAttribLocation, colorAttribLocation);
const fireyTriangleVao = buildVaoFromTwoBuffers(
  gl, trianglePositionsBuffer, fireyTriangleColorsBuffer,
  positionAttribLocation, colorAttribLocation);
const indigoGradientSquareVao = buildVaoFromTwoBuffers(
  gl, squarePositionsBuffer, indigoGradientSquareColorsBuffer,
  positionAttribLocation, colorAttribLocation);
const graySquareVao = buildVaoFromTwoBuffers(
  gl, squarePositionsBuffer, graySquareColorsBuffer,
  positionAttribLocation, colorAttribLocation);
if (!circleVao || !rgbTriangleVao || !fireyTriangleVao
    || !indigoGradientSquareVao || !graySquareVao) {
  console.error(`Failed to build VAOs: circle=${!!circleVao} rgbTri=${!!rgbTriangleVao} fireyTri=${!!fireyTriangleVao} indigoSq=${!!indigoGradientSquareVao} graySq=${!!graySquareVao}`);
  return;
}

const geometryList: Geometry[] = [
  { vao: circleVao, numVertices: CIRCLE_SEGMENT_COUNT * 3 },
  { vao: rgbTriangleVao, numVertices: 3 },
  { vao: fireyTriangleVao, numVertices: 3 },
  { vao: indigoGradientSquareVao, numVertices: 6 },
  { vao: graySquareVao, numVertices: 6 }
];

To help with randomly picking a shape later, I've put all the geometry into a geometryList variable as well - picking a random shape just means picking a random element from that array and using the VAO and vertex count stored in it.

Finally, set up the simulation state - the location of the spawner, the time until the next time a shape appears, the time until the spawner moves again, and an (empty) list of shapes:

// Simulation logic data
let timeToNextSpawn = SPAWN_RATE;
let timeToNextSpawnerLocationChange = SPAWNER_CHANGE_TIME;
let spawnPosition = getNewSpawnerLocation(canvas.width, canvas.height);
let shapes: MovingShape[] = [];

Great! Still here? Wow! Let's talk about what goes on in the // UPDATE section next.

UPDATE section

Each frame, first decide if the shape spawner needs to be moved:

// Update
timeToNextSpawnerLocationChange -= timeElapsed;
if (timeToNextSpawnerLocationChange < 0) {
  timeToNextSpawnerLocationChange = SPAWNER_CHANGE_TIME;
  spawnPosition = getNewSpawnerLocation(canvas.width, canvas.height);
}

The magic here is in timeToNextSpawnerLocationChange - that value is decreased by the amount of time elapsed since the last frame, and when it reaches 0 the spawner is moved and the timer is reset.

The same trick can be used to decide when to spawn more shapes, but this time in a while loop instead of an if block. If multiple spawns should have happened since the last frame (e.g., your computer freezes for a couple seconds), we can still take them in stride:

timeToNextSpawn -= timeElapsed;
while (timeToNextSpawn < 0) {
  timeToNextSpawn += SPAWN_RATE;

  const movementAngle = getRandomInRange(0, Math.PI * 2);
  const movementSpeed = getRandomInRange(MIN_SHAPE_SPEED, MAX_SHAPE_SPEED);
  const forceAngle = getRandomInRange(0, Math.PI * 2);
  const forceMagnitude = getRandomInRange(MIN_SHAPE_FORCE, MAX_SHAPE_FORCE);

  const position: [number, number] = [ spawnPosition[0], spawnPosition[1] ];
  const velocity: [number, number] = [
    Math.sin(movementAngle) * movementSpeed,
    Math.cos(movementAngle) * movementSpeed
  ];
  const force: [number, number] = [
    Math.sin(forceAngle) * forceMagnitude,
    Math.cos(forceAngle) * forceMagnitude
  ];
  const size = getRandomInRange(MIN_SHAPE_SIZE, MAX_SHAPE_SIZE);
  const timeToLive = getRandomInRange(MIN_SHAPE_TIME, MAX_SHAPE_TIME);

  const geometry = geometryList[Math.floor(Math.random() * geometryList.length)];

  const shape = new MovingShape(
    position, velocity, size, force,
    timeToLive, geometry.vao, geometry.numVertices);
  shapes.push(shape);
}

What should you do if your simulation freezes for a couple seconds?

The approach I usually reach for is to run Update logic many times, and then run Render logic once when you're finished. For example - if a frame takes 2 seconds, run Update 10 times at 1/5th of a second each, and then run Render once when you're finished.

It's not a one-size fits all approach (what if Update is the thing causing your sim to run slow?) but it's a nice trick to keep in the back of your mind. We will not be doing that here.
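
For reference, here's a minimal sketch of that fixed-timestep idea - the update and render callbacks are hypothetical, and again, we won't be using this in the demo:

const FIXED_DT = 0.2;  // seconds per update step (1/5th of a second)
const MAX_STEPS = 10;  // cap the catch-up work after a long freeze

function stepSimulation(
    elapsedSeconds: number,
    update: (dt: number) => void,
    render: () => void) {
  // Break one long frame into several fixed-size update steps...
  const steps = Math.min(Math.round(elapsedSeconds / FIXED_DT), MAX_STEPS);
  for (let i = 0; i < steps; i++) {
    update(FIXED_DT);
  }
  // ...then draw the final state once.
  render();
}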

Every field for the shapes is randomly generated except for the initial position, which is the position of the shape spawner.

Geometry is also picked randomly from the list of geometry defined above.

Once a new MovingShape instance is made, it's added to the list of shapes.

The last step in the update is to update all the shapes in the list, and remove any that are no longer in use.

for (let i = 0; i < shapes.length; i++) {
  shapes[i].update(timeElapsed);
}
shapes = shapes
    .filter((shape) => shape.isAlive())
    .slice(0, MAX_SHAPE_COUNT);

I've also included a .slice(0, MAX_SHAPE_COUNT) call to limit the length of this array. This way, I can fiddle with the other parameters without accidentally crashing my computer by making eighty five trillion shapes.

And that's it for update! On to // Render!

RENDER section

The render section of this demo is the shortest of the three, and that's by design! GPU APIs are designed to push as much work as possible into the SETUP step, in order to keep the performance-critical render code as fast as possible. More render logic means more render time, which means fewer frames and a less happy computer.

Most of this code should be familiar, though there's one new API call (gl.uniformNT):

// Render
canvas.width = canvas.clientWidth;
canvas.height = canvas.clientHeight;
gl.clearColor(0.08, 0.08, 0.08, 1.0);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

gl.viewport(0, 0, canvas.width, canvas.height);

gl.useProgram(motionAndColorProgram);
gl.uniform2f(canvasSizeUniformLocation, canvas.width, canvas.height);

for (let i = 0; i < shapes.length; i++) {
  gl.uniform2f(
    shapeLocationUniformLocation,
    shapes[i].position[0], shapes[i].position[1]);
  gl.uniform1f(shapeSizeUniformLocation, shapes[i].size);
  gl.bindVertexArray(shapes[i].vao);
  gl.drawArrays(gl.TRIANGLES, 0, shapes[i].numVertices);
  gl.bindVertexArray(null);
}

The first several lines, up through gl.useProgram(motionAndColorProgram) are all more or less what they were in the last tutorial, and will more or less be the same for the rest of this series.

To set a uniform value, a call to gl.uniformNT is made, replacing N with the number of elements in the uniform, and replacing T with an abbreviation for the type of data to be set.

In the case of the canvasSize uniform, the uniform is a vec2, or a vector of N=2 floats (f). A call to gl.uniform2f is used, passing (1) the location of the uniform to be set, and (2-3) the values of the two components that should be set in that uniform.
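
Here are a few examples of how that naming scheme plays out - these are all real WebGL calls, but the uniform locations other than canvasSize and shapeSize are just illustrative names:

gl.uniform1f(shapeSizeUniformLocation, 25.0);           // 1 float
gl.uniform2f(canvasSizeUniformLocation, 300, 300);      // 2 floats (a vec2)
gl.uniform3f(tintColorUniformLocation, 1.0, 0.5, 0.0);  // 3 floats (a vec3) - hypothetical uniform
gl.uniform1i(textureSlotUniformLocation, 0);            // 1 integer - hypothetical uniform
gl.uniform2fv(canvasSizeUniformLocation, [300, 300]);   // "v" variants take arrays instead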

Uniforms are something that OpenGL (and therefore WebGL) makes pretty heavy use of, but this is an outdated way of setting shader data. Modern APIs (Vulkan, WebGPU, DirectX 12+) use uniform buffers.

I'm not going to cover those in this series, but the idea is very similar to creating vertex buffers with multiple pieces of data in them - put data into an ArrayBuffer, tell the GPU API which buffer should be bound to what uniform buffer binding point, read it from the shader.
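
To give a flavor of the idea without going into detail, here's a rough WebGL2 sketch - the SceneUniforms block name is hypothetical, and the program's shaders would need to declare a matching layout(std140) uniform block for this to work:

// Upload some uniform data into a buffer...
const uboData = new Float32Array([300, 300, 150, 150]); // e.g. canvasSize + shapeLocation
const ubo = gl.createBuffer();
gl.bindBuffer(gl.UNIFORM_BUFFER, ubo);
gl.bufferData(gl.UNIFORM_BUFFER, uboData, gl.DYNAMIC_DRAW);
gl.bindBuffer(gl.UNIFORM_BUFFER, null);

// ...associate the program's uniform block with binding point 0...
const blockIndex = gl.getUniformBlockIndex(program, 'SceneUniforms');
gl.uniformBlockBinding(program, blockIndex, 0);

// ...and attach our buffer to that binding point before drawing.
gl.bindBufferBase(gl.UNIFORM_BUFFER, 0, ubo);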

Once the per-frame canvasSize uniform is set, a for loop goes through each living shape in the shapes array, binds the uniforms for the shape's location and size, binds the VAO for that shape, issues a draw call, and then cleans up the VAO to avoid accidentally leaking VAO state.

You don't have to call gl.bindVertexArray(null) every time in the loop - you can un-bind it after all shapes have been drawn, or even just leave it out entirely.

I think it's useful to include, at least during development, since it's so easy to accidentally write a bug where you end up clobbering VAO state in some totally unrelated piece of code.

And... that's it! Let's see what the full result is like:

/** Demo constants */
const SPAWNER_CHANGE_TIME = 5;
const CIRCLE_SEGMENT_COUNT = 12;
const SPAWN_RATE = 0.08;
const MIN_SHAPE_TIME = 0.25;
const MAX_SHAPE_TIME = 6;
const MIN_SHAPE_SPEED = 125;
const MAX_SHAPE_SPEED = 350;
const MIN_SHAPE_FORCE = 150;
const MAX_SHAPE_FORCE = 750;
const MIN_SHAPE_SIZE = 2;
const MAX_SHAPE_SIZE = 50;
const MAX_SHAPE_COUNT = 250;

/** Get a WebGL context reference, or print an error and return null */
function getContext(canvas: HTMLCanvasElement) {
  const gl = canvas.getContext("webgl2");
  if (!gl) {
    const isWebGl1Supported = !!document
      .createElement("canvas")
      .getContext("webgl");
    if (isWebGl1Supported) {
      console.error(
        "WebGL 1 is supported, but not v2 - try using a different device or browser"
      );
    } else {
      console.error("WebGL is not supported on this device");
    }
    return null;
  }

  return gl;
}

/** Generate a WebGLBuffer for a piece of STATIC (non-changing) geometry */
function createStaticVertexBuffer(
  gl: WebGL2RenderingContext,
  data: ArrayBuffer | Float32Array | Uint8Array
) {
  const buffer = gl.createBuffer();
  if (!buffer) {
    console.error("Failed to create buffer");
    return null;
  }
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW);
  gl.bindBuffer(gl.ARRAY_BUFFER, null);

  return buffer;
}

/** Create a WebGLProgram from a vertex + fragment shader source */
function createProgram(
  gl: WebGL2RenderingContext,
  vertexShaderSource: string,
  fragmentShaderSource: string
) {
  const vertexShader = gl.createShader(gl.VERTEX_SHADER);
  if (!vertexShader) {
    console.error("Failed to create vertex shader");
    return null;
  }
  gl.shaderSource(vertexShader, vertexShaderSource);
  gl.compileShader(vertexShader);
  if (!gl.getShaderParameter(vertexShader, gl.COMPILE_STATUS)) {
    console.error(
      `Failed to compile vertex shader - ${gl.getShaderInfoLog(vertexShader)}`
    );
    return null;
  }

  const fragmentShader = gl.createShader(gl.FRAGMENT_SHADER);
  if (!fragmentShader) {
    console.error("Failed to create fragment shader");
    return null;
  }
  gl.shaderSource(fragmentShader, fragmentShaderSource);
  gl.compileShader(fragmentShader);
  if (!gl.getShaderParameter(fragmentShader, gl.COMPILE_STATUS)) {
    console.error(
      `Failed to compile fragment shader - ${gl.getShaderInfoLog(fragmentShader)}`
    );
    return null;
  }

  const program = gl.createProgram();
  if (!program) {
    console.error("Failed to create program");
    return null;
  }
  gl.attachShader(program, vertexShader);
  gl.attachShader(program, fragmentShader);
  gl.linkProgram(program);
  if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    console.error(`Failed to link program - ${gl.getProgramInfoLog(program)}`);
    return null;
  }

  return program;
}

function getRandomInRange(min: number, max: number) {
  return Math.random() * (max - min) + min;
}

function getNewSpawnerLocation(canvasWidth: number, canvasHeight: number) {
  return [
    getRandomInRange(150, canvasWidth - 150),
    getRandomInRange(150, canvasHeight - 150),
  ];
}

function buildCircleVertexBufferData() {
  const vertexData = [];

  // Append the vertices for each of the N triangle segments
  for (let i = 0; i < CIRCLE_SEGMENT_COUNT; i++) {
    const vertex1Angle = (i * Math.PI * 2) / CIRCLE_SEGMENT_COUNT;
    const vertex2Angle = ((i + 1) * Math.PI * 2) / CIRCLE_SEGMENT_COUNT;
    const x1 = Math.cos(vertex1Angle);
    const y1 = Math.sin(vertex1Angle);
    const x2 = Math.cos(vertex2Angle);
    const y2 = Math.sin(vertex2Angle);

    // Center vertex is a light blue color and in the middle of the shape
    vertexData.push(
      // Position (x, y)
      0,
      0,
      // Color (r, g, b)
      0.678,
      0.851,
      0.957
    );
    // The other two vertices are along the edges of the circle, and a darker blue color
    vertexData.push(x1, y1, 0.251, 0.353, 0.856);
    vertexData.push(x2, y2, 0.251, 0.353, 0.856);
  }

  return new Float32Array(vertexData);
}

const trianglePositions = new Float32Array([0, 1, -1, -1, 1, -1]);
const squarePositions = new Float32Array([
  -1, 1, -1, -1, 1, -1, -1, 1, 1, -1, 1, 1,
]);
const rgbTriangleColors = new Uint8Array([255, 0, 0, 0, 255, 0, 0, 0, 255]);
const fireyTriangleColors = new Uint8Array([
  // Chili red - E52F0F
  229, 47, 15,
  // Jonquil - F6CE1D
  246, 206, 29,
  // Gamboge - E99A1A
  233, 154, 26,
]);
const indigoGradientSquareColors = new Uint8Array([
  // Top: "Tropical Indigo" - A799FF
  167, 153, 255,
  // Bottom: "Eminence" - 583E7A
  88, 62, 122, 88, 62, 122, 167, 153, 255, 88, 62, 122, 167, 153, 255,
]);
const graySquareColors = new Uint8Array([
  45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45, 45,
]);

class MovingShape {
  constructor(
    public position: [number, number],
    public velocity: [number, number],
    public size: number,
    public forceDirection: [number, number],
    public timeRemaining: number,
    public vao: WebGLVertexArrayObject,
    public numVertices: number
  ) {}

  isAlive() {
    return this.timeRemaining > 0;
  }

  update(dt: number) {
    this.velocity[0] += this.forceDirection[0] * dt;
    this.velocity[1] += this.forceDirection[1] * dt;

    this.position[0] += this.velocity[0] * dt;
    this.position[1] += this.velocity[1] * dt;

    this.timeRemaining -= dt;
  }
}

function buildCircleVao(
  gl: WebGL2RenderingContext,
  buffer: WebGLBuffer,
  posAttrib: number,
  colorAttrib: number
) {
  const vao = gl.createVertexArray();
  if (!vao) {
    console.error("Failed to create interleaved VertexArrayObject");
    return null;
  }
  gl.bindVertexArray(vao);
  gl.enableVertexAttribArray(posAttrib);
  gl.enableVertexAttribArray(colorAttrib);
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.vertexAttribPointer(
    posAttrib,
    2,
    gl.FLOAT,
    false,
    5 * Float32Array.BYTES_PER_ELEMENT,
    0
  );
  gl.vertexAttribPointer(
    colorAttrib,
    3,
    gl.FLOAT,
    false,
    5 * Float32Array.BYTES_PER_ELEMENT,
    2 * Float32Array.BYTES_PER_ELEMENT
  );
  gl.bindBuffer(gl.ARRAY_BUFFER, null);
  gl.bindVertexArray(null);

  return vao;
}

function buildVaoFromTwoBuffers(
  gl: WebGL2RenderingContext,
  positionBuffer: WebGLBuffer,
  colorBuffer: WebGLBuffer,
  posAttrib: number,
  colorAttrib: number
) {
  const vao = gl.createVertexArray();
  if (!vao) {
    console.error("Failed to create parallel VertexArrayObject");
    return null;
  }

  gl.bindVertexArray(vao);
  gl.enableVertexAttribArray(posAttrib);
  gl.enableVertexAttribArray(colorAttrib);
  gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
  gl.vertexAttribPointer(posAttrib, 2, gl.FLOAT, false, 0, 0);
  gl.bindBuffer(gl.ARRAY_BUFFER, colorBuffer);
  gl.vertexAttribPointer(colorAttrib, 3, gl.UNSIGNED_BYTE, true, 0, 0);

  gl.bindBuffer(gl.ARRAY_BUFFER, null);
  gl.bindVertexArray(null);

  return vao;
}

const vertexShaderSource = `#version 300 es
precision mediump float;

in vec2 vertexPosition;
in vec3 vertexColor;

out vec3 fragmentColor;

uniform vec2 canvasSize;
uniform vec2 shapeLocation;
uniform float shapeSize;

void main() {
  fragmentColor = vertexColor;

  vec2 worldPosition = vertexPosition * shapeSize + shapeLocation;
  vec2 clipPosition = (worldPosition / canvasSize) * 2.0 - 1.0;

  gl_Position = vec4(clipPosition, 0.0, 1.0);
}`;

const fragmentShaderSource = `#version 300 es
precision mediump float;

in vec3 fragmentColor;

out vec4 outputColor;

void main() {
  outputColor = vec4(fragmentColor, 1.0);
}`;

interface Geometry {
  vao: WebGLVertexArrayObject;
  numVertices: number;
}

function movementAndColor() {
  // SETUP HERE
  const canvas = document.getElementById("demo-canvas");
  if (!canvas || !(canvas instanceof HTMLCanvasElement)) {
    console.error(
      "Could not find HTML canvas element - check for typos, or loading JavaScript file too early"
    );
    return;
  }
  const gl = getContext(canvas);
  if (!gl) {
    return;
  }
  // Create geometry buffers
  const circleInterleavedBuffer = createStaticVertexBuffer(
    gl,
    buildCircleVertexBufferData()
  );
  const trianglePositionsBuffer = createStaticVertexBuffer(
    gl,
    trianglePositions
  );
  const squarePositionsBuffer = createStaticVertexBuffer(gl, squarePositions);
  const rgbTriangleColorsBuffer = createStaticVertexBuffer(
    gl,
    rgbTriangleColors
  );
  const fireyTriangleColorsBuffer = createStaticVertexBuffer(
    gl,
    fireyTriangleColors
  );
  const indigoGradientSquareColorsBuffer = createStaticVertexBuffer(
    gl,
    indigoGradientSquareColors
  );
  const graySquareColorsBuffer = createStaticVertexBuffer(gl, graySquareColors);

  if (
    !circleInterleavedBuffer ||
    !trianglePositionsBuffer ||
    !squarePositionsBuffer ||
    !rgbTriangleColorsBuffer ||
    !fireyTriangleColorsBuffer ||
    !indigoGradientSquareColorsBuffer ||
    !graySquareColorsBuffer
  ) {
    console.error("Failed to build vertex buffers!");
    return;
  }

  // Create effect and get attribute+uniform handles
  const motionAndColorProgram = createProgram(
    gl,
    vertexShaderSource,
    fragmentShaderSource
  );
  if (!motionAndColorProgram) return;

  const positionAttribLocation = gl.getAttribLocation(
    motionAndColorProgram,
    "vertexPosition"
  );
  const colorAttribLocation = gl.getAttribLocation(
    motionAndColorProgram,
    "vertexColor"
  );
  const canvasSizeUniformLocation = gl.getUniformLocation(
    motionAndColorProgram,
    "canvasSize"
  );
  const shapeLocationUniformLocation = gl.getUniformLocation(
    motionAndColorProgram,
    "shapeLocation"
  );
  const shapeSizeUniformLocation = gl.getUniformLocation(
    motionAndColorProgram,
    "shapeSize"
  );

  if (positionAttribLocation < 0 || colorAttribLocation < 0) {
    console.error(
      `Failed to get attributes - position=${positionAttribLocation}, color=${colorAttribLocation}`
    );
    return;
  }
  if (
    !canvasSizeUniformLocation ||
    !shapeLocationUniformLocation ||
    !shapeSizeUniformLocation
  ) {
    console.error(
      `Failed to get uniform locations - canvasSize=${!!canvasSizeUniformLocation} shapeLocation=${!!shapeLocationUniformLocation} shapeSize=${!!shapeSizeUniformLocation}`
    );
    return;
  }

  // Create Vertex Array Objects (VAOs) - input assembler states for each piece of geometry
  const circleVao = buildCircleVao(
    gl,
    circleInterleavedBuffer,
    positionAttribLocation,
    colorAttribLocation
  );
  const rgbTriangleVao = buildVaoFromTwoBuffers(
    gl,
    trianglePositionsBuffer,
    rgbTriangleColorsBuffer,
    positionAttribLocation,
    colorAttribLocation
  );
  const fireyTriangleVao = buildVaoFromTwoBuffers(
    gl,
    trianglePositionsBuffer,
    fireyTriangleColorsBuffer,
    positionAttribLocation,
    colorAttribLocation
  );
  const indigoGradientSquareVao = buildVaoFromTwoBuffers(
    gl,
    squarePositionsBuffer,
    indigoGradientSquareColorsBuffer,
    positionAttribLocation,
    colorAttribLocation
  );
  const graySquareVao = buildVaoFromTwoBuffers(
    gl,
    squarePositionsBuffer,
    graySquareColorsBuffer,
    positionAttribLocation,
    colorAttribLocation
  );
  if (
    !circleVao ||
    !rgbTriangleVao ||
    !fireyTriangleVao ||
    !indigoGradientSquareVao ||
    !graySquareVao
  ) {
    console.error(
      `Failed to build VAOs: circle=${!!circleVao} rgbTri=${!!rgbTriangleVao} fireyTri=${!!fireyTriangleVao} indigoSq=${!!indigoGradientSquareVao} graySq=${!!graySquareVao}`
    );
    return;
  }

  const geometryList: Geometry[] = [
    { vao: circleVao, numVertices: CIRCLE_SEGMENT_COUNT * 3 },
    { vao: rgbTriangleVao, numVertices: 3 },
    { vao: fireyTriangleVao, numVertices: 3 },
    { vao: indigoGradientSquareVao, numVertices: 6 },
    { vao: graySquareVao, numVertices: 6 },
  ];

  // Simulation logic data
  let timeToNextSpawn = SPAWN_RATE;
  let timeToNextSpawnerLocationChange = SPAWNER_CHANGE_TIME;
  let spawnPosition = getNewSpawnerLocation(canvas.width, canvas.height);
  let shapes: MovingShape[] = [];

  let lastFrameTime = performance.now();
  function frame() {
    const thisFrameTime = performance.now();
    const timeElapsed = (thisFrameTime - lastFrameTime) / 1000;
    lastFrameTime = thisFrameTime;

    // UPDATE HERE
    timeToNextSpawnerLocationChange -= timeElapsed;
    if (timeToNextSpawnerLocationChange < 0) {
      timeToNextSpawnerLocationChange = SPAWNER_CHANGE_TIME;
      spawnPosition = getNewSpawnerLocation(canvas.width, canvas.height);
    }
    timeToNextSpawn -= timeElapsed;
    while (timeToNextSpawn < 0) {
      timeToNextSpawn += SPAWN_RATE;

      const movementAngle = getRandomInRange(0, Math.PI * 2);
      const movementSpeed = getRandomInRange(MIN_SHAPE_SPEED, MAX_SHAPE_SPEED);
      const forceAngle = getRandomInRange(0, Math.PI * 2);
      const forceMagnitude = getRandomInRange(MIN_SHAPE_FORCE, MAX_SHAPE_FORCE);

      const position: [number, number] = [spawnPosition[0], spawnPosition[1]];
      const velocity: [number, number] = [
        Math.sin(movementAngle) * movementSpeed,
        Math.cos(movementAngle) * movementSpeed,
      ];
      const force: [number, number] = [
        Math.sin(forceAngle) * forceMagnitude,
        Math.cos(forceAngle) * forceMagnitude,
      ];
      const size = getRandomInRange(MIN_SHAPE_SIZE, MAX_SHAPE_SIZE);
      const timeToLive = getRandomInRange(MIN_SHAPE_TIME, MAX_SHAPE_TIME);

      const geometry =
        geometryList[Math.floor(Math.random() * geometryList.length)];

      const shape = new MovingShape(
        position,
        velocity,
        size,
        force,
        timeToLive,
        geometry.vao,
        geometry.numVertices
      );
      shapes.push(shape);
    }
    for (let i = 0; i < shapes.length; i++) {
      shapes[i].update(timeElapsed);
    }
    shapes = shapes
      .filter((shape) => shape.isAlive())
      .slice(0, MAX_SHAPE_COUNT);

    // RENDER HERE
    canvas.width = canvas.clientWidth;
    canvas.height = canvas.clientHeight;
    gl.clearColor(0.08, 0.08, 0.08, 1.0);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

    gl.viewport(0, 0, canvas.width, canvas.height);

    gl.useProgram(motionAndColorProgram);
    gl.uniform2f(canvasSizeUniformLocation, canvas.width, canvas.height);

    for (let i = 0; i < shapes.length; i++) {
      gl.uniform2f(
        shapeLocationUniformLocation,
        shapes[i].position[0],
        shapes[i].position[1]
      );
      gl.uniform1f(shapeSizeUniformLocation, shapes[i].size);
      gl.bindVertexArray(shapes[i].vao);
      gl.drawArrays(gl.TRIANGLES, 0, shapes[i].numVertices);
      gl.bindVertexArray(null);
    }

    requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
}

try {
  movementAndColor();
} catch (e) {
  console.error(`Uncaught JavaScript exception: ${e}`);
}

Wrapping up

That was a ton of code! At the top of this tutorial is a link to a GitHub repo, a live demo, and a YouTube video covering the same material.

If you're feeling up to a challenge, try to change the code:

  1. Add more shapes with different colors to the geometryList
  2. Make shapes bounce off the walls of the demo
  3. Add another circle shape with a gradient that starts with your favorite color, and ends with the same background color as the canvas.
  4. Add a uniform to the fragment shader for the time remaining in the shape, and blend the color in with the canvas background color as the time remaining approaches 0.

In the next tutorial, we'll be making the jump into 3D graphics. There won't be any new WebGL API ideas, but there will be a lot more math. I hope you keep reading!