WebGL 01: "Hello, triangle!"

In this tutorial, you will learn how to draw the most basic shape in graphics programming: a triangle.

  1. The core stages of the graphics pipeline
  2. Uploading geometry to the GPU
  3. Programming basic vertex shaders in GLSL
  4. Programming basic fragment shaders in GLSL
  5. Executing draw commands

The graphics pipeline

Real-time rendering APIs use a triangle rasterization pipeline to render geometry efficiently while still giving artists and programmers the freedom to create a huge variety of effects.

It's much easier to explain all of this with a specific application in mind: writing a video game.

For both 2D and 3D effects, it turns out a lot of normally very hard things become very simple if you only use triangles. If you put enough triangles together, you can make all sorts of shapes, even approximating smooth surfaces with enough of them.

Modern graphics applications almost exclusively use triangles. It's much more obvious in old blocky video games, but even the more smooth geometry of modern games is just a lot of triangles (with some exceptions).

Wireframe shader adapted from mattdesl/webgl-wireframes

For this tutorial, we'll focus on a single triangle. It'll be centered on the game surface, and take up half of the vertical and horizontal space. This triangle right here:

WebGL is focused on passing information about triangles through five key stages as efficiently as possible:

  1. Vertex Shading: Place an input point (vertex) in clip space (defined below)
    • "Hello, Triangle" will run this 3 times - one for each vertex on one triangle
  2. Primitive Assembly: Organize points (vertices) into triangles
    • This is run once - to organize our 3 input vertices into a triangle
  3. Rasterization: Identify which pixels are inside an input triangle
    • Find which of the 360,000 (600x600) frame pixels belong to our triangle
    • Output the (approximately) 45,000 included pixels
  4. Fragment Shading: Decide the color for a single pixel
    • Invoked (about) 45,000 times - once per pixel in our triangle!
    • Output the color indigo - 29.4% red, 0% green, 51% blue
  5. Output Merging: Update the output frame image, usually by replacing the existing color with the fragment shader output
    • Replace existing pixels with our newly shaded pixel fragment (default behavior)

I've made a YouTube version of this tutorial that nicely illustrates (and animates!) each of these stages.

GPU Programs and GPU Memory

Modern computers have a general-purpose Central Processing Unit (CPU) built for general programming, and also a Graphics Processing Unit (GPU) built specifically for handling graphics. GPUs sacrifice the ability to do individual tasks quickly in exchange for doing huge amounts of tasks at the same time.

An example with realistic(-ish) numbers:

Say a CPU can calculate a pixel in 50 nanoseconds, but it takes the GPU 300 nanoseconds (six times longer). The CPU can process 8 pixels simultaneously, and the GPU can process 1024.

A 1080p image has 2,073,600 pixels, which takes the CPU 259,200 batches, or the GPU 2,025 batches.

Even though each individual pixel is 6x slower on the GPU, the GPU still finishes the frame in well under a millisecond (about 0.6 ms), while it takes the CPU nearly 13 milliseconds to do the same work!
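
Here's that arithmetic spelled out as a quick sketch (the timings above are illustrative, not real hardware measurements):

const PIXELS = 1920 * 1080; // 2,073,600 pixels in a 1080p frame

// Batches needed if the CPU handles 8 pixels at a time and the GPU handles 1024
const cpuBatches = Math.ceil(PIXELS / 8);    // 259,200 batches
const gpuBatches = Math.ceil(PIXELS / 1024); // 2,025 batches

// Each batch takes as long as a single pixel, since its pixels run simultaneously
const cpuMillis = (cpuBatches * 50) / 1e6;  // ~12.96 ms
const gpuMillis = (gpuBatches * 300) / 1e6; // ~0.61 ms
console.log(`CPU: ${cpuMillis} ms, GPU: ${gpuMillis} ms`);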

You can almost think of the CPU and GPU as two separate computers, each with their own way of running code - and, importantly, each with their own distinct memory.

Code run on GPUs needs to be written in a GPU-friendly programming language, and data used by that code needs to be put in VRAM buffers. To make things more interesting, GPUs typically don't just have one single VRAM pool, but instead many VRAM areas, each one optimized for different types of memory access.

So, before we can draw a triangle, we need to do a few things to get the GPU ready:

  • Define our triangle vertices using GPU-friendly data types
  • Create a GPU memory buffer, and fill it up with our triangle data
  • Define vertex shader code, compile it, and send it to the GPU
  • Define fragment shader code, compile it, and send it to the GPU

Clip Space

One last thing before getting into code: that "clip space" I mentioned earlier.

Games can support a bunch of different output resolutions - 1080p, 1440p, 4K, 8K, 800x600 if you're running it on an actual potato, whatever. At each of these output resolutions, the relative location of each triangle will stay the same.

For example: You might draw a player character starting at the (X, Y) coordinates (192, 108) on a 1080p monitor, or at (384, 216) on a 4K monitor. But you could also describe that position as (10%, 10%) on both.

Because the actual output resolution doesn't matter until the rasterizer stage anyways, vertex shaders operate in clip space, which has three dimensions:

The X dimension goes from the left of the frame (-1) to the right of the frame (+1).

The Y dimension goes from the bottom of the frame (-1) to the top of the frame (+1).

The Z dimension defines draw order - anything with Z < -1 or Z > 1 is out of frame and should not be drawn, and when depth testing is enabled, the pixel fragment closest to the viewer (the lowest Z value, by default) is the one drawn to the final image.
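
We won't need this in the tutorial (our triangle will be defined in clip space directly), but as a rough sketch, converting a pixel coordinate into clip space could look something like this - the helper name is made up:

// Hypothetical helper: convert a pixel coordinate (origin at the top-left,
// like most 2D canvas APIs) into clip space.
function pixelToClipSpace(x, y, frameWidth, frameHeight) {
  return [
    (x / frameWidth) * 2 - 1,  // 0..width maps to -1..+1, left to right
    1 - (y / frameHeight) * 2, // 0..height maps to +1..-1, top to bottom
  ];
}

// The center of a 600x600 frame: pixelToClipSpace(300, 300, 600, 600) -> [0, 0]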

Create a "Hello, Triangle" HTML page

For any web app (WebGL or not), an HTML page is the entry point. For WebGL, we'll need a canvas element:

index.html
<!doctype html>
<html>
  <head><title>Hello, Triangle!</title></head>
  <body>
    <canvas id="demo-canvas" width="600" height="600"></canvas>
    <script src="hello-triangle.js"></script>
  </body>
</html>

I've put some extra boilerplate and styling on the demo, YouTube, and GitHub versions of this code, but that's the gist. Make a canvas element, and make sure to give it an id, a width, and a height. After that, include a script file with the JavaScript code for this tutorial.

Set up WebGL

The canvas HTML tag gives us a nice area on the page reserved for our WebGL generated image. Get a reference to it, and open up a WebGL context:

hello-triangle.js
/** @type {HTMLCanvasElement|null} */
const canvas = document.getElementById('demo-canvas');
if (!canvas) {
  throw new Error('Could not get canvas DOM reference - check for typos');
}

const gl = canvas.getContext('webgl2');
if (!gl) {
  throw new Error('This browser does not support WebGL 2');
}

There's an IDE hint (the "HTMLCanvasElement" comment) and a few lines of error handling, but the important bit here is:

const gl = canvas.getContext('webgl2');

There's a handful of APIs that can be used to fill a canvas element with JavaScript-generated images, including two different versions of WebGL. WebGL 2 has a few more modern features that are worth learning, but the majority of code in these tutorials will also work with WebGL 1.

The returned WebGL2RenderingContext object is what we will use to interact with all of WebGL. The variable name gl is a bit of a convention, but not special other than that. It's nice cosmetically, since the OpenGL C API calls are all prefixed by "gl" and the WebGL equivalents aren't, but name it what you want.

Define triangle geometry

Since we're only drawing one triangle, we might as well define it in clip space from the very beginning. Also, since we're only drawing one triangle and it won't be overlapping anything, we can also skip defining the "Z" value in our JavaScript code, and fill that in with our vertex shader later.

We build our triangle data as a list of X, Y values, like this:

const triangleVertices = [
  // Top middle
  0.0, 0.5,
  // Bottom left
  -0.5, -0.5,
  // Bottom right
  0.5, -0.5
];

In clip space, those three vertices are:

  1. Halfway left-to-right (0.0) and 3/4 of the way bottom-to-top (0.5)
  2. 1/4 left-to-right (-0.5) and 1/4 of the way bottom-to-top (-0.5)
  3. 3/4 left-to-right (0.5) and 1/4 of the way bottom-to-top (-0.5)

This is a JavaScript array of numbers, which has two properties that aren't GPU friendly.

First, JavaScript arrays aren't necessarily contiguous - meaning the actual binary data for each element isn't necessarily right next to the other elements in RAM.

Second, JavaScript arrays are made up of the default JavaScript number type, which is a 64-bit floating point number. GPUs really prefer 32-bit floats.

Thankfully, the solution to both problems exists in something that can be built from a standard JS array: the Float32Array type.

const triangleGeoCpuBuffer = new Float32Array(triangleVertices);

Great! Now we have data in a GPU-friendly format, but it's still in main RAM and not GPU-accessible VRAM. To move the data over, create a WebGLBuffer object and fill it with data using the bufferData command.

const triangleGeoBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, triangleGeoBuffer);
gl.bufferData(gl.ARRAY_BUFFER, triangleGeoCpuBuffer, gl.STATIC_DRAW);
gl.bindBuffer(gl.ARRAY_BUFFER, null);

Soooo... gl.ARRAY_BUFFER? gl.STATIC_DRAW?

Old-ish graphics APIs like OpenGL were designed to hide the complexity of GPU memory pools from developers, and instead only ask developers to provide hints about how the data in question will be used. That's what gl.STATIC_DRAW is: a hint that this buffer's contents will be written once and read many times, so the driver can pick an appropriate memory pool.

WebGL provides binding points instead of allowing direct write-access to GPU memory. The gl.ARRAY_BUFFER point is used for vertex attributes - position, color, texture coordinates, etc. Instead of interacting directly with a buffer using a WebGL call, you interact with whatever buffer is bound to the specified binding point.

Binding points are very easy to mess up! When developing WebGL applications, un-bind the ARRAY_BUFFER slot when you're done with it to make any later bugs in your code easier to find.
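
If you end up creating more than one buffer, a tiny helper keeps that bind/fill/un-bind bookkeeping in one place. This is just a sketch built from the calls shown above, not part of the tutorial code:

// Sketch of a convenience helper: upload a Float32Array into a new
// STATIC_DRAW vertex buffer, leaving ARRAY_BUFFER un-bound afterwards.
function createStaticVertexBuffer(gl, data) {
  const buffer = gl.createBuffer();
  if (!buffer) {
    throw new Error('Failed to allocate a WebGL buffer');
  }
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW);
  gl.bindBuffer(gl.ARRAY_BUFFER, null);
  return buffer;
}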

Creating a Vertex Shader

The graphics pipeline requires two custom functions to be written by the application developer and uploaded to the GPU - the vertex shader is the first one.

The primary job of the vertex shader is to take all the input vertex attributes and use them to generate a final clip space position output.

Our case is dead easy, since we already defined our triangle in XY clip space. But, the vertex shader is also where you could add effects like animation, geometry distortion, etc.

This is the code for our vertex shader:

Vertex Shader
#version 300 es
precision mediump float;

in vec2 vertexPosition;

void main() {
  gl_Position = vec4(vertexPosition, 0.0, 1.0);
}

Shader code is written in GLSL, the OpenGL Shading Language.

The important things to note here are:

  • in vec2 vertexPosition; - this shader takes an input "vertexPosition" with 2 components (X and Y)
  • void main() { ... } - GLSL is C-like, and the entry point for a shader is typically "void main"
  • gl_Position = vec4(...); - this is a special GLSL variable for the final clip space position of a vertex.

gl_Position takes four values - the first two are the actual clip-space X and Y coordinates. The third, Z, is for depth information, and helps the GPU decide which pixel fragment should be shown if multiple triangles draw to the same pixel. The final value, W, is special for 3D effects.

I'll cover W a lot more in the third tutorial in this series, but the TL;DR is that X, Y, and Z are all divided by W before being used by the rasterizer. Any number divided by 1 is unchanged, so setting W to 1 here cancels the effect.
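
As a tiny worked example of that divide (plain arithmetic, nothing WebGL-specific - the function name is made up):

// The rasterizer effectively sees clip-space (x, y, z, w) as (x/w, y/w, z/w).
function clipToNdc([x, y, z, w]) {
  return [x / w, y / w, z / w];
}

// clipToNdc([0.5, -0.5, 0.0, 1.0]) -> [0.5, -0.5, 0.0]   (w = 1 changes nothing)
// clipToNdc([0.5, -0.5, 0.0, 2.0]) -> [0.25, -0.25, 0.0] (w = 2 pulls everything toward the center)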

Shader code can be written in-line in JavaScript using multi-line strings before being compiled for the specific user's GPU and checked for errors, like so:

Compiling the Vertex Shader
const vertexShaderSourceCode = `#version 300 es
precision mediump float;

in vec2 vertexPosition;

void main() {
  gl_Position = vec4(vertexPosition, 0.0, 1.0);
}`;

const vertexShader = gl.createShader(gl.VERTEX_SHADER);
gl.shaderSource(vertexShader, vertexShaderSourceCode);
gl.compileShader(vertexShader);
if (!gl.getShaderParameter(vertexShader, gl.COMPILE_STATUS)) {
  const errorMessage = gl.getShaderInfoLog(vertexShader);
  showError(`Failed to compile vertex shader: ${errorMessage}`);
  return;
}

Always check for compile errors when developing WebGL apps! It's incredibly easy to make mistakes here. Error checking is a bit weird (well, C-style). The gist of it is to check for compile success with getShaderParameter (checking specifically the gl.COMPILE_STATUS parameter), and report errors if it reports failure. There will be a pretty nice error message that you can get with the gl.getShaderInfoLog method. One note on the snippet above: showError isn't a built-in - it's a small error-reporting helper from this tutorial's boilerplate (console.error works just as well) - and the early return assumes this code lives inside a function.

You can also keep shaders in their own files, and load them asynchronously with JavaScript, or at build time using Webpack.
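
For example, a minimal sketch of loading shader source at runtime might look like this - the file name vertex.glsl is hypothetical, and this assumes the file is served next to your page:

// Sketch: fetch GLSL source from a separate file instead of an inline string.
async function loadShaderSource(url) {
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Failed to load shader source from ${url}`);
  }
  return response.text();
}

// const vertexShaderSourceCode = await loadShaderSource('vertex.glsl');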

Creating a Fragment Shader

The second custom GPU function needed to draw anything is a fragment shader. The fragment shader takes some pixel that the rasterizer has identified as part of a triangle, and declares what color that fragment should be.

The vertex shader has the special gl_Position output, but fragment shaders are capable of writing to multiple outputs for some advanced effects. Older versions of GLSL have a special gl_FragColor variable, but the GLSL version we're using expects a user-defined output - if there's exactly one output of the right type, it's used as the fragment color.

Fragment Shader
#version 300 es
precision mediump float;

out vec4 outputColor;

void main() {
  outputColor = vec4(0.294, 0.0, 0.51, 1.0);
}

Nice and easy - define an output color variable (the name is unimportant; the type is what matters), and set it to the RGBA color for indigo.

Compiling and sending this shader works the same as it did for the vertex shader, with minor adjustments:

Compiling the Fragment Shader
const fragmentShaderSourceCode = `#version 300 es
precision mediump float;

out vec4 outputColor;

void main() {
  outputColor = vec4(0.294, 0.0, 0.51, 1.0);
}`;

const fragmentShader = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(fragmentShader, fragmentShaderSourceCode);
gl.compileShader(fragmentShader);
if (!gl.getShaderParameter(fragmentShader, gl.COMPILE_STATUS)) {
  const errorMessage = gl.getShaderInfoLog(fragmentShader);
  showError(`Failed to compile fragment shader: ${errorMessage}`);
  return;
}

This tutorial is WET (Write Everything Twice) for shader creation code because I avoid unnecessary abstraction in tutorials. Your code should definitely be DRY (Don't Repeat Yourself) - write a buildShader(shaderType, shaderCode) method for any real project to avoid nasty copy/paste problems everywhere.
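
That helper might look something like the sketch below - it's built from the same calls used above (with gl passed in explicitly), but it throws instead of calling showError, so adjust the error handling to taste:

// Sketch of a DRY shader helper: compile one shader of the given type,
// throwing with the driver's info log if compilation fails.
function buildShader(gl, shaderType, shaderCode) {
  const shader = gl.createShader(shaderType);
  gl.shaderSource(shader, shaderCode);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    const errorMessage = gl.getShaderInfoLog(shader);
    gl.deleteShader(shader);
    throw new Error(`Failed to compile shader: ${errorMessage}`);
  }
  return shader;
}

// const vertexShader = buildShader(gl, gl.VERTEX_SHADER, vertexShaderSourceCode);
// const fragmentShader = buildShader(gl, gl.FRAGMENT_SHADER, fragmentShaderSourceCode);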

Combining Shaders into a WebGLProgram

WebGL never uses just a vertex or a fragment shader on its own, and it needs to make sure that the user-defined outputs of a vertex shader are compatible with the inputs of a fragment shader. We aren't passing any custom values between the two yet, but this will come up later!

The object for a compatible vertex+fragment shader pair is confusingly called a program. To set up a program, specify which vertex and fragment shaders should be used, and link the program to check for errors:

Building a WebGLProgram
const helloTriangleProgram = gl.createProgram();
gl.attachShader(helloTriangleProgram, vertexShader);
gl.attachShader(helloTriangleProgram, fragmentShader);
gl.linkProgram(helloTriangleProgram);
if (!gl.getProgramParameter(helloTriangleProgram, gl.LINK_STATUS)) {
  const errorMessage = gl.getProgramInfoLog(helloTriangleProgram);
  showError(`Failed to link GPU program: ${errorMessage}`);
  return;
}

Notice the familiar error checking code, this time using Program instead of Shader and checking for LINK_STATUS instead of COMPILE_STATUS. Same idea, different build step.

Once you have your WebGLProgram, the next step is to get the handles for the vertex shader input attributes, so that you can properly wire up your vertex shader data. Unlike other WebGL handle types, an attribute handle is an integer that refers to a location.

You can figure out the location of each attribute you need in one of a few ways:

  1. Know offhand that the first listed "input" has ID 0, and each following "input" has the next number (1, 2, 3...)
  2. Add a GLSL layout(location = n) annotation to each attribute
  3. Ask WebGL what the attribute location is by the name of the input variable

I usually prefer the third option. If you edit your GLSL code and add, remove, or re-order input attributes, you don't have to worry about keeping things pretty or updating a bunch of hard-coded values in your vertex buffer binding code.

Also, if the graphics driver's GLSL compiler optimizes an unused input away, the attribute appears as invalid, which gives a nice hint that maybe something is wrong in your shader code (e.g. an expected input is unused because you forgot to include it in the math somewhere).

const vertexPositionAttributeLocation =
    gl.getAttribLocation(helloTriangleProgram, 'vertexPosition');
if (vertexPositionAttributeLocation < 0) {
  showError(`Failed to get attribute location for vertexPosition`);
  return;
}

Once this is finished, the setup code is complete - congratulations!

The render loop (theory)

Okay. So at this point, all the data that needs to be on the GPU (geometry, shaders) is on the GPU, and we have all the WebGL handles we need in order to actually draw this thing.

As I talked about in the theory section, there are five graphics pipeline stages that we have to worry about as graphics programmers - vertex shading, primitive assembly, rasterization, fragment shading, and output merging. Configuring those five stages happens in the following steps:

  • Binding a WebGL program (set vertex and fragment shader code)
  • Binding vertex attributes to vertex buffers (set vertex shader input)
  • Setting up the HTML canvas for the output merger to draw to (output merger)
  • Setting the viewport (tells the rasterizer which parts of the image to use)
  • Executing a draw call for our geometry (includes a parameter for primitive assembly type)

The draw call is the final step that actually starts the pipeline processing steps, but other than that the order of setting up the pipeline doesn't matter.

For this tutorial, do them in whichever order you'd like! I'm going to use a specific ordering that reflects how code might be organized for a large scene with many objects and many different shaders:

  • Set up the HTML canvas for the next frame
  • Set up the viewport for the next frame
  • For each shader... (in our case, only one)
    • Set the shader for the current effect
    • For each object... (in our case, only one)
      • Bind vertex attributes to that object's vertex buffers
      • Execute a draw call for that object
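
In rough pseudo-JavaScript, that ordering looks something like the sketch below - the programs and objects collections are hypothetical (our demo has exactly one of each), and the WebGL calls involved are covered one by one in the next few sections:

// Rough sketch of the per-frame ordering above - not the literal tutorial code.
function renderFrame(gl, canvas, programs) {
  // Set up the HTML canvas and viewport for the next frame
  canvas.width = canvas.clientWidth;
  canvas.height = canvas.clientHeight;
  gl.clearColor(0.08, 0.08, 0.08, 1.0);
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  gl.viewport(0, 0, canvas.width, canvas.height);

  for (const program of programs) {
    // Set the shader for the current effect
    gl.useProgram(program.webglProgram);
    for (const object of program.objects) {
      // Bind vertex attributes to that object's vertex buffers (hypothetical helper)
      object.bindVertexAttribs(gl);
      // Execute a draw call for that object
      gl.drawArrays(gl.TRIANGLES, 0, object.vertexCount);
    }
  }
}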

Prepare the HTML canvas

The way WebGL gets an image to the canvas is somewhat indirect. Behind the scenes, the user's browser sets up two areas of memory, each large enough to hold an image that will be shown on the HTML canvas. One is always currently being shown to the user; the other is being written to by whatever WebGL commands the programmer issues. In between JavaScript execution frames, the browser swaps the two if there have been any updates. This all happens behind the scenes, though in APIs like DirectX and Vulkan you do have to think about it a bit more!

Interestingly, the output image can be larger or smaller than the canvas. The browser will stretch or shrink it, as appropriate, to fit the correct area on the final HTML page.

CSS style on the HTML canvas element dictates how large the output appears to the user. The width and height properties on the HTML element itself dictate how large the WebGL image is that your app will be drawing to.

By setting the width and height properties on an HTML canvas in JavaScript, the browser will re-create both image buffers with the appropriate size. You can read the CSS size of the HTML canvas element using the clientWidth and clientHeight properties.

Another important thing is to clear the color and depth buffers. There might be data already in the GPU memory used for our output texture, especially if we were previously drawing to it in another frame. The color buffer holds the actual colorful image that will be displayed to the page, and the depth buffer holds information about the "depth" of each pixel. If two triangles overlap each other, the triangle that's "closer" on the depth buffer is drawn. Clear both before drawing anything on a new frame!

canvas.width = canvas.clientWidth;
canvas.height = canvas.clientHeight;
gl.clearColor(0.08, 0.08, 0.08, 1.0);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

There is one other tricky note - high pixel density devices like smartphones with tiny but high-resolution screens will often implement a zoom of sorts. One CSS pixel may correspond to several of the actual physical glowing square pixels on a device. Adjusting for this is more complicated than it would seem, but a Good Enough (tm) approach is to multiply CSS pixels by the global devicePixelRatio variable.
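
That adjustment might look roughly like this (swap it in for the canvas sizing lines above if you want sharper output on high-DPI screens):

// Rough high-DPI adjustment: size the WebGL image in physical device pixels
// rather than CSS pixels.
canvas.width = canvas.clientWidth * devicePixelRatio;
canvas.height = canvas.clientHeight * devicePixelRatio;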

The last step is to tell WebGL which part of the output canvas should be used for drawing - this is called the viewport.

gl.viewport(0, 0, canvas.width, canvas.height);

One simple enough call. The first two parameters 0, 0 are the x, y starting position of the viewport, and the last two parameters are the width and height of the output area. (0, 0, canvas.width, canvas.height) draws to the entire canvas, which is a good starting place.

Configure the graphics pipeline

We're getting close, I promise! Just a few more lines of code. Really.

The output image is ready to receive an image from the WebGL graphics pipeline; the last thing to do is configure the last few stages and send out the draw call. Those last few stages are the vertex and fragment shaders (collectively, the WebGLProgram), the input assembler, and the primitive assembler.

Attaching the shaders to the pipeline is dead simple, thankfully.

gl.useProgram(helloTriangleProgram);

Configuring the input assembler to correctly read data from the triangle geometry buffer into the vertex shader is... significantly less simple. Code first!

gl.enableVertexAttribArray(vertexPositionAttributeLocation);
gl.bindBuffer(gl.ARRAY_BUFFER, triangleGeoBuffer);
gl.vertexAttribPointer(
  /* index: vertex attrib location */
  vertexPositionAttributeLocation,
  /* size: number of components in the attribute */
  2,
  /* type: type of data in the GPU buffer for this attribute */
  gl.FLOAT,
  /* normalized: when reading integer data into a float vec(n) input, should WebGL normalize the ints first? */
  false,
  /* stride: bytes between starting byte of attribute for a vertex and the same attrib for the next vertex */
  2 * Float32Array.BYTES_PER_ELEMENT,
  /* offset: bytes between the start of the buffer and the first byte of the attribute */
  0
);

Enabling the vertex position attribute tells WebGL that this attribute will be used, and should be allowed. I don't know why this is necessary, but it is.

Forgetting to enable vertex attributes is the cause of a lot of confused graphics programmers and blank screens. Myself included.

I like to enable attributes at the same time as binding shaders, usually as part of whatever shader wrapping class I've written for the particular effect.

Just like with the bufferData call earlier, WebGL does not allow you to directly interact with GPU buffers - you first have to bind the buffer to an appropriate binding point. Once again, ARRAY_BUFFER is the appropriate binding point for vertex buffers.

Once something is bound to the ARRAY_BUFFER binding point, the vertexAttribPointer call tells WebGL how to read data from the attached buffer into the appropriate vertex shader input attribute. This is a very complicated call; I highly suggest reading the vertexAttribPointer Mozilla documentation page.

The parameters are:

  1. Index: Which vertex attribute is being specified
  2. Size: The number of components (not bytes) in the input attribute. So... float=1, vec2=2, vec3=3, vec4=4.
  3. Type: The type of data in the bound GPU buffer for this attribute (see note below). We're only using floats, so gl.FLOAT
  4. Normalized: This parameter is ignored when reading floats from GPU buffers, but affects how int data is read into float attributes (see note below)
  5. Stride: How many bytes of data to move forward in the buffer to find the next attribute.
  6. Offset: How many bytes of data to skip when reading input data.

On the type and normalized parameters: you can read integer GPU buffer data into float vertex shader inputs. This can be a useful trick for saving space - if you have thousands of vertices in some geometry and don't need a lot of detail, you can store quantized 16-bit integers instead of full-size 32-bit floats and save a ton of data!

What normalized does in this case is decide how those ints are converted to floats. When it's set to false, the nearest float is used - e.g., 17 (int) is read as 17.0 (float). When set to true, the value is first divided by the maximum integer value to get a float between 0.0 and 1.0 (for unsigned ints) or -1.0 and 1.0 (for signed ints). Example: 127 (8-bit unsigned int) is read as approximately 0.5 (127 / 255).

Or, if you're storing data that exists only between 0 and 1 but don't need very much precision (e.g., a percentage to apply some fragment shading effect down the line), you can store 8-bit integers between 0-255 and normalize them into a float percentage by setting normalized to true.
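
As a hypothetical example of that trick (nothing our triangle needs), reading unsigned bytes into a float attribute would look something like this - someByteAttribLocation and its buffer are assumed to already exist:

// Hypothetical: read 8-bit unsigned ints (0-255) into a float vertex attribute,
// normalized into the 0.0-1.0 range.
gl.vertexAttribPointer(
  someByteAttribLocation,
  /* size */ 1,
  /* type */ gl.UNSIGNED_BYTE,
  /* normalized */ true,
  /* stride */ 1 * Uint8Array.BYTES_PER_ELEMENT,
  /* offset */ 0
);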

Most of those parameters make pretty good sense for our case, but stride and offset are worth a closer look.

Stride is how many bytes there are between the start of one vertex's data and the start of the next - each vertex here is two 32-bit float numbers, 2 * 4 (bytes) = 8 bytes. I like doing that multiplication in the code itself - in C/C++ I would write it as 2 * sizeof(float); JavaScript has a similar-ish notion with 2 * Float32Array.BYTES_PER_ELEMENT, but both are really equivalent to just setting this parameter to 8.

Offset is also easy for this example, the data starts right at the beginning of the buffer, so don't skip any bytes. 0 bytes.

Dispatch the draw call

The graphics pipeline is set up! Time to actually draw this triangle, finally!!

gl.drawArrays(gl.TRIANGLES, 0, 3);

The extra super observant will notice there's one last pipeline stage we never configured - the primitive assembler. This is configured in the draw call itself with the first parameter - gl.TRIANGLES. Draw triangles, organized into groups of 3 - first triangle is vertices 0, 1, 2, second is 3, 4, 5, and so on.

The second parameter specifies how many vertices in the vertex buffer to skip before drawing. Don't skip any, so... 0. This becomes useful when you have geometry for a lot of different objects all packed together into one vertex buffer.

The final parameter is the number of vertices to draw, NOT the number of triangles. Easy mistake to make, another big cause of mysteriously blank screens or partially drawn geometry.
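
For example (hypothetically - our buffer only holds 3 vertices), a vertex buffer packed with two triangles could be drawn with two calls like this:

// Hypothetical: a buffer holding 6 vertices (two triangles).
// Draw the first triangle (vertices 0, 1, 2)...
gl.drawArrays(gl.TRIANGLES, 0, 3);
// ...then skip those 3 vertices and draw the second (vertices 3, 4, 5).
gl.drawArrays(gl.TRIANGLES, 3, 3);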

And... that's it! You should have a triangle on your screen now!

Once again, the full source code for this tutorial is on GitHub and a live demo is available here on indigocode.dev.