Drag And Drop Textures With WebGL


In this post I’m going to build a WebGL app with basic 3D rendering. We’re going to render a rotating 3D box to a WebGL-enabled canvas, and since that’s such a common first step into 3D rendering with a new graphics API, I’m going to mix in JavaScript drag and drop support at the same time, so that images can be dropped onto the demo and the texture mapped across the box updates live.

I’m going to need some 3D math functions. I don’t want to write my own if I can help it, and as far as I know nothing suitable is built into JavaScript or WebGL, so I’m going to use Brandon Jones’s excellent glMatrix library, which can be found at glMatrix.net with a good description of the library at stupidly-fast-webgl-matricies. It includes all the basic vec3, vec4 and mat4 operations I need for a typical rendering pipeline. From there we can easily build camera, model and projection matrices to feed to our shaders.
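By way of illustration, here’s a rough sketch (not code from the demo itself) of how those matrices might be built and handed to a shader using glMatrix 2.x, where mat4 is exposed as a global. The uniform locations and the time value driving the rotation are assumptions for the purpose of the sketch.

var projMatrix = mat4.create();
var viewMatrix = mat4.create();
var modelMatrix = mat4.create();

/* perspective projection matching the canvas aspect ratio */
mat4.perspective(projMatrix, 45 * Math.PI / 180, canvas.width / canvas.height, 0.1, 100.0);

/* camera a few units back, looking at the origin */
mat4.lookAt(viewMatrix, [0, 0, 4], [0, 0, 0], [0, 1, 0]);

/* spin the box around the Y axis */
mat4.identity(modelMatrix);
mat4.rotateY(modelMatrix, modelMatrix, time);

/* hand the matrices to the shader (uniform locations assumed looked up already) */
gl.uniformMatrix4fv(shader.proj_loc, false, projMatrix);
gl.uniformMatrix4fv(shader.view_loc, false, viewMatrix);
gl.uniformMatrix4fv(shader.model_loc, false, modelMatrix);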

The drop zone for the drag and drop operation is built from a standard HTML div, styled to look like a region where one might want to drop image files: a dashed line round the edge and a message encouraging the user to drop their files onto it. We’ll also set up some listeners in our scripts so the dropped content can be picked up by the app, and if the app finds it has been sent an image file we’ll use the JavaScript File APIs to pull the data in as a WebGL texture.

The drop zone itself ends up being a simple stack of HTML divs, with some CSS to give the outer box a dashed edge and to centre the text horizontally and vertically inside it (where centring a box vertically is surprisingly fiddly!), like this…

<div class="dropZone_box" id="dropZone_box">
  <div class="dropZone_text_outer">
    <div class="dropZone_text" id="dropZone_text">
Drop image files here
    </div>
  </div>
</div>

The corresponding CSS looks like this…

.dropZone_box {
  border: 2px dashed #999;
  width: 256px;
  height: 64px;
  margin: 10px 0px 10px 0px;
  display: block;
  margin-right: auto;
  margin-left: auto;
}

.dropZone_text_outer {
  display: table;
  height: 100%;
  width: 100%;
}

.dropZone_text {
  text-align:center;
  display: table-cell;
  vertical-align: middle;
  margin-top: auto;
  margin-bottom: auto;
}

We then want to attach some event listeners to the outer div so that we can receive data. The drop handler needs to receive a file, check that it’s an image, and read its contents so they can be fed to the texture loading function; once the texture is built it should request an update of the WebGL canvas so the new content appears immediately. We start by setting up two handlers: one for ‘dragover’, which fires when the mouse drag first hits the div, and one for ‘drop’, which fires when we let go of the mouse. This is how the code ends up looking.

var dropZone = document.getElementById('dropZone_box');

dropZone.addEventListener('dragover', function(evt) {
  evt.stopPropagation();
  evt.preventDefault();
  evt.dataTransfer.dropEffect = 'copy';
}, false);

dropZone.addEventListener('drop', function(evt) {
  evt.stopPropagation(); 
  evt.preventDefault();
  var files = evt.dataTransfer.files;
  for (var i = 0, f; f = files[i]; i++) {
    /* only process image files */
    if (!f.type.match('image.*')) {
      continue;
    }
    var reader = new FileReader();
    reader.onload = (function(theFile) {
      return function(e) {
        /* e.target.result can be used like a URL now */
        texture = loadTexture(gl, e.target.result, function(texture) {
          requestAnimationFrame(update);  
        });
      };
    })(f);
    /* read in the image file as a data URL */
    reader.readAsDataURL(f);
  }
}, false);

The ‘dragover’ handler is really just there to disable the default drag and drop handling. The important line is evt.preventDefault(). Without it the browser’s default handling takes over and it will simply load any dragged image. Try removing those calls and you’ll get a browser window filled with the image instead of the image being routed to the script. We don’t want that.

The ‘drop’ handler is where the bulk of our work goes. It receives a list of files and, using the HTML5 File API, can read the properties and data of each of them. Note that we potentially process more than one image file here, which is somewhat pointless as only one can ever end up filling our texture slot. The File API lets us query the file type, in this case allowing us to match against image types. Finally we use a series of event-driven operations to first read the file as a data URL (giving us a URL to the data that can be passed to other functions), then load the texture from that URL, and finally, when that finishes, request our update.
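The loadTexture function itself is covered in a later post; the only extra thing the drop handler above needs is for it to accept an optional completion callback so we can request a redraw once the image data has actually arrived. A minimal sketch of that variant, with the filter and wrap state trimmed for brevity, might look like this…

function loadTexture(gl, imageURL, onLoaded) {
  var texture = {};
  texture.textureObject = gl.createTexture();

  /* 1x1 white placeholder so the texture can be bound before the image arrives */
  gl.bindTexture(gl.TEXTURE_2D, texture.textureObject);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.UNSIGNED_BYTE,
    new Uint8Array([255, 255, 255, 255]));
  gl.bindTexture(gl.TEXTURE_2D, null);

  var image = new Image();
  image.onload = function() {
    gl.bindTexture(gl.TEXTURE_2D, texture.textureObject);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
    gl.generateMipmap(gl.TEXTURE_2D);   /* WebGL 1 needs power-of-two dimensions for this */
    gl.bindTexture(gl.TEXTURE_2D, null);
    if (onLoaded) onLoaded(texture);    /* let the caller schedule a redraw */
  };
  image.src = imageURL;   /* works for both ordinary URLs and data URLs */

  return texture;
}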

A WebGL Analog Clock

  

In this example I’m going to try to build something that might be considered useful out of the WebGL experiments I’ve been putting together thus far. I’ve decided to build a simple animated clock, and I’m going to try to keep as much of the code as possible in the shader rather than submit lots of shapes and parameters as might be the more normal way to do this.

My clock is going to consist of five elements. The clock will be circular and will have an outer edge. It will have three hands, for hours, minutes and seconds, and a small inner circle connecting the hands. My shader is going to represent these as two circles and three lines. Since everything is relative to the centre, I just need to know the radius of each of the two circles and the position of the end of each line segment, and I’m happy to hard code anything that doesn’t move, which means three points is all the state I need to send to the GPU.

The rendering itself is going to use distance calculations only. I’m going to render a white canvas, but when the shader determines that a given pixel is near enough to one of the five elements it knows about, we darken it, forming soft black lines. Since we have just two element types, all we need is the distance to a circle and the distance to a line segment. Once we’ve computed those for all five elements we take the darkest resulting colour and run with that.

Finally I’ve set the canvas to refresh every 100ms, which is more than enough to achieve a smooth result for a clock that ticks once per second.
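The refresh itself is only a couple of lines; a sketch, assuming the canvas redraw lives in a function called update, looks like this…

setInterval(function() {
  /* ask the browser to redraw the clock on its next animation frame */
  requestAnimationFrame(update);
}, 100);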

The shader code that I’ve used above looks like this…


precision highp float;

varying vec4 v_texcoord;
uniform vec4 u_bigHand;
uniform vec4 u_smallHand;
uniform vec4 u_secHand;

/* get the squared distance between two points */
float length_squared(vec2 a, vec2 b) {
  return dot(a-b, a-b);
}

/* distance to line segment check */
float minimum_distance(vec2 v, vec2 w, vec2 p) {
  float l2 = length_squared(v, w);
  if (l2 == 0.0) return distance(p, v);
  float t = max(0.0, min(1.0, dot(p - v, w - v) / l2));
  vec2 projection = v + t * (w - v);
  return distance(p, projection);
}

/* clock shader */
void main(void) {
  vec2 pos = v_texcoord.xy;
  float dist = length(pos);
  float circle = abs((dist - 0.95) * 40.0);
  float bigHand  = minimum_distance(vec2(0.0, 0.0), u_bigHand.xy, pos) * 40.0;
  float smallHand  = minimum_distance(vec2(0.0, 0.0), u_smallHand.xy, pos) * 40.0;
  float secHand  = minimum_distance(vec2(0.0, 0.0), u_secHand.xy, pos) * 60.0;
  float middle = abs((dist - 0.02) * 40.0);
  float v = min(min(min(min(circle, bigHand), smallHand), secHand), middle);
  gl_FragColor = vec4(v, v, v, 1.0);
}

In JavaScript it’s fairly easy to get the current time, which we can then convert to the line segment end positions the shader needs by doing this sort of thing…


var d = new Date();
var hours = d.getHours();
var mins = d.getMinutes();
var secs = d.getSeconds();
var smallHand_deg = 360.0 * (hours / 12.0);
var bigHand_deg = 360.0 * (mins / 60.0);
var secHand_deg = 360.0 * (secs / 60.0);
var bigHand_rad = bigHand_deg * Math.PI / 180;
var smallHand_rad = smallHand_deg * Math.PI / 180;
var secHand_rad = secHand_deg * Math.PI / 180;
gl.uniform4f(shader.bigHand_loc,
  Math.sin(bigHand_rad) * 0.7, -Math.cos(bigHand_rad) * 0.7, 0.0, 0.0);    
gl.uniform4f(shader.smallHand_loc,
  Math.sin(smallHand_rad) * 0.5, -Math.cos(smallHand_rad) * 0.5, 0.0, 0.0);    
gl.uniform4f(shader.secHand_loc,
  Math.sin(secHand_rad) * 0.4, -Math.cos(secHand_rad) * 0.4, 0.0, 0.0);    

A Blurred Goat in WebGL


I’d quite like to have a go at building something more closely resembling a modern rendering pipeline out of WebGL and to do that you need to be able to render to targets, and transfer data between them. A simple variant on that theme is a two pass gaussian blur, blurring first horizontally and then taking the result and blurring vertically. That’s what we have here…

The image on the left shows the result of feeding a 7×7 texture containing a single white dot into the blur filter, and the right image shows what it looks like when we feed a goat into the same filter. Note that in the left case the brightness of the blurred image has been boosted 3x so you can see the pixels; otherwise the colours are dark enough that the pattern isn’t easily visible.

I won’t go into the detail of what the shaders are doing as I’d rather focus on the new WebGL code I’m adding; the technique is described elsewhere, for example in ‘Efficient Gaussian blur with linear sampling’, for those who are interested. The shader I’m going to use looks like this and implements the same technique described there.

precision highp float;
varying vec4 v_texcoord;
uniform sampler2D s_colourSampler;
uniform vec4 u_blurOffsets;
uniform vec4 u_blurWeights;
void main(void) {
  vec3 rgb1 = texture2D(s_colourSampler, vec2(v_texcoord.xy)).rgb;
  vec3 rgb2 = texture2D(s_colourSampler, vec2(v_texcoord.xy) + u_blurOffsets.xy).rgb;
  vec3 rgb3 = texture2D(s_colourSampler, vec2(v_texcoord.xy) - u_blurOffsets.xy).rgb;
  vec3 rgb4 = texture2D(s_colourSampler, vec2(v_texcoord.xy) + u_blurOffsets.zw).rgb;
  vec3 rgb5 = texture2D(s_colourSampler, vec2(v_texcoord.xy) - u_blurOffsets.zw).rgb;
  vec3 rgb = (
      (rgb1) * u_blurWeights.x
    + (rgb2) * u_blurWeights.y 
    + (rgb3) * u_blurWeights.y
    + (rgb4) * u_blurWeights.z
    + (rgb5) * u_blurWeights.z);
  gl_FragColor = vec4(rgb, 1.0);
}

What I’m more interested in here is creating and binding render targets in WebGL. The following function builds a fairly standard RGBA colour buffer that can later be bound as the target for rendering operations. The width and height are stored as it can be handy to query them later, for example when matching a viewport to the target dimensions or working out the size of a single pixel. The latter part of the function creates a WebGL framebuffer object, which is really just a way of telling WebGL that we want to be able to render to the texture, and checks that WebGL regards what we’ve built as ‘complete’.

function createRenderTarget(gl, width, height) {

  var rt = {};
  rt.width = width;
  rt.height = height;

  /* create the texture */
  rt.texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, rt.texture);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);

  /* create the framebuffer object */
  rt.fbo = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, rt.fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, rt.texture, 0);
  if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) != gl.FRAMEBUFFER_COMPLETE)
    throw new Error("gl.checkFramebufferStatus(gl.FRAMEBUFFER) != gl.FRAMEBUFFER_COMPLETE");

  return rt;
}

I’ve also made some changes to the createTexture function I’ve used in previous posts. It can be helpful to be able to use our render-target and texture objects interchangeably, so I’ve made sure that both have the same member names for common properties like the texture, width and height. After doing this I should be able to pass a render-target to any function that accepts a texture as input.
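As a sketch of what that means in practice (the exact body of my createTexture isn’t important here, and the point filtering below is just what the white dot example needs), the returned object simply exposes texture, width and height, the same property names used by the render-target above…

function createTexture(gl, width, height, pixels) {
  var tex = {};
  tex.width = width;                  /* same property names as a render-target... */
  tex.height = height;
  tex.texture = gl.createTexture();   /* ...so either can be passed wherever a texture is expected */
  gl.bindTexture(gl.TEXTURE_2D, tex.texture);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
  gl.bindTexture(gl.TEXTURE_2D, null);
  return tex;
}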

To demonstrate how to use the render-targets, this is the complete method used to generate the Gaussian blur shown in the example above. The relevant lines are the calls to bindFramebuffer, where we can either bind a render-target we previously created, or pass null to bind the canvas. In this case we write both the intermediate and final data to render-targets because, for the white dot part of the sample, I needed to be able to fill the canvas using point filtering.

function doGuassianBlur5x5(gl, src, tmp_rt, dst_rt) {

  var blurShader = guassianBlur5x5Shader;

  /* *** pass 1 - horizontal *** */
		
  /* set the output buffer */
  gl.bindFramebuffer(gl.FRAMEBUFFER, tmp_rt.fbo);
  gl.viewport(0, 0, tmp_rt.width, tmp_rt.height);
	
  /* setup the blur constants */
  gl.useProgram(blurShader.program);
  gl.uniform4f(blurShader.texCoordScaleBias_loc, 0.5, -0.5, 0.5, 0.5);
  gl.uniform4f(blurShader.blurOffsets_loc, 
    1.3846153846 / src.width, 0.0, 
    3.2307692308 / src.width, 0.0);
  gl.uniform4f(blurShader.blurWeights_loc, 0.2270270270, 0.3162162162, 0.0702702703, 0.0);

  /* bind the source texture */
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, src.texture);
  gl.uniform1i(blurShader.colourSampler_loc, 0);
				
  /* execute the draw call */
  gl.bindBuffer(gl.ARRAY_BUFFER, fullscreenQuadMesh.pos_buffer);
  gl.vertexAttribPointer(blurShader.pos_loc,
       fullscreenQuadMesh.pos_numComponents, 
       fullscreenQuadMesh.pos_type, 
       false,
       fullscreenQuadMesh.pos_stride, 
       0);
  gl.enableVertexAttribArray(blurShader.pos_loc);
  gl.drawArrays(gl.TRIANGLES, 0, 6);	
					
  /* *** pass 2 - vertical *** */

  /* set the output buffer */
  gl.bindFramebuffer(gl.FRAMEBUFFER, dst_rt.fbo);
  gl.viewport(0, 0, dst_rt.width, dst_rt.height);
			
  /* setup the blur constants */
  gl.uniform4f(blurShader.texCoordScaleBias_loc, 0.5, 0.5, 0.5, 0.5);
  gl.uniform4f(blurShader.blurOffsets_loc, 
    0.0, 1.3846153846 / src.height, 
    0.0, 3.2307692308 / src.height);
  gl.uniform4f(blurShader.blurWeights_loc, 0.2270270270, 0.3162162162, 0.0702702703, 0.0);

  /* bind the temp blur target as input */
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, tmp_rt.texture);
  gl.uniform1i(blurShader.colourSampler_loc, 0);
			
  /* execute the draw call */
  gl.bindBuffer(gl.ARRAY_BUFFER, fullscreenQuadMesh.pos_buffer);
  gl.vertexAttribPointer(blurShader.pos_loc,
       fullscreenQuadMesh.pos_numComponents, 
       fullscreenQuadMesh.pos_type, 
       false,
       fullscreenQuadMesh.pos_stride, 
       0);
  gl.enableVertexAttribArray(blurShader.pos_loc);
  gl.drawArrays(gl.TRIANGLES, 0, 6);

}
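Wiring it all up then looks something like the sketch below. The source could be either a texture or a render-target thanks to the shared member names; srcTexture and the final drawFullscreenTexture helper are hypothetical names standing in for whatever provides the input and whatever copies the result to the canvas.

var tmp_rt = createRenderTarget(gl, 256, 256);
var dst_rt = createRenderTarget(gl, 256, 256);

/* blur horizontally into tmp_rt, then vertically into dst_rt */
doGuassianBlur5x5(gl, srcTexture, tmp_rt, dst_rt);

/* bind the canvas again and display the blurred result */
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
drawFullscreenTexture(gl, dst_rt);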

Loading Textures In WebGL


This post covers loading a texture, and sampling it inside a shader using WebGL.

This is the function we are going to use to load the texture data.

function loadTexture(gl, imageURL) {

  var texture = {};

  /* check for extensions */
  var glTextureAnisoExt = gl.getExtension("EXT_texture_filter_anisotropic");
 
  /* create a texture object */
  texture.textureObject = gl.createTexture();
 
  /* the texture is going to be a flat colour (for now) */
  var pixel = new Uint8Array([255, 255, 255, 255]);
 
  /* setup state */
  gl.bindTexture(gl.TEXTURE_2D, texture.textureObject);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.REPEAT);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.REPEAT);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
  if (glTextureAnisoExt != null) {
    gl.texParameterf(gl.TEXTURE_2D, glTextureAnisoExt.TEXTURE_MAX_ANISOTROPY_EXT, 2);
  }     
  gl.bindTexture(gl.TEXTURE_2D, null);
  
  /* fill with the flat colour for now - ensures we can use it before its loaded */
  gl.bindTexture(gl.TEXTURE_2D, texture.textureObject);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.UNSIGNED_BYTE, pixel);
  gl.bindTexture(gl.TEXTURE_2D, null);
  
  /* hook a callback to process the loaded image */
  var image = new Image();
  image.onload = function(textureObject, image) {
    return function() {
      gl.bindTexture(gl.TEXTURE_2D, textureObject);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
      gl.generateMipmap(gl.TEXTURE_2D);
      gl.bindTexture(gl.TEXTURE_2D, null);
      }
    } (texture.textureObject, image);
   
  /* trigger the load */
  image.src = imageURL;

  return texture;
}

There are a few things that need explaining here.

First, we actually populate the texture with data twice. We initially set the texture up as a 1×1 white texture and only later load the real image data into it. We do this because the load is asynchronous: the texture data will be filled in some time after the function returns, whenever the download completes, and we may want to bind the texture while we are waiting so we don’t delay the initial render of the canvas. Sampling a texture that has no data uploaded leaves it incomplete (WebGL renders it as black and typically logs warnings), whereas doing it this way is completely safe.

You might also notice that we generate mipmaps for our texture, via the call to gl.generateMipmap. Mipmaps improve the quality of the texture when it’s scaled down and are recommended for most textures. Building them in WebGL 1 requires the source texture dimensions to be powers of two though, so once this is enabled you need to ensure the content is built to be mipmap capable. I’d still recommend it…
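If you did want to accept images of arbitrary size, the usual approach (a sketch, not something the loadTexture above does) is to check the dimensions inside the onload handler and only generate mipmaps when they are powers of two, falling back to simpler filtering otherwise…

function isPowerOfTwo(value) {
  return (value & (value - 1)) === 0;
}

if (isPowerOfTwo(image.width) && isPowerOfTwo(image.height)) {
  gl.generateMipmap(gl.TEXTURE_2D);
} else {
  /* no mipmaps: fall back to filtering and wrapping modes that don't need them */
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
}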

Finally, we also use an extension to enable anisotropic filtering, which further improves the quality of the texture filtering. Google ‘anisotropic filtering’ if you want to see what it’s for.

Once that function is established we can load a texture like this.

texture = loadTexture(gl, 'goat_on_grass_194991_cropped.jpg');

We need a bit more code to actually make use of the texture though.

First the fragment shader needs to be modified to sample a texture. I’ve added a sampler called colourSampler and a varying called v_texCoord where we will receive UV coordinates from the vertex shader.

precision mediump float;
varying vec4 v_texCoord;
uniform sampler2D colourSampler;
void main(void) {
	gl_FragColor = texture2D(colourSampler, v_texCoord.xy);
}

Then the vertex shader provides UV coordinates for the texture, in this case generated automatically by scaling and biasing the vertex position. Note that in OpenGL (and therefore WebGL) the V coordinate is flipped relative to how it might appear in other graphics APIs.

attribute vec3 in_position;
varying vec4 v_texCoord;
uniform vec4 u_texCoordScale;
void main(void) {
	v_texCoord.xy = in_position.xy * u_texCoordScale.xy * vec2(0.5, -0.5) + 0.5;
	gl_Position = vec4(in_position, 1.0);
}

Then, just after creating the shader, we look up the locations of the colour sampler and texture coordinate scale uniforms and store them for later use.

shader.colourSampler_loc = gl.getUniformLocation(shader.program, "colourSampler");
shader.texCoordScale_loc = gl.getUniformLocation(shader.program, "u_texCoordScale");

Finally we add this to the draw call code to bind the texture. WebGL is a bit odd in that there is an extra level of indirection for sampler bindings: we have to bind the texture object to slot 0 and then bind slot 0 to the uniform. It would be nicer to just bind the object to the uniform directly, but that’s not how WebGL likes to do things.

gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texture.textureObject);
gl.uniform1i(shader.colourSampler_loc, 0);

For fun I’ve also added a time value that’s updated while animating and used to control the scale for the draw call, like so.

gl.uniform4f(
  shader.texCoordScale_loc,
  0.9 + 0.1 * Math.sin(time),
  0.9 + 0.1 * Math.sin(time),
  0.0, 0.0);
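
The time value itself just comes from an animation loop; a minimal sketch, assuming the drawing code above lives in a hypothetical draw(time) function, would be…

var startTime = Date.now();

function update() {
  var time = (Date.now() - startTime) / 1000.0;   /* seconds since start-up */
  draw(time);                                     /* set the uniforms above and issue the draw call */
  requestAnimationFrame(update);
}

requestAnimationFrame(update);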

My First WebGL Shader


This builds on my existing posts, which so far only show how to set up the basic WebGL environment, and I won’t cover ground that’s already been covered. Instead I’m going to focus on executing a single draw call, which in turn implies we have a fragment and vertex shader bound, and that we have some vertex data to feed into the front of the shading pipeline.

While we are at it: some of the things we need to do here are things pretty much every WebGL script we ever write will do, so rather than write the same code over and over we are going to start breaking some of it out into a set of reusable utility functions, and you might notice things are written that way in some cases.

Compiling a shader means taking GLSL source code and turning it into a compiled shader object. WebGL gives us back a handle to that object which we can then use to create a shader program. The code for this looks like this.

  function compileShader(gl, code, type) {
    var shader = gl.createShader(type);
    gl.shaderSource(shader, code);
    gl.compileShader(shader);
    var success = gl.getShaderParameter(shader, gl.COMPILE_STATUS);
    if (!success) {
      throw "compileShader error:" + gl.getShaderInfoLog(shader);
    }
    return shader;
  }

A shader program object is made up of a vertex and a fragment shader that are compiled and linked together; think of the output of one as the input to the other. Assuming we have compiled the two parts using the function above, we can link them using something like the code below. Since our programs are always going to start life as pairs of source code strings rather than compiled shader objects, we also expose another helper that does all of this work for us, taking two strings of GLSL code and returning a shader we can use.

  function createProgram(gl, vs, fs) {
    var program = gl.createProgram();
    gl.attachShader(program, vs);
    gl.attachShader(program, fs);
    gl.linkProgram(program);
    var success = gl.getProgramParameter(program, gl.LINK_STATUS);
    if (!success) {
      throw "createProgram error:" + gl.getProgramInfoLog(program);
    } 
    return program;
  }

  function createShader(gl, vs_code, fs_code) {
    var shader = {};
    var vs = compileShader(gl, vs_code, gl.VERTEX_SHADER);
    var fs = compileShader(gl, fs_code, gl.FRAGMENT_SHADER);
    shader.program = createProgram(gl, vs, fs);
    shader.pos_loc = gl.getAttribLocation(shader.program, "in_pos");
    return shader;
  }

You might notice there that we returned more than just a GL shader program. We looked up an attribute too and cached its location alongside the shader. The attribute is called in_pos and we’ll use it to feed vertex positions to the shader (more on this later).

Next we make a mesh. A simple (and very useful) example of a mesh is a full-screen quad that fills the viewport in X and Y. We can make one using the code below, and as before we also store some useful data describing the layout of the data buffer that will help when we later bind this object for rendering.

  function createFullscreenQuadMesh(gl) {
    var mesh = {};
    var pos_array = new Float32Array([
        -1.0, -1.0, 0.0,
         1.0, -1.0, 0.0,
        -1.0,  1.0, 0.0,
        -1.0,  1.0, 0.0,
         1.0, -1.0, 0.0,
         1.0,  1.0, 0.0
    ]);
    mesh.pos_buffer = gl.createBuffer();
    mesh.pos_type = gl.FLOAT;
    mesh.pos_numComponents = 3;
    mesh.pos_stride = 12;
    gl.bindBuffer(gl.ARRAY_BUFFER, mesh.pos_buffer);
    gl.bufferData(gl.ARRAY_BUFFER, pos_array, gl.STATIC_DRAW);
    gl.bindBuffer(gl.ARRAY_BUFFER, null);
    return mesh;
  }

Finally we need some shaders. The shaders below are about as simple as it gets: the vertex shader passes the position through unchanged, and the fragment shader turns the X and Y values it receives into colours so we can check it actually did something. JavaScript lets us build multi-line strings, and for simple shaders we do just that rather than deal with referencing external files.

  var vs_code = 
    "attribute vec3 in_pos;" +
    "varying vec3 v_pos;" +
    "void main(void) {" +
    "  gl_Position = vec4(in_pos, 1.0);" +
    "  v_pos = gl_Position.xyz;" +
    "}";

  var fs_code = 
    "precision mediump float;" +
    "varying vec3 v_pos;" +
    "void main(void) {" +
    "  gl_FragColor = vec4(v_pos.x * 0.5 + 0.5, v_pos.y * 0.5 + 0.5, 1, 1);" +
    "}";

Putting this together… we can now embed our shaders into our WebGL app, then call these two new methods to build some WebGL shaders and buffers.

    shader = createShader(gl, vs_code, fs_code);
    quad_mesh = createFullscreenQuadMesh(gl);

Finally we can execute a draw call by doing the following. Note that some of the data we bound to the objects we made earlier is now proving useful as WebGL needs us to connect the shader input to the vertex data (using pos_loc) and describe the vertex layout so the shader knows how to pull the data in.

    gl.useProgram(shader.program);
    gl.bindBuffer(gl.ARRAY_BUFFER, quad_mesh.pos_buffer);
    gl.vertexAttribPointer(shader.pos_loc,
       quad_mesh.pos_numComponents, 
       quad_mesh.pos_type, 
       false,
       quad_mesh.pos_stride, 
       0);
    gl.enableVertexAttribArray(shader.pos_loc);
    gl.drawArrays(gl.TRIANGLES, 0, 6);
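
For completeness, a sketch of how this hangs together in a page, assuming a canvas element with id 'canvas' and the context setup covered in the earlier posts…

    var canvas = document.getElementById('canvas');   /* assumed canvas id */
    var gl = canvas.getContext('webgl');

    shader = createShader(gl, vs_code, fs_code);
    quad_mesh = createFullscreenQuadMesh(gl);

    /* clear the canvas before issuing the draw call shown above */
    gl.viewport(0, 0, canvas.width, canvas.height);
    gl.clearColor(0.0, 0.0, 0.0, 1.0);
    gl.clear(gl.COLOR_BUFFER_BIT);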