
This is a short post to announce the new master’s assignment I’ve just uploaded to the Portfolio page: a simple ray tracer application. More information on its dedicated page :)

New section: Portfolio

After almost a year away from the blog (not from the keyboard, though ;)), I’ve started a new section: Portfolio.

The first work I’ve added is an assignment from the master’s degree I’m studying (Computer Graphics): A Fantastic Voyage.

Spherical Harmonics

This little demo experiments with Spherical Harmonics lighting. I won’t explain the theory here, though: I’m not an expert in the field, and many people across the web have already explained spherical harmonics better than I could.

For this demo I used the coefficient values for several environments from the book OpenGL Shading Language, 3rd Edition, just to try out some effects I could achieve with a model.

The shaders are pretty simple:

Vertex:

#version 330

// Attributes per vertex: position and normal
in vec4 position;
in vec3 normal;

uniform mat4   MVP;
uniform mat4   MV;
uniform mat3   normal_matrix;
uniform float  scale_factor;

struct SHFactors
{
    vec3 L00;
    vec3 L1m1;
    vec3 L10;
    vec3 L11;
    vec3 L2m2;
    vec3 L2m1;
    vec3 L20;
    vec3 L21;
    vec3 L22;
};

uniform SH
{
    SHFactors sh_array[10];
} sh;

uniform int sh_idx;

const float C1 = 0.429043;
const float C2 = 0.511664;
const float C3 = 0.743125;
const float C4 = 0.886227;
const float C5 = 0.247708;

smooth out vec3 v_diffuse_color;

void main(void)
{
    // Get surface normal in eye coordinates
    vec3 tnorm = normal_matrix * normal;

    v_diffuse_color = C1 * sh.sh_array[sh_idx].L22 * (tnorm.x * tnorm.x - tnorm.y * tnorm.y) +
                      C3 * sh.sh_array[sh_idx].L20 *  tnorm.z * tnorm.z +
                      C4 * sh.sh_array[sh_idx].L00 -
                      C5 * sh.sh_array[sh_idx].L20 +
                      2.f * C1 * sh.sh_array[sh_idx].L2m2 * tnorm.x * tnorm.y +
                      2.f * C1 * sh.sh_array[sh_idx].L21  * tnorm.x * tnorm.z +
                      2.f * C1 * sh.sh_array[sh_idx].L2m1 * tnorm.y * tnorm.z +
                      2.f * C2 * sh.sh_array[sh_idx].L11  * tnorm.x +
                      2.f * C2 * sh.sh_array[sh_idx].L1m1 * tnorm.y +
                      2.f * C2 * sh.sh_array[sh_idx].L10  * tnorm.z;

    v_diffuse_color *= scale_factor;

    // Don't forget to transform the geometry!
    gl_Position = MVP * position;
}

Fragment:

#version 330

out vec4 fragmentColor;

smooth in vec3 v_diffuse_color;

void main(void)
{
    fragmentColor = vec4 (v_diffuse_color, 1.f);
}
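Out of curiosity, the same irradiance polynomial can be evaluated on the CPU to sanity-check the coefficients. This is a minimal sketch (the `SH9` struct and the function name are mine, not part of the demo):

```cpp
#include <cassert>
#include <cmath>

// Nine spherical harmonic coefficients for a single color channel.
struct SH9 { float L00, L1m1, L10, L11, L2m2, L2m1, L20, L21, L22; };

// Irradiance for a unit normal (x, y, z): the same polynomial the
// vertex shader evaluates, with the usual Ramamoorthi-Hanrahan constants.
float shIrradiance (const SH9 &sh, float x, float y, float z)
{
    const float C1 = 0.429043f, C2 = 0.511664f, C3 = 0.743125f,
                C4 = 0.886227f, C5 = 0.247708f;

    return C1 * sh.L22 * (x * x - y * y) +
           C3 * sh.L20 * z * z +
           C4 * sh.L00 -
           C5 * sh.L20 +
           2.f * C1 * (sh.L2m2 * x * y + sh.L21 * x * z + sh.L2m1 * y * z) +
           2.f * C2 * (sh.L11 * x + sh.L1m1 * y + sh.L10 * z);
}
```

For instance, with only `L00` set, the result is constant regardless of the normal, which matches the intuition that the first SH band is just ambient light.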

And here are some screenshots from the results:

Bump Mapping

In this little tech demo I experimented a bit with bump mapping. I used the Doom 3 model from Fabien Sanglard’s bump mapping demo, which also includes a good explanation of the theory along with source and shader code. I only took the model and textures from his demo; my implementation does bump mapping with simple lighting and no shadowing.

Here is the code from my Bump Mapping shaders:

Vertex shader:

#version 330

// Attributes per vertex
in vec4 position;
in vec3 normal;
in vec2 texCoord0;
in vec3 tangent;

uniform mat4   MVP;
uniform mat4   MV;
uniform mat3   normalMatrix;
uniform vec3   light_position;

smooth out vec3 vNormal;
smooth out vec3 vLightPos;
smooth out vec2 vTexCoords;

void main(void)
{
    vec3 n = normalize (normalMatrix * normal);
    vec3 t = normalize (normalMatrix * tangent);
    vec3 b = cross (n, t);

    // Get surface normal in eye coordinates
    vNormal = normalMatrix * normal;

    // Get vertex position in eye coordinates
    vec4 vertexPos = MV * position;
    vec3 vertexEyePos = vertexPos.xyz / vertexPos.w;

    vec3 lightDir = normalize (light_position - vertexEyePos);

    vec3 v;
    v.x = dot (lightDir, t);
    v.y = dot (lightDir, b);
    v.z = dot (lightDir, n);
    vLightPos = normalize (v);

    vTexCoords = texCoord0.st;

    // Don't forget to transform the geometry!
    gl_Position = MVP * position;
}
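The tangent-space transform in the vertex shader above is easy to check on the CPU. Here is a minimal sketch (the `Vec3` helpers and function name are mine, just for illustration):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 cross (Vec3 a, Vec3 b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

static float dot (Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Mirrors the vertex shader: build the TBN basis from the (eye-space)
// normal and tangent, then express the light direction in that basis.
Vec3 toTangentSpace (Vec3 n, Vec3 t, Vec3 lightDir)
{
    Vec3 b = cross (n, t);
    return { dot (lightDir, t), dot (lightDir, b), dot (lightDir, n) };
}
```

With `n`, `t` and the light direction all axis-aligned you can verify that a light along the normal comes out as (0, 0, 1) in tangent space, which is exactly what the flat areas of the normal map encode.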

Fragment shader:

#version 330

out vec4 fragmentColor;

uniform vec4 model_color;
uniform sampler2D color_texture;
uniform sampler2D specular_texture;
uniform sampler2D normal_texture;
uniform int use_color = 0;

smooth in vec3 vNormal;
smooth in vec3 vLightPos;
smooth in vec2 vTexCoords;

void main(void)
{
    // Lookup normal from normal map, move from [0, 1] to [-1, 1] range,
    // normalize
    vec3 normal = 2.f * texture (normal_texture, vTexCoords.st).rgb - 1.f;
    normal = normalize (normal);

    // Dot product gives us diffuse intensity
    float diff = max(0.0, dot(normalize(normal), normalize(vLightPos)));

    vec4 ambientColor = model_color;

    // Multiply intensity by diffuse color
    fragmentColor = diff * model_color;

    // Add in ambient light
    fragmentColor += ambientColor;

    // Use diffuse color texture
    if (use_color != 0)
        fragmentColor *= texture (color_texture, vTexCoords.st);

    // Specular Light
    vec3 reflection = normalize(reflect(-normalize(vLightPos), normalize(normal)));
    float spec = max(0.0, dot(normalize(normal), reflection));
    if (diff > 0.0)
    {
        float fSpec = pow(spec, 2.0);
        vec4 specular_mat = texture (specular_texture, vTexCoords.st);
        fragmentColor.rgb += vec3(fSpec, fSpec, fSpec) * specular_mat.rgb;
    }
}

And finally, as always, some screenshots showing the result:

The necessary step after the previous post: Spot Lights.

After tweaking around with point lights, it was time to experiment with another kind of light: the spot light. I also introduced a 3D model from the Stanford University 3D Scanning Repository to make things a little more interesting.

Here is the fragment shader, which includes slight modifications in the lighting calculation function, spotLight. The vertex shader is exactly the same as in the previous post.

#version 330

out vec4 fragmentColor;

struct Light
{
    vec4    position;
    vec4    ambient;
    vec4    diffuse;
    vec4    specular;
    float   constant_attenuation;
    float   linear_attenuation;
    float   quadratic_attenuation;
    vec3    spot_direction;
    float   spot_cutoff;
    float   spot_exponent;
};

uniform Lights
{
    Light light[8];
} lights;

uniform Material
{
    vec4    ambient;
    vec4    diffuse;
    vec4    specular;
    float   shininess;
} material;

uniform int num_lights;

smooth in vec3 vPosition;
smooth in vec3 vNormal;

vec4
spotLight (int lightID)
{
    float nDotVP;       // normal * light direction
    float nDotR;        // normal * light reflection vector
    float pf;           // power factor
    float spotDot;      // cosine of angle between spot direction and light-to-fragment vector
    float spot_att;     // spotlight attenuation factor
    float attenuation;  // computed attenuation factor
    float d;            // distance from surface to light position
    vec3 VP;            // direction from surface to light position
    vec3 reflection;    // direction of maximum highlights

    // Compute vector from surface to light position
    VP = vec3 (lights.light[lightID].position) - vPosition;

    // Compute distance between surface and light position
    d = length (VP);

    // Normalize the vector from surface to light position
    VP = normalize (VP);

    // Compute attenuation
    attenuation = 1.f / (lights.light[lightID].constant_attenuation +
                         lights.light[lightID].linear_attenuation * d +
                         lights.light[lightID].quadratic_attenuation * d * d);

    // See if point on surface is inside cone of illumination
    spotDot = dot (-VP, normalize (lights.light[lightID].spot_direction));

    if (spotDot < lights.light[lightID].spot_cutoff)
        spot_att = 0.f;
    else
        spot_att = pow (spotDot, lights.light[lightID].spot_exponent);

    // Combine the spot and distance attenuation
    attenuation *= spot_att;

    reflection = normalize (reflect (-normalize (VP), normalize
                (vNormal)));

    nDotVP = max (0.f, dot (normalize (vNormal), VP));
    nDotR = max (0.f, dot (normalize (vNormal), reflection));

    if (nDotVP == 0.f)
        pf = 0.f;
    else
        pf = pow (nDotR, material.shininess);

    vec4 ambient = material.ambient * lights.light[lightID].ambient * attenuation;
    vec4 diffuse = material.diffuse * lights.light[lightID].diffuse * nDotVP * attenuation;
    vec4 specular = material.specular * lights.light[lightID].specular * pf * attenuation;

    return ambient + diffuse + specular;
}

void main(void)
{
    fragmentColor = vec4 (0.f);
    for (int i = 0; i < num_lights; ++i)
        fragmentColor += spotLight (i);
}
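The cone test at the heart of spotLight is simple enough to reproduce on the CPU. A minimal sketch (the function name is mine): spotDot is the cosine between the spot axis and the light-to-fragment direction, and the cutoff is stored as the cosine of the cone’s half-angle, so a plain comparison decides whether the fragment is lit at all.

```cpp
#include <cassert>
#include <cmath>

// Spotlight attenuation term, mirroring the fragment shader: zero
// outside the cone, pow(spotDot, exponent) inside it.
float spotAttenuation (float spotDot, float cutoff, float exponent)
{
    if (spotDot < cutoff)
        return 0.f;                    // outside the cone of illumination
    return std::pow (spotDot, exponent);
}
```

Note the exponent only shapes the falloff inside the cone; the hard cutoff is what produces the visible circle in the screenshots below.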

And finally some screenshots showing the dragon model and the two spotlights over it:

Now it’s time to learn a little more about lighting. Throughout the book there was no such element as a “light” or a “material”; all that was used to shade a triangle was a light position and a couple of colors: ambient and diffuse (specular was always white). But the lighting equation is a little more complex: it models the interaction between lights and objects. Each object has its own material, which defines the “look” properties for the object. Light and Material are declared as follows:

struct Light
{
    glm::vec4 position;
    glm::vec4 ambient;
    glm::vec4 diffuse;
    glm::vec4 specular;
    float constantAtt;
    float linearAtt;
    float quadraticAtt;
    glm::vec3 spotDirection;
    float spotExponent;
    float spotCutoff;
};

struct Material
{
    glm::vec4 ambient;
    glm::vec4 diffuse;
    glm::vec4 specular;
    float shininess;
 };

Once our lights and material are initialized, we must send all this data to the GPU so the shaders can use it. So, first of all, we’ll look at the shaders, and then see how to make our light and material data available to them.

Point light phong shading vertex shader:

#version 330

// Attributes per position: position and normal
in vec4 position;
in vec3 normal;

uniform mat4   MVP;
uniform mat4   MV;
uniform mat3   normalMatrix;

smooth out vec3 vPosition;
smooth out vec3 vNormal;

void main(void)
{
    // Get surface normal in eye coordinates
    vNormal = normalMatrix * normal;

    // Get vertex position in eye coordinates
    vec4 vertexPos = MV * position;
    vec3 vertexEyePos = vertexPos.xyz / vertexPos.w;

    vPosition = vertexEyePos;

    // Don't forget to transform the geometry!
    gl_Position = MVP * position;
}

Point light phong shading fragment shader:

#version 330

out vec4 fragmentColor;

struct Light
{
    vec4    position;
    vec4    ambient;
    vec4    diffuse;
    vec4    specular;
    float   constant_attenuation;
    float   linear_attenuation;
    float   quadratic_attenuation;
    vec3    spot_direction;
    float   spot_cutoff;
    float   spot_exponent;
};

uniform Lights
{
    Light light[8];
} lights;

uniform Material
{
    vec4    ambient;
    vec4    diffuse;
    vec4    specular;
    float   shininess;
} material;

uniform int num_lights;

smooth in vec3 vPosition;
smooth in vec3 vNormal;

vec4
pointLight (int lightID)
{
    float nDotVP;       // normal * light direction
    float nDotR;        // normal * light reflection vector
    float pf;           // power factor
    float attenuation;  // computed attenuation factor
    float d;            // distance from surface to light position
    vec3 VP;            // direction from surface to light position
    vec3 reflection;    // direction of maximum highlights

    // Compute vector from surface to light position
    VP = vec3 (lights.light[lightID].position) - vPosition;

    // Compute distance between surface and light position
    d = length (VP);

    // Normalize the vector from surface to light position
    VP = normalize (VP);

    // Compute attenuation
    attenuation = 1.f / (lights.light[lightID].constant_attenuation +
                         lights.light[lightID].linear_attenuation * d +
                         lights.light[lightID].quadratic_attenuation * d * d);

    reflection = normalize (reflect (-normalize (VP), normalize
                (vNormal)));

    nDotVP = max (0.f, dot (normalize (vNormal), VP));
    nDotR = max (0.f, dot (normalize (vNormal), reflection));

    if (nDotVP == 0.f)
        pf = 0.f;
    else
        pf = pow (nDotR, material.shininess);

    vec4 ambient = material.ambient * lights.light[lightID].ambient * attenuation;
    vec4 diffuse = material.diffuse * lights.light[lightID].diffuse * nDotVP * attenuation;
    vec4 specular = material.specular * lights.light[lightID].specular * pf * attenuation;

    return ambient + diffuse + specular;
}

void main(void)
{
    fragmentColor = vec4 (0.f);
    for (int i = 0; i < num_lights; ++i)
        fragmentColor += pointLight (i);
}

The vertex shader is pretty simple. The fragment shader, however, is rather more interesting. It introduces a new concept: uniform blocks. Lights and Material are the two uniform blocks used in this shader. The second is pretty straightforward; it contains four values: the ambient, diffuse and specular colors and the shininess of the object being shaded. The Lights uniform block contains a single variable, light, which is an array of Light structs mirroring the struct with the same name in the client application. The Material uniform block likewise mirrors its client-side struct.

Uniform blocks are not like simple uniforms: it’s no longer possible to just retrieve a uniform location and set a value there. You need Uniform Buffer Objects (UBOs) instead. Filling them is not as straightforward as filling array buffers with vertex data, though. The GPU expects the data at specific locations within the buffer, subject to layout rules for arrays, matrices and structs (alignment and stride). Take a look at http://www.opengl.org/wiki/Uniform_Buffer_Object for more information about Uniform Buffer Objects.
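To get a feel for where those queried offsets come from, here is a tiny sketch of the common std140 layout rules (my own toy walker, not the code used in the demo): scalars align to 4 bytes, vec3 and vec4 to 16, and each struct in an array starts on a 16-byte boundary. In real code you should always query the offsets from the driver, as done below, since the actual layout depends on the declared layout qualifier.

```cpp
#include <cassert>

// Walks member offsets under (assumed) std140-style rules.
struct Std140
{
    unsigned offset = 0;
    unsigned align (unsigned a) { offset = (offset + a - 1) / a * a; return offset; }
    unsigned scalar ()    { unsigned o = align (4);  offset += 4;  return o; }  // float
    unsigned vec3   ()    { unsigned o = align (16); offset += 12; return o; }
    unsigned vec4   ()    { unsigned o = align (16); offset += 16; return o; }
    unsigned structEnd () { return align (16); }   // array stride rounds up to 16
};
```

Walking the Light struct this way (position, ambient, diffuse, specular, three floats, spot_direction, two floats) predicts an array stride of 112 bytes: note how spot_cutoff can slot into the padding right after the vec3.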

So, now it’s the time to fill the uniform buffers with our light and material data:

glGenBuffers (1, &light_ubo);
glBindBuffer (GL_UNIFORM_BUFFER, light_ubo);

const GLchar *uniformNames[1] =
{
    "Lights.light"
};
GLuint uniformIndices;

glGetUniformIndices (_phongProgram, 1, uniformNames, &uniformIndices);

GLint uniformOffsets[1];
glGetActiveUniformsiv (_phongProgram, 1, &uniformIndices,
        GL_UNIFORM_OFFSET, uniformOffsets);

GLuint uniformBlockIndex = glGetUniformBlockIndex (_phongProgram,
        "Lights");

GLsizei uniformBlockSize (0);
glGetActiveUniformBlockiv (_phongProgram, uniformBlockIndex,
        GL_UNIFORM_BLOCK_DATA_SIZE, &uniformBlockSize);

const GLchar *names[] =
{
    "Lights.light[0].position",
    "Lights.light[0].ambient",
    "Lights.light[0].diffuse",
    "Lights.light[0].specular",
    "Lights.light[0].constant_attenuation",
    "Lights.light[0].linear_attenuation",
    "Lights.light[0].quadratic_attenuation",
    "Lights.light[0].spot_direction",
    "Lights.light[0].spot_cutoff",
    "Lights.light[0].spot_exponent",
    "Lights.light[1].position",
    "Lights.light[1].ambient",
    "Lights.light[1].diffuse",
    "Lights.light[1].specular",
    "Lights.light[1].constant_attenuation",
    "Lights.light[1].linear_attenuation",
    "Lights.light[1].quadratic_attenuation",
    "Lights.light[1].spot_direction",
    "Lights.light[1].spot_cutoff",
    "Lights.light[1].spot_exponent",
};
GLuint indices[20];

glGetUniformIndices (_phongProgram, 20, names, indices);

std::vector<GLint> _lightUniformOffsets (20);
glGetActiveUniformsiv (_phongProgram, _lightUniformOffsets.size (),
        indices, GL_UNIFORM_OFFSET, &_lightUniformOffsets[0]);
GLint *offsets = &_lightUniformOffsets[0];

const unsigned int uboSize (uniformBlockSize);
std::vector<unsigned char> buffer (uboSize);

int offset;

for (unsigned int n = 0; n < _lights.size (); ++n)
{
    // Light position (vec4)
    offset = offsets[0 + n * 10];
    for (int i = 0; i < 4; ++i)
    {
        *(reinterpret_cast<float*> (&buffer[0] + offset)) =
            _lights[n].position[i];
        offset += sizeof (GLfloat);
    }
    // Light ambient color (vec4)
    offset = offsets[1 + n * 10];
    for (int i = 0; i < 4; ++i)
    {
        *(reinterpret_cast<float*> (&buffer[0] + offset)) =
            _lights[n].ambient[i];
        offset += sizeof (GLfloat);
    }
    // Light diffuse color (vec4)
    offset = offsets[2 + n * 10];
    for (int i = 0; i < 4; ++i)
    {
        *(reinterpret_cast<float*> (&buffer[0] + offset)) =
            _lights[n].diffuse[i];
        offset += sizeof (GLfloat);
    }
    // Light specular color (vec4)
    offset = offsets[3 + n * 10];
    for (int i = 0; i < 4; ++i)
    {
        *(reinterpret_cast<float*> (&buffer[0] + offset)) =
            _lights[n].specular[i];
        offset += sizeof (GLfloat);
    }
    // Light constant attenuation (float)
    offset = offsets[4 + n * 10];
    *(reinterpret_cast<float*> (&buffer[0] + offset)) =
            _lights[n].constantAtt;
    // Light linear attenuation (float)
    offset = offsets[5 + n * 10];
    *(reinterpret_cast<float*> (&buffer[0] + offset)) =
            _lights[n].linearAtt;
    // Light quadratic attenuation (float)
    offset = offsets[6 + n * 10];
    *(reinterpret_cast<float*> (&buffer[0] + offset)) =
            _lights[n].quadraticAtt;
    // Light spot direction (vec3)
    offset = offsets[7 + n * 10];
    for (int i = 0; i < 3; ++i)
    {
        *(reinterpret_cast<float*> (&buffer[0] + offset)) =
            _lights[n].spotDirection[i];
        offset += sizeof (GLfloat);
    }
    // Light spot cutoff (float)
    offset = offsets[8 + n * 10];
    *(reinterpret_cast<float*> (&buffer[0] + offset)) =
            _lights[n].spotCutoff;
    // Light spot exponent (float)
    offset = offsets[9 + n * 10];
    *(reinterpret_cast<float*> (&buffer[0] + offset)) =
            _lights[n].spotExponent;
}
glBufferData (GL_UNIFORM_BUFFER, uboSize, &buffer[0], GL_DYNAMIC_DRAW);
glBindBufferBase (GL_UNIFORM_BUFFER, 0, light_ubo);
glUniformBlockBinding (_phongProgram, uniformBlockIndex, 0);

Filling the Material uniform buffer is easier:

glGenBuffers (1, &material_ubo);
glBindBuffer (GL_UNIFORM_BUFFER, material_ubo);

const GLchar *uniformNames[4] =
{
    "Material.ambient",
    "Material.diffuse",
    "Material.specular",
    "Material.shininess"
};
GLuint uniformIndices[4];

glGetUniformIndices (_phongProgram, 4, uniformNames, uniformIndices);

GLint uniformOffsets[4];
glGetActiveUniformsiv (_phongProgram, 4, uniformIndices,
        GL_UNIFORM_OFFSET, uniformOffsets);

GLuint uniformBlockIndex = glGetUniformBlockIndex (_phongProgram,
        "Material");

GLsizei uniformBlockSize (0);
glGetActiveUniformBlockiv (_phongProgram, uniformBlockIndex,
        GL_UNIFORM_BLOCK_DATA_SIZE, &uniformBlockSize);

const unsigned int uboSize (uniformBlockSize);
std::vector<unsigned char> buffer (uboSize);

int offset;
// Sphere material ambient color (vec4)
offset = uniformOffsets[0];
for (int i = 0; i < 4; ++i)
{
    *(reinterpret_cast<float *> (&buffer[0] + offset)) =
        _sphereMat.ambient[i];
    offset += sizeof (GLfloat);
}
// Sphere material diffuse color (vec4)
offset = uniformOffsets[1];
for (int i = 0; i < 4; ++i)
{
    *(reinterpret_cast<float *> (&buffer[0] + offset)) =
        _sphereMat.diffuse[i];
    offset += sizeof (GLfloat);
}
// Sphere material specular color (vec4)
offset = uniformOffsets[2];
for (int i = 0; i < 4; ++i)
{
    *(reinterpret_cast<float *> (&buffer[0] + offset)) =
        _sphereMat.specular[i];
    offset += sizeof (GLfloat);
}
// Sphere material shininess (float)
offset = uniformOffsets[3];
*(reinterpret_cast<float *> (&buffer[0] + offset)) =
        _sphereMat.shininess;

glBufferData (GL_UNIFORM_BUFFER, uboSize, &buffer[0], GL_DYNAMIC_DRAW);
glBindBufferBase (GL_UNIFORM_BUFFER, 1, material_ubo);
glUniformBlockBinding (_phongProgram, uniformBlockIndex, 1);

For more information on the subject I’d recommend reading the OpenGL SuperBible, 5th Edition, chapter 11, which has a thorough explanation of everything you need to know about uniform blocks and Uniform Buffer Objects.

Finally, we get to the end of the book: chapter 12. This chapter covers advanced geometry topics: buffers, instanced rendering, getting the results of a vertex shader execution back (transform feedback), getting information from OpenGL through queries, synchronization and clipping.

I implemented an amazing example which uses some of the aforementioned features: geometry buffers, instanced rendering and transform feedback. The example performs a GPU simulation of flocking behaviour. First, the animation data is processed by a vertex shader; this data doesn’t continue towards the rasterization stage but is captured via transform feedback. The rendering pass then uses the freshly updated data, together with the geometry of simple tiny paper airplanes, to render each flock member at its new position with its new velocity and direction.

I’ll show the vertex shader that updates the flocking data:

#version 330

precision highp float;

// Position and velocity inputs
layout (location = 0) in vec3 flock_position;
layout (location = 1) in vec3 flock_velocity;

// Outputs (via transform feedback)
out vec3 position_out;
out vec3 velocity_out;

// TBOs containing the position and velocity of other flock members
uniform samplerBuffer tex_positions;
uniform samplerBuffer tex_velocities;

// Parameters...
uniform int flock_size;
// These all have defaults. In the example application, these aren't changed.
// Just edit these and rerun the application. It's certainly possible to change
// these parameters at run time by hooking the uniforms up in the application.
uniform float rule1_weight = 0.17;
uniform float rule2_weight = 0.01;
uniform float damping_coefficient = 0.99999;
uniform float closest_allowed_dist = 500.0;

// Time varying uniforms
uniform vec3 goal;
uniform float timestep;

// The two per-member rules
vec3
rule1 (vec3 my_position, vec3 my_velocity, vec3 their_position, vec3 their_velocity)
{
    vec3 d = my_position - their_position;
    if (dot (d, d) < closest_allowed_dist)
        return d;
    return vec3(0.0);
}

vec3
rule2 (vec3 my_position, vec3 my_velocity, vec3 their_position, vec3 their_velocity)
{
     vec3 d = their_position - my_position;
     vec3 dv = their_velocity - my_velocity;
     return dv / (dot (d, d) + 10.0);
}

void main (void)
{
    vec3 acceleration = vec3(0.0);
    vec3 center = vec3(0.0);
    vec3 new_velocity;
    int i;

    // Apply rules 1 and 2 for my member in the flock (based on all other
    // members)
    for (i = 0; i < flock_size; i++) {
        if (i != gl_VertexID) {
            vec3 their_position = texelFetch(tex_positions, i).xyz;
            vec3 their_velocity = texelFetch(tex_velocities, i).xyz;
            acceleration += rule1(flock_position, flock_velocity, their_position, their_velocity) * rule1_weight;
            acceleration += rule2(flock_position, flock_velocity, their_position, their_velocity) * rule2_weight;
            center += their_position;
        }
    }
    // Also accelerate towards the goal (rule 3)
    acceleration += normalize (goal - flock_position) * 0.025;
    // Update position based on prior velocity and timestep
    position_out = flock_position + flock_velocity * timestep;
    // Update velocity based on calculated acceleration
    acceleration = normalize (acceleration) * min (length (acceleration), 10.0);
    new_velocity = flock_velocity * damping_coefficient + acceleration * timestep;
    // Hard clamp speed (length of velocity) to 10 to prevent insanity
    if (length (new_velocity) > 10.0)
        new_velocity = normalize (new_velocity) * 10.0;
    velocity_out = new_velocity;
    // Write position (not strictly necessary as we're capturing user defined
    // outputs using transform feedback)
    gl_Position = vec4 (flock_position * 0.1, 1.0);
}
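The two per-member rules translate directly to the CPU, which makes them easy to reason about in isolation. A minimal sketch (my own `V3` helpers, not the book’s code): rule 1 is separation, rule 2 is velocity matching damped by squared distance.

```cpp
#include <cassert>
#include <cmath>

struct V3 { float x, y, z; };
static float dot (V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static V3 sub (V3 a, V3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Separation (rule 1): push away from neighbours closer than the
// allowed squared distance; otherwise contribute nothing.
V3 rule1 (V3 myPos, V3 theirPos, float closestAllowedDistSq)
{
    V3 d = sub (myPos, theirPos);
    if (dot (d, d) < closestAllowedDistSq)
        return d;
    return { 0.f, 0.f, 0.f };
}

// Velocity matching (rule 2): steer towards the neighbour's velocity,
// weighted down by squared distance (+10 avoids division blow-up).
V3 rule2 (V3 myPos, V3 myVel, V3 theirPos, V3 theirVel)
{
    V3 d  = sub (theirPos, myPos);
    V3 dv = sub (theirVel, myVel);
    float w = dot (d, d) + 10.f;
    return { dv.x / w, dv.y / w, dv.z / w };
}
```

Note that `closest_allowed_dist = 500.0` in the shader is compared against `dot(d, d)`, so it is really a squared distance, i.e. a radius of about 22 units.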

The rendering shaders are simpler and aren’t as important as the update vertex shader, so there is no point in posting them here. Remember that all the source code is available in the book’s page.

And finally, a screencast of the effect:

I’ll post another example from the eleventh chapter: a Julia set renderer. I won’t explain what a Julia set is, but I’ll post the vertex and fragment shaders used to create it ;).

Vertex Shader:

#version 330

precision highp float;

// Attributes per vertex: position and texture coordinates
in vec4 position;
in vec2 texCoords;

uniform mat4 MVP;
uniform float zoom;
uniform vec2 offset;

out Fragment
{
    vec2 texCoord;
} fragment;

out vec2 initialZ;

void main(void)
{
    // Don't forget to transform the geometry!
    gl_Position = MVP * position;

    fragment.texCoord = texCoords;

    initialZ = (position.xy * zoom) + offset;
}

Fragment Shader:

#version 330

precision highp float;

in Fragment
{
    vec2 texCoord;
} fragment;

in vec2 initialZ;

uniform sampler1D texGradient;
uniform vec2 C;
uniform int maxIterations = 1000;

out vec4 outColor;

void main (void)
{
    vec2 Z = initialZ;
    int iterations = 0;
    const float threshold_squared = 16.f;

    while (iterations < maxIterations && dot (Z, Z) < threshold_squared)
    {
        vec2 z_squared;
        z_squared.x = Z.x * Z.x - Z.y * Z.y;
        z_squared.y = 2.f * Z.x * Z.y;
        Z = z_squared + C;
        ++iterations;
    }

    if (iterations == maxIterations)
        outColor = vec4 (0.f, 0.f, 0.f, 1.f);
    else
        outColor = texture (texGradient, float (iterations) / float
                (maxIterations));
}
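The escape-time loop in the fragment shader is pure arithmetic, so it can be reproduced verbatim on the CPU. A minimal sketch (function name mine): iterate Z = Z² + C until |Z|² exceeds the threshold or the iteration budget runs out, exactly as each fragment does.

```cpp
#include <cassert>

// CPU version of the fragment shader's loop. Returns the number of
// iterations before escape; maxIterations means the point is in the set.
int juliaIterations (float zx, float zy, float cx, float cy,
                     int maxIterations = 1000, float thresholdSq = 16.f)
{
    int iterations = 0;
    while (iterations < maxIterations && zx * zx + zy * zy < thresholdSq)
    {
        // Complex square: (x + iy)^2 = (x^2 - y^2) + i(2xy), then add C
        float x = zx * zx - zy * zy + cx;
        float y = 2.f * zx * zy + cy;
        zx = x;
        zy = y;
        ++iterations;
    }
    return iterations;
}
```

The iteration count is what indexes the 1D gradient texture in the shader, which is where all the color banding in the screenshots comes from.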

And some cool screenshots!

In this chapter the book covers some more advanced shader topics that will allow you to use your programmable graphics hardware for more than simple polygon rendering.

In this post I will focus on the first part of the chapter. An entirely new shader stage is introduced here: the geometry shader, which can process entire primitives and even generate new primitives on the fly. I’ll present four examples of simple geometry shader usage. They may not be very useful, but this is only introductory material ;).

The vertex and fragment shaders for the first three examples are the same. Vertex shader:

#version 330

precision highp float;

// Attributes per vertex: position and normal
in vec4 vertex;
in vec3 normal;

uniform mat4 MVP;
uniform mat4 modelview;
uniform mat3 normalMatrix;
uniform vec3 lightPos;

out VertexData
{
    vec4 color;
    vec3 normal;
} vertexData;

void main(void)
{
    vec3 N = normalize (normalMatrix * normal);

    // Get vertex position in eye coordinates
    vec4 vertexPos = modelview * vertex;
    vec3 vertexEyePos = vertexPos.xyz / vertexPos.w;

    // Get vector to light source
    vec3 L = normalize(lightPos - vertexEyePos);

    // Dot product gives us diffuse intensity
    vertexData.color = vec4 (.3f, .3f, .9f, 1.f) * max (0.f, dot (N, L));

    // Don't forget to transform the geometry!
    gl_Position = MVP * vertex;
    vertexData.normal = normal;
}
Fragment shader:

#version 330

precision highp float;

smooth in vec4 color;

out vec4 outColor;

void main (void)
{
    outColor = color;
}

And here are the examples:

Culling

This example uses an “imaginary” viewpoint and culls the triangles that face away from it.

#version 330

precision highp float;

layout (triangles) in;
layout (triangle_strip, max_vertices = 3) out;

in VertexData
{
    vec4 color;
    vec3 normal;
} vertexData[];

uniform vec3 viewpoint;

smooth out vec4 color;

void main(void)
{
    // Calculate two vectors in the plane of the input triangle
    vec3 ab = gl_in[1].gl_Position.xyz - gl_in[0].gl_Position.xyz;
    vec3 ac = gl_in[2].gl_Position.xyz - gl_in[0].gl_Position.xyz;
    vec3 normal = normalize (cross (ac, ab));

    // Calculate the transformed face normal and the view direction vector
    vec3 vt = normalize (viewpoint - gl_in[0].gl_Position.xyz);

    // Take the dot product of the normal with the view direction
    float d = dot (vt, normal);

    // Emit a primitive only if the sign of the dot product is positive
    if (d > 0.f)
    {
        for (int i = 0; i < gl_in.length (); ++i)
        {
            gl_Position = gl_in[i].gl_Position;
            color = vertexData[i].color;
            EmitVertex ();
        }

        EndPrimitive ();
    }
}
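The facing test the geometry shader performs is just a cross product and a dot product, which can be sanity-checked on the CPU. A minimal sketch (my own `P3` helpers, for illustration only); normalization is skipped since only the sign of the dot product matters for the cull decision:

```cpp
#include <cassert>

struct P3 { float x, y, z; };
static P3 sub (P3 a, P3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static P3 cross (P3 a, P3 b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
static float dot (P3 a, P3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Same test as the geometry shader: keep the triangle when its normal
// (cross(ac, ab), matching the shader's winding) points towards the
// imaginary viewpoint.
bool facesViewpoint (P3 a, P3 b, P3 c, P3 viewpoint)
{
    P3 normal = cross (sub (c, a), sub (b, a));
    P3 vt = sub (viewpoint, a);
    return dot (vt, normal) > 0.f;
}
```

Moving the viewpoint to the opposite side of the same triangle flips the sign of the dot product and culls it, which is what produces the "holes" in the screenshots.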

Explode

This example computes the triangles’ normals and displaces the triangles along them.

#version 330

precision highp float;

layout (triangles) in;
layout (triangle_strip, max_vertices = 3) out;

in VertexData
{
    vec4 color;
    vec3 normal;
} vertexData[];

uniform float explode_factor;

smooth out vec4 color;

void main(void)
{
    vec3 face_normal = normalize (
            cross (gl_in[2].gl_Position.xyz - gl_in[0].gl_Position.xyz,
                gl_in[1].gl_Position.xyz - gl_in[0].gl_Position.xyz));

    for (int i = 0; i < gl_in.length (); ++i)
    {
        color = vertexData[i].color;
        gl_Position = gl_in[i].gl_Position + vec4 (explode_factor * face_normal,
                0.f);
        EmitVertex ();
    }
    EndPrimitive ();
}

Tessellate

This example creates a new vertex on every triangle of a cube, placing it at the same distance from the center as the rest of the vertices, then emits a triangle strip that includes the new vertex. We now get two triangles for each one that entered the geometry shader!

Note: for this example, the vertex positions don’t have to be transformed in the vertex shader; it’s just a pass-through. (Thanks, Lezardong)

#version 330

precision highp float;

layout (triangles) in;
layout (triangle_strip, max_vertices = 6) out;

in VertexData
{
    vec4 color;
    vec3 normal;
} vertexData[];

uniform mat4 MVP;

smooth out vec4 color;

void main(void)
{
    vec3 a = normalize(gl_in[0].gl_Position.xyz);
    vec3 b = normalize(gl_in[1].gl_Position.xyz);
    vec3 c = normalize(gl_in[2].gl_Position.xyz);

    vec3 d = normalize(b + c);

    gl_Position = MVP * vec4 (b, 1.f);
    color = vec4 (1.f, 0.f, 0.f, 1.f);
    EmitVertex ();

    gl_Position = MVP * vec4 (d, 1.f);
    color = vec4 (0.f, 1.f, 0.f, 1.f);
    EmitVertex ();

    gl_Position = MVP * vec4 (a, 1.f);
    color = vec4 (0.f, 0.f, 1.f, 1.f);
    EmitVertex ();

    gl_Position = MVP * vec4 (c, 1.f);
    color = vec4 (1.f, 0.f, 1.f, 1.f);
    EmitVertex ();
}

Normals

The last example uses a slightly different vertex shader to create a new effect: rendering the model’s normals. The geometry shader transforms the triangles into lines representing the vertex and face normals. To do this, the vertex shader must not transform the geometry, which is why the following vertex shader is just a pass-through:

#version 330

precision highp float;

// Attributes per vertex: position and normal
in vec4 position;
in vec3 normal;

out VertexData
{
    vec3 normal;
} vertexData;

void main(void)
{
    // Pass through the data; the geometry shader will transform it
    gl_Position = position;
    vertexData.normal = normal;
}
The geometry shader:

#version 330

precision highp float;

layout (triangles) in;
layout (line_strip, max_vertices = 8) out;

in VertexData
{
    vec3 normal;
} vertexData[];

uniform mat4 MVP;

smooth out vec4 color;

void main(void)
{
    // Normals for the triangle vertices
    for (int i = 0; i < gl_in.length (); ++i)
    {
        color = vec4 (1.f, .3f, .3f, 1.f);
        gl_Position = MVP * gl_in[i].gl_Position;
        EmitVertex ();

        color = vec4 (0.f);
        gl_Position = MVP * (gl_in[i].gl_Position + vec4 (vertexData[i].normal
                    * .05f, 0.f));
        EmitVertex ();

        EndPrimitive ();
    }

    // Triangle face normal (from triangle's centroid)
    vec4 cent = (gl_in[0].gl_Position + gl_in[1].gl_Position +
            gl_in[2].gl_Position) / 3.f;
    vec3 face_normal = normalize (
            cross (gl_in[1].gl_Position.xyz - gl_in[0].gl_Position.xyz,
                gl_in[2].gl_Position.xyz - gl_in[0].gl_Position.xyz));

    gl_Position = MVP * cent;
    color = vec4 (.3f, 1.f, .3f, 1.f);
    EmitVertex ();

    gl_Position = MVP * (cent + vec4 (face_normal * .1f, 0.f));
    color = vec4 (0.f);
    EmitVertex ();

    EndPrimitive ();
}
And the fragment shader:

#version 330

precision highp float;

smooth in vec4 color;

out vec4 outColor;

void main (void)
{
    outColor = color;
}

And finally, here are some cool screenshots!

This chapter walks through the last steps in the OpenGL pipeline, the per-fragment operations. These are the scissor test, multi-sample operations, stencil test, depth buffer test, blending, logic operations and dithering.

The example in this chapter plays a little with blending and the depth test. The main code here consists of OpenGL state operations, so I won’t be posting any shaders this time. Nevertheless, enjoy the pictures! The first one corresponds to an Order Independent Transparency implementation; the rest are different blending equations that don’t blend correctly because of the many rectangles involved.
