LWJGL 3D Terrain: Demo and Source code available
October 6th, 2012 at 06:26 PM
A demo and the source code for creating 3D terrains using LWJGL and OpenGL 3.3+ is now available over at Static Void Games. Props to KevinWorkman for creating the site. NOTE: The page suggests that there is an Applet. This is currently NOT true. Use the Java WebStart version or download and build the source yourself.
Even as I write this, the available code has already been modified and improved. Once I collect enough of these changes I'll update the code and the demo.
This post will also go into a few of the details involved in using OpenGL and GLSL (the OpenGL Shading Language). I won't delve too deeply into the details of generating the height map, as I pretty much just followed this page, using a sine-based interpolation. I will be focusing exclusively on OpenGL 3.3+.
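To give a feel for that interpolation step, here's a minimal, self-contained sketch of sine-based (half-cosine) blending between two lattice heights, the kind of smoothing commonly used when building value-noise height maps. The method name and the 0..1 parameter t are my own choices for illustration, not the demo's actual code:

```java
public class SineInterp {
    // Smoothly blend between heights a and b as t goes from 0 to 1,
    // using the half-cosine curve common in value-noise height maps.
    public static double sineInterp(double a, double b, double t) {
        double f = (1.0 - Math.cos(t * Math.PI)) * 0.5; // 0 at t=0, 1 at t=1
        return a * (1.0 - f) + b * f;
    }

    public static void main(String[] args) {
        // Endpoints are hit exactly; the midpoint is the average, but the
        // curve eases in and out instead of changing linearly.
        System.out.println(sineInterp(0.0, 10.0, 0.0)); // ~0.0
        System.out.println(sineInterp(0.0, 10.0, 0.5)); // ~5.0
        System.out.println(sineInterp(0.0, 10.0, 1.0)); // ~10.0
    }
}
```

Unlike linear interpolation, the half-cosine curve has zero slope at both endpoints, which hides the seams between lattice cells in the finished terrain.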
First things first: we need to understand the render pipeline OpenGL follows. Thankfully there's a nice article on OpenGL's Wiki page.
The current OpenGL render pipeline is diagrammed in that article.
There are a number of optional stages, such as the Tessellation and Geometry shaders.
The pipeline my demo follows is this:
Vertex Shader -> Clipping -> Rasterization -> Fragment Shader -> ... (everything else after the fragment shader).
The vertex and fragment shaders I use are very similar to the ones presented here.
My Vertex shader:
#version 330

// view matrix
uniform mat4 view;
// model matrix
uniform mat4 model;
// projection matrix
uniform mat4 proj;

layout(location = 0) in vec4 vertex;
layout(location = 1) in vec4 color;
layout(location = 2) in vec3 normal;

// diffuse color
out vec4 Kd;
out vec4 vert_eye;
out vec4 norm_eye;

void main(void) {
    gl_Position = proj * view * model * vertex;
    Kd = color;
    // include the model matrix so the eye-space position and normal stay
    // correct when the terrain (rather than the camera) is moved; this
    // assumes no non-uniform scaling, otherwise a normal matrix is needed
    vert_eye = view * model * vertex;
    norm_eye = view * model * vec4(normal, 0);
}
The vertex shader handles the basic vertex positioning and converts the model data into the form the fragment shader uses. Because of the way I'm calculating material properties, this shader also takes a color attribute (generated on the Java side), which serves primarily as the diffuse color.
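For a sense of what "generated on the Java side" might look like, here's a small sketch that maps a terrain height to an RGBA diffuse color. The exact mapping (low ground toward green, peaks toward white) and the method name are illustrative assumptions of mine, not the demo's actual scheme:

```java
public class VertexColors {
    // Map a terrain height in [min, max] to an RGBA diffuse color:
    // low ground tends toward green, high ground toward white.
    // This particular mapping is illustrative, not the demo's actual code.
    public static float[] colorForHeight(float h, float min, float max) {
        float t = (h - min) / (max - min);            // normalize to 0..1
        t = Math.max(0f, Math.min(1f, t));            // clamp
        return new float[] { t, 0.5f + 0.5f * t, t, 1f }; // r, g, b, a
    }

    public static void main(String[] args) {
        float[] low = colorForHeight(0f, 0f, 100f);
        float[] high = colorForHeight(100f, 0f, 100f);
        System.out.printf("low:  r=%.2f g=%.2f b=%.2f%n", low[0], low[1], low[2]);
        System.out.printf("high: r=%.2f g=%.2f b=%.2f%n", high[0], high[1], high[2]);
    }
}
```

Each vertex's color would then be uploaded alongside its position and normal as the attribute bound to location 1 in the shader.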
The Fragment Shader:
#version 330
// Blinn-Phong lighting with 1 light source

in vec4 vert_eye;
in vec4 norm_eye;
// diffuse color
in vec4 Kd;

// view matrix
uniform mat4 view;
// model matrix
uniform mat4 model;
// source light color
uniform vec4 Ld = vec4(1, 1, 1, 1);
// source light location
uniform vec4 lpos = vec4(0, 0, 0, 0);
// specular color
uniform vec4 Ks = vec4(0.01, 0.01, 0.01, 0.01);
// specular exponent
uniform float n = 100.0;
// ambient color
uniform vec4 Ka = vec4(0.05, 0.05, 0.05, 0.05);

out vec4 out_color;

void main(void) {
    // surface normal
    vec4 n_eye = normalize(norm_eye);
    // direction from surface fragment to light
    vec4 s_eye = normalize(view * lpos - vert_eye);
    // direction from fragment to viewer (the camera sits at the eye-space origin)
    vec4 v_eye = normalize(-vert_eye);
    // half vector between the view and light directions
    vec4 h_eye = normalize(v_eye + s_eye);
    // vec4 r_eye = reflect(-s_eye, n_eye);

    // ambient illuminance
    vec4 Ia = vec4(0.1, 0.1, 0.1, 1) * Ka;
    // diffuse illuminance
    vec4 Id = Ld * Kd * max(dot(s_eye, n_eye), 0.0);
    // specular illuminance (Blinn-Phong: half vector against the normal)
    vec4 Is = Ld * Ks * pow(max(dot(h_eye, n_eye), 0.0), n);

    out_color = Ia + Id + Is;
}
The fragment shader is slightly more complicated, mainly because in modern OpenGL all lighting must be implemented in shaders. So this shader stores information about the light along with the other Phong material properties, such as how the material responds to ambient and specular illuminance (the site linked above, where I got this code, explains Phong lighting quite well).
Now, in my code I created separate objects representing the basic entities that need to be captured: the camera (Camera class), the light (OmniLight class), and the terrain (Perlin class). Rather than repost the code in full, go to Static Void Games and download the source from there. These classes are fairly basic; they merely hold the data and know how to transmit it from the client (Java) to the server (GPU/OpenGL).
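A core part of that client-to-server step is packing the per-vertex data into a native-order buffer before OpenGL can consume it. Here's a rough, LWJGL-free sketch of just the packing, assuming an interleaved position/color/normal layout matching the shader's attribute locations; the class and method names are mine, and the actual GPU upload call is only indicated in a comment:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class VertexPacking {
    // Pack per-vertex position (4 floats), color (4), and normal (3) into one
    // interleaved, native-order FloatBuffer, ready to hand to OpenGL.
    public static FloatBuffer pack(float[][] positions, float[][] colors,
                                   float[][] normals) {
        int floatsPerVertex = 4 + 4 + 3;
        FloatBuffer buf = ByteBuffer
                .allocateDirect(positions.length * floatsPerVertex * Float.BYTES)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        for (int i = 0; i < positions.length; i++) {
            buf.put(positions[i]).put(colors[i]).put(normals[i]);
        }
        buf.flip(); // rewind so reads start at the beginning
        // In the real classes this buffer would now go to the GPU, e.g.:
        // glBindBuffer(GL_ARRAY_BUFFER, vbo);
        // glBufferData(GL_ARRAY_BUFFER, buf, GL_STATIC_DRAW);
        return buf;
    }

    public static void main(String[] args) {
        FloatBuffer buf = pack(
                new float[][] {{0, 0, 0, 1}},   // one position
                new float[][] {{1, 0, 0, 1}},   // its diffuse color
                new float[][] {{0, 1, 0}});     // its normal
        System.out.println(buf.remaining());    // 11 floats per vertex
    }
}
```

The direct, native-order allocation matters: OpenGL reads the memory directly, so a heap-backed buffer in the JVM's default byte order would not work.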
The bulk of the actual application code comes in the TerrainApp class. There are a few things I want to highlight about coordinates and handling user input.
In my code the camera always sits at the origin (0,0,0). This lets the scene rotate "around the user": instead of moving the camera, the terrain's model matrix is moved.
Now, the transformations performed on the matrices are affine transformations, which can be thought of as moving in "global" coordinates rather than the camera's "local" coordinates. To move relative to where the camera is facing, the camera's orientation can be extracted from the view matrix.
The first 3 columns hold the camera's local x, y, and z axes. So to move in the z direction (forwards/backwards), I extract the first 3 rows of the 3rd column. Likewise, the 2nd column holds the y axis and the 1st column holds the x axis.
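The column extraction above can be sketched in a few lines, assuming the view matrix is stored as a 16-element column-major array (OpenGL's usual convention); the class and method names are my own, not the demo's:

```java
public class CameraAxes {
    // Extract column 'col' (0 = x axis, 1 = y axis, 2 = z axis) from a
    // 4x4 column-major matrix: the first three rows of that column.
    public static float[] axis(float[] view, int col) {
        return new float[] { view[4 * col], view[4 * col + 1], view[4 * col + 2] };
    }

    public static void main(String[] args) {
        // Identity view matrix: the camera's axes line up with the world axes.
        float[] view = {
            1, 0, 0, 0,   // column 0: local x axis
            0, 1, 0, 0,   // column 1: local y axis
            0, 0, 1, 0,   // column 2: local z axis
            0, 0, 0, 1    // column 3: translation
        };
        float[] z = axis(view, 2);
        // Moving "forwards" would then translate the terrain's model matrix
        // along this axis, scaled by the movement speed.
        System.out.printf("z axis: (%.0f, %.0f, %.0f)%n", z[0], z[1], z[2]);
    }
}
```

Because each stride of 4 floats is one column in column-major storage, `view[4 * col + row]` addresses the matrix entry at that row and column directly.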
Those are the main points I wanted to highlight. Feel free to ask questions and dig through the source code if you want to learn more about creating 3D terrains using LWJGL.
The code is available AS-IS with no warranty or guarantees. You are free to use it as you wish.