I will leave a (dd/mm/yy) formatted date here, so you know when this guide was last updated: 19/10/20
- Added some info about texturing
- Moved the post to my website
- Tested with the latest version of GMS2, changed FPS to 60
- Added info about transparency and 3D
- Added more correct info about negative aspect ratios and up-vectors
- (Main content was last updated 16/04/18)
I’ll also update the guide if there are better methods that I haven’t thought of/found out about yet.
GM Version: IDE: 2.2.4.451, RUNTIME: 2.2.4.357
Target Platform: ALL
Download: Built Project
Summary:
So, you want to get started in 3D with GMS2? Well, here’s a few things you should know first:
- I will be referring to “GameMaker Studio 2” as “GMS2” throughout the guide, because I am too lazy to keep writing it in full.
- This guide is written with beginners to 3D in mind. If I’m explaining something and you think you already know all about it, just skip it. Alternatively, if something I’ve written is confusing, don’t be afraid to ask and I’ll try to be more clear!
- I will be referencing matrices a lot. They are necessary for almost anything in 3D, but you don’t necessarily have to understand their inner workings – many matrices are autogenerated and assigned in code without you ever touching them. However, knowing how they work can help a lot.
- GMS2, much like GM:S 1.X, is NOT a 3D engine. It is primarily designed for 2D, so 3D games require a LOT of manual work to set up.
- You will need to know how to use GML. I don’t know if any D&D functions have been added for 3D (I don’t believe so), but this guide will be GML-based anyway.
- This guide may be easier to follow if you have ever used the “d3d_*” functions; however, they no longer exist. This guide is written with beginners to 3D in mind anyway.
- You can no longer import models using “d3d_model_load” (well, you CAN, but that uses compatibility scripts and is not recommended), so it’s a good idea to write your own model importer/exporter. I’ve already written an exporter for Blender that creates a buffer file that can be loaded with buffer_load – check the extension here: https://marketplace.yoyogames.com/assets/5839/export-3d-blendertogm
- The new tiles do work in a 3D environment, and their z-coordinate is directly taken from layer depth.
- I’ll not be covering HUD set up, as that works the same as it has since 1.X (draw GUI), and there are already guides and help available.
- Remember to use the manual! It answers a LOT of the questions you may have, and can give you more information about a function.
- If the manual doesn’t help you, you can ask a question here, and I will try to help!
Now that that’s out of the way, here’s a list of what I’ll be covering:
- Setting up 3D requirements and a 3D projection with the new camera
- Using a vertex buffer to build a model and render it
- Transforming models with the new matrix functionality
- Other useful d3d_* functions that have been replaced (texture repeat, texture interpolation, backface culling, etc.)
- Other odds-n-ends I think I should talk about (batching, layer depth)
- Why Can’t I See Things Behind My Transparent/Translucent Model??
If you go through the guide from start to finish, this is pretty much what you’ll see:
Not super impressive, but it covers the basics.
So, let’s get started then!
Tutorial:
1) Setting up 3D requirements and a 3D projection with the new camera
To begin with, the following code will all go in the create event of some kind of control object, unless I specify otherwise.
Okay, before we even get started, we need to enable some things.
In 3D, we have something called the “z-buffer” – this basically enables the sorting of draw order, so things that are further away appear to be behind things that are closer (note that this behaviour CAN be changed in GMS2). If we don’t enable the z-buffer, then rendering will look very odd, as things far away may draw on top of things that are closer (even within a single model!).
To enable the z-buffer, we just use this code:
gpu_set_zwriteenable(true);//Enables writing to the z-buffer
gpu_set_ztestenable(true);//Enables depth testing, so far away things are drawn behind closer things
You may also find it useful to force layers to draw at a depth of 0 (this will stop layers translating the z-coordinate of what you draw on any layer – though you may find the default behaviour preferable in some cases):
layer_force_draw_depth(true, 0);
Believe it or not, that’s all the required set up needed to get rendering in 3D!
Now we can move onto the next step – setting up and using a camera in a 3D environment.
Since the GMS2 update, views have been totally replaced with cameras, and so have the d3d projection functions – they are now one and the same thing, which is far more convenient once you get used to them.
The first thing we’ll want to do is create a camera, assign it to a view, as well as enable views and make yours visible. You can still create views in the room editor, but sometimes it’s nice to have all the code in front of you.
Technically speaking, you don’t even need a view to set up a 3D projection, however, it makes managing the camera much easier.
Note that, as previously, the game window size will default to the view port/room size of the first room. For this reason, you should have a “set-up room”, to get things ready and set the size to be appropriate. Otherwise, you could use the window_* functions combined with view_set_*port to get the size you want.
Anyway, the code!
First, getting our view to actually appear (you don’t need to do this if you have set up a view in the room editor):
//First, we need to enable views and make our view visible (in this case, view 0)
view_enabled = true;//Enable the use of views
view_set_visible(0, true);//Make this view visible
Next, we need to create a camera and make it fully functional. In order to do this, we must create the camera, build a projection matrix, assign it to the camera, and bind the camera to the view. We will keep the camera variable, though we will be using view_camera[0] to reference it later. We keep the camera variable in case we want to bind the camera to another view later, perhaps in another room, without re-creating it.
//First, create the camera. We could use camera_create_view, but that is more useful in a 2D environment
camera = camera_create();
/** Then, we need to build a projection matrix.
* I keep this in instance scope in case I need to reassign it later. (Though you can retrieve matrices from a camera with camera_get_* functions)
* I use matrix_build_projection_perspective_fov, as it gives the most control over how your projections looks.
*
* Here's how I use the arguments:
* I give a 60 degree vertical field of view,
* with an aspect ratio of view_wport/view_hport,
* a 32 unit near clipping plane, and a 32000 far clipping plane.
* Some of these values may need tweaking to your liking.
*/
projMat = matrix_build_projection_perspective_fov(-60, -view_get_wport(0)/view_get_hport(0), 32, 32000);
//Now we assign the projection matrix to the camera
camera_set_proj_mat(camera, projMat);
//Finally, we bind the camera to the view
view_set_camera(0, camera);
NOTE: In GMS2, setting the projection differs from 1.4. By default, it now does a conversion to right-handed coordinate space to stay consistent with 2D, but this inverts the up vector. To make the projection act like 1.4 and before – left-handed space, with the up-vector acting as expected – you must make the fov and aspect ratio values negative, hence the code above. If you are happy working in the same coordinate space as the room editor (which I understand!), just make sure to invert the up-vector.
You may notice that we have assigned a projection matrix, but didn’t actually point the camera at anything. This happens later, and separately – this is one of the benefits of the new camera system – we only have to build the projection once, saving a lot of time compared to calling d3d_set_projection_ext every draw frame, as half the math is already done.
If you want your camera to have an orthographic projection (instead of perspective), replace the matrix build with this function:
matrix_build_projection_ortho(width, height, znear, zfar);
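For example, an orthographic camera matching the view port, using the same clipping planes as above (just a sketch – depending on your preferred handedness, you may need to negate values here too, as with the perspective version):
projMat = matrix_build_projection_ortho(view_get_wport(0), view_get_hport(0), 32, 32000);
camera_set_proj_mat(camera, projMat);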
There is one more thing you need to do to get your camera ready! Note that we still haven’t assigned a lookat matrix to the camera! This is used to actually point a camera from a location, to a location.
Now, you could update this matrix in the step or draw event, but cameras come with a new, cool function – camera_set_update_script() (there are begin_script and end_script functions too! Check the manual to see more about them!). Basically, this script is called every draw update per view, assuming the camera is bound to the view. This means you can enable frame skipping, or disable a view, and a camera won’t waste time updating what it doesn’t need to!
So, to use a camera update script, you must create a script. Ideally, the contents of these scripts should be self-contained, and the only external references should be globals or instance variables that you know exist, otherwise the game will probably crash.
For this demo, I just created a camera that would spin around the center of the room, based on time. The script is named “camera_update_script”, and contains the code:
//Set up camera location
var zz = -640;
var xx = lengthdir_x(720,-current_time/10) + (room_width*0.5);//Rotation is negative now to match with the old gif and spin clockwise
var yy = lengthdir_y(720,-current_time/10) + (room_height*0.5);
//Build a matrix that looks from the camera location above, to the room center. The up vector points to -z
mLookat = matrix_build_lookat(xx,yy,zz, (room_width*0.5),(room_height*0.5),0, 0,0,-1);
//Assign the matrix to the camera. This updates where the camera is looking from, without having to unnecessarily update the projection.
camera_set_view_mat(view_camera[0], mLookat);
Note that we now reference view_camera[0]. I have done this because it is the only proper way to guarantee that we are targeting the camera that belongs to view 0.
As another point of interest, you could set up a test so that the camera only updates the lookat matrix if it has moved or changed orientation, which can save even more processing time.
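Something like this inside the update script would do it (a sketch – prevX/prevY/prevZ are assumed instance variables, not part of the demo):
//Only rebuild the lookat matrix when the camera has actually moved
if (xx != prevX || yy != prevY || zz != prevZ)
{
    mLookat = matrix_build_lookat(xx,yy,zz, (room_width*0.5),(room_height*0.5),0, 0,0,-1);
    camera_set_view_mat(view_camera[0], mLookat);
    prevX = xx;
    prevY = yy;
    prevZ = zz;
}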
Okay, the last thing you need to do is assign the update script to the camera. Back in the create event (where your other code should be), just add this:
//Assigns the update script named "camera_update_script" to the camera belonging to view0
camera_set_update_script(view_camera[0], camera_update_script);
And that is it! If you set up this camera in a simple, tiled room, it should look something like this:
Aiming and moving your camera around via user input is a little more work and more project-specific (e.g. 3rd person will have different controls to first person, and a flight sim would be different again), but as long as you have a sound understanding of algebra and trigonometry, it should be pretty straightforward.
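Purely as an illustration (this is a sketch, not part of the demo project), a rough mouse-look/WASD update script could look something like this – z and pitch are assumed instance variables, and the up-vector matches the left-handed setup from earlier:
//Mouse look: turn by how far the mouse moved from the window center, then re-center it
direction -= (window_mouse_get_x() - window_get_width() * 0.5) * 0.1;
pitch = clamp(pitch - (window_mouse_get_y() - window_get_height() * 0.5) * 0.1, -85, 85);
window_mouse_set(window_get_width() * 0.5, window_get_height() * 0.5);
//W/S to move along the current heading
var _spd = 4 * (keyboard_check(ord("W")) - keyboard_check(ord("S")));
x += lengthdir_x(_spd, direction);
y += lengthdir_y(_spd, direction);
//Look from the instance position toward a point just ahead of it (z is "negative up" here)
var _dx = dcos(direction) * dcos(pitch);
var _dy = -dsin(direction) * dcos(pitch);
var _dz = -dsin(pitch);
camera_set_view_mat(view_camera[0], matrix_build_lookat(x, y, z, x + _dx, y + _dy, z + _dz, 0, 0, -1));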
2) Using a vertex buffer to build a model and render it
Okay, compared to the camera, this is probably harder to understand. However! Don’t be put off! I kept putting off learning buffer related things for years, simply because of the word “buffer” – it sounded hard and scary, but they’re actually pretty simple – especially in the context of vertex buffers, where a lot of work is done for you when it comes to reading the buffer.
Again, we’ll define most of this stuff in a create event, unless otherwise specified.
So, the vertex buffer is used to build models – or maybe more appropriately – “vertex layouts” – they work in both 2D and 3D, and allow you to draw things with stuff like custom attributes, or with the bare minimum data needed to render, to minimise overhead. When used in conjunction with shaders, they are super powerful.
A vertex buffer is effectively comprised of 2 parts:
- the format, which defines what the information contained within the vertex buffer represents
- the buffer, which actually contains the information passed to a shader for rendering
The first thing you’ll want to do is build a vertex format. The format contains information about what every single vertex in the buffer will contain. The one we will create will contain a 3D position, a color and a texture coordinate.
It is important to remember the order that you define information in a vertex format, as data added to the buffer must be in the same order. Each defined vertex must also contain all the information, even if you feel a specific vertex does not need part of it.
NOTE: The format described below represents what we lovingly call the “full-fat format” – it has all the elements required to use GameMaker’s default shader – if you miss any of these elements, drawing will fail with an “invalid input layout” error. If you wish to use fewer components, you need to write a shader. This can have a few benefits, such as slightly lower memory usage and lower overhead when the buffer is sent to the shader.
The code:
//Begin defining a format
vertex_format_begin();
vertex_format_add_position_3d();//Add 3D position info
vertex_format_add_color();//Add color info
vertex_format_add_textcoord();//Texture coordinate info
//End building the format, and assign the format to the variable "format"
format = vertex_format_end();
Now we have a format, we can create and build a vertex buffer. In this case, I’ll be building a simple, white plane (the flat type, not the flying one), built with triangle list usage in mind (to see more information on this, check the manual for draw_primitive_begin). You can build far more complex models, but that would be a little time consuming for this tutorial.
To build the plane, we effectively need 2 triangles. They are defined as follows:
//Create the vertex buffer. Another function, vertex_create_buffer_ext, can be used to create the buffer with its size predefined and fixed.
//With the standard vertex_create_buffer, the buffer will just grow automatically as needed.
vb_plane = vertex_create_buffer();
//Begin building the buffer using the format defined previously
vertex_begin(vb_plane, format);
//Using size to keep it square if we decide to change how big it is.
var size = 32;
//Add the six vertices needed to draw a simple square plane.
//The first triangle
vertex_position_3d(vb_plane, -size, -size, 0);
vertex_color(vb_plane, c_white, 1);
vertex_texcoord(vb_plane, 0, 0);
vertex_position_3d(vb_plane, size, -size, 0);
vertex_color(vb_plane, c_white, 1);
vertex_texcoord(vb_plane, 1, 0);
vertex_position_3d(vb_plane, -size, size, 0);
vertex_color(vb_plane, c_white, 1);
vertex_texcoord(vb_plane, 0, 1);
//The second triangle. The winding order has been maintained so drawing is consistent if culling is enabled.
vertex_position_3d(vb_plane, -size, size, 0);
vertex_color(vb_plane, c_white, 1);
vertex_texcoord(vb_plane, 0, 1);
vertex_position_3d(vb_plane, size, -size, 0);
vertex_color(vb_plane, c_white, 1);
vertex_texcoord(vb_plane, 1, 0);
vertex_position_3d(vb_plane, size, size, 0);
vertex_color(vb_plane, c_white, 1);
vertex_texcoord(vb_plane, 1, 1);
//Finish building the buffer.
vertex_end(vb_plane);
If this feels like a lot of code to just render a square, I think you’re right. I actually made a thing that makes building a vertex buffer slightly easier — Less Tedious Buffer Building
All that’s left is to render the vertex buffer – really, that is it! I even set up the plane to take texture coordinates properly, so it can have a texture, which is good for floors or ceilings.
So, let’s see the monster code that is necessary to render this buffer. This goes in the draw event:
//Arguments are just vertex_buffer, primitive type and texture.
vertex_submit(vb_plane, pr_trianglelist, -1);
And that’s how to build a basic vertex buffer and render it. Hopefully, you’ll be able to use this info to build buffers that are a little more complicated than a flat plane, which has little use anyway since the new tiles work in 3D.
If you wish to render the buffer with a texture from a sprite or a surface, for example, you need to get the texture ID (which is different to the sprite index or surface ID) and then pass it as the third argument in vertex_submit – if the third argument is -1, the default “blank” texture will be used.
To get the texture for a sprite, use sprite_get_texture(sprite_index, image_index), and for a surface use surface_get_texture(surfaceID). You can also get the texture for a font with font_get_texture, though it is not as useful since there is no way that I know of to get the UVs of a specific character in a font texture.
Then you can get the texture and submit the vertex buffer with it like this:
var _texture = sprite_get_texture(spr_demoTexture, 0);
vertex_submit(vb_plane, pr_trianglelist, _texture);
When rendering vertex buffers with textures, you’ll need to handle each texture’s UV coordinates, otherwise you can end up mapping a whole texture page to a single primitive. You can fix this by marking “Use separate texture page” in the sprite editor, or by using texture_get_uvs and using the returned values either when building the buffer (as sketched below), or passing them to a shader for re-mapping at draw time.
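As a sketch of the buffer-building route – texture_get_uvs returns an array whose first four entries are the left, top, right and bottom UVs of the image on its texture page:
var _tex = sprite_get_texture(spr_demoTexture, 0);
var _uvs = texture_get_uvs(_tex);
//When adding the plane's (1, 1) corner, use the sprite's actual corner on the page instead:
vertex_position_3d(vb_plane, size, size, 0);
vertex_color(vb_plane, c_white, 1);
vertex_texcoord(vb_plane, _uvs[2], _uvs[3]);//right, bottom – rather than 1, 1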
One more thing to note is that you’ll want to delete your vertex buffers and formats when you’re done with them to free up memory. Vertex buffers are deleted with vertex_delete_buffer(buffer), and formats are deleted with vertex_format_delete(format). Note that you can only delete a format when all buffers that use it are deleted, otherwise the runner will crash. There is also the function vertex_freeze(buffer) – this will disable editing/updating the buffer, but provides a performance increase.
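For the plane built earlier, that might look like this – freeze it right after vertex_end if it never changes, and clean up when you’re completely done (e.g., in the Clean Up event):
vertex_freeze(vb_plane);//The plane is static, so freeze it for faster submits
//...later, when finished with it:
vertex_delete_buffer(vb_plane);//Delete the buffer first...
vertex_format_delete(format);//...then its format is safe to delete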
So, now you know how to create a vertex buffer, but it can be a painfully boring process. It’s good to understand what they are and how they work (and really useful for creating dynamic meshes), but there’s a much easier way to create static meshes – Blender! And I made an export for Blender that creates models that can be loaded with buffer_load or my extension for GameMaker! The extension includes a simple model_load("path/to/file.dat") script and a vertex format building script – the models can then be rendered with vertex_submit.
If you want to move the model around, check out the next part: “Transforming models with the new matrix functionality”
3) Transforming models with the new matrix functionality
Ok, at this point, you have a little permission to shout. If you’ve just done the guide on vertex buffers, you could be shouting “Yeah, okay, but how do I move the model without building it over and over again at different positions?! This is DUMB!!!”
Luckily for you, there is a solution!
Using matrices, you can transform where things are drawn. In the most basic sense, you can transform the translation, rotation and scaling of how something is drawn. If you understand slightly more complex matrix usage, you can even do things like shearing a drawing. Don’t let this put you off though! You don’t *need* to understand matrices to use matrix_build, but it does help!
So, in order to transform your drawing, you use the function matrix_build. This takes in translation, rotation and scaling as arguments, and builds a matrix. The rotation order is YXZ, and the matrix is built to apply transforms in the order rotate->scale->translate. Note that this can have unwanted results when scaling each axis non-uniformly, as the scaling happens after rotation, unlike when drawing sprites, which scales and then rotates. To work around this, you must build at least 2 matrices and multiply them (or use the new matrix stack). NOTE: I did request that the order be configurable/changed, but it isn’t planned.
This gif shows how the result may be unexpected (matrix on the left, sprite_ext on the right, same transform input)
Anyway, onto building the matrix. For this example, I will use a matrix to move the drawing to the center of the room, and rotate it by 45 degrees about the Z-axis. We must then assign this matrix to the world matrix. This is best done in the draw event, right before you need it.
var mat = matrix_build(room_width * 0.5, room_height * 0.5, 0, 0, 0, 45, 1, 1, 1);
//The world matrix is what is used to transform drawing within "world" or "object" space.
matrix_set(matrix_world, mat);
You can then render what you want, or submit your vertex buffers. It works on both.
After drawing your transformed output, you should reset the transform matrix to the identity, so drawing returns to normal.
//Resetting transforms can be done like this:
matrix_set(matrix_world, matrix_build_identity());
If we use the vertex buffer from the previous section, we’ll have code like this:
var mat = matrix_build(room_width * 0.5, room_height * 0.5, 0, 0, 0, 45, 1, 1, 1);
//The world matrix is what is used to transform drawing within "world" or "object" space.
matrix_set(matrix_world, mat);
//Draw the buffer
vertex_submit(vb_plane, pr_trianglelist, -1);
//Resetting transforms can be done like this:
matrix_set(matrix_world, matrix_build_identity());
And that concludes the basics of transforming in 3D. These methods also work in 2D, as this is an exact replacement of the d3d_transform functions.
The above performs the same way as d3d_transform_set_*. To replace d3d_transform_add_*, you can multiply matrices together using matrix_multiply, like this:
newMatrix = matrix_multiply(currentTransformMatrix, addedTransformMatrix);
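For instance, to “add” a 45 degree Z rotation on top of whatever world matrix is already set (a sketch – note that matrix_multiply applies the first argument’s transform first):
var _extra = matrix_build(0, 0, 0, 0, 0, 45, 1, 1, 1);//The transform to add
var _current = matrix_get(matrix_world);//The transform already in place
matrix_set(matrix_world, matrix_multiply(_extra, _current));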
In future, I will add info about the matrix stack. From what I can tell, it works roughly the same way as the d3d_transform stack, but I am not yet confident enough to write a guide about it. It seems to be a handy way of storing matrices globally without the use of vars.
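For the curious, here’s a minimal sketch of how the stack functions appear to work, based on the manual (treat it as unverified):
matrix_stack_push(matrix_build(x, y, 0, 0, 0, 45, 1, 1, 1));//Combines with whatever is on top
matrix_set(matrix_world, matrix_stack_top());//Draw using the combined result
//...submit your vertex buffers here...
matrix_stack_pop();//Return to the previous transform
matrix_set(matrix_world, matrix_build_identity());//And reset when fully done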
4) Other useful d3d_* functions that have been replaced (texture repeat, texture interpolation etc.)
Well, this is more like a list, but I’ll go through them anyway. This is most useful to those used to the d3d_* functions, but I’ll give a brief explanation of what each one does. It’s not a definitive list, but it contains the functions I used the most.
One of the most useful functions is culling, which boosts performance by not rendering both sides of a “triangle”. This used to be d3d_set_culling(); the new equivalent is gpu_set_cullmode(). It is important to remember that this does NOT affect the drawing of transparent textures! Use gpu_set_alphatestenable and gpu_set_alphatestref for that. Draw-order still matters too!
The direct equivalent is gpu_set_cullmode(cull_counterclockwise); however, we can now reverse the cull order by using gpu_set_cullmode(cull_clockwise) instead – this is good for shader makers in particular. To disable culling, just use gpu_set_cullmode(cull_noculling). Note that depending on how you build your projection matrix, you may need to reverse the cull order.
gpu_set_texfilter is used for enabling linear interpolation, and gpu_set_texrepeat is used to enable texture repeating. Both have _ext versions for setting specific texture stages.
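Putting those together, a typical 3D render-state setup might look like this (a sketch):
gpu_set_cullmode(cull_counterclockwise);//Replaces d3d_set_culling(true)
gpu_set_texfilter(false);//No linear interpolation – keeps pixel art crisp
gpu_set_texrepeat(true);//Allow UVs outside 0-1 to tile the texture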
All the blendmode functions have been replaced with the gpu_get_blendmode/gpu_set_blendmode functions. Look ’em up! They are far more flexible now. You can also enable/disable blending with gpu_set_blendenable.
Fog and alpha tests are now “gpu_*” functions.
d3d_light_* functions still exist! They are just draw_light_* instead.
d3d_set_shading is gone! So you no longer have a choice between automatic flat or gouraud shading – the alternative is to write a shader and make sure your normals are correct for the type of shading you want. Luckily, my Blender export handles both flat and smooth normals on export.
“d3d_set_perspective()” is gone too (the one that flips the y axis in 3D mode), but it can be replicated. As noted in the camera section above, we used negative fov and aspect values in order to use left-handed space. E.g., go from this:
matrix_build_projection_perspective_fov(-60, -view_get_wport(0)/view_get_hport(0), 32, 32000);
to this:
matrix_build_projection_perspective_fov(60, view_get_wport(0)/view_get_hport(0), 32, 32000);
5) Other odds-n-ends I think I should talk about
Layer depth – this is a thing that can and will affect any 3D rendering! Layer depth of the instance that is drawing is added on relative to the current world matrix! Keep this in mind if something seems to have its Z offset weirdly.
Vertex batches – the more of these, the slower your game can run as more info has to be sent to the graphics pipeline. This is more of a concern on mobile targets as most modern PCs can handle a lot, but optimisation is still important and good.
When using normal rendering (e.g., draw_sprite), GameMaker automatically handles batches for you, but there are a few things that can break a batch (into more parts – don’t worry, your game won’t be broken):
- Texture Swaps – this makes it important to organise texture pages to minimise the amount of swaps (e.g, keep GUI on one texture page and render it in one group)
- Shader changes – whether you are changing the shader target or just updating a uniform, the batch will be broken
- matrix_set() – when one of the main render matrices is set, internal matrices need updating and the shader uniforms for gm_Matrices are updated. Hence, the batch is broken.
- There are probably more, but I’m not an expert at what makes GameMaker’s batches break, so I’ve just listed the ones I’m confident about – if you have a correction or know of something else that breaks the batch, I’ll list it here with a “– thanks, YOURNAME!”.
Okay, so that stuff breaks GameMaker’s automatic batching, but here’s a separate issue – when using vertex buffers, you’re all on your own – no automatic batching – each buffer is a batch, so you need to batch models by yourself.
This is pretty challenging for a couple of reasons.
- Sprites are on texture pages. Texture Page swaps break a batch, and there’s no way to tell what texture page a sprite is on in GameMaker. That really sucks. The only current workaround I know of is to make your own texture atlas and manage sprite rendering yourself. (Although, some page functions were added recently that may now allow handling pages. I will investigate soon)
- Packing info into one buffer is annoying – vertex_begin clears the vertex buffer first, and there is no “vertex_append” equivalent. In order to do this, you’ve got to manage all vertex information in a regular buffer (which is fairly easy if you know how vertex buffers are structured) and then convert it to a vbuffer.
- In order to not break a batch and still be able to transform vertices, you’ve got to change them CPU-side with matrix_transform_vertex() (see the sketch after this list). This can get pretty slow with a lot of vertices, particularly in GML, with a lot of array access calls to get the new location, then lots of buffer writes. This can be partially solved with an extension that directly accesses the buffer, but that’s far from ideal. If you’re only transforming the location of vertices or doing basic scaling, it can be faster though, as you don’t need matrices.
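To illustrate the CPU-side transform (a sketch – vb_batched and the per-model values are hypothetical):
var _mat = matrix_build(inst_x, inst_y, inst_z, 0, 0, inst_angle, 1, 1, 1);//Hypothetical per-model transform
var _v = matrix_transform_vertex(_mat, -size, -size, 0);//Returns the new [x, y, z]
vertex_position_3d(vb_batched, _v[0], _v[1], _v[2]);//Write the pre-transformed vertex
vertex_color(vb_batched, c_white, 1);
vertex_texcoord(vb_batched, 0, 0);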
If you keep things simple, it can be doable, but it’s rather tricky to get right at the moment.
Currently, I’ve been just using separate buffers anyway. As long as you freeze them, they’re usually still really fast.
I’ve rendered 2000 separate textured cubes, 25 terrain tiles each consisting of 32,768 faces, and 3 rigged, animated and textured characters, each with about 9 meshes, while staying above 90 fps_real on my low-mid range PC. Performance could definitely be better (and it will be when I’ve got basic batching, frustum culling, etc. in), but that’s just a rough baseline for those concerned. In my experience, the limit is much harsher on mobile platforms in general.
Why Can't I See Things Behind My Transparent/Translucent Model??
I’ve added this because I’ve seen some confusion around this a lot, so I figured it may help to have a dedicated section on this. This is not a complete explanation or list of solutions, so there may be better ways out there, but this should at least help point you in the right direction and help you to understand what is going on here.
Often, you want to draw something transparent. You expect to be able to see anything behind it with no trouble at all. After all, it works just like this with all the 2D games you made, why would this be any different in 3D? Well, it’s complicated.
When working in 2D with the built-in functions, depth and layers, GameMaker manages draw order so you never experience this artifact of rendering transparent things. As soon as you switch to 3D, enable a depth buffer and start rendering in ways GameMaker cannot automatically manage, this problem suddenly rears its ugly head. Why?
When something is rendered and z-writes and z-tests are enabled, the process is a little more like this:
- The location of the pixel being drawn is figured out and checked against something called the z-buffer for the current depth at that location.
- If the new pixel is closer to the viewer, it writes its render-space depth to the z-buffer and gets drawn. This write happens even if the new pixel is completely invisible.
- Otherwise, it is discarded.
No information about transparency is stored; therefore, anything drawn behind but after a transparent shape is discarded, so it appears invisible. However, things drawn before the transparent shape appear as expected.
Some of the process of the tests can be tweaked (e.g., gpu_set_zfunc), but I want to keep things simple for now.
In engines designed for 3D rendering — like Unity and Unreal — order is managed by the engine usually (and can sometimes be tweaked in materials and shaders), so you don’t tend to see this problem. Since GameMaker is designed for 2D games, it doesn’t handle this stuff on its own.
How can this be fixed?
There are a couple of solutions, each usually having their own advantages and drawbacks.
The Simplest and Easiest: Discard Non-Opaque Pixels so they are not Drawn
This can be done in two ways – in the engine and in a shader.
In engine, you can enable alpha testing like so:
gpu_set_alphatestenable(true);
//Any threshold you prefer from 0-255, where 0 is fully transparent and 255 is fully opaque.
gpu_set_alphatestref(128);
(Note that texture filtering may cause unwanted effects near the edges of transparent regions!)
and in a shader, you can test the alpha of a color and then discard it. e.g:
if(col.a < 0.5) { discard; }
This has the benefit of being the fastest and easiest solution, but often comes at the cost of completely disabling transparent rendering. Well, unless you do a little more work. More on that soon.
For solids, I personally like to do this in the shader, discarding any pixels below 50% alpha and force any pixels above that threshold to 100% alpha, so I can have transparent areas of texture and texture-filtering without causing any super bad artifacting. I’ve also found this tends to work well even when mip-mapping is enabled.
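As a sketch, that looks something like this in a fragment shader (GLSL ES, using GameMaker’s default varyings and base texture – tweak the 0.5 threshold to taste):
vec4 col = v_vColour * texture2D(gm_BaseTexture, v_vTexcoord);
if (col.a < 0.5) discard;//Throw away mostly-transparent pixels entirely
col.a = 1.0;//Force the survivors fully opaque
gl_FragColor = col;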
Use Additive Blending and Disable Z-writes for the Transparent Meshes
This method is neat because it doesn’t require any ordering and allows transparent things to exist. The problem is that it can make things get way too bright, due to the additive blending, and can make the order of things look very.. strange, since they don’t write where they are — if they did, we’d end up with the same artifacting anyway. Because of this drawback, it’s usually reserved for rendering transparent particles, as ordering many of them would take a lot of time and visual clarity is usually less important. It can be used with other transparent things too, but can often have weird layering issues, particularly with things with a fairly high alpha and/or varying colors.
All it takes is using
gpu_set_blendmode(bm_add);//Additive blending
gpu_set_zwriteenable(false);//Stop writing to the z-buffer
before the render, then using
gpu_set_blendmode(bm_normal);
gpu_set_zwriteenable(true);
to reset the modes when you’re done. Note that we still leave z-testing active even though z-writes are disabled, so that particles do not render on top of things that they are behind.
Things drawn this way should be drawn after opaque things have been drawn, otherwise there’ll be no depth to test against.
Separate Rendering into Groups and Use Ordering
One of the simplest ways to be able to render transparent things without too many artifacts is to use z-ordering when rendering.
Split things into two groups:
- “Solids” – Things that are fully opaque (and/or fully transparent, if you do alpha-testing)
- “Non-Solids” – Things that are transparent/have transparency
(“Solids” and “Non-Solids” may not be the most ‘correct’ words to use, but I think they make enough sense here)
In some cases, you may have a mesh that has both transparent and solid parts. You may want to split this into multiple meshes, but this can cause performance hits. You kinda have to go by trial-and-error to see what gives a good balance of good visuals and good performance.
Once you’ve sorted what you’re rendering into groups, you can go ahead and render the Solids in one go, since they’ll have no ordering issues, then move on to the non-solids. These will require sorting by the distance to the camera.
There are two distances you could use:
- General Distance to the Camera (e.g., point_distance_3d – though using squared distance (a² + b² + c²) will provide the same results but greatly improve performance by removing a square-root operation, with the only issue being floating point accuracy at really high distances). This is probably quicker, but less true to the fact that cameras render on a plane.
- Perpendicular Distance to the Camera Plane – More accurate to how the camera renders, with less artifacting, but does require some matrix math. Probably slower on average than the other method.
You can then sort the meshes by these distances and render in order from furthest to nearest.
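As a rough illustration of the first option using squared distances (a sketch – meshes, camX/camY/camZ and draw_mesh are assumed names, not from this guide):
var _pq = ds_priority_create();
for (var i = 0; i < ds_list_size(meshes); i++)
{
    var _m = meshes[| i];//Assumed: each entry is an instance with x, y and z
    var _dx = _m.x - camX;
    var _dy = _m.y - camY;
    var _dz = _m.z - camZ;
    ds_priority_add(_pq, _m, _dx*_dx + _dy*_dy + _dz*_dz);//Squared distance – no sqrt
}
while (!ds_priority_empty(_pq))
{
    draw_mesh(ds_priority_delete_max(_pq));//Furthest first; draw_mesh is a hypothetical script
}
ds_priority_destroy(_pq);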
In some cases, you may still find it useful to disable z-writes and use additive blending to make self-overlapping transparent meshes render with fewer artifacts.
It’s also sometimes recommended to sort solids, but from closest to furthest – because further pixels are not rendered if they are already obstructed, reducing the load on the GPU. You have to decide if this is worthwhile for your game on your own though – if you have a complicated fragment shader and the GPU is a bottleneck, sorting beforehand may be beneficial. However, if you have a CPU bottleneck and no GPU trouble, sorting solids may just make performance much worse. Just don’t try to optimise too much until it’s needed, in this case.
You can actually just use one group and order everything, but this can have more of a hit on performance and still leave artifacts when transparent and non-transparent meshes are at similar depths.
I will make a post explaining this in more detail soon.
Other Possibilities
A method that solves most of the problems, at the cost of performance, is Order-Independent Transparency. This is a really cool method that allows you to draw in any order and still have transparent things appear properly, even at close proximity and even within a single mesh. The problem is that it can be very difficult to implement in a way that isn’t too costly to performance and memory, at least to my knowledge at the time of writing. It would be easier if there were more hardware support, which I believe the Sega Dreamcast had…
Okay, I think that covers the basics! I will be adding to this, especially when shaders are public. If you have any questions or corrections, let me know!