Why is my screen dark? A digestible blog on common render issues and what their causes could be
Part 3
If you are here from Part 1, you missed Part 2, somehow.
You have tried everything from the first two parts that seemed applicable, and your screen is still a window to the void? No problem: we’ve gathered another five reasons this could be happening and how to go about fixing them.
Issue 11: Is it a bird, is it a plane?
Your 3D environment (aka: scene) normally has several elements and those elements each have their own properties. One element of particular importance is ‘you’, the viewer of the scene. If you aren’t in a room, you can’t be expected to see what is in that room. With 3D scenes, the viewer is usually referred to as the camera (unlike in 2D, where it’s often called a window or view).
Part of the camera’s properties are the near and far clipping planes, which specify the closest point to the camera which is visible, and the furthest away point. Anything closer than the near plane, or further away than the far plane, will be clipped and hence invisible.
Of course, you can get something in between. If your cube is 200 units across, sitting at 900 units from the camera, and the far plane is at 1000 units … you will see half of it.
The solution here is to set the near and far plane distances appropriately for the scene you’re working in: sometimes this is easy, because everything is a similar scale and stays a consistent distance from the camera. Other times, it’s a huge topic which requires redesigning your renderer to avoid artefacts, especially when you have large distances or tiny objects. For more on this, and why selecting good near/far values is hard, read up on ‘depth buffer precision’.
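As a minimal sketch, assuming you build the projection matrix yourself with QMatrix4x4 (the field of view and plane distances below are purely illustrative), the near and far planes are just the last two arguments to perspective():

```cpp
#include <QMatrix4x4>

// Build a projection whose near/far planes actually bracket the scene.
// Pick the distances from your scene's real bounds, not from habit.
QMatrix4x4 buildProjection(float aspectRatio)
{
    const float nearPlane = 0.1f;    // closest visible distance
    const float farPlane  = 2000.0f; // must lie beyond your furthest object
    QMatrix4x4 projection;           // starts as identity
    projection.perspective(60.0f, aspectRatio, nearPlane, farPlane);
    return projection;
}
```

Keeping the far/near ratio as small as you can get away with also helps with the depth buffer precision mentioned above.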
Issue 12: I just want to be normal.
When transforming surface normals, it’s important to use a correctly derived normal matrix (from your model and view transformations). If this normal matrix is incorrect, your normals will be incorrectly scaled or rotated, and this can break all the corresponding lighting calculations.
(There are many ways incorrectly loaded, transformed or generated normals can break lighting; more to come in a future part.)
Technically, you need to compute the transpose of the inverse of the upper 3×3 of your combined model-view matrix, which is some slightly nasty mathematical juggling. Fortunately, QMatrix4x4 has a helper, normalMatrix(), to compute the correct matrix for you. Just make sure to compute the normal matrix for each of your transformation matrices, and pass it into your shaders as an additional uniform value.
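As a sketch, assuming a QOpenGLShaderProgram-based renderer where the uniform names are our own choice, that looks roughly like this:

```cpp
#include <QMatrix4x4>
#include <QOpenGLShaderProgram>

// Upload the per-object matrices. 'program' is already linked and bound.
void setTransformUniforms(QOpenGLShaderProgram &program,
                          const QMatrix4x4 &view, const QMatrix4x4 &model)
{
    const QMatrix4x4 modelView = view * model;

    program.setUniformValue("modelView", modelView);
    // normalMatrix() returns the transpose of the inverse of the
    // upper-left 3x3 -- exactly what surface normals need.
    program.setUniformValue("normalMatrix", modelView.normalMatrix());
}
```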
Issue 13: All the world’s a stage…
Ready, steady,… and? You have a beautifully crafted startup animation. There are fades, there’s camera movement, there’s a reflection map swooshing over the shiny surface of your model. Just remember to actually start the animation: in Qt, animations are not played on load (unlike some authoring tools), so maybe you just need to press ‘play’.
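As a small C++ illustration (the property name and duration are made up; in QML the equivalent is setting running: true on the animation, or calling its start() method):

```cpp
#include <QObject>
#include <QPropertyAnimation>

// Kick off a simple fade-in on whatever QObject exposes an "opacity" property.
void startFadeIn(QObject *target)
{
    auto *fadeIn = new QPropertyAnimation(target, "opacity", target);
    fadeIn->setDuration(1500);   // milliseconds
    fadeIn->setStartValue(0.0);
    fadeIn->setEndValue(1.0);
    fadeIn->start();             // forgetting this line is Issue 13
}
```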
Issue 14: Triangels. Vertexes. Phong shading.
If you’re writing shaders by hand, and an attribute name in your shader code doesn’t match the name in the C++ or QML code which binds uniforms or attributes to those names, then most shading languages will treat the unbound data as 0,0,0,0 in the shader. If it’s a colour, you’ll get black (if it’s a normal or vertex position, it’s likely also not what you want). The good news is the shading language doesn’t care about your spelling, it just cares that the names you use match. So you can happily use Triangels, so long as you call them that everywhere. (But it will break if someone helpfully fixes your code in one place and not the other…)
If you’re lucky, your graphics driver has a debug mode, or some developer tooling, to warn you when you set a name which is not used in the shader. However, there are various techniques which rely on unbound uniforms or attributes efficiently returning zeroes, so the default production driver is unlikely to warn you about this.
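One cheap sanity check, if you happen to be going through QOpenGLShaderProgram (the attribute and uniform names here are only examples), is to verify that the names you bind from C++ actually resolve to a location:

```cpp
#include <QDebug>
#include <QOpenGLShaderProgram>

// Warn if a name used on the C++ side doesn't exist in the linked shader.
// Note: a -1 location can also mean the compiler optimised the variable
// away because it is unused, so treat this as a hint rather than an error.
void checkShaderNames(QOpenGLShaderProgram &program)
{
    if (program.attributeLocation("vertexPosition") == -1)
        qWarning() << "attribute 'vertexPosition' not found in shader";
    if (program.uniformLocation("normalMatrix") == -1)
        qWarning() << "uniform 'normalMatrix' not found in shader";
}
```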
Issue 15: Primitive thoughts.
GPUs draw triangles. Lots of triangles, lovely triangles everywhere. But occasionally some old-timer with an SGI Indigo under their desk will mention some other stuff – fans and strips? Or quads? Or maybe you’re using tessellation shaders (they are great). All of these things are different primitive types, which tell the GPU what kind of thing we’re drawing. Even if you’re not using tessellation shaders (where you draw patches), drawing lines can be very useful in industrial and scientific models, and drawing points can be one way to draw many lights (think flying over a city at night) or clouds of particles.
But if you’re sending drawing commands yourself, you need to specify the primitive type: even if you’ve carefully arranged your geometry into buffers and arrays and indexes of beautiful triangles. And if the type is incorrect, you won’t see triangles, but maybe just points, which by default are single-fragment dots. Which can be really hard to see, or even invisible (depending on your lighting model and fragment shader).
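With raw OpenGL through Qt, for example, the primitive type is simply the first argument of the draw call (the index count and index type here stand in for whatever your buffers actually contain):

```cpp
#include <QOpenGLFunctions>

// Draw indexed geometry. GL_TRIANGLES is the primitive type: pass
// GL_POINTS here by mistake and your mesh becomes barely visible dots.
void drawMesh(QOpenGLFunctions *f, int indexCount)
{
    f->glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, nullptr);
}
```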
There are many ways to mess up 3D rendering; new technologies, new languages and new engines are coming out every day. If we helped you with your issue, or even more than one, great! However, if you are still having issues, we have more help on the way. Why not comment with your issue below, so it might feature in one of the following parts?
If you like this article and want to read similar material, consider subscribing via our RSS feed.
Subscribe to KDAB TV for similar informative short video content.
KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.