3D Software Renderer Basics


As part of my Introduction to 3D Graphics Programming module in the second year of Computer Games Programming at Derby University, I am developing a 3D software renderer using C++. At its core, the renderer's purpose is to take a series of vertices that make up a model defined in 3D space and transform them so that they can be displayed on a 2D surface (the screen). In addition to this, the renderer will be responsible for texturing, lighting, and so on.

In modern computers these tasks are generally handled by dedicated graphics hardware (your graphics card, or GPU). APIs such as DirectX and OpenGL provide a way of interfacing with these devices, notably through shader languages such as HLSL and GLSL. Once upon a time, however, it was down to the central processing unit (CPU) to perform 3D rendering tasks. There are still some uses for software renderers – on mobile devices, for example – but creating this one is primarily a learning exercise.

My current renderer supports loading models in the .md2 format, which was developed by id Software for the id Tech 2 engine used in Quake II. In its current state the renderer simply extracts vertex information, stores it in my custom vertex format, and takes it through the 3D transformation pipeline. This involves applying world transformations, a view/camera transformation, a perspective projection, and a screen transformation.
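
To give a rough idea of the data involved, a custom vertex format can be as simple as a position in homogeneous coordinates, with the model holding a list of vertices and triangle indices. The names below are an illustrative sketch rather than my actual classes:

```cpp
#include <vector>

// Illustrative sketch only - the real renderer's vertex format will differ.
struct Vertex
{
    float x, y, z, w; // position in homogeneous coordinates (w = 1 for points)

    Vertex(float x = 0.0f, float y = 0.0f, float z = 0.0f, float w = 1.0f)
        : x(x), y(y), z(z), w(w) {}
};

struct Model
{
    std::vector<Vertex> vertices;       // positions extracted from the .md2 file
    std::vector<unsigned int> indices;  // three indices per triangle
};
```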

The world transformation step allows a model to be rotated, scaled, or translated (moved) in world space as needed. The view/camera transformation transforms the model into camera space, where the camera sits at the origin. Once a model is in camera space, the parts of it that are visible can be determined and subsequently projected onto a 2D viewing plane. A final screen transformation simply scales these projected coordinates to match the computer screen/window size.
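
In code, those stages boil down to multiplying each vertex by a chain of 4x4 matrices, dividing by w, and scaling the result to pixel coordinates. The sketch below assumes a Matrix type with the usual multiplication operators (placeholder names, not my actual implementation):

```cpp
// Sketch of the transformation pipeline, assuming a 4x4 Matrix type
// that supports matrix * matrix and matrix * vertex multiplication.
void TransformModel(Model& model,
                    const Matrix& world,
                    const Matrix& view,
                    const Matrix& projection,
                    float screenWidth, float screenHeight)
{
    // Combine the stages so each vertex is only multiplied once.
    Matrix worldViewProjection = projection * view * world;

    for (Vertex& v : model.vertices)
    {
        // Model space -> world space -> camera space -> clip space.
        v = worldViewProjection * v;

        // Perspective divide: clip space -> normalised device coordinates [-1, 1].
        v.x /= v.w;
        v.y /= v.w;
        v.z /= v.w;

        // Screen transformation: scale and offset NDC to pixel coordinates,
        // flipping y because the screen origin is the top-left corner.
        v.x = (v.x + 1.0f) * 0.5f * screenWidth;
        v.y = (1.0f - v.y) * 0.5f * screenHeight;
    }
}
```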

You can see the current state of my 3D software renderer, which can only draw models in wireframe at the moment, in the video above. My next step will be to implement backface culling so that any polygons/parts of the model that are not facing the camera (and should not be visible) are hidden. The lack of backface culling is all the more noticeable at the moment because there is no shading or solid surface rendering yet.
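
For reference, one common way to implement backface culling once the vertices are in screen space is to look at a triangle's winding order: the sign of the 2D cross product of two of its edges tells you whether the face points towards or away from the camera. Something along these lines, assuming front faces appear counter-clockwise and the screen origin is top-left (the comparison may need flipping for a different convention):

```cpp
// Returns true if the triangle a, b, c faces away from the camera.
// Operates on screen-space coordinates after projection; the sign
// convention assumes counter-clockwise front faces and a y-down screen.
bool IsBackFacing(const Vertex& a, const Vertex& b, const Vertex& c)
{
    // Two edge vectors of the triangle in 2D.
    float ux = b.x - a.x, uy = b.y - a.y;
    float vx = c.x - a.x, vy = c.y - a.y;

    // The z component of the cross product gives the signed area;
    // its sign reveals the winding order as seen on screen.
    float crossZ = ux * vy - uy * vx;

    return crossZ > 0.0f;
}
```

Triangles that fail the test are simply skipped before their wireframe edges are drawn.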
