As described on MSDN, drawing lines and other 3D Primitives with XNA involves defining vertices in world space, setting up an Effect, applying view and projection transformations, and finally making a call to the graphics card with a DrawUserPrimitives (or similar) call.
If you’re working in 2D, however, this is overkill; if you’re already using SpriteBatch, it makes more sense to define primitives with pixel-based coordinates, where (0, 0) is the screen’s top-left corner and (ScreenWidth, ScreenHeight) is the bottom-right corner, just as you do with your sprites.
I recently implemented just such a system so I could render the 2D bounding primitives I’m using for Separating Axis Theorem-based collision detection (which I hope to write about soon); these primitives are also defined in screen space in order to simplify algorithms involving other sprites.
The SpriteBatch Shader on the Creators Club website details how to go about transforming vertices from screen/pixel space to clip space (the final transformation required by Direct3D/XNA before it can initiate rasterization) as shown in the following image (minus the z-dimension for clip space):
(1680×1050 is simply an example resolution, the principle remains the same for any values)
The following HLSL code performs the necessary calculations (where position is the float4 POSITION of the vertex passed into the vertex shader and ViewportSize is equal to the width and height of the screen/back buffer):[c#]position.xy /= ViewportSize;
position.xy *= float2(2, -2);
position.xy -= float2(1, -1);[/c#]
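On the C# side, the shader’s ViewportSize parameter needs to be set before drawing. A minimal sketch (assuming the compiled effect has been loaded into a variable named screenSpaceEffect — a hypothetical name, but the parameter name must match the one declared in the HLSL above) might look like this:[c#]// screenSpaceEffect is assumed to be the Effect containing the vertex shader above
screenSpaceEffect.Parameters["ViewportSize"].SetValue(
    new Vector2(GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height));[/c#]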
If you’d rather not use a custom shader, the following C# implementation achieves the same thing (albeit less efficiently, since the CPU is doing the work rather than the GPU), where vertices is a Vector2 array of positions in screen space and verticesForGPU is a VertexPositionColor array:[c#]for (int i = 0; i < vertices.Length; i++)
    verticesForGPU[i].Position = new Vector3(
        vertices[i].X / BackBufferWidth * 2.0f - 1.0f,
        vertices[i].Y / BackBufferHeight * -2.0f + 1.0f, 0);[/c#]
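Once the vertices are in clip space, no further transformation is needed, so a BasicEffect with identity World, View and Projection matrices can draw them directly. The sketch below is one way to do it, assuming XNA 4.0-style effect application and that verticesForGPU holds pairs of line endpoints:[c#]// Vertices are already in clip space, so all three matrices stay at identity
basicEffect.World = Matrix.Identity;
basicEffect.View = Matrix.Identity;
basicEffect.Projection = Matrix.Identity;
basicEffect.VertexColorEnabled = true;

foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
{
    pass.Apply();
    // Each pair of vertices forms one line segment
    GraphicsDevice.DrawUserPrimitives(
        PrimitiveType.LineList, verticesForGPU, 0, verticesForGPU.Length / 2);
}[/c#]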
Manually working through the maths soon makes it clear how it all works. For x values between 0 and 1680: divide by 1680 to get values between 0 and 1, multiply by 2 to get values between 0 and 2, and subtract 1 to get the desired values between -1 and 1. Transforming y values is analogous, except the multiplier is -2 and the final step adds 1, because clip-space y increases upwards while screen-space y increases downwards.
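As a concrete check, using the 1680×1050 example resolution, the screen-space point (420, 525) maps like so:[c#]// x: 420 / 1680 = 0.25  ->  * 2  = 0.5   ->  - 1 = -0.5
// y: 525 / 1050 = 0.5   ->  * -2 = -1.0  ->  + 1 =  0.0
// So screen-space (420, 525) becomes clip-space (-0.5, 0.0):
// a quarter of the way across the screen and exactly halfway down.[/c#]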
I’ve uploaded a couple of sample files that demonstrate the process by rendering four lines defined in screen space: one uses a custom shader (source file, shader file) while the other does its calculations in the main Game class and uses an instance of BasicEffect for rendering (source file).
The output of both samples should look something like this: