
Shader Concepts

Core GPU and shader concepts explained for programmers new to graphics.


This section covers the foundational ideas behind GPU shaders. If you can write code but haven't worked with shaders before, start here.

What Is a Shader?

A shader is a small program that runs on your GPU instead of your CPU. The GPU executes the same shader thousands or millions of times in parallel — once per vertex, once per pixel, or once per compute thread.

You don't write loops to process each pixel. You write the logic for one pixel, and the GPU runs it for all of them simultaneously.

```bwsl
fragment {
    // This runs once per pixel on screen.
    // input.position tells you WHICH pixel.
    float2 uv = input.position.xy / float2(1920.0, 1080.0);
    output.color = float4(uv.x, uv.y, 0.0, 1.0);
}
```

CPU vs GPU Thinking

On the CPU, you think sequentially: iterate over data, branch freely, allocate memory. On the GPU, the model is different:

|            | CPU                          | GPU                                  |
|------------|------------------------------|--------------------------------------|
| Execution  | A few threads, complex logic | Thousands of threads, simple logic   |
| Branching  | Cheap                        | Expensive (threads diverge)          |
| Memory     | Flexible allocation          | Fixed buffers, read-only textures    |
| Data flow  | You control everything       | Data flows through a fixed pipeline  |

The key mental shift: instead of "how do I process all my data?" think "what does one element of my data need to become?"
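The shift is easy to sketch in plain Python. The names `brighten_cpu` and `brighten_kernel` are hypothetical, and `map` stands in for the GPU's parallel dispatch; the point is that the kernel only ever sees one element.

```python
# CPU thinking: one loop walks all the data yourself.
def brighten_cpu(pixels, amount):
    out = []
    for p in pixels:
        out.append(min(p + amount, 1.0))
    return out

# GPU thinking: write the logic for ONE element.
# The runtime applies it to every element in parallel
# (simulated here with map).
def brighten_kernel(p, amount):
    return min(p + amount, 1.0)

pixels = [0.2, 0.5, 0.95]
assert brighten_cpu(pixels, 0.1) == [brighten_kernel(p, 0.1) for p in pixels]
```

A shader is essentially `brighten_kernel`: you never see the loop, only the one element you are responsible for.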

The Rendering Pipeline

When you draw 3D geometry, data flows through a fixed sequence of stages. Some stages are programmable (you write the shader), others are fixed-function (the GPU handles them automatically):

  1. Input Assembly — The GPU reads vertex data from buffers
  2. Vertex Shader — Your code transforms each vertex (position, normals, UVs)
  3. Rasterization — The GPU figures out which pixels each triangle covers
  4. Fragment Shader — Your code computes the color for each pixel
  5. Output — The final color is written to the screen or a texture

In BWSL, you write the vertex and fragment stages inside a pass:

```bwsl
pipeline MyFirst {
    attributes {
        position: float3
    }
    pass "Main" {
        vertex {
            output.position = float4(attributes.position, 1.0);
        }
        fragment {
            output.color = float4(1.0, 0.0, 0.0, 1.0);
        }
    }
}
```

See The Pipeline for a detailed interactive diagram of every stage.

Coordinate Spaces

Vertices pass through several coordinate transformations on their way to the screen. Understanding these spaces is essential for writing vertex shaders.

Object Space (Local)

The raw vertex positions as defined in the 3D model. A cube might have corners at -1 to +1. This is what you get from attributes.position.

World Space

After applying the model's position, rotation, and scale. A cube placed at (10, 0, 5) in your scene has its vertices transformed to that location.

View Space (Camera)

The world as seen from the camera. The camera sits at the origin, looking down the negative Z axis. Everything is relative to the camera.

Clip Space

After projection (perspective or orthographic). This is what you assign to output.position in the vertex shader. Coordinates outside the range [-w, +w] get clipped (removed). This is a float4 — the fourth component w is used for perspective division.

Screen Space (NDC → Pixels)

The GPU divides by w and maps to pixel coordinates. In the fragment shader, input.position.xy gives you the pixel position on screen.

```bwsl
vertex {
    // Typical transform chain:
    float4 worldPos = modelMatrix * float4(attributes.position, 1.0);
    float4 viewPos = viewMatrix * worldPos;
    output.position = projectionMatrix * viewPos;
}
```

Vectors and Swizzling

Shader math is built on vectors — float2, float3, float4. These represent positions, directions, colors, or anything with multiple components.

You access components with .x, .y, .z, .w (or equivalently .r, .g, .b, .a for colors):

```bwsl
float3 color = float3(1.0, 0.5, 0.2);
float red = color.r;    // 1.0
float2 rg = color.rg;   // float2(1.0, 0.5)
float3 bgr = color.bgr; // float3(0.2, 0.5, 1.0) — reversed!
```

This is called swizzling — you can rearrange and duplicate components freely. It's one of the most useful features in shader languages.
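If swizzling is unfamiliar, a small Python analogue makes the idea concrete. The `swizzle` helper below is hypothetical, not BWSL; it just shows that a swizzle is index-based selection with rearrangement and duplication allowed.

```python
def swizzle(vec, pattern):
    """Select components by name, like color.bgr in a shader."""
    idx = {"r": 0, "g": 1, "b": 2, "a": 3}
    return [vec[idx[c]] for c in pattern]

color = [1.0, 0.5, 0.2]                        # r, g, b

assert swizzle(color, "bgr") == [0.2, 0.5, 1.0]  # reversed
assert swizzle(color, "rg") == [1.0, 0.5]        # shortened
assert swizzle(color, "rrr") == [1.0, 1.0, 1.0]  # duplicated
```

In a real shader language this costs nothing at runtime; it compiles down to register component selection.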

Normals and the Dot Product

A normal is a unit-length vector pointing perpendicular to a surface. Normals are how shaders know which direction a surface faces, which is critical for lighting.

The dot product of two vectors tells you how aligned they are:

  • dot(A, B) = 1.0 → pointing the same direction
  • dot(A, B) = 0.0 → perpendicular
  • dot(A, B) = -1.0 → pointing opposite directions

Basic diffuse lighting is just a dot product between the surface normal and the light direction:

```bwsl
fragment {
    float3 normal = normalize(input.normal);
    float3 lightDir = normalize(float3(1.0, 1.0, 1.0));
    float diffuse = max(dot(normal, lightDir), 0.0);
    output.color = float4(float3(diffuse), 1.0);
}
```

normalize makes a vector unit-length. Always normalize directions before using dot — otherwise the result depends on vector length, not just angle.
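You can verify both claims with a few lines of Python; `dot` and `normalize` here are plain reimplementations of the shader built-ins.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return [x / length for x in v]

normal = normalize([0.0, 1.0, 0.0])   # surface facing straight up
light = normalize([1.0, 1.0, 0.0])    # light 45 degrees off the normal

diffuse = max(dot(normal, light), 0.0)
print(round(diffuse, 4))  # 0.7071, i.e. cos(45 deg)

# Skip the normalize and the result scales with length, not angle:
print(dot([0.0, 2.0, 0.0], [1.0, 1.0, 0.0]))  # 2.0, no longer a cosine
```

For unit vectors, `dot(a, b)` equals the cosine of the angle between them, which is why the diffuse term falls off smoothly as a surface turns away from the light.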

UV Coordinates

UV coordinates map 2D texture images onto 3D geometry. They range from (0, 0) at one corner to (1, 1) at the opposite corner. Each vertex carries a UV, and the GPU interpolates them smoothly across each triangle.

```bwsl
fragment {
    float2 uv = input.texcoord;
    float4 texColor = sample(resources.albedoMap, uv);
    output.color = texColor;
}
```

UVs are also useful without textures — you can use them for procedural patterns, gradients, or masking.
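A classic texture-free example is a checkerboard computed from the UVs alone. This Python sketch mirrors the floor/mod arithmetic you would write in a fragment shader; `checker` is a hypothetical name.

```python
def checker(u, v, tiles=8):
    """Procedural checkerboard from UVs alone, no texture needed.
    Count which cell (u, v) falls in and alternate by parity."""
    cell = int(u * tiles) + int(v * tiles)
    return 1.0 if cell % 2 == 0 else 0.0   # white or black

assert checker(0.01, 0.01) == 1.0   # first cell: white
assert checker(0.20, 0.01) == 0.0   # one cell to the right: black
assert checker(0.20, 0.20) == 1.0   # diagonal neighbor: white again
```

The same parity trick works in a shader with `floor` and a modulo, and the `tiles` parameter controls the pattern density without touching any texture data.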

What Next?

Now that you understand the fundamentals:

  • Shader I/O — How data flows between stages in BWSL
  • The Pipeline — Interactive diagram of the full GPU pipeline
  • Cookbook — Copy-paste recipes for common shader effects
  • Quick Start — Build your first complete shader