
Multiview Rendering

(Screenshot: multiview.png — both views displayed side by side)

This example shows how to use VK_KHR_multiview to render multiple views (typically left/right eye for VR) in a single render pass. Multiview is dramatically more efficient than rendering each view separately because it avoids duplicating draw calls, state changes, and vertex processing. The GPU renders to multiple texture array layers simultaneously, with shaders accessing gl_ViewIndex to differentiate between views.

The example uses the KDGpuExample helper API for simplified setup.

Overview

What this example demonstrates:

  • Enabling and using VK_KHR_multiview extension
  • Creating texture arrays as multiview render targets (2 layers = 2 views)
  • Configuring render passes with view masks
  • Accessing gl_ViewIndex in shaders for per-view transformations
  • Displaying both views side-by-side on screen

Use cases:

  • Virtual Reality (VR) stereo rendering (left/right eyes)
  • Augmented Reality (AR) stereo displays
  • Cube map generation (6 views in one pass)
  • Shadow cascades (multiple shadow maps)
  • Split-screen rendering

Vulkan Requirements

  • Vulkan Version: 1.1+ (multiview promoted to core)
  • Extensions: VK_KHR_multiview (core in 1.1)
  • Features: multiview (required); multiviewGeometryShader and multiviewTessellationShader (optional)
  • Limits: Check maxMultiviewViewCount (typically 6+)
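The corresponding raw Vulkan capability checks (KDGpuExample performs the equivalent internally; this is only a hedged sketch assuming a valid VkPhysicalDevice) query the multiview feature and the maxMultiviewViewCount limit:

```cpp
// Sketch: querying multiview support in raw Vulkan (Vulkan 1.1+).
VkPhysicalDeviceMultiviewFeatures multiviewFeatures{};
multiviewFeatures.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MULTIVIEW_FEATURES;
VkPhysicalDeviceFeatures2 features2{};
features2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2;
features2.pNext = &multiviewFeatures;
vkGetPhysicalDeviceFeatures2(physicalDevice, &features2);
// multiviewFeatures.multiview must be VK_TRUE

VkPhysicalDeviceMultiviewProperties multiviewProperties{};
multiviewProperties.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_MULTIVIEW_PROPERTIES;
VkPhysicalDeviceProperties2 properties2{};
properties2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2;
properties2.pNext = &multiviewProperties;
vkGetPhysicalDeviceProperties2(physicalDevice, &properties2);
// multiviewProperties.maxMultiviewViewCount gives the per-pass view limit
```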

Key Concepts

Multiview Rendering:

Traditional stereo rendering requires two passes:

for each eye:
    Bind framebuffer
    Set viewport/scissor
    Execute all draw calls with eye-specific transforms

Multiview renders both views simultaneously:

Bind multiview framebuffer (texture array with N layers)
Set viewMask = 0b11 (views 0 and 1)
Execute draw calls once
(GPU renders to both layers automatically)

Benefits:

  • ~50% reduction in CPU overhead (one set of draw calls)
  • Shared vertex processing (GPU processes vertices once, broadcasts to views)
  • Better cache coherence
  • Fewer state changes

Spec: https://registry.khronos.org/vulkan/specs/1.3-extensions/man/html/VK_KHR_multiview.html

View Index:

In shaders, gl_ViewIndex (GLSL) or SV_ViewID (HLSL) indicates which view is being rendered:

layout(set = 0, binding = 0) uniform CameraMatrices {
    mat4 viewProj[2];  // Array of view-projection matrices
} cameras;

void main() {
    gl_Position = cameras.viewProj[gl_ViewIndex] * vec4(position, 1.0);
}

For VR, typically:

  • View 0 = Left eye transform
  • View 1 = Right eye transform

Texture Array Layers:

Multiview writes to texture array layers (slices):

  • 2D array texture: extent = {width, height, arrayLayers=N}
  • Each view renders to one layer
  • viewMask = 0b11 → Render to layers 0 and 1
  • Later, each layer can be sampled/displayed independently

Spec: https://registry.khronos.org/vulkan/specs/1.3-extensions/man/html/VkImageCreateInfo.html

Implementation

Creating Multiview Texture Array:

void MultiView::createMultiViewOffscreenTextures()
{
    m_multiViewColorOutput = m_device.createTexture(TextureOptions{
            .type = TextureType::TextureType2D,
            .format = m_mvColorFormat,
            .extent = { m_window->width(), m_window->height(), 1 },
            .mipLevels = 1,
            .arrayLayers = 2,
            .samples = SampleCountFlagBits::Samples1Bit,
            .usage = TextureUsageFlagBits::ColorAttachmentBit | TextureUsageFlagBits::SampledBit,
            .memoryUsage = MemoryUsage::GpuOnly,
    });
    m_multiViewDepth = m_device.createTexture(TextureOptions{
            .type = TextureType::TextureType2D,
            .format = m_mvDepthFormat,
            .extent = { m_window->width(), m_window->height(), 1 },
            .mipLevels = 1,
            .arrayLayers = 2,
            .samples = SampleCountFlagBits::Samples1Bit,
            .usage = TextureUsageFlagBits::DepthStencilAttachmentBit,
            .memoryUsage = MemoryUsage::GpuOnly,
    });

    m_multiViewColorOutputView = m_multiViewColorOutput.createView(TextureViewOptions{
            .viewType = ViewType::ViewType2DArray,
    });
    m_multiViewDepthView = m_multiViewDepth.createView(TextureViewOptions{
            .viewType = ViewType::ViewType2DArray,
    });
}

Filename: multiview/multiview.cpp

Key configuration:

  • arrayLayers = 2: Two views (left/right)
  • Both color and depth need array layers
  • usage: ColorAttachmentBit + SampledBit (render to, then sample from)

Configuring Multiview Render Pass:

The render pass is configured with a viewMask specifying which views are rendered: viewMask = 0b11 (binary) enables views 0 and 1. With multiview enabled, each draw call is executed for every enabled view automatically, using the same vertex/index buffers.
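In raw Vulkan terms (KDGpu derives this from viewCount internally; shown here only as a hedged sketch), the view mask is chained into the render pass via VkRenderPassMultiviewCreateInfo:

```cpp
// Sketch: attaching a view mask to a render pass in raw Vulkan.
const uint32_t viewMask = 0b11;        // render to views 0 and 1
const uint32_t correlationMask = 0b11; // hint: these views are spatially correlated

VkRenderPassMultiviewCreateInfo multiviewInfo{};
multiviewInfo.sType = VK_STRUCTURE_TYPE_RENDER_PASS_MULTIVIEW_CREATE_INFO;
multiviewInfo.subpassCount = 1;
multiviewInfo.pViewMasks = &viewMask;
multiviewInfo.correlationMaskCount = 1;
multiviewInfo.pCorrelationMasks = &correlationMask;

VkRenderPassCreateInfo renderPassInfo{};
renderPassInfo.sType = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
renderPassInfo.pNext = &multiviewInfo;
// ... attachments and subpasses as usual ...
```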

Shader Access to View Index:

Typically you would use different per-eye camera matrices.

For simplicity, this example instead just flips the rotation direction of the triangle for each eye.

    float rotationSign = gl_ViewIndex == 0 ? -1.0 : 1.0;
    gl_Position = vec4(vertexPosition * rotationAroundZ(rotationSign * pushConstants.angle), 1.0);

Filename: multiview/rotating_triangle.vert

The vertex shader uses gl_ViewIndex to select which direction to rotate the triangle.

The multiview render pass fills both layers of the Texture2DArray attachment.

Multiview pipeline creation:

    m_mvPipeline = m_device.createGraphicsPipeline(GraphicsPipelineOptions{
            .shaderStages = {
                    { .shaderModule = vertexShader, .stage = ShaderStageFlagBits::VertexBit },
                    { .shaderModule = fragmentShader, .stage = ShaderStageFlagBits::FragmentBit } },
            .layout = m_mvPipelineLayout,
            .vertex = {
                    .buffers = {
                            { .binding = 0, .stride = sizeof(Vertex) },
                    },
                    .attributes = {
                            { .location = 0, .binding = 0, .format = Format::R32G32B32_SFLOAT }, // Position
                            { .location = 1, .binding = 0, .format = Format::R32G32B32_SFLOAT, .offset = sizeof(glm::vec3) }, // Color
                    },
            },
            .renderTargets = {
                    { .format = m_mvColorFormat },
            },
            .depthStencil = {
                    .format = m_mvDepthFormat,
                    .depthWritesEnabled = true,
                    .depthCompareOperation = CompareOperation::Less,
            },
            .viewCount = 2,
    });

Filename: multiview/multiview.cpp

Note the viewCount parameter, which specifies how many views the pipeline will render to.

Multiview rendering:

    // MultiView Pass
    auto mvPass = commandRecorder.beginRenderPass(RenderPassCommandRecorderOptions{
            .colorAttachments = {
                    { .view = m_multiViewColorOutputView,
                      .clearValue = { 0.3f, 0.3f, 0.3f, 1.0f },
                      .finalLayout = TextureLayout::ColorAttachmentOptimal },
            },
            .depthStencilAttachment = { .view = m_multiViewDepthView },
            .viewCount = 2, // Enables multiview rendering
    });
    mvPass.setPipeline(m_mvPipeline);
    mvPass.setVertexBuffer(0, m_vertexBuffer);
    mvPass.pushConstant(m_mvPushConstantRange, &rotationAngleRad);
    mvPass.draw(DrawCommand{ .vertexCount = 3 });
    mvPass.end();

Filename: multiview/multiview.cpp

Full-Screen Display Pass:

After multiview rendering, a full-screen pass displays both views side-by-side by sampling from each array layer.

If we had a stereo display and a stereo-capable GPU, we could use a swapchain backed by texture arrays directly. This is what the Multiview Stereo Swapchain example does.

In this example, however, we display the contents of each texture array layer side by side: two quads are drawn next to each other, each sampling from one layer of the array.

Push Constants for Layer Selection:

The full-screen display fragment shader uses a push constant to select which texture array layer (eye) to sample from:

layout(push_constant) uniform PushConstants
{
    int arrayLayer;
}
pushConstants;

void main()
{
    vec3 color = texture(colorTexture, vec3(texCoord, pushConstants.arrayLayer)).rgb;
    fragColor = vec4(color, 1.0);
}

Filename: multiview/fullscreenquad.frag

When displaying, push constants specify which texture array layer to sample from.

Pipeline Configuration:

    // Create a pipeline
    m_fsqPipeline = m_device.createGraphicsPipeline(GraphicsPipelineOptions{
            .shaderStages = {
                    { .shaderModule = vertexShader, .stage = ShaderStageFlagBits::VertexBit },
                    { .shaderModule = fragmentShader, .stage = ShaderStageFlagBits::FragmentBit },
            },
            .layout = m_fsqPipelineLayout,
            .vertex = {
                    .buffers = {},
                    .attributes = {},
            },
            .renderTargets = { { .format = m_swapchainFormat } },
            .depthStencil = { .format = m_depthFormat, .depthWritesEnabled = true, .depthCompareOperation = CompareOperation::Less },
    });

Filename: multiview/multiview.cpp

Push constant ranges must be declared in the pipeline layout.

    // FullScreen Pass
    auto fsqPass = commandRecorder.beginRenderPass(RenderPassCommandRecorderOptions{
            .colorAttachments = {
                    {
                            .view = m_swapchainViews.at(m_currentSwapchainImageIndex),
                            .clearValue = { 0.0f, 0.0f, 0.0f, 1.0f },
                            .finalLayout = TextureLayout::PresentSrc,
                    },
            },
            .depthStencilAttachment = { .view = m_depthTextureView },
    });
    fsqPass.setPipeline(m_fsqPipeline);
    fsqPass.setBindGroup(0, m_fsqTextureBindGroup);

    // Left Eye
    fsqPass.setViewport(Viewport{
            .x = 0,
            .y = 0,
            .width = halfWidth,
            .height = float(m_window->height()),
    });
    const int leftEyeLayer = 0;
    fsqPass.pushConstant(m_fsqLayerIdxPushConstantRange, &leftEyeLayer);
    fsqPass.draw(DrawCommand{ .vertexCount = 6 });

    // Right Eye
    fsqPass.setViewport(Viewport{
            .x = halfWidth,
            .y = 0,
            .width = halfWidth,
            .height = float(m_window->height()),
    });
    const int rightEyeLayer = 1;
    fsqPass.pushConstant(m_fsqLayerIdxPushConstantRange, &rightEyeLayer);
    fsqPass.draw(DrawCommand{ .vertexCount = 6 });

    // Call helper to record the ImGui overlay commands
    renderImGuiOverlay(&fsqPass);

    fsqPass.end();

Filename: multiview/multiview.cpp

Performance Notes

Performance Gains:

  • CPU: ~40-50% reduction in driver overhead (one set of draw calls)
  • GPU: Vertex shader runs once per vertex (broadcast to all views)
  • Bandwidth: Same as multi-pass (writing to multiple layers)
  • Memory: Texture array memory = single texture × layer count

When Multiview Helps Most:

  • CPU-bound scenarios (many draw calls)
  • Vertex-heavy scenes (complex geometry)
  • Shared geometry between views (same objects, different cameras)

Limitations:

  • Fragment shaders still run per-view (can't share fragment work)
  • Geometry/tessellation shaders may be less efficient (check hardware support)
  • View-dependent effects (reflections) need per-view computation

Mobile Optimization:

  • Tile-based renderers benefit greatly (fewer passes)
  • On-chip tile memory shared across views
  • Reduced memory bandwidth for geometry data

VR Considerations

For VR rendering:

  1. Eye Separation: Calculate left/right view matrices with interpupillary distance (IPD ~64mm)
  2. Lens Distortion: Apply post-process distortion correction per-eye
  3. Foveated Rendering: Higher resolution at gaze center (combine with variable rate shading)
  4. Asynchronous Timewarp: Reproject frames for reduced latency

See Hello XR (OpenXR VR Application) for a complete OpenXR integration example.



Updated on 2026-03-31 at 00:02:07 +0000