Render to Texture
This example demonstrates how to render a scene onto a texture, and then sample from that texture in a second pass.
Initialization
There are a couple of function calls at the beginning of initializeScene: initializeMainScene and initializePostProcess. The former initializes the vertex, index, and transformation matrix buffers, as well as the pipeline, in a similar fashion to the previous examples. This pipeline will be used in the first pass to render the actual geometry to the texture. initializePostProcess initializes the second pipeline, which will be used to draw the texture onto a quad covering the whole screen. Because it draws a quad, this pipeline also has geometry buffers, except there is no index buffer and the vertices describe a quad instead of a single triangle.
The post process pass also needs a sampler:
Filename: render_to_texture/render_to_texture.cpp
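As a sketch, sampler creation with KDGpu's SamplerOptions might look like the following (the linear filter settings are an assumption, not necessarily the values the example uses):

```cpp
// Sketch only: the filter modes here are assumptions.
const SamplerOptions samplerOptions = {
    .magFilter = FilterMode::Linear,
    .minFilter = FilterMode::Linear
};
m_colorOutputSampler = m_device.createSampler(samplerOptions);
```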
The sampler will be passed into a bind group, both at initialization and on resize:
| void RenderToTexture::updateColorBindGroup()
{
// Create a bindGroup to hold the Offscreen Color Texture
// clang-format off
const BindGroupOptions bindGroupOptions = {
.layout = m_colorBindGroupLayout,
.resources = {{
.binding = 0,
.resource = TextureViewSamplerBinding{ .textureView = m_colorOutputView, .sampler = m_colorOutputSampler }
}}
};
// clang-format on
m_colorBindGroup = m_device.createBindGroup(bindGroupOptions);
}
|
Filename: render_to_texture/render_to_texture.cpp
We will also be using post-processing shaders for this pass. The fragment shader expects a push constant: a fraction. We will pass sine wave values into it, and the fragment shader will draw an oscillating red line; to the right of the line it applies its post-processing effect, and to the left it renders the original image.
| const auto vertexShaderPath = KDGpu::assetPath() + "/shaders/examples/render_to_texture/desaturate.vert.spv";
auto vertexShader = m_device.createShaderModule(KDGpuExample::readShaderFile(vertexShaderPath));
const auto fragmentShaderPath = KDGpu::assetPath() + "/shaders/examples/render_to_texture/desaturate.frag.spv";
auto fragmentShader = m_device.createShaderModule(KDGpuExample::readShaderFile(fragmentShaderPath));
|
Filename: render_to_texture/render_to_texture.cpp
Additionally, we lay out a bind group for this pipeline which contains a CombinedImageSampler. This is the layout for the bind group shown previously.
| const BindGroupLayoutOptions bindGroupLayoutOptions = {
.bindings = {{
.binding = 0,
.resourceType = ResourceBindingType::CombinedImageSampler,
.shaderStages = ShaderStageFlags(ShaderStageFlagBits::FragmentBit)
}}
};
|
Filename: render_to_texture/render_to_texture.cpp
The pipeline needs to also accept the push constant, which will be the position of the red line filter.
| const PipelineLayoutOptions pipelineLayoutOptions = {
.bindGroupLayouts = { m_colorBindGroupLayout },
.pushConstantRanges = { m_filterPosPushConstantRange }
};
|
Filename: render_to_texture/render_to_texture.cpp
The final pipeline configuration should look very familiar, with the exception of the new primitive entry. This provides an easy default alternative to having an index buffer. TriangleStrip causes the vertices to be interpreted as a series of triangles, each sharing an edge with the previous one, which is ideal for drawing a quad.
| .primitive = {
.topology = PrimitiveTopology::TriangleStrip
}
|
Filename: render_to_texture/render_to_texture.cpp
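To see why TriangleStrip suits a full-screen quad, consider four vertices in normalized device coordinates: the strip interprets them as the triangles (v0, v1, v2) and (v1, v2, v3), covering the screen with no index buffer. This vertex data is an illustration, not necessarily the example's actual buffer contents:

```cpp
#include <array>
#include <cassert>

// Illustration only: a full-screen quad in normalized device coordinates,
// ordered so a triangle strip forms triangles (v0, v1, v2) and (v1, v2, v3).
struct Vec2 {
    float x;
    float y;
};

constexpr std::array<Vec2, 4> fullScreenQuad = {{
    { -1.0f, -1.0f }, // v0: bottom-left
    {  1.0f, -1.0f }, // v1: bottom-right
    { -1.0f,  1.0f }, // v2: top-left
    {  1.0f,  1.0f }  // v3: top-right
}};
```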
This post-process pass will be configured in a member variable called m_finalPassOptions, with similar settings to the pass options seen in previous examples. The configuration for the first pass, which does the actual rendering to the texture, is notable:
Filename: render_to_texture/render_to_texture.cpp
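A sketch of how m_opaquePassOptions might be set up, with field names following the pass options seen in previous examples (the clear color and the name m_depthTextureView are assumptions for illustration):

```cpp
// Sketch only: the key point is that the color attachment view is the
// offscreen texture view, not a swapchain view, and it never changes.
m_opaquePassOptions = {
    .colorAttachments = {{
        .view = m_colorOutputView, // always the render target texture's view
        .clearValue = { 0.3f, 0.3f, 0.3f, 1.0f }, // assumed clear color
        .finalLayout = TextureLayout::ColorAttachmentOptimal
    }},
    .depthStencilAttachment = {
        .view = m_depthTextureView // hypothetical name for the depth view
    }
};
```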
Note that the view is constant. It is always the view for the render target texture.
Let's take a look at where that texture and view were created:
| void RenderToTexture::createOffscreenTexture()
{
const TextureOptions colorTextureOptions = {
.type = TextureType::TextureType2D,
.format = m_colorFormat,
.extent = { m_swapchainExtent.width, m_swapchainExtent.height, 1 },
.mipLevels = 1,
.usage = TextureUsageFlagBits::ColorAttachmentBit | TextureUsageFlagBits::SampledBit,
.memoryUsage = MemoryUsage::GpuOnly
};
m_colorOutput = m_device.createTexture(colorTextureOptions);
m_colorOutputView = m_colorOutput.createView();
}
|
Filename: render_to_texture/render_to_texture.cpp
This should look similar to depth texture creation in previous examples (such as Hello Triangle Native and particularly Multiview). Just note that usage includes the ColorAttachmentBit and SampledBit. This is identical to how we created the multiview texture.
Per-Frame Logic
We've got an update loop in this example, part of which populates a variable m_filterPosData with the sine wave value that determines the location of the red line:
| const float t = engine()->simulationTime().count() / 1.0e9;
m_filterPos = 0.5f * (std::sin(t) + 1.0f);
std::memcpy(m_filterPosData.data(), &m_filterPos, sizeof(float));
|
Filename: render_to_texture/render_to_texture.cpp
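As a standalone illustration of why this keeps the line on screen: sin(t) ranges over [-1, 1], so 0.5 * (sin(t) + 1) remaps it into [0, 1], sweeping the line across the full width of the screen:

```cpp
#include <cmath>

// 0.5 * (sin(t) + 1) remaps sin's [-1, 1] output into [0, 1].
float filterPos(float tSeconds)
{
    return 0.5f * (std::sin(tSeconds) + 1.0f);
}
```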
And finally, let's look at the bulk of the render method:
| // Pass 1: Color pass
auto opaquePass = commandRecorder.beginRenderPass(m_opaquePassOptions);
opaquePass.setPipeline(m_pipeline);
opaquePass.setVertexBuffer(0, m_buffer);
opaquePass.setIndexBuffer(m_indexBuffer);
opaquePass.setBindGroup(0, m_transformBindGroup);
opaquePass.drawIndexed(DrawIndexedCommand{ .indexCount = 3 });
opaquePass.end();
// Wait for Pass1 writes to offscreen texture to have been completed
// Transition it to a shader read only layout
commandRecorder.textureMemoryBarrier(TextureMemoryBarrierOptions{
.srcStages = PipelineStageFlagBit::ColorAttachmentOutputBit,
.srcMask = AccessFlagBit::ColorAttachmentWriteBit,
.dstStages = PipelineStageFlagBit::FragmentShaderBit,
.dstMask = AccessFlagBit::ShaderReadBit,
.oldLayout = TextureLayout::ColorAttachmentOptimal,
.newLayout = TextureLayout::ShaderReadOnlyOptimal,
.texture = m_colorOutput,
.range = {
.aspectMask = TextureAspectFlagBits::ColorBit,
.levelCount = 1,
},
});
// Wait for Pass1 writes to depth texture to have been completed
commandRecorder.textureMemoryBarrier(KDGpu::TextureMemoryBarrierOptions{
.srcStages = PipelineStageFlagBit::AllGraphicsBit,
.srcMask = AccessFlagBit::DepthStencilAttachmentWriteBit,
.dstStages = PipelineStageFlagBit::TopOfPipeBit,
.dstMask = AccessFlagBit::None,
.oldLayout = TextureLayout::DepthStencilAttachmentOptimal,
.newLayout = TextureLayout::DepthStencilAttachmentOptimal,
.texture = m_depthTexture,
.range = {
.aspectMask = TextureAspectFlagBits::DepthBit | TextureAspectFlagBits::StencilBit,
.levelCount = 1,
},
});
// Pass 2: Post process
m_finalPassOptions.colorAttachments[0].view = m_swapchainViews.at(m_currentSwapchainImageIndex);
auto finalPass = commandRecorder.beginRenderPass(m_finalPassOptions);
finalPass.setPipeline(m_postProcessPipeline);
finalPass.setVertexBuffer(0, m_fullScreenQuad);
finalPass.setBindGroup(0, m_colorBindGroup);
finalPass.pushConstant(m_filterPosPushConstantRange, m_filterPosData.data());
finalPass.draw(DrawCommand{ .vertexCount = 4 });
renderImGuiOverlay(&finalPass);
finalPass.end();
|
Filename: render_to_texture/render_to_texture.cpp
Notice that only the fullscreen pass uses the swapchain view, because the first pass renders to the offscreen texture.
Updated on 2024-12-15 at 00:01:56 +0000