This renderer is meant to render meshes projected by a Camera. It therefore creates a Camera with its associated bindings, as well as the lights and shadows bindings used for lighting, and their associated bind group.
It can safely be used to render compute passes and meshes that do not need to be tied to the DOM.

// first, we need a WebGPU device, that's what GPUDeviceManager is for
const gpuDeviceManager = new GPUDeviceManager({
  label: 'Custom device manager',
})

// we need to wait for the WebGPU device to be created
await gpuDeviceManager.init()

// then we can create a camera renderer
const gpuCameraRenderer = new GPUCameraRenderer({
  deviceManager: gpuDeviceManager, // we need the WebGPU device to create the renderer context
  container: document.querySelector('#canvas'),
})
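Once the renderer is created, it can render meshes projected by its Camera. A minimal sketch, assuming the library's Mesh and BoxGeometry classes are imported:

// a minimal sketch: create a mesh that will be projected by the renderer camera
// (the Mesh and BoxGeometry imports from the library are assumed)
const mesh = new Mesh(gpuCameraRenderer, {
  label: 'Projected cube',
  geometry: new BoxGeometry(),
})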


Constructors

Properties

camera: Camera

Camera used by this GPUCameraRenderer.
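The camera can be repositioned or re-targeted directly through this property. A brief sketch, assuming the Camera exposes a position vector and a lookAt() method:

// assumption: the Camera exposes a position vector and a lookAt() method
gpuCameraRenderer.camera.position.z = 10
gpuCameraRenderer.camera.lookAt() // defaults to looking at the scene origin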

cameraLightsBindGroup: BindGroup

Bind group handling the camera, lights and shadows BufferBinding.

lights: Light[]

Array of all the created Light.

lightsBindingParams: LightsBindingParams

An object defining the current lights binding parameters, including the maximum number of lights for each type and the structure used to create the associated BufferBinding.
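These maximums are usually set when creating the renderer. A hedged sketch, assuming the constructor accepts a lights option with per-type maximums (the option names below are illustrative; check GPUCameraRendererParams for the exact ones):

// assumption: per-type light maximums can be passed when creating the renderer
const gpuCameraRenderer = new GPUCameraRenderer({
  deviceManager: gpuDeviceManager,
  container: document.querySelector('#canvas'),
  lights: {
    maxAmbientLights: 2, // illustrative option names
    maxDirectionalLights: 5,
    maxPointLights: 5,
  },
})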

shadowsBindingsStruct: Record<string, Record<string, Input>>

An object defining the structure used to create the shadows BufferBinding.

The bindings used by the camera, lights and shadows bind group.

pointShadowsCubeFaceBindGroups: BindGroup[]

An array of BindGroup containing a single BufferBinding with the cube face index onto which we'll want to draw for the PointShadow depth cube map. Will be swapped for each face render pass by the PointShadow.

Options used to create this GPUCameraRenderer.

transmissionTarget: {
    passEntry?: RenderPassEntry;
    texture?: Texture;
    sampler: Sampler;
}

If our scene contains transmissive objects, we need to handle the rendering of transmissive meshes separately. To do so, we'll need a new screen pass RenderPassEntry and a Texture onto which we'll write the content of the non transmissive objects already rendered onto the main buffer.

Type declaration

  • Optional passEntry?: RenderPassEntry

    The new screen pass RenderPassEntry where we'll draw our transmissive objects.

  • Optional texture?: Texture

    The Texture holding the content of all the non transmissive objects we've already drawn onto the main screen buffer.

  • sampler: Sampler

    The Sampler used to sample the background output texture.

type: string

The type of the GPURenderer

uuid: string

The universal unique id of this GPURenderer

deviceManager: GPUDeviceManager

The GPUDeviceManager used to create this GPURenderer

The HTMLCanvasElement onto which everything is drawn.

The WebGPU context used

renderPass: RenderPass

The render pass used to render our result to screen

postProcessingPass: RenderPass

Additional render pass used by ShaderPass for compositing / post processing. Does not handle depth

scene: Scene

The Scene used

shouldRender: boolean

Whether we should render our GPURenderer or not. If set to false, the render hooks onBeforeCommandEncoderCreation, onBeforeRenderScene, onAfterRenderScene and onAfterCommandEncoderSubmission won't be called, the scene graph will not be updated and the scene will not be rendered, completely pausing the renderer. Defaults to true.

shouldRenderScene: boolean

Whether we should explicitly update our Scene or not. If set to false, the scene graph will not be updated and the scene will not be rendered. Defaults to true.
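Rendering can therefore be paused and resumed at runtime by toggling the shouldRender flag (setting shouldRenderScene to false instead only skips the scene update and draw):

// completely pause the renderer: no render hooks, no scene graph update, no draw
gpuCameraRenderer.shouldRender = false

// ...later, resume rendering
gpuCameraRenderer.shouldRender = true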

computePasses: ComputePass[]

An array containing all our created ComputePass

pingPongPlanes: PingPongPlane[]

An array containing all our created PingPongPlane

shaderPasses: ShaderPass[]

An array containing all our created ShaderPass

renderTargets: RenderTarget[]

An array containing all our created RenderTarget

An array containing all our created meshes

textures: (MediaTexture | Texture)[]

An array containing all our created Texture

environmentMaps: Map<string, EnvironmentMap>

A Map containing all the EnvironmentMap handled by this renderer.

renderBundles: Map<string, RenderBundle>

A Map containing all the RenderBundle handled by this renderer.

animations: Map<string, TargetsAnimationsManager>

A Map containing all the TargetsAnimationsManager handled by this renderer.

pixelRatio: number

Pixel ratio to use for rendering

rectBBox: RectBBox

An object defining the width, height, top and left position of the canvas. Mainly used internally. If you need to get the renderer dimensions, use boundingRect instead.

domElement: DOMElement

DOMElement that will track our canvas container size

onBeforeCommandEncoderCreation: TasksQueueManager

Allows adding callbacks to be executed at each render, before the GPUCommandEncoder is created.

onBeforeRenderScene: TasksQueueManager

Allows adding callbacks to be executed at each render, after the GPUCommandEncoder has been created but before the Scene is rendered.

onAfterRenderScene: TasksQueueManager

Allows adding callbacks to be executed at each render, after the GPUCommandEncoder has been created and the Scene has been rendered.

onAfterCommandEncoderSubmission: TasksQueueManager

Allows adding callbacks to be executed at each render, after the Scene has been rendered and the GPUCommandEncoder has been submitted.
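A callback can be registered on any of these queues. A brief sketch, assuming TasksQueueManager exposes an add() method taking a callback (the exact signature may differ):

// assumption: TasksQueueManager exposes an add() method taking a callback
gpuCameraRenderer.onBeforeRenderScene.add((commandEncoder) => {
  // custom per-frame work, run after the GPUCommandEncoder has been created
  // but before the Scene is rendered
})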

_onResizeCallback: () => void = ...

Function assigned to the onResize callback.

_onAfterResizeCallback: () => void = ...

Function assigned to the onAfterResize callback.
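These private callbacks back the renderer's resize hooks. A minimal sketch, assuming matching onResize() and onAfterResize() methods assign them:

// assumption: an onAfterResize() method assigns _onAfterResizeCallback
gpuCameraRenderer.onAfterResize(() => {
  // react to the new renderer dimensions
  console.log(gpuCameraRenderer.boundingRect)
})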

Accessors

Methods