This is a set of Device APIs, also known as a hardware abstraction layer (HAL). It is implemented on top of WebGL 1/2 & WebGPU and inspired by noclip.
We use it in the following projects:
- g-webgl & g-webgpu. Used in G2 & G6 3D plots.
- L7. A large-scale WebGL-powered geospatial data visualization analysis engine.
- A8. An audio visualizer.
- renderer. A toy renderer inspired by bevy.

```bash
npm install @antv/g-device-api
```
- Resource Creation
- Submit
- Query
- Debug
GPU Resources:
- Buffer
- Texture
- Sampler
- RenderTarget
- RenderPass
- ComputePass
- Program
- QueryPool
  - queryResultOcclusion
- Readback
A device is the logical instantiation of the GPU.
```ts
import {
  Device,
  BufferUsage,
  WebGLDeviceContribution,
  WebGPUDeviceContribution,
} from '@antv/g-device-api';

// Create a WebGL-based device contribution.
const deviceContribution = new WebGLDeviceContribution({
  targets: ['webgl2', 'webgl1'],
});

// Or create a WebGPU-based device contribution.
const deviceContribution = new WebGPUDeviceContribution({
  shaderCompilerPath: '/glsl_wgsl_compiler_bg.wasm',
  // shaderCompilerPath:
  //   'https://unpkg.com/@antv/g-device-api@1.4.9/rust/pkg/glsl_wgsl_compiler_bg.wasm',
});

const swapChain = await deviceContribution.createSwapChain($canvas);
swapChain.configureSwapChain(width, height);
const device = swapChain.getDevice();
```

A Buffer represents a block of memory that can be used in GPU operations. Data is stored in a linear layout.
The design references WebGPU:

```ts
createBuffer: (descriptor: BufferDescriptor) => Buffer;
```

The parameters are as follows, referencing the WebGPU design:

- `viewOrSize` required. Set the buffer data directly, or allocate a fixed length (in bytes).
- `usage` required. The allowed usage for this buffer.
- `hint` optional. Known as `usage` when calling bufferData in WebGL.

```ts
interface BufferDescriptor {
  viewOrSize: ArrayBufferView | number;
  usage: BufferUsage;
  hint?: BufferFrequencyHint;
}
```

We can set buffer data directly, or allocate a fixed length for later use, e.g. calling setSubData:
```ts
const buffer = device.createBuffer({
  viewOrSize: new Float32Array([1, 2, 3, 4]),
  usage: BufferUsage.VERTEX,
});

// or
const buffer = device.createBuffer({
  viewOrSize: 4 * Float32Array.BYTES_PER_ELEMENT, // in bytes
  usage: BufferUsage.VERTEX,
});
buffer.setSubData(0, new Uint8Array(new Float32Array([1, 2, 3, 4]).buffer));
```

The allowed usages for a buffer. They can also be composited, like `BufferUsage.VERTEX | BufferUsage.STORAGE`.
```ts
enum BufferUsage {
  MAP_READ = 0x0001,
  MAP_WRITE = 0x0002,
  COPY_SRC = 0x0004,
  COPY_DST = 0x0008,
  INDEX = 0x0010,
  VERTEX = 0x0020,
  UNIFORM = 0x0040,
  STORAGE = 0x0080,
  INDIRECT = 0x0100,
  QUERY_RESOLVE = 0x0200,
}
```

This parameter is called `usage` in WebGL. We renamed it to `hint` to avoid duplicate naming.
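Because the usages above are distinct bits, composite usages can be built and tested with bitwise operators. A minimal sketch (plain values mirroring the enum above, so it runs standalone):

```typescript
// Plain bit flags mirroring the BufferUsage enum above.
const VERTEX = 0x0020;
const INDEX = 0x0010;
const STORAGE = 0x0080;

// A buffer used both as vertex input and as a storage buffer.
const usage = VERTEX | STORAGE; // 0x00a0

// Check individual flags with bitwise AND.
const hasVertex = (usage & VERTEX) !== 0; // true
const hasIndex = (usage & INDEX) !== 0; // false
```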
```ts
enum BufferFrequencyHint {
  Static = 0x01,
  Dynamic = 0x02,
}
```

This method references the WebGPU design to create a Texture:

```ts
createTexture: (descriptor: TextureDescriptor) => Texture;
```

The parameters are as follows, referencing the WebGPU design:
```ts
interface TextureDescriptor {
  usage: TextureUsage;
  format: Format;
  width: number;
  height: number;
  depthOrArrayLayers?: number;
  dimension?: TextureDimension;
  mipLevelCount?: number;
  pixelStore?: Partial<{
    packAlignment: number;
    unpackAlignment: number;
    unpackFlipY: boolean;
  }>;
}
```

- `usage` required. The allowed usages for this GPUTexture.
- `format` required. The format of this GPUTexture.
- `width` required. The width of this GPUTexture.
- `height` required. The height of this GPUTexture.
- `depthOrArrayLayers` optional. The depth or layer count of this GPUTexture. Defaults to `1`.
- `dimension` optional. The dimension of the set of texels for each of this GPUTexture's subresources. Defaults to `TextureDimension.TEXTURE_2D`.
- `mipLevelCount` optional. The number of mip levels of this GPUTexture. Defaults to `1`.
- `pixelStore` optional. Specifies the pixel storage modes in WebGL:
  - `packAlignment` Packing of pixel data into memory (`gl.PACK_ALIGNMENT`).
  - `unpackAlignment` Unpacking of pixel data from memory (`gl.UNPACK_ALIGNMENT`).
  - `unpackFlipY` Flips the source data along its vertical axis if true (`gl.UNPACK_FLIP_Y_WEBGL`).
The TextureUsage enum is as follows:

```ts
enum TextureUsage {
  SAMPLED,
  RENDER_TARGET, // When rendering to texture, choose this usage.
}
```

The TextureDimension enum is as follows:

```ts
enum TextureDimension {
  TEXTURE_2D,
  TEXTURE_2D_ARRAY,
  TEXTURE_3D,
  TEXTURE_CUBE_MAP,
}
```

Samplers are created via createSampler():

```ts
createSampler: (descriptor: SamplerDescriptor) => Sampler;
```

The parameters reference GPUSamplerDescriptor.
```ts
interface SamplerDescriptor {
  addressModeU: AddressMode;
  addressModeV: AddressMode;
  addressModeW?: AddressMode;
  minFilter: FilterMode;
  magFilter: FilterMode;
  mipmapFilter: MipmapFilterMode;
  lodMinClamp?: number;
  lodMaxClamp?: number;
  maxAnisotropy?: number;
  compareFunction?: CompareFunction;
}
```

AddressMode describes the behavior of the sampler if the sample footprint extends beyond the bounds of the sampled texture.
```ts
enum AddressMode {
  CLAMP_TO_EDGE,
  REPEAT,
  MIRRORED_REPEAT,
}
```

FilterMode and MipmapFilterMode describe the behavior of the sampler if the sample footprint does not exactly match one texel.
```ts
enum FilterMode {
  POINT,
  BILINEAR,
}

enum MipmapFilterMode {
  NO_MIP,
  NEAREST,
  LINEAR,
}
```

CompareFunction specifies the behavior of a comparison sampler. If a comparison sampler is used in a shader, an input value is compared to the sampled texture value, and the result of this comparison test (0.0 for fail, or 1.0 for pass) is used in the filtering operation.
```ts
enum CompareFunction {
  NEVER = GL.NEVER,
  LESS = GL.LESS,
  EQUAL = GL.EQUAL,
  LEQUAL = GL.LEQUAL,
  GREATER = GL.GREATER,
  NOTEQUAL = GL.NOTEQUAL,
  GEQUAL = GL.GEQUAL,
  ALWAYS = GL.ALWAYS,
}
```

```ts
createRenderTarget: (descriptor: RenderTargetDescriptor) => RenderTarget;

interface RenderTargetDescriptor {
  format: Format;
  width: number;
  height: number;
  sampleCount: number;
  texture?: Texture;
}

createRenderTargetFromTexture: (texture: Texture) => RenderTarget;

createProgram: (program: ProgramDescriptor) => Program;
```

wgsl will be used directly in WebGPU, while glsl will be compiled internally.
Since WebGL doesn't support compute shaders, compute is only available in WebGPU.

```ts
interface ProgramDescriptor {
  vertex?: {
    glsl?: string;
    wgsl?: string;
  };
  fragment?: {
    glsl?: string;
    wgsl?: string;
  };
  compute?: {
    wgsl: string;
  };
}
```

```ts
createBindings: (bindingsDescriptor: BindingsDescriptor) => Bindings;

interface BindingsDescriptor {
  bindingLayout: BindingLayoutDescriptor;
  pipeline?: RenderPipeline | ComputePipeline;
  uniformBufferBindings?: BufferBinding[];
  samplerBindings?: SamplerBinding[];
  storageBufferBindings?: BufferBinding[];
  storageTextureBindings?: TextureBinding[];
}
```

BufferBinding has the following properties:
- `binding` required. Should match the `binding` in the shader.
- `buffer` required.
- `offset` optional. The offset, in bytes, from the beginning of buffer to the beginning of the range exposed to the shader by the buffer binding. Defaults to `0`.
- `size` optional. The size, in bytes, of the buffer binding. If not provided, specifies the range starting at offset and ending at the end of buffer.

```ts
interface BufferBinding {
  binding: number;
  buffer: Buffer;
  offset?: number;
  size?: number;
}
```

InputLayout defines the layout of vertex attribute data in a vertex buffer used by a pipeline.

```ts
createInputLayout: (inputLayoutDescriptor: InputLayoutDescriptor) =>
  InputLayout;
```

A vertex buffer is, conceptually, a view into buffer memory as an array of structures. arrayStride is the stride, in bytes, between elements of that array. Each element of a vertex buffer is like a structure with a memory layout defined by its attributes, which describe the members of the structure.
```ts
interface InputLayoutDescriptor {
  vertexBufferDescriptors: (InputLayoutBufferDescriptor | null)[];
  indexBufferFormat: Format | null;
  program: Program;
}

interface InputLayoutBufferDescriptor {
  arrayStride: number; // in bytes
  stepMode: VertexStepMode; // per vertex or instance
  attributes: VertexAttributeDescriptor[];
}

interface VertexAttributeDescriptor {
  shaderLocation: number;
  format: Format;
  offset: number;
  divisor?: number;
}
```

- `shaderLocation` required. The numeric location associated with this attribute, which will correspond with a `@location` attribute declared in the vertex module.
- `format` required. The VertexFormat of the attribute.
- `offset` required. The offset, in bytes, from the beginning of the element to the data for the attribute.
- `divisor` optional.
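The offset and arrayStride values for an interleaved vertex buffer follow directly from the attribute sizes. A sketch under the assumption that every component is a 32-bit float; `interleavedLayout` is a hypothetical helper for illustration, not part of the API:

```typescript
// Hypothetical helper: derive per-attribute byte offsets and the overall
// arrayStride for interleaved attributes, given each attribute's component
// count. Assumes 32-bit float components (4 bytes each).
function interleavedLayout(componentCounts: number[]): {
  offsets: number[];
  arrayStride: number;
} {
  const BYTES_PER_COMPONENT = 4;
  let cursor = 0;
  const offsets = componentCounts.map((count) => {
    const offset = cursor;
    cursor += count * BYTES_PER_COMPONENT;
    return offset;
  });
  return { offsets, arrayStride: cursor };
}

// position: vec3 (3 floats), uv: vec2 (2 floats).
const { offsets, arrayStride } = interleavedLayout([3, 2]);
// offsets -> [0, 12], arrayStride -> 20
```

These numbers would feed the `offset` fields of the VertexAttributeDescriptors and the `arrayStride` of the InputLayoutBufferDescriptor.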
Create a Readback to read a GPU resource's data from the CPU side:

```ts
createReadback: () => Readback;

readBuffer: (
  b: Buffer,
  srcByteOffset?: number,
  dst?: ArrayBufferView,
  dstOffset?: number,
  length?: number,
) => Promise<ArrayBufferView>;
```

```ts
const readback = device.createReadback();
readback.readBuffer(buffer);
```

Only WebGL 2 & WebGPU support:

```ts
createQueryPool: (type: QueryPoolType, elemCount: number) => QueryPool;

queryResultOcclusion(dstOffs: number): boolean | null
```

A RenderPipeline is a kind of pipeline that controls the vertex and fragment shader stages.
```ts
createRenderPipeline: (descriptor: RenderPipelineDescriptor) => RenderPipeline;
```

The descriptor is as follows:

- `colorAttachmentFormats` required. The formats of the color attachments.
- `topology` optional. The type of primitive to be constructed from the vertex inputs. Defaults to `TRIANGLES`.
- `megaStateDescriptor` optional.
- `depthStencilAttachmentFormat` optional. The format of the depth & stencil attachment.
- `sampleCount` optional. Used in MSAA. Defaults to `1`.

```ts
interface RenderPipelineDescriptor extends PipelineDescriptor {
  topology?: PrimitiveTopology;
  megaStateDescriptor?: MegaStateDescriptor;
  colorAttachmentFormats: (Format | null)[];
  depthStencilAttachmentFormat?: Format | null;
  sampleCount?: number;
}

enum PrimitiveTopology {
  POINTS,
  TRIANGLES,
  TRIANGLE_STRIP,
  LINES,
  LINE_STRIP,
}

interface MegaStateDescriptor {
  attachmentsState: AttachmentState[];
  blendConstant?: Color;
  depthCompare?: CompareFunction;
  depthWrite?: boolean;
  stencilFront?: Partial<StencilFaceState>;
  stencilBack?: Partial<StencilFaceState>;
  stencilWrite?: boolean;
  cullMode?: CullMode;
  frontFace?: FrontFace;
  polygonOffset?: boolean;
  polygonOffsetFactor?: number;
  polygonOffsetUnits?: number;
}
```

```ts
createComputePipeline: (descriptor: ComputePipelineDescriptor) =>
  ComputePipeline;

type ComputePipelineDescriptor = PipelineDescriptor;

interface PipelineDescriptor {
  bindingLayouts: BindingLayoutDescriptor[];
  inputLayout: InputLayout | null;
  program: Program;
}
```

A RenderPass is usually created at the beginning of each frame.
```ts
createRenderPass: (renderPassDescriptor: RenderPassDescriptor) => RenderPass;

export interface RenderPassDescriptor {
  colorAttachment: (RenderTarget | null)[];
  colorAttachmentLevel?: number[];
  colorClearColor?: (Color | 'load')[];
  colorResolveTo: (Texture | null)[];
  colorResolveToLevel?: number[];
  colorStore?: boolean[];
  depthStencilAttachment?: RenderTarget | null;
  depthStencilResolveTo?: Texture | null;
  depthStencilStore?: boolean;
  depthClearValue?: number | 'load';
  stencilClearValue?: number | 'load';
  occlusionQueryPool?: QueryPool | null;
}

createComputePass: () => ComputePass;
```

A RenderBundle can record the draw calls of one frame and replay the recording in all subsequent frames.
```ts
const renderBundle = device.createRenderBundle();

// On each frame.
if (frameCount === 0) {
  renderPass.beginBundle(renderBundle);
  // Omit other renderpass commands.
  renderPass.endBundle();
} else {
  renderPass.executeBundles([renderBundle]);
}
```

Call this method at the beginning of each frame:

```ts
device.beginFrame();
const renderPass = device.createRenderPass({});
// Omit other commands.
renderPass.draw();
device.submitPass(renderPass);
device.endFrame();
```

Schedules the execution of the command buffers by the GPU on this queue:

```ts
submitPass(o: RenderPass | ComputePass): void;
```

Call this method at the end of each frame.
```ts
copySubTexture2D: (
  dst: Texture,
  dstX: number,
  dstY: number,
  src: Texture,
  srcX: number,
  srcY: number,
  depthOrArrayLayers?: number,
) => void;
```

⚠️ Not supported in WebGL 1.

- WebGL 2 uses blitFramebuffer.
- WebGPU uses copyTextureToTexture.
```ts
// @see https://www.w3.org/TR/webgpu/#gpusupportedlimits
queryLimits: () => DeviceLimits;

interface DeviceLimits {
  uniformBufferWordAlignment: number;
  uniformBufferMaxPageWordSize: number;
  supportedSampleCounts: number[];
  occlusionQueriesRecommended: boolean;
  computeShadersSupported: boolean;
}
```

Query whether the device's context has been lost:

```ts
queryPlatformAvailable(): boolean
```

WebGL / WebGPU will trigger a Lost event:
```ts
device.queryPlatformAvailable(); // false
```

```ts
queryTextureFormatSupported(format: Format, width: number, height: number): boolean;
```

```ts
const shadowsSupported = device.queryTextureFormatSupported(
  Format.U16_RG_NORM,
  0,
  0,
);
```

WebGL 1/2 & WebGPU use different origins:

```ts
queryVendorInfo: () => VendorInfo;

interface VendorInfo {
  readonly platformString: string;
  readonly glslVersion: string;
  readonly explicitBindingLocations: boolean;
  readonly separateSamplerTextures: boolean;
  readonly viewportOrigin: ViewportOrigin;
  readonly clipSpaceNearZ: ClipSpaceNearZ;
  readonly supportMRT: boolean;
}
```

When using Spector.js to debug our application, we can set a name on the relevant GPU resource.
```ts
setResourceName: (o: Resource, s: string) => void;
```

For instance, if we add a label for a RenderTarget, Spector.js will show us the metadata:

```ts
device.setResourceName(renderTarget, 'Main Render Target');
```

In the WebGPU DevTools we can also see the label:

Checks whether any GPU resources are currently leaking. We keep track of every GPU resource object created; calling this method prints the currently undestroyed objects, together with the stack trace of where each resource was created, to the console, making it easy to troubleshoot memory leaks.

It is recommended to call this when destroying the scene to determine whether any resources have not been destroyed correctly. For example, in the image below, there is a WebGL Buffer that has not been destroyed. We should call buffer.destroy() in this case to avoid OOM.
https://developer.mozilla.org/en-US/docs/Web/API/GPUCommandEncoder/pushDebugGroup

```ts
pushDebugGroup(debugGroup: DebugGroup): void;

interface DebugGroup {
  name: string;
  drawCallCount: number;
  textureBindCount: number;
  bufferUploadCount: number;
  triangleCount: number;
}
```

https://developer.mozilla.org/en-US/docs/Web/API/GPUCommandEncoder/popDebugGroup
A Buffer represents a block of memory that can be used in GPU operations. Data is stored in a linear layout.

We can set data in a buffer with this method:

- `dstByteOffset` required. Offset into the destination buffer, in bytes.
- `src` required. Source buffer data; must be a Uint8Array.
- `srcByteOffset` optional. Offset into the source buffer, in bytes. Defaults to `0`.
- `byteLength` optional. Defaults to the whole length of the source buffer.

```ts
setSubData: (
  dstByteOffset: number,
  src: Uint8Array,
  srcByteOffset?: number,
  byteLength?: number,
) => void;
```

One texture consists of one or more texture subresources, each uniquely identified by a mipmap level and, for 2D textures only, array layer and aspect.
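Because src must be a Uint8Array, typed-array data is viewed as raw bytes before upload. A sketch of preparing an update for the second float of a buffer; the final setSubData call is commented out since it needs a live device:

```typescript
// New values intended for elements 1..2 of a float buffer.
const update = new Float32Array([9, 10]);

// View the same memory as raw bytes, as setSubData requires.
const srcBytes = new Uint8Array(
  update.buffer,
  update.byteOffset,
  update.byteLength,
);

// Destination offset in bytes: skip the first float.
const dstByteOffset = 1 * Float32Array.BYTES_PER_ELEMENT; // 4 bytes

// buffer.setSubData(dstByteOffset, srcBytes); // with a real Buffer
```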
We can set data in a texture with this method:

- `data` required. Array of TexImageSource or ArrayBufferView.
- `lod` optional. Defaults to `0`.

```ts
setImageData: (
  data: (TexImageSource | ArrayBufferView)[],
  lod?: number,
) => void;
```

Create a cubemap texture:
```ts
// The order of the array layers is [+X, -X, +Y, -Y, +Z, -Z].
const imageBitmaps = await Promise.all(
  [
    '/images/posx.jpg',
    '/images/negx.jpg',
    '/images/posy.jpg',
    '/images/negy.jpg',
    '/images/posz.jpg',
    '/images/negz.jpg',
  ].map(async (src) => loadImage(src)),
);

const texture = device.createTexture({
  format: Format.U8_RGBA_NORM,
  width: imageBitmaps[0].width,
  height: imageBitmaps[0].height,
  depthOrArrayLayers: 6,
  dimension: TextureDimension.TEXTURE_CUBE_MAP,
  usage: TextureUsage.SAMPLED,
});
texture.setImageData(imageBitmaps);
```

A GPUSampler encodes transformations and filtering information that can be used in a shader to interpret texture resource data.

The RenderPass has several methods which affect how draw commands are executed.
Sets the viewport used during the rasterization stage to linearly map from normalized device coordinates to viewport coordinates.

- `x` required. Minimum X value of the viewport in pixels.
- `y` required. Minimum Y value of the viewport in pixels.
- `w` required. Width of the viewport in pixels.
- `h` required. Height of the viewport in pixels.
- `minDepth` optional. Minimum depth value of the viewport.
- `maxDepth` optional. Maximum depth value of the viewport.

```ts
setViewport: (
  x: number,
  y: number,
  w: number,
  h: number,
  minDepth?: number, // WebGPU only
  maxDepth?: number, // WebGPU only
) => void;
```

Sets the scissor rectangle used during the rasterization stage. After transformation into viewport coordinates, any fragments which fall outside the scissor rectangle will be discarded.
- `x` required. Minimum X value of the scissor rectangle in pixels.
- `y` required. Minimum Y value of the scissor rectangle in pixels.
- `w` required. Width of the scissor rectangle in pixels.
- `h` required. Height of the scissor rectangle in pixels.

```ts
setScissorRect: (x: number, y: number, w: number, h: number) => void;
```

Sets the current RenderPipeline:

```ts
setPipeline(pipeline: RenderPipeline)
```

Bindings define the interface between a set of bound resources and their accessibility in shader stages:

```ts
setBindings: (bindings: Bindings) => void;
```

```ts
setVertexInput: (
  inputLayout: InputLayout | null,
  buffers: (VertexBufferDescriptor | null)[] | null,
  indexBuffer: IndexBufferDescriptor | null,
) => void;
```

Bind vertex & index buffer(s) like this:
```ts
interface VertexBufferDescriptor {
  buffer: Buffer;
  offset?: number; // in bytes
}

type IndexBufferDescriptor = VertexBufferDescriptor;
```

Sets the stencilReference value used during stencil tests with the "replace" GPUStencilOperation:
```ts
setStencilReference: (value: number) => void;
```

Draws primitives.

- `vertexCount` required. The number of vertices to draw.
- `instanceCount` optional. The number of instances to draw.
- `firstVertex` optional. Offset into the vertex buffers, in vertices, to begin drawing from.
- `firstInstance` optional. First instance to draw.

```ts
draw: (
  vertexCount: number,
  instanceCount?: number,
  firstVertex?: number,
  firstInstance?: number,
) => void;
```

Draws indexed primitives.
- `indexCount` required. The number of indices to draw.
- `instanceCount` optional. The number of instances to draw.
- `firstIndex` optional. Offset into the index buffer, in indices, to begin drawing from.
- `baseVertex` optional. Added to each index value before indexing into the vertex buffers.
- `firstInstance` optional. First instance to draw.

```ts
drawIndexed: (
  indexCount: number,
  instanceCount?: number,
  firstIndex?: number,
  baseVertex?: number,
  firstInstance?: number,
) => void;
```

Draws primitives using parameters read from a GPUBuffer.
```ts
drawIndirect: (indirectBuffer: Buffer, indirectOffset: number) => void;
```

```ts
// Create drawIndirect values.
const uint32 = new Uint32Array(4);
uint32[0] = 3; // The vertexCount value
uint32[1] = 1; // The instanceCount value
uint32[2] = 0; // The firstVertex value
uint32[3] = 0; // The firstInstance value

// Create a GPUBuffer and write the draw values into it.
const drawValues = device.createBuffer({
  viewOrSize: uint32,
  usage: BufferUsage.INDIRECT,
});

// Draw the vertices.
renderPass.drawIndirect(drawValues, 0);
```

Draws indexed primitives using parameters read from a GPUBuffer.
```ts
drawIndexedIndirect: (indirectBuffer: Buffer, indirectOffset: number) => void;
```

```ts
// Create drawIndexedIndirect values.
const uint32 = new Uint32Array(5);
uint32[0] = 6; // The indexCount value
uint32[1] = 1; // The instanceCount value
uint32[2] = 0; // The firstIndex value
uint32[3] = 0; // The baseVertex value
uint32[4] = 0; // The firstInstance value

// Create a GPUBuffer and write the draw values into it.
const drawValues = device.createBuffer({
  viewOrSize: uint32,
  usage: BufferUsage.INDIRECT,
});

// Draw the vertices.
renderPass.drawIndexedIndirect(drawValues, 0);
```

Occlusion queries are only available on render passes. They query the number of fragment samples that pass all the per-fragment tests for a set of drawing commands, including scissor, sample mask, alpha-to-coverage, stencil, and depth tests. Any non-zero result value for the query indicates that at least one sample passed the tests and reached the output merging stage of the render pipeline; 0 indicates that no samples passed the tests.

When beginning a render pass, occlusionQuerySet must be set to be able to use occlusion queries during the pass. An occlusion query is begun and ended by calling beginOcclusionQuery() and endOcclusionQuery() in pairs that cannot be nested.
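The pairing rule can be made concrete with a small guard; `OcclusionQueryScope` is a hypothetical wrapper for illustration, not part of the API:

```typescript
// Hypothetical guard enforcing the begin/end pairing rule:
// queries must come in pairs and cannot be nested.
class OcclusionQueryScope {
  private active = false;

  begin(queryIndex: number): void {
    if (this.active) {
      throw new Error('occlusion queries cannot be nested');
    }
    this.active = true;
    // Would call renderPass.beginOcclusionQuery(queryIndex) here.
  }

  end(): void {
    if (!this.active) {
      throw new Error('endOcclusionQuery without a matching begin');
    }
    this.active = false;
    // Would call renderPass.endOcclusionQuery() here.
  }
}

const scope = new OcclusionQueryScope();
scope.begin(0);
scope.end(); // sequential pairs are fine; nesting would throw
```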
```ts
beginOcclusionQuery: (queryIndex: number) => void;

endOcclusionQuery: () => void;
```

Start recording draw calls into a render bundle:

```ts
beginBundle: (renderBundle: RenderBundle) => void;
```

Stop recording:

```ts
endBundle: () => void;
```

Replay the commands recorded in render bundles:

```ts
executeBundles: (renderBundles: RenderBundle[]) => void;
```

Computing operations provide direct access to the GPU's programmable hardware. Compute shaders do not have shader stage inputs or outputs; their results are side effects from writing data into storage bindings.
Dispatch work to be performed with the current ComputePipeline. The parameters are the X/Y/Z dimensions of the grid of workgroups to dispatch.

```ts
dispatchWorkgroups: (
  workgroupCountX: number,
  workgroupCountY?: number,
  workgroupCountZ?: number,
) => void;
```

Dispatch work to be performed with the current GPUComputePipeline using parameters read from a GPUBuffer:

```ts
dispatchWorkgroupsIndirect: (
  indirectBuffer: Buffer,
  indirectOffset: number,
) => void;
```

```ts
setUniformsLegacy: (uniforms: Record<string, any>) => void;
```

```ts
program.setUniformsLegacy({
  u_ModelViewProjectionMatrix: modelViewProjectionMatrix,
  u_Texture: texture,
});
```

Readback can read data from a Texture or Buffer.
Read pixels from a texture.

- `t` required. The texture.
- `x` required. X coordinate.
- `y` required. Y coordinate.
- `width` required. Width of the region.
- `height` required. Height of the region.
- `dst` required. Destination buffer view.
- `length` optional.

```ts
readTexture: (
  t: Texture,
  x: number,
  y: number,
  width: number,
  height: number,
  dst: ArrayBufferView,
  dstOffset?: number,
  length?: number,
) => Promise<ArrayBufferView>;
```

For instance, if we want to read pixels from a texture:
```ts
const texture = device.createTexture({
  format: Format.U8_RGBA_NORM,
  width: 1,
  height: 1,
  usage: TextureUsage.SAMPLED,
});
texture.setImageData([new Uint8Array([1, 2, 3, 4])]);

const readback = device.createReadback();
const output = new Uint8Array(4);

// Read the pixel at x/y 0/0.
await readback.readTexture(texture, 0, 0, 1, 1, output);
expect(output[0]).toBe(1);
expect(output[1]).toBe(2);
expect(output[2]).toBe(3);
expect(output[3]).toBe(4);
```

```ts
readTextureSync: (
  t: Texture,
  x: number,
  y: number,
  width: number,
  height: number,
  dst: ArrayBufferView,
  dstOffset?: number,
  length?: number,
) => ArrayBufferView;
```

Read buffer data.
- `src` required. Source buffer.
- `srcOffset` required. Offset in bytes into the source buffer. Defaults to `0`.
- `dst` required. Destination buffer view.
- `dstOffset` optional. Offset in bytes into the destination buffer. Defaults to `0`.
- `length` optional. Length in bytes of the destination buffer. Defaults to its whole size.

```ts
readBuffer: (
  src: Buffer,
  srcOffset: number,
  dst: ArrayBufferView,
  dstOffset?: number,
  length?: number,
) => Promise<ArrayBufferView>;
```

BufferUsage.COPY_SRC must be used if this buffer will be read later:

```ts
const vertexBuffer = device.createBuffer({
  viewOrSize: new Float32Array([0, 0.5, -0.5, -0.5, 0.5, -0.5]),
  usage: BufferUsage.VERTEX | BufferUsage.COPY_SRC,
  hint: BufferFrequencyHint.Dynamic,
});

const data = await readback.readBuffer(vertexBuffer, 0, new Float32Array(6));
```

Since WebGL 1/2 & WebGPU use different shader languages, we do a lot of transpiling work at runtime.
We use a syntax very close to GLSL 300, handled per device as follows:

- WebGL 1. Downgraded to GLSL 100.
- WebGL 2. Kept almost the same, i.e. GLSL 300.
- WebGPU. Transpiled to GLSL 440 and then compiled to WGSL with the gfx-naga WASM module.

The syntax is as follows:
```glsl
// raw
layout(location = 0) in vec4 a_Position;

// compiled GLSL 100
attribute vec4 a_Position;

// compiled GLSL 300
layout(location = 0) in vec4 a_Position;

// compiled GLSL 440
layout(location = 0) in vec4 a_Position;

// compiled WGSL
var<private> a_Position_1: vec4<f32>;
@vertex
fn main(@location(0) a_Position: vec4<f32>) -> VertexOutput {
  a_Position_1 = a_Position;
}
```

```glsl
// raw
out vec4 a_Position;

// compiled GLSL 100
varying vec4 a_Position;

// compiled GLSL 300
out vec4 a_Position;

// compiled GLSL 440
layout(location = 0) out vec4 a_Position;

// compiled WGSL
struct VertexOutput {
  @location(0) v_Position: vec4<f32>,
}
```

We need to use SAMPLER_2D / SAMPLER_Cube to wrap our texture.
```glsl
// raw
uniform sampler2D u_Texture;
outputColor = texture(SAMPLER_2D(u_Texture), v_Uv);

// compiled GLSL 100
uniform sampler2D u_Texture;
outputColor = texture2D(u_Texture, v_Uv);

// compiled GLSL 300
uniform sampler2D u_Texture;
outputColor = texture(u_Texture, v_Uv);

// compiled GLSL 440
layout(set = 1, binding = 0) uniform texture2D T_u_Texture;
layout(set = 1, binding = 1) uniform sampler S_u_Texture;
outputColor = texture(sampler2D(T_u_Texture, S_u_Texture), v_Uv);

// compiled WGSL
@group(1) @binding(0)
var T_u_Texture: texture_2d<f32>;
@group(1) @binding(1)
var S_u_Texture: sampler;
outputColor = textureSample(T_u_Texture, S_u_Texture, _e5);
```

WebGL 2 uses Uniform Buffer Objects.
```glsl
// raw
layout(std140) uniform Uniforms {
  mat4 u_ModelViewProjectionMatrix;
};

// compiled GLSL 100
uniform mat4 u_ModelViewProjectionMatrix;

// compiled GLSL 300
layout(std140) uniform Uniforms {
  mat4 u_ModelViewProjectionMatrix;
};

// compiled GLSL 440
layout(std140, set = 0, binding = 0) uniform Uniforms {
  mat4 u_ModelViewProjectionMatrix;
};

// compiled WGSL
struct Uniforms {
  u_ModelViewProjectionMatrix: mat4x4<f32>,
}
@group(0) @binding(0)
var<uniform> global: Uniforms;
```

We don't support an instance name for now:

```glsl
// wrong
layout(std140) uniform Uniforms {
  mat4 projection;
  mat4 modelview;
} matrices;
```

We still use gl_Position to represent the output of the vertex shader:
```glsl
// raw
gl_Position = vec4(1.0);

// compiled GLSL 100
gl_Position = vec4(1.0);

// compiled GLSL 300
gl_Position = vec4(1.0);

// compiled GLSL 440
gl_Position = vec4(1.0);

// compiled WGSL
struct VertexOutput {
  @builtin(position) member: vec4<f32>,
}
```

```glsl
// raw
out vec4 outputColor;
outputColor = vec4(1.0);

// compiled GLSL 100
vec4 outputColor;
outputColor = vec4(1.0);
gl_FragColor = vec4(outputColor);

// compiled GLSL 300
out vec4 outputColor;
outputColor = vec4(1.0);

// compiled GLSL 440
layout(location = 0) out vec4 outputColor;
outputColor = vec4(1.0);

// compiled WGSL
struct FragmentOutput {
  @location(0) outputColor: vec4<f32>,
}
```

It is worth mentioning that, since conditional compilation is not natively supported in WGSL, naga handles it during the GLSL 440 → WGSL translation process.
```glsl
#define KEY VAR
#define PI 3.14
```

@group(x) in WGSL should obey the following order:

- group(0): uniforms, e.g. `var<uniform> time : Time;`
- group(1): texture & sampler pairs
- group(2): storage buffers, e.g. `var<storage, read_write> atomic_storage : array<atomic<i32>>;`
- group(3): storage textures, e.g. `var screen : texture_storage_2d<rgba16float, write>;`
For example:
```wgsl
@group(1) @binding(0) var myTexture : texture_2d<f32>;
@group(1) @binding(1) var mySampler : sampler;
```

```wgsl
@group(1) @binding(0) var myTexture : texture_2d<f32>;
@group(1) @binding(1) var mySampler : sampler;
@group(2) @binding(0) var<storage, read_write> input : array<i32>;
```

Uniform and storage buffers can be assigned binding numbers:
```ts
device.createBindings({
  pipeline: computePipeline,
  uniformBufferBindings: [
    {
      binding: 0,
      buffer: uniformBuffer,
    },
  ],
  storageBufferBindings: [
    {
      binding: 1,
      buffer: storageBuffer,
    },
  ],
});
```

```wgsl
@group(0) @binding(0) var<uniform> params : SimParams;
@group(0) @binding(1) var<storage, read_write> input : array<i32>;
@group(1) @binding(0) var myTexture : texture_2d<f32>;
@group(1) @binding(1) var mySampler : sampler;
```

Currently we don't support dynamicOffsets when setting a bind group:

```ts
// Won't support for now.
passEncoder.setBindGroup(1, dynamicBindGroup, dynamicOffsets);
```