The idea of adding virtual 3D worlds to the web has a long history: In 1994, Dave Raggett proposed a platform-independent 3D scene description based on web technologies as “mechanisms for people to share VR models on a global basis”. Raggett also proposed describing the structure of the virtual world at a high abstraction level, based on the experiences of SGML and HTML. In the same year, Pesce et al. proposed extending “HTML to describe both geometry and space”.
However, the developers of the resulting VRML standard decided against designing VRML as an extension to HTML. As a result, Web3D never took off until Vladimir Vukićević prototyped an OpenGL-based API for the HTML <canvas> element in 2006. This idea finally led to [[WebGL]].
Although WebGL is a great technology and paved the way for accelerated 3D graphics in the browser, it also comes with some disadvantages that are addressed by XML3D: First, it is only loosely coupled with other W3C technologies such as HTML, CSS, DOM scripting and events. As a consequence, web developers need to learn new concepts, and WebGL-based libraries cannot interoperate with HTML and DOM libraries such as jQuery. Second, WebGL is tied to OpenGL and does not take other algorithms and APIs into account. For instance, the definition of pipeline shaders is too specific and too low-level to use for ray tracing or global-illumination algorithms.
Thus, XML3D picks up the idea of a platform-independent 3D scene description at a higher abstraction level on the basis of HTML (and yes, XML3D is a stupid name for that). At the same time, it takes the features of modern graphics APIs into account and exposes data processing and shading capabilities to the user, which can be mapped to the GPU pipeline. Despite these capabilities it can still be rendered with arbitrary rendering algorithms.
We do not expect that browser vendors will implement XML3D natively in the near future. The main motivation of this specification is to enable people to share their 3D assets, including 3D models, materials and animations, and to easily create stunning 3D web applications. Therefore, XML3D is accompanied by a reference implementation: a polyfill that uses WebGL and JavaScript to emulate native XML3D support. As a result, XML3D is fully usable today and various applications already exist.
However, having said that, we expect that a native implementation of XML3D would come with a series of benefits: Browsers could render scenes with advanced rendering algorithms or render the scene, or parts of it, as a service in the cloud. The higher abstraction level gives implementations opportunities to optimize performance and to provide advanced debugging facilities. A native renderer can also reason about the scene's content and its changes; as a result, driving VR applications would be much easier.
This specification defines the features and syntax of XML3D.
XML3D is a language for describing interactive 3D scenes. Although the name may suggest otherwise, XML3D is not an XML language. Instead it is designed as an extension to [[HTML5]]. Consequently, XML3D defines an abstract language and APIs for interacting with the 3D scene. Similar to HTML5, the concrete syntax of XML3D can be either the [[HTML5]] syntax or the [[XML]] syntax.
XML3D reuses concepts from HTML5: [[DOM]] scripting and [[UIEvents]] allow easy creation of interactive 3D scenes. [[CSS2]] can be used to define styling properties for XML3D elements such as transformations or visibility. Hence, XML3D is tightly integrated into the W3C technology stack.
On the other hand, XML3D introduces some novel concepts: a generic data model to define arbitrarily named and typed parameters, a dataflow graph concept to describe data processing within the HTML document, and programmable materials based on JavaScript. These concepts are essential to gain a higher flexibility compared to previous declarative 3D scene descriptions (e.g. VRML or X3D).
XML3D is platform independent, i.e. an XML3D scene description can be rendered using arbitrary rendering algorithms. Thus, XML3D can not only be rendered with various flavours of GPU rasterization (forward rendering, deferred rendering, etc), but also with ray tracing and rendering algorithms taking global illumination into account (e.g. Monte Carlo path tracing). Similar to HTML, XML3D describes what should be rendered rather than how it should be rendered.
XML3D is a lean low-level 3D scene description. Convenience functionality found in other approaches can be implemented on top of XML3D, for instance using scripting and concepts such as [[custom-elements]].
Where possible, XML3D reuses existing web concepts, for instance the <img> element to define texture data, the HTMLElement DOM interface, and the <defs> element from [[SVG]].
The XML3D generic data model provides means to share and compose data between data consumers such as the mesh element, light element, view element, and material element. It is also the basis for the dataflow graph concept and for programmable materials with shade.js, both of which require arbitrarily typed and named parameters.
All data elements output a table of named entries. This table is composed from the element's own value elements and from nested or referenced data elements; entries that appear later in the DOM take precedence over earlier entries with the same name. For example:
<data id="parameter-set"> <!-- generic data container -->
<float3 name="parameter-1">1.0 0.0 0.0</float3>
<float name="parameter-2">0.5</float>
</data>
<material id="material1" model="..."> <!-- parameterized consumer -->
<float3 name="parameter-1">1.0 0.0 0.0</float3>
<float name="parameter-2">0.5</float>
</material>
The generic data model allows arbitrarily long sequences of data to be defined using the key attribute on value elements. This is useful, for example, to define the base data for mesh animations to be driven by Xflow compute operators.
<data id="keyFrameData">
<float3 name="position" key="0" >-5 0 5 ... </float3>
<float3 name="normal" key="0" >0 -1 0 ... </float3>
<float3 name="position" key="0.1" >-2.886751 2.113249 2.886751 ... </float3>
<float3 name="normal" key="0.1">-0.554395 -0.620718 0.554395 ... </float3>
<float3 name="position" key="0.2">-1.341089 4.649148 1.341089 ... </float3>
<float3 name="normal" key="0.2">-0.696886 0.169412 0.696886 ... </float3>
<float3 name="position" key="0.3" >-6.158403 1.408833 6.158403 ... </float3>
<float3 name="normal" key="0.3">-0.141341 -0.979819 0.141341 ... </float3>
...
</data>
The above sequence of data represents the keyframes of a mesh animation. Typically a compute operator would then be used to interpolate between keyframes, or to otherwise map the sequences of data to a definite set of position and normal data to be rendered.
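As a sketch of this, the following mesh consumes the #keyFrameData container defined above; it assumes that the xflow.lerpSeq operator, which also appears in the skinning example later in this document, linearly interpolates a keyed sequence at the given key:
<mesh type="triangles">
    <data compute="position = xflow.lerpSeq(position, key)">
        <data src="#keyFrameData"></data>
        <float name="key">0.15</float>
    </data>
    <int name="index">0 1 2 ...</int>
</mesh>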
External formats can be mapped to the generic data model as well. For example, STL geometry can be mapped to per-vertex positions named position and to per-vertex normals (which require processing) named normal, both of type float3. Then a mesh element can reference STL files directly:
<mesh src="printable-teapot.stl"></mesh>
Note that defining this mapping is possible in xml3d.js via a plug-in. A plug-in for STL files is available here.
<data id="specularTerm">
<float3 name="specularColor">0.1 0.2 0.3</float3>
<float name="ambientIntensity">0.1</float>
<float name="shininess">0.2</float>
<float name="shininess">0.6</float> <!-- shininess is 0.6 according to rule 1 -->
</data>
<data id="diffuseTerm">
<float3 name="diffuseColor">0.1 0.2 0.3</float3>
<float name="ambientIntensity">0.9</float>
</data>
<material id="composedMaterial"> <!-- Composing a material from multiple sources -->
<data src="path/to/external/resource.xml#textureData"></data>
<data src="#specularTerm"></data> <!-- shininess is 0.6 according to rule 4 -->
<data src="#diffuseTerm"></data> <!-- ambientIntensity is 0.9 according to rule 3 -->
</material>
<material id="specializedMaterial">
<data src="#specularTerm"></data> <!-- Reuse data from #specularTerm -->
<float name="shininess">0.4</float> <!-- shininess is 0.4 according to rule 2 -->
</material>
The generic data model defines three filters that may be applied to data through the filter attribute on the data element and the dataflow element.
filter="keep|remove(field1 [, field2...])"
filter="rename( { newFieldName : field [, newFieldName2 : field2...] } )"
A filter does not change the data itself but rather influences how that data is made available to other elements. For example, using a remove filter does not actually remove the data from the element, it simply hides it from parent elements.
A data element may only have one filter; if multiple filters are needed they can be applied with nested data elements, as shown in the last example below.
<data filter="keep(A,D)">
<float name="A" >0</float>
<float name="B" >1</float> <!-- will be removed -->
<float name="C" >1</float> <!-- will be removed -->
<float name="D" >1</float>
</data>
<data filter="remove(D)">
<float name="A" >0</float>
<float name="B" >1</float>
<float name="C" >1</float>
<float name="D" >1</float> <!-- will be removed -->
</data>
<data filter="rename( {A2 : A, B2 : B} )">
<float name="A" >0</float> <!-- will be renamed into A2 -->
<float name="B" >1</float> <!-- will be renamed into B2 -->
<float name="C" >1</float>
<float name="D" >1</float>
</data>
<data filter="keep( {A2 : A, B2 : B} )">
<float name="A" >0</float> <!-- will be renamed into A2 -->
<float name="B" >1</float> <!-- will be renamed into B2 -->
<float name="C" >1</float> <!-- will be removed -->
<float name="D" >1</float> <!-- will be removed -->
</data>
<data filter="rename( {A1 : A, A2 : A, A3: A} )">
<float name="A" >0</float> <!-- will be provided under the names A1, A2 and A3 -->
</data>
XML3D comes with a lean set of scene elements, in particular if compared to other scene descriptions. To describe dynamic effects in the scene, XML3D provides a dataflow graph approach (Xflow) that allows arranging data processing operations as a graph of operations. This way, it is possible to describe complex dynamic data processing from basic building blocks.
Xflow is powerful enough to describe all common dynamic effects usually implemented in fixed-function entities, including skinning, morphing, augmented reality functionality, etc. The basic principle of Xflow is a small addition to the general data model: It allows attaching an operator to a data element using the compute attribute. A data element with such a compute operator attached composes the data table as usual. This data table is the input of the compute operator. The output of the compute operator (i.e. the result) is merged with the original data table. In this merge, entries from the result table override entries with the same name from the input table.
Compute operators can be used to change or create data inside an Xflow graph. They operate like functions, receiving a set of input arguments, doing some work and then outputting the result. On a data element an operator may be invoked through the compute attribute using the following syntax:
compute="(output1 [,output2...]) = xflow.[operatorName]([argument1, ...])"
In a dataflow element one or more compute operators can be invoked inside a compute block using the compute element.
The fields listed as outputs will be added to the list of data that this data element provides. The arguments must be available to the data element that invokes the operator, either as value elements or provided by a child data element. Compute operators can be invoked in sequence by nesting data elements:
<data compute="position = xflow.add(position, offset2)">
<data compute="position = xflow.add(position, offset1)">
<float3 name="position">...</float3>
<float3 name="offset1">...</float3>
</data>
<float3 name="offset2">...</float3>
</data>
XML3D includes several compute operators by default (a complete list may be found here) but also provides an interface to declare custom compute operators. These can be saved in their own JavaScript files and served alongside xml3d.js. Before being used a custom operator must be registered with Xflow by calling Xflow.registerOperator with the following syntax:
Xflow.registerOperator("xflow.myOperatorName", {
outputs: [ {type: 'float3', name: 'outputName', customAlloc=true },
...
],
params: [ {type: 'int', source: 'inputName1', optional=true },
{type: 'float3', source: 'inputName2', array=true },
...
],
alloc: function(sizes [, inputName1, ...]) {
// Only necessary if one or more outputs have the flag 'customAlloc=true'
sizes['outputName'] = inputName2.length;
},
evaluate: function(outputName, inputName1, inputName2, info) {
...
}
});
As long as the custom operator script is included after the xml3d.js script in the document flow, the Xflow.registerOperator function can (and should) be called immediately. This is typical behavior for a JavaScript plugin architecture and ensures the operators have been registered before XML3D initializes the scene (during the document.onload event).
In essence, the declaration of a compute operator must contain at least a list of input and output fields, including types, and an evaluate function that is called by Xflow during data processing. Input fields may be marked as optional; otherwise, a missing input will generate an error.
Output fields may be allocated with a custom size, indicated with the customAlloc flag. When this flag is present Xflow will call the alloc function, which should declare the sizes of the data arrays that Xflow needs to create for these fields. If a field is not marked with customAlloc then Xflow will attempt to choose the right size based on the inputs to the compute operator.
Fields marked with the array flag will be provided to the evaluate function as is and will exclude them from the normal length-matching checks that Xflow performs on input arguments. This can be used, for example, to pass an array of data with 100 elements while the other input fields all contain thousands of elements. Normally this would generate an error as Xflow would not be able to properly iterate through the data.
The evaluate function will always be called with the list of output fields first, then the input fields, then an info object supplied by Xflow. The info object contains information about how the data can be iterated and offers a place to store data during processing:
info = {
iterFlag: [true|false,...], // Is the input at this position an array that should be iterated over or a single element?
iterateCount: number, // The number of elements in the input data to iterate over, i.e. (input array length) / (tuple size)
customData: {} // A field to hold custom data during and between operator executions
}
Inputs marked with the array flag will always have an iterFlag value of false.
The following is an example operator that uses the info object to iterate over a set of positions, adding a constant offset and returning the result as a new array of positions:
Xflow.registerOperator("xflow.addOffset", {
outputs: [ {type: 'float3', name: 'result' }
],
params: [ {type: 'float3', source: 'position' },
{type: 'float3', source: 'offset', array=true }
],
evaluate: function(result, position, offset, info) {
// In this example 'offset' is an array with 3 values (a single float3)
// 'position' is an array containing thousands of values
for (var i=0; i < info.iterateCount; i++) {
result[i*3] = position[ info.iterFlag[0] ? i*3 : 0 ] + offset[0];
result[i*3+1] = position[ info.iterFlag[0] ? i*3+1 : 1 ] + offset[1];
result[i*3+1] = position[ info.iterFlag[0] ? i*3+2 : 2 ] + offset[2];
}
}
}
In addition to the display and transform properties described below, XML3D elements also support the visible and pointer-events properties.
The display property

| | |
|---|---|
| Value: | inline \| block \| list-item \| inline-block \| table \| inline-table \| table-row-group \| table-header-group \| table-footer-group \| table-row \| table-column-group \| table-column \| table-cell \| table-caption \| none \| inherit |
| Initial: | inherit |
| Applies to: | Scene elements |
For XML3D elements, the value none will hide the element and all descendant elements regardless of their local properties, as it does in HTML. Here are some examples for the display property:
mesh.hidden { display: none } /* do not display meshes with hidden class */
xml3d > * > * > * > * model { display: none; } /* Hide all models deeper than the fifth hierarchy level */
Modifying the display property with jQuery:
$("#myMesh").hide();
$(".walls").toggle();
The transform property

| | |
|---|---|
| Values: | translateX \| translateY \| translateZ \| translate3d \| rotateX \| rotateY \| rotateZ \| rotate3d \| scaleX \| scaleY \| scaleZ \| scale3d \| matrix3d |
| Initial: | identity matrix |
| Applies to: | Transformable elements |
[[CSS3-transforms]] can be used to specify 3D transformations for any transformable element. The transform property may hold any number and combination of values, which will be combined from left to right. Transformations also apply to all descendant scene elements, building a transformation hierarchy.
Translation values (e.g. in translate3d) require a unit of measurement to be valid. Because browsers do not yet support units that make sense for a 3D scene, these values should be given in 'px', for example translate3d(10px, 5px, 0px). Internally these transformations will of course be interpreted in the units that the scene uses.
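As an illustrative sketch (the #myMeshData reference stands for any mesh data element), the outer group combines two transform values left to right, and the inner group's scale builds a transformation hierarchy on top of it; the mesh's effective transformation combines its ancestors' transformations with its own:
<group style="transform: translate3d(0px, 2px, 0px) rotateY(90deg);">
    <group style="transform: scale3d(2, 2, 2);">
        <mesh src="#myMeshData" type="triangles"></mesh>
    </group>
</group>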
The transformable elements are assetmesh, group, mesh, model, light and view.
Transformable elements are those that are able to be transformed, either through CSS3 transforms or the transform element. Not all transformable elements can be nested but when they are they build a transformation hierarchy, with the transformation matrix of each node being defined as its own local transformation matrix multiplied with the transformation matrix of its parent element.
A third possibility for defining a transformation is to reference a data element through the transform attribute instead of a transform element. This data element may either contain a single float4x4 element with name transform or use a compute operator to generate the transformation matrix.
If both a CSS transform and a transform attribute are given, the CSS transform will take precedence.
The following example shows four different ways of defining the same transformation on a group element.
<!-- Using a CSS3 transform -->
<group style="transform: translate3d(0px, 0px, 10px)"></group>
<!-- Using a transform element -->
<transform id="myTransformElement" translation="0 0 10"></transform>
<group transform="#myTransformElement"></group>
<!-- Giving the transformation matrix directly -->
<data id="myDataTransform">
<float4x4 name="transform">1 0 0 0 0 1 0 0 0 0 1 10 0 0 0 1</float4x4>
</data>
<group transform="#myDataTransform"></group>
<!-- Computing the transformation matrix with an Xflow operator -->
<data id="myComputeTransform" compute="transform = xflow.createTransform(translation)">
<float3 name="translation">0 0 10</float3>
</data>
<group transform="#myComputeTransform"></group>
The following elements can contain generic data: assetdata, data, dataflow, mesh, material, model and light.
Data elements are the non-leaf nodes of an Xflow graph. They may contain any combination of data elements and value elements. They may also reference other data elements through the src attribute using a standard HTML URI.
The ultimate function of a graph of data elements is to provide data to a "sink". Some data sinks in XML3D include the mesh element, the material element and the projection attribute of the view element.
The ability to reference data elements makes it possible to share a common set of data between many different sinks. This saves memory and increases performance, as the data is also shared internally whenever possible. For example, a set of vertex positions may be shared between many instances of the same mesh using a different set of face indices each time. Internally these meshes will also share a common WebGL vertex position buffer:
<data id="shared_positions">
<float3 name="position"> 1.0 0.0 0.0 0.5 1.0 1.0 ... </float3>
</data>
<mesh>
<data src="#shared_positions"></data>
<int name="index"> 0 1 2 3 4 5 ... </int>
</mesh>
<mesh>
<data src="#shared_positions"></data>
<int name="index"> 3 4 5 0 1 2 ... </int>
</mesh>
If two data elements containing fields with identical names are present in an Xflow data graph, then the outermost value will replace any value nested deeper within the graph. In this example the value of the color field will be 1.0 0.0 0.0 when referencing #my-data:
<data id="my-data">
<float3 name="color">1.0 0.0 0.0</float3> <!-- Overrides the nested color -->
<data>
<float3 name="color">1.0 1.0 1.0</float3>
</data>
</data>
The value elements are float, float2, float3, float4, float4x4, int, int4, bool and texture.
Value elements are the leaf nodes of an Xflow graph. They may not be nested and may not contain any non-text child elements. Data should be provided as a text node containing a space-separated list of values. The tag name determines how this data is interpreted:
<bool>1 0 0</bool> <!-- an array of three boolean values -->
<float3>1 0 0</float3> <!-- a single three-dimensional floating point vector -->
The name attribute of a value element acts as an ID for the data contained in this element. It may be referenced in Xflow operators or in material shaders.
If two value elements with the same name are present inside a data element then the value appearing later in the DOM will be used. In this example the value of the color field will be 1.0 1.0 1.0 when referencing #my-data:
<data id="my-data">
<float3 name="color">1.0 0.0 0.0</float3>
<float3 name="color">1.0 1.0 1.0</float3> <!-- Overrides the previous color -->
</data>
The pickable elements are mesh and model.

Pickable elements are the drawable geometries of a scene. These elements can trigger mouse events like most visible HTML elements do; we call this picking. The list of available mouse event listeners is described in the Events section.
Mouse events will also bubble up through the scene hierarchy, which allows mouse event listeners to also be placed on the group element and the xml3d element. Listeners on these elements can only be triggered by a pickable element (model or mesh) in the element's subtree.
When the user interacts with an object on the canvas (eg. clicks on it) the relevant MouseEvent will be generated on the mesh element or model element and then bubbled up the scene hierarchy. This will continue until the event reaches the xml3d element, or until event.stopPropagation() is called.
In the following example, when clicking on the object in the scene corresponding to this mesh element both listeners will be triggered in the appropriate order:
<group onmousedown="myMouseDownListener(event)">
<group>
<mesh onmouseup="myMouseUpListener(event)" type="triangles"></mesh>
</group>
</group>
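Listeners can equally be attached through DOM scripting; a minimal sketch (the id myMesh and the log message are illustrative only):
var mesh = document.getElementById("myMesh");
mesh.addEventListener("click", function (event) {
    console.log("Picked: " + event.target.nodeName);
    event.stopPropagation(); // keep the event from bubbling up to group/xml3d listeners
});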
The xml3d element

The xml3d element is the root element of an XML3D scene. It will create a canvas element at this position in the DOM to display the rendered scene. CSS styles and event listeners from the xml3d element are also applied to the canvas, allowing for mouse interaction with the XML3D scene.
A page may have more than one xml3d element; in this case multiple canvases with their own WebGL contexts will be created. It is also possible to share data elements between scenes, in which case XML3D will automatically create the necessary WebGL buffers for each context.
Because XML3D uses HTML ids to reference elements it is important to avoid duplicate ids. For example, a duplicate id for a data element that appears in two different XML3D scenes on the same page may lead to undefined behavior.
The view attribute, if present, must contain a selector that returns a view element as first matching element using querySelector on the xml3d element. The selection mechanism is described in the Selectors API [[selectors-api2]]. If the attribute is not present, or if the selector does not return a valid view element, the selector view is used instead, returning the first view in the scene. If no view is available, the system must append a view element as first child of the xml3d element.
The background color of the 3D canvas may be set through CSS using the background-color property on the xml3d element.
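A minimal sketch combining both features described above (the id mainCamera is illustrative):
<xml3d view="#mainCamera" style="background-color: lightblue;">
    <view id="mainCamera"></view>
    <!-- ... scene content ... -->
</xml3d>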
A load event is dispatched on the xml3d element once it has finished loading. This event is fired once after initial loading of the scene is complete, including all external resources such as textures or external models. It will not be fired again if subsequent changes to the scene cause more resources to be loaded. When adding a listener for this event through JavaScript it may be necessary to check the status of the complete attribute beforehand, as the load event will not be resent if it was already dispatched before the listener was registered.
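A sketch of registering for this event, assuming the complete attribute is exposed on the element's DOM interface (onSceneReady is a placeholder for application code):
var scene = document.querySelector("xml3d");
if (scene.complete) {
    onSceneReady();                               // scene already finished loading
} else {
    scene.addEventListener("load", onSceneReady); // wait for the load event
}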
Unlike getElementByPoint, this function is not dependent on the currently active view. This makes it useful, for example, for finding the surface normal of an object at a particular point regardless of whether or not that object is currently visible to the active camera.
The group element

The group element is a non-leaf node in an XML3D scene tree. Groups can be used to build transformation hierarchies and to group renderable objects together. Groups can be nested, but because they are not Xflow elements they may not hold any data elements and cannot be referenced by other group elements. This means that, unlike data elements, groups build a tree structure rather than a graph. Each group may only have a single parent.
Group elements inherit properties such as transformations, visibility and materials from their parent group.
<group style="display: none;" material="#blueMaterial">
<group style="display: block;">
<!-- This mesh will not be visible but will inherit #blueMaterial -->
<mesh type="triangles"></mesh>
</group>
</group>
The view element

The view element defines a viewport into the scene. The view (or camera) model that defines the projection of the scene to the XML3D canvas is defined by the model attribute. The view uses the generic data model to define the parameters of the referenced view model (aka intrinsic camera parameters). The coordinate system of the view element defines the coordinate system for the view (aka extrinsic camera parameters).
The view model used by the view is defined by its model attribute. If the model attribute is present, it must contain a valid non-empty URN referencing one of the predefined view models. If the URN is empty or references an unknown view model, or if the model attribute is not present, the perspective view model is used.
This example illustrates the use of a view based on the predefined perspective view model using its default parameters:
<view></view>
It defines a perspective frustum that follows the right-handed convention and points along the negative z-axis. In the following example, the default direction is altered using CSS transformations. Additionally, the perspective frustum has a different vertical field of view:
<view style="transform: rotate3d(0, 1, 0, 180deg);">
<float name="fovVertical">0.5</float>
</view>
This example illustrates the use of a projective view model using a custom projection matrix:
<view model="urn:xml3d:view:projective">
<float4x4 name="projectionMatrix">1.4485281705856323 0 0 0 0 2.4142134189605713 0 0 0 0 -9.523809432983398 -1 0 0 -94.18809509277344 0</float4x4>
</view>
The model IDL attribute must reflect the model content attribute.
The light element

The light element defines a light source in the scene that emits light based on the light model defined by the model attribute. The light uses the generic data model to define the parameters of the referenced light model. The coordinate system of the light element defines the base coordinate system for the light. However, the final position and direction of the light source can be altered by specific parameters of the light model.
A light affects all geometry elements within the same scene, i.e. with the same xml3d element as ancestor.
The light model used by the light is defined by the model attribute. If the model attribute is present, it must contain a valid non-empty URN referencing one of the predefined light models. If the URN is empty or references an unknown light model, or if the model attribute is not present, the directional light model is used.
This example illustrates the use of a single light source based on the predefined directional light model using its default parameters:
<light></light>
Since no direction is specified for the light source, the default direction 0 0 -1 (along the negative z-axis) is transformed by the global coordinate system of the light element.
This example illustrates the use of a single light source based on the predefined point light model:
<light model="urn:xml3d:light:point">
<float3 name="intensity">0.8 0.8 1</float3>
</light>
The model IDL attribute must reflect the model content attribute.
The mesh element

The mesh element represents a single renderable object in the scene. To be drawn correctly a mesh must either inherit a material from a parent element or assign its own through the material attribute. The type attribute determines how the mesh data is interpreted to be drawn and must be one of the predefined primitive types.
The simplest way to define a mesh is to include its data directly in the mesh element:
<mesh type="triangles" material="#myMaterial">
<int name="index">0 1 2 ... </int>
<float3 name="position">1.0 0.0 0.0 ... </float3>
<float3 name="normal">0.0 1.0 0.0 ... </float3>
</mesh>
However it's usually a good idea to reference this data instead, either in the same document or in an external document as shown below:
<!-- myDataElement is the id of a data element containing the mesh data -->
<mesh src="myMesh.xml#myDataElement" type="triangles" material="#myMaterial"></mesh>
Each entry in the mesh data is passed on to the material shader in the form of a vertex attribute. A mesh must always supply at least a position entry, any others are optional but may be required by a material in order to be rendered properly (eg. normal in conjunction with the predefined phong material).
As with any data element, a mesh may override certain entries or supply its own. This applies even to material entries:
<!-- This mesh will be rendered with a blue diffuseColor even though the material specifies a red one -->
<mesh src="myMesh.xml#myDataElement" type="triangles" material="#myRedMaterial">
<float3 name="diffuseColor">0.0 0.0 1.0</float3>
</mesh>
The mesh type can also be set through the generic data model by setting the type attribute to derived and supplying a string entry named type:
<!-- The mesh data will be interpreted as lines -->
<mesh src="teapot.json" type="derived">
<string name="type">lines</string>
</mesh>
Different ways of assigning transformations to meshes are described in the transformable elements section.
The valid values for the type attribute are triangles, tristrips, points, lines, linestrips and derived.

The mesh element's DOM interface also exposes the names of the data fields it provides (e.g. ["position", "index", "normal"]). This is useful for accessing the mesh data directly through JavaScript.

The model element

The model element is used to instantiate an asset. This is useful for rendering complex objects with many individual meshes or materials. Not only is it easier to insert a single model element into the DOM, it's also much more efficient.
When referencing an external file the URI must contain the id of the asset element to be instantiated:
<model src="myExternalAsset.xml#myAsset"></model>
A model may override data inside the asset by specifying the assetmesh element or assetdata element that should be overwritten. For example, to change the material of the assetmesh named "hat" inside the asset, we would define our model tag as follows:
<model src="myExternalAsset.xml#myAsset">
<assetmesh name="hat" material="#aNewMaterialDefinedLocally"></assetmesh>
</model>
Materials defined inside an asset take precedence over a material inherited from the model. To assign a material from outside the asset, either remove all inner materials entirely or override each assetmesh individually as shown above.
Each model element can have its own animation state, even if several models reference the same asset. Typically this is done by exposing the animation key through its own assetdata element, which is then overwritten in the model:
<!-- Assuming "myAsset" contains an assetdata element with name "animation" -->
<model src="myExternalAsset.xml#myAsset">
<assetdata name="animation">
<float id="animation_key" name="key">1.0</float>
</assetdata>
</model>
By changing the value of the animation_key through JavaScript we can now control the model's animation state.
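A sketch of such a script, assuming the animation_key override from the example above and that value elements pick up changes to their text content:
var key = document.getElementById("animation_key");
function setAnimationKey(value) {
    key.textContent = value; // updating the text re-evaluates the dependent Xflow graph
}
setAnimationKey(0.25);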
The src attribute references the asset that the model element should instantiate.
The defs element

The defs element is simply an organizational tool to separate the scene tree from elements that are not explicitly part of the scene, but may be referenced by elements that are. These implicit elements include transform, material, data and dataflow. Note that any of these elements may appear inside the scene tree as well, it's just good practice to keep them in the defs section whenever possible.
Elements that belong to the scene tree rather than the defs section are group, mesh, model, view and light. Note that these elements will be ignored if they are placed inside the defs section, since it is not considered part of the scene tree. data elements usually belong in the defs section but may also be part of the scene tree if contained by a mesh element.
<xml3d>
<defs>
<transform id="myTransform" rotation="0 1 0 0.75">
<data id="myMeshData" >
<float3 name="position">1.0 0.0 0.0 ...</float3>
</data>
</defs>
<group transform="#myTransform">
<mesh src="#myMeshData" type="triangles"></mesh>
</group>
</xml3d>
The material element

A material describes the surface shading of an object. Materials are defined using the material element and then referenced by the material attribute on a given scene element to indicate that the given element shall be shaded using the referenced material. Multiple scene elements can share a material. The material uses the generic data model to define the parameters of the referenced material model. Note that graphics elements can override the parameters defined in the material element. Hence, the parameters in the material element can be considered default values.
The material model used by the material is defined by its model attribute. The model attribute must be present and must contain a valid non-empty URL referencing either a predefined material model or a scripted material model, e.g. using shade.js or a custom shader.
Here is a simple example for a material based on the predefined phong material model:
<material model="urn:xml3d:material:phong">
<float3 name="diffuseColor">0 0 1</float3>
<texture name="diffuseTexture">
<img src="../stone.jpg"/>
</texture>
</material>
The model IDL attribute must reflect the model content attribute.
The transform element

In addition to CSS3 transformations applied through the style attribute, the transform element provides another way to define transformations for transformable elements. The various transformation components are combined into a transformation matrix which is then applied to the element or elements referencing this transform element.
Transform elements are generally placed into the defs section of a scene; however, it's possible to define them anywhere inside the xml3d element. No matter where a transform element is defined, it must be referenced by its id from the transform attribute of a transformable element to be used. A single transform element can be referenced by multiple other elements.
The XML3D asset format is designed to encompass everything needed to define a complex model consisting of one or more meshes and materials. Conceptually an asset is designed to be static and self-contained. When referenced from a model element an asset behaves as a single object in the scene, even though it may be composed of many different meshes. Interaction through mouse event listeners, for example, can only be done on the model level and not on the level of individual meshes comprising the asset.
As with other Xflow elements most parts of the asset (eg. materials, mesh data, transformations) can be overridden inside the model element. However it's important to note that adding or removing overrides for assets has a very high performance penalty. Best practice is to define all the necessary overrides during construction of the model element and then stick to changing the values of those overrides, which carries no performance penalty. See the XML3D Wiki for more information on how to override asset data.
It's important to note that unlike a group and mesh hierarchy, assets are always flattened. An asset always consists of an asset element with a list of assetmesh elements as children, which cannot be nested. Asset elements themselves, on the other hand, can be nested.
The asset element

The asset element defines an asset that can be instantiated through a model element or extended by another asset. The asset element also defines a scope for the name and includes attributes of any child assetdata and assetmesh elements.
Asset elements may be nested but their id must be unique within the document.
The assetdata element

Similar to the data element, an assetdata element is used to define and share generic data within an asset. Unlike data elements, assetdata elements may not be nested and are named by and referenced through a name attribute rather than an id. An assetdata element may contain normal data elements as children.
Assetdata names must be unique within an asset element.
The assetmesh element

An assetmesh element represents a single drawable mesh in the asset and works similarly to the mesh element. Unlike the mesh element, assetmesh elements are identified by their name attribute, which may not be duplicated within the same asset element.

As with the mesh element, a transformation may be supplied either through the transform attribute or through a CSS3 transform.
<assetmesh name="exampleMesh" style="transform: translate3d(0px, 0px, 10px)" type="triangles" material="#exampleMaterial">
<data src="#myMeshData"></data>
<assetdata src="#someMoreMeshData"></assetdata>
</assetmesh>
The name attribute identifies the assetmesh element. The name is scoped to the surrounding asset element and may not be duplicated within this scope.
The includes attribute references assetmesh or assetdata elements that this one should extend.
The type attribute determines the primitive type of the assetmesh. Supported values are triangles, tristrips, points, lines, linestrips, and derived.

The data element

A data element is a non-leaf node in an Xflow graph. Data elements act primarily as containers for data but may also modify that data through compute operators and filters. When data elements are nested, the data from all child elements is merged; in this sense the parent data element acts as a data aggregator. In the case of two data fields with the same name, the data element further down the list in the DOM will have priority. For example:
<data> <!-- At this level "color" will be "0.0 1.0 0.0" -->
<data>
<float3 name="color">1.0 0.0 0.0</float3>
</data>
<data>
<float3 name="color">0.0 1.0 0.0</float3>
</data>
</data>
Data elements may also reference other data elements. This can be used to share a common dataset between objects, overwriting certain fields on a per-object basis as required. See the data elements section for an example.
One important use for data elements is to dynamically change or generate data, for example to drive an animation or generate a ground mesh from a height map. This can be accomplished with a combination of compute operators and data overrides. The following is an example of a simple compute operator that will add the provided offset to all vertex positions of a mesh:
<mesh>
<data compute="position = xflow.add(position, offset)">
<float3 name="offset">0.5 0.5 0.5 ... </float3>
<data id="originalData">
<float3 name="position">1.0 0.0 0.0 ... </float3>
</data>
</data>
</mesh>
When using a compute operator with a data element all input arguments must be available to the data element that invokes the operator. In this example the "position" field of the mesh will contain the offset position data, while the data element with id originalData will contain the original positions. If this data element were referenced from another mesh it would also return the original positions:
<mesh>
<!-- "positions" contains the original data 1.0 0.0 0.0 ... -->
<data src="#originalData"></data>
</mesh>
Xflow is designed as a reactive framework, meaning operators will only be recomputed if input data has changed and a sink element has requested the output data (eg. during a draw call in a subsequent frame).
The filter attribute may contain one of the three filters (keep, rename or remove) to adjust the data that is provided by this data element. See the Wiki page How to use Xflow for more information.

If the src attribute is present, the element's own children are ignored and the data is taken from the referenced src node.

The data element's DOM interface also exposes the names of the data fields it provides (e.g. ["position", "index", "normal"]). This is useful for accessing the data directly through JavaScript.

The dataflow element

The dataflow element can be thought of as a template for a compute operation consisting of one or more Xflow operators executed in sequence. This template can be defined once and then reused many times in the scene, applying the operations to a different set of input data each time. Consider the following dataflow example which computes skeletal animation for a mesh:
<dataflow id="skinning" out="position, normal, boneXform">
<float3 param="true" name="position" ></float3>
<float3 param="true" name="normal" ></float3>
<int4 param="true" name="boneIdx" ></int4>
<float4 param="true" name="boneWeight" ></float4>
<int param="true" name="boneParent" ></int>
<float3 param="true" name="bindTranslation" ></float3>
<float4 param="true" name="bindRotation" ></float4>
<float3 param="true" name="translation" ></float3>
<float4 param="true" name="rotation" ></float4>
<float param="true" name="key" >0</float>
<compute>
bindPose = xflow.createTransformInv({translation: bindTranslation, rotation: bindRotation});
bindPose = xflow.forwardKinematicsInv(boneParent, bindPose);
rot = xflow.slerpSeq(rotation, key);
trans = xflow.lerpSeq(translation, key);
pose = xflow.createTransform({translation: trans, rotation: rot});
pose = xflow.forwardKinematics(boneParent, pose);
boneXform = xflow.mul(bindPose, pose);
normal = xflow.skinDirection(normal, boneIdx, boneWeight, boneXform);
position = xflow.skinPosition(position, boneIdx, boneWeight, boneXform);
</compute>
</dataflow>
By defining the param attribute of the various value elements we instruct Xflow to expect them as inputs provided by any element that references this dataflow. The compute element is only found inside dataflows and can be used to define a sequence of Xflow operators that should be applied to the input data. The list of operators will be computed from top to bottom and any new data fields they create (ie. bindPose in this example) can be used as input for operators further down the list.
To apply this dataflow to a set of data another data element may reference it in its own compute block or attribute:
<data compute="position, normal = dataflow['#skinning']">
<!-- We assume this file contains all the input data that the 'skinning' dataflow expects -->
<data src="myMeshData.xml"></data>
<float id="myAnimationKey" name="key">1.0<float>
</data>
Conceptually this data element will 'call' the dataflow element with the input 'arguments' from the file myMeshData.xml and then assign the output of the dataflow to the position and normal fields, effectively overriding the ones found in myMeshData.xml. Note the URI fragment inside the dataflow[] construct. This may also reference an external document containing the dataflow.
By declaring the key value separately we can control the animation state of this model. Note also that the key is declared after the reference to myMeshData.xml to ensure that it overrides the key value found in the xml file.
["position", "index", "normal"]. This is useful for accessing the data directly through JavaScript.float, float2, float3, float4, and float4x4 elementsThe float* elements hold a space separated list of floating point values. The tag name determines how this data is interpreted, ie. a float2 element will interpret the data as an array of 2D vectors while a float4x4 element will interpret it as an array of 4x4 matrices.
Value elements with the param attribute are the only ones that may be empty; all others must contain data. See the dataflow element for example usage.
The int and int4 elements

The int* elements hold a space-separated list of integer values. The tag name determines how this data is interpreted, i.e. an int4 element will interpret the data as an array of 4-component integer vectors.
Value elements with the param attribute are the only ones that may be empty; all others must contain data. See the dataflow element for example usage.
The bool element

The bool element holds a space-separated list of boolean values. The values may be given in string form (true/false) or as integers (1/0).
Value elements with the param attribute are the only ones that may be empty; all others must contain data. See the dataflow element for example usage.
The string element

The string element holds a comma-separated list of string values.

Currently, certain string attributes, such as the type attribute of the mesh element, may be supplied by including a string element with the matching name.
Below is an example using a custom Xflow operator to change the type attribute of a mesh element:
<!-- The custom xflow operator will output a string field named 'type' -->
<data id="meshTypeCompute" compute="type = xflow.selectString(selector, value1, value2)">
<string name="value1">triangles</string>
<string name="value2">lines</string>
<int name="selector">2</int>
</data>
<!-- In this case type will evaluate to 'lines' -->
<mesh src="#meshdata" type="derived">
<data src="#meshTypeCompute"></data>
</mesh>
Value elements with the param attribute are the only ones that may be empty; all others must contain data. See the dataflow element for example usage.
The texture element

A texture element defines texture data to be sampled by a material; the image data is provided by child img, video, or canvas elements. Texture sampling attributes configure fixed-function sampling methods on the graphics hardware and would thus qualify as CSS properties. We have abstained from using CSS properties here because we currently cannot define custom CSS properties.
The wrap attribute is a combined enumerated attribute. A valid wrap value is a string that matches the wrap production of the following form:
wrap := <wrap-mode> <wrap-mode>?
wrap-mode := repeat | clamp
If two wrap-mode values are given, the first value defines the wrap mode for s coordinates and the second for t coordinates. Otherwise the wrap mode is applied in all directions. The wrap-mode values clamp and repeat correspond to CLAMP_TO_EDGE and REPEAT in OpenGL, respectively.
The filter attribute is a combined enumerated attribute. A valid filter value is a string that matches the filter production of the following form:
filter := (<min-filter-mode> <mag-filter-mode>) | <mag-filter-mode>
min-filter-mode := nearest | linear | nearest-mipmap-nearest | nearest-mipmap-linear | linear-mipmap-nearest | linear-mipmap-linear
mag-filter-mode := nearest | linear
If only the mag-filter-mode is given, the specified function is used for both minification and magnification. Otherwise, the min-filter-mode is used for minification and the mag-filter-mode is used for magnification. The functions specified by the filter modes correspond to those in OpenGL (NEAREST, LINEAR, NEAREST_MIPMAP_NEAREST, NEAREST_MIPMAP_LINEAR, LINEAR_MIPMAP_NEAREST, and LINEAR_MIPMAP_LINEAR). The default filter mode is "linear-mipmap-linear linear".
xml3d.js will automatically resize textures to the nearest power-of-two dimensions when the texture wrap mode is set to "repeat" or the min-filter-mode is set to anything other than "nearest" or "linear". See WebGL limitations for more information.
Here is an example of using the wrap and filter attributes to configure the sampling of a texture:
<texture name="diffuseTexture" wrap="repeat clamp" filter="nearest linear">
<img src="../stone.jpg"/>
</texture>
The type attribute is an enumerated attribute with four states and three explicit keywords.
The type attribute's missing value default is the auto state.
The wrap IDL attribute must reflect the content attribute of the same name. The filter IDL attribute must reflect the content attribute of the same name. The type IDL attribute must reflect the content attribute of the same name, limited to only known values.

Attribute name: triangles
WebGL primitive: TRIANGLES
| Name | Type | Description |
|---|---|---|
| index | int | A list of indices to build triangles out of. |
The triangle primitive type renders faces out of sets of 3 vertices. Triangles may be constructed with or without an array of indices. If no indices are provided XML3D will construct the triangles from the array of vertex positions: the first 3 will create the first triangle, the next 3 the second and so on.
<mesh type="triangles">
<int name="index">0 1 2 1 3 2 ... </int>
<float3 name="position">-1 -1 1 1 -1 1 -1 1 1 1 1 1 ... </float3>
</mesh>
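For comparison, a sketch of the first two triangles of the indexed example above written without an index; every three consecutive positions form one triangle:
<mesh type="triangles">
    <float3 name="position">-1 -1 1  1 -1 1  -1 1 1  1 -1 1  1 1 1  -1 1 1</float3>
</mesh>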
Attribute name: tristrips
WebGL primitive: TRIANGLE_STRIP
| Name | Type | Description |
|---|---|---|
| index | int | A list of indices to build tristrips out of. |
| vertexCount | int | The number of vertices or indices to use for each tristrip segment. |
The tristrip primitive type creates triangles from a list of vertex positions and (optionally) a list of segments and/or indices. Each segment begins by building a triangle out of 3 vertex positions. Each subsequent triangle in the segment is then created from the last two vertex positions and the next one in the list. Note that this creates a sequence of connected triangles.
Segments can be used to create disconnected sets of triangles by providing a list of integers with the name vertexCount. Each number in the list specifies the number of vertex positions to use for that segment. XML3D will then work through the list of vertex positions sequentially building a tristrip for each segment.
<mesh type="tristrips">
<int name="vertexCount">4 4 4 4 4 4</int>
<float3 name="position">-1 -1 1 1 -1 1 -1 1 1 1 1 1 ... </float3>
</mesh>
Note that because the first triangle in a segment requires 3 vertex positions to define, a segment with vertex count 4 will create two triangles, while vertex count 5 will create 3 and so on.
Attribute name: lines
WebGL primitive: LINES
| Name | Type | Description |
|---|---|---|
| index | int | A list of indices to build lines out of. |
Lines are drawn from pairs of vertex positions and (optionally) a list of indices.
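A minimal sketch: two separate lines, each built from a pair of indexed vertex positions.
<mesh type="lines">
    <int name="index">0 1 2 3</int>
    <float3 name="position">-1 0 0  1 0 0  0 -1 0  0 1 0</float3>
</mesh>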
Attribute name: linestrips
WebGL primitive: LINE_STRIP
| Name | Type | Description |
|---|---|---|
| index | int | A list of indices to build linestrips out of. |
| vertexCount | int | The number of vertices or indices to use for each linestrip segment. |
A linestrip is drawn from a list of vertex positions and (optionally) a list of segments and/or indices. For each segment a line is drawn between the first vertex and the second, then the second and the third and so on. This creates a continuous line.
<mesh type="linestrips">
<int name="vertexCount">4 2</int>
<int name="index">0 1 2 3 1 3</int>
<float3 name="position">-1 -1 1 1 -1 1 -1 1 1 1 1 1 ... </float3>
</mesh>
The above example will create two line segments, the first using vertices 0, 1, 2, 3 and the second using vertices 1 and 3.
Attribute name: points
WebGL primitive: POINTS
Points are drawn from a list of vertex positions, with each position being drawn as a single point.
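A minimal sketch: three positions, each rendered as a single point.
<mesh type="points">
    <float3 name="position">0 0 0  1 0 0  0 1 0</float3>
</mesh>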
Attribute name: derived
The special primitive type derived delegates the evaluation of the primitive type to the generic data model. The requested parameter has the name type. The contained value needs to match one of the primitive types above.
<mesh type="derived">
<string name="type">triangles</string>
...
</mesh>
URN: urn:xml3d:material:matte
| Name | Type | Default | Description |
|---|---|---|---|
| diffuseColor | float3 | 1 1 1 | The object's RGB color |
| useVertexColor | bool | false | if true, the vertex attribute 'color' is used to color the object. |
A simple material that does not apply any lighting but shades the object with a single uniform color defined by the diffuseColor parameter, or by the vertex attribute color if useVertexColor is set to 'true'.
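A sketch of a matte material using the parameters above (the id redMatte is illustrative):
<material id="redMatte" model="urn:xml3d:material:matte">
    <float3 name="diffuseColor">1 0 0</float3>
</material>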
URN: urn:xml3d:material:diffuse
| Name | Type | Default | Description |
|---|---|---|---|
| diffuseColor | float3 | 1 1 1 | The object's RGB diffuse color component. |
| diffuseTexture | texture | undefined | Texture to read the diffuse color component and opacity (alpha) from. Accessed based on the texcoord per-vertex attribute. If diffuseTexture is defined, the rgb channel of the diffuseTexture gets multiplied with the current diffuseColor and the alpha channel gets multiplied with the current opacity. |
| emissiveColor | float3 | 0 0 0 | The object's RGB emissive color component. |
| emissiveTexture | texture | undefined | Texture to read the emissive color component from. Accessed based on texcoord per-vertex attribute. If emissiveTexture is defined, the emissiveColor gets multiplied with the color accessed from the texture. |
| ambientIntensity | float | 0 | The amount of the 'diffuseColor' to be added to the shading without considering lighting. |
| opacity | float | 1 | The opacity of the object, with 1 being opaque and 0 being fully transparent. |
| useVertexColor | bool | false | If set to 'true', the vertex attribute color will be multiplied with the diffuse color component (before the diffuseTexture gets applied). |
The diffuse material model describes a diffuse surface that reflects light equally in all directions. Additionally, the surface has an optional emissive and ambient component. This is the logic of the diffuse material model in JavaScript/shade.js pseudo code:
function shade(env) {
var diffuseColor = env.diffuseColor || new Vec3(1, 1, 1);
var emissiveColor = env.emissiveColor || new Vec3(0);
var opacity = env.opacity !== undefined ? env.opacity : 1.0;
if (env.useVertexColor && env.color) {
diffuseColor *= new Vec3(env.color);
}
if (env.diffuseTexture && env.diffuseTexture.sample2D) {
var texDiffuse = env.diffuseTexture.sample2D(env.texcoord);
diffuseColor *= texDiffuse.rgb();
opacity *= texDiffuse.a();
}
if (env.emissiveTexture && env.emissiveTexture.sample2D) {
var texEmissive = env.emissiveTexture.sample2D(env.texcoord);
emissiveColor *= texEmissive.rgb();
}
return Shade.diffuse(diffuseColor, env.normal)
.transparent(1.0 - opacity)
.emissive(emissiveColor);
}
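A sketch of a diffuse material combining several of the parameters above (the id greenDiffuse and the texture path are illustrative):
<material id="greenDiffuse" model="urn:xml3d:material:diffuse">
    <float3 name="diffuseColor">0.2 0.8 0.2</float3>
    <float name="opacity">0.8</float>
    <texture name="diffuseTexture">
        <img src="../stone.jpg"/>
    </texture>
</material>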
URN: urn:xml3d:material:phong
| Name | Type | Default | Description |
|---|---|---|---|
| diffuseColor | float3 | 1 1 1 | The object's RGB diffuse color component. |
| diffuseTexture | texture | undefined | Texture to read the diffuse color component and opacity (alpha) from. Accessed based on the texcoord per-vertex attribute. If diffuseTexture is defined, the rgb channel of the diffuseTexture gets multiplied with the current diffuseColor and the alpha channel gets multiplied with the current opacity. |
| specularColor | float3 | 0 0 0 | The object's RGB specular color component. |
| specularTexture | texture | undefined | Texture to read the specular color component from. Accessed based on texcoord per-vertex attribute. If specularTexture is defined, the specularColor gets multiplied with the rgb-color accessed from the texture. |
| shininess | float | 0.5 | A scalar for the object's specular exponent, to be multiplied by 128 (e.g. a value of 0.5 will give a specular exponent of 64) |
| emissiveColor | float3 | 0 0 0 | The object's RGB emissive color component. |
| emissiveTexture | texture | undefined | Texture to read the emissive color component from. Accessed based on texcoord per-vertex attribute. If emissiveTexture is defined, the emissiveColor gets multiplied with the rgb-color accessed from the texture. |
| ambientIntensity | float | 0 | The amount of the 'diffuseColor' to be added to the shading without considering lighting. |
| opacity | float | 1 | The opacity of the object, with 1 being opaque and 0 being fully transparent. |
| useVertexColor | bool | false | If set to 'true', the vertex attribute 'color' will be multiplied with the diffuse color component. |
The phong material model extends the diffuse material model by the specular term from the Phong reflection model. The additional parameters are specularColor, specularTexture and shininess. This is the logic of the phong material model in JavaScript/shade.js pseudo code:
function shade(env) {
var diffuseColor = env.diffuseColor || new Vec3(1, 1, 1);
var specularColor = env.specularColor || new Vec3(0, 0, 0);
var emissiveColor = env.emissiveColor || new Vec3(0);
var opacity = env.opacity !== undefined ? env.opacity : 1.0;
var shininess = env.shininess !== undefined ? env.shininess : 0.5;
if (env.useVertexColor && env.color) {
diffuseColor *= new Vec3(env.color);
}
if (env.diffuseTexture && env.diffuseTexture.sample2D) {
var texDiffuse = env.diffuseTexture.sample2D(env.texcoord);
diffuseColor *= texDiffuse.rgb();
opacity *= texDiffuse.a();
}
if (env.specularTexture && env.specularTexture.sample2D) {
var texSpecular = env.specularTexture.sample2D(env.texcoord);
specularColor *= texSpecular.rgb();
}
if (env.emissiveTexture && env.emissiveTexture.sample2D) {
var texEmissive = env.emissiveTexture.sample2D(env.texcoord);
emissiveColor *= texEmissive.rgb();
}
return Shade.diffuse(diffuseColor, env.normal)
.phong(specularColor, env.normal, shininess)
.transparent(1.0 - opacity)
.emissive(emissiveColor);
}
URN: urn:xml3d:light:point
| Name | Type | Default | Description |
|---|---|---|---|
| position | float3 | 0 0 0 | The position of the point light in object space |
| attenuation | float3 | 0 0 1 | The attenuation of the point light given as its constant, linear, and quadratic components. |
| intensity | float3 | 1 1 1 | The RGB intensity of the point light. |
Point light sources emit light from a single point in space with a uniform distribution in all directions, i.e. omnidirectionally. The position of the point light is defined by its position attribute and affected by the transformation of the light element that defines the occurrence of the point light. The orientation of the light element does not influence the light.
URN: urn:xml3d:light:directional
| Name | Type | Default | Description |
|---|---|---|---|
| direction | float3 | 0 0 -1 | The direction of the light in object space. |
| intensity | float3 | 1 1 1 | The RGB intensity of the directional light. |
Directional light sources, also known as distant light sources, emit light along parallel rays from an infinite distance away. The direction along which the light source emits is defined by its direction attribute and affected by the transformation of the light element that defines the occurrence of the distant light. The position of the light element is not taken into account.
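For instance, a directional light pointing straight down could be declared as follows (an illustrative sketch; it assumes the light element accepts its parameters through the same generic data model used for materials):
<light model="urn:xml3d:light:directional">
    <float3 name="direction">0 -1 0</float3>
    <float3 name="intensity">0.8 0.8 0.8</float3>
</light>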
URN: urn:xml3d:light:spot
| Name | Type | Default | Description |
|---|---|---|---|
| position | float3 | 0 0 0 | The position of the spot light in object space. |
| direction | float3 | 0 0 -1 | The direction of the light in object space. |
| intensity | float3 | 1 1 1 | The RGB intensity of the spot light. |
| attenuation | float3 | 0 0 1 | The attenuation of the spot light, given as its constant, linear and quadratic components. |
| cutoffAngle | float | Math.PI/4 | Spot angle in radians. Controls the size of the outer cone of a spot light, i.e. the circular area a spot light covers. |
| softness | float | 0 | Softness of the spot light in the range [0;1]. |
Spot light sources are a variation of point lights: instead of emitting light omnidirectionally, they emit light from their position within a cone around their direction. The cutoffAngle attribute defines the size of the cone; objects outside the cone defined by the cutoffAngle are not lit by the light source. The softness attribute defines the fraction of the cone in which the illumination ramps down from full to no illumination, i.e. a softness of 0 specifies a hard transition between full and no illumination and a softness of 1.0 a linear transition along the radius of the cone.
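The interplay of cutoffAngle and softness can be sketched as follows (illustration only, using an angular rather than a radial ramp; angleToAxis is assumed to be the angle between the spot direction and the vector from the light to the shaded point):
// Non-normative sketch of a spot light falloff factor in [0, 1].
function spotFalloff(angleToAxis, cutoffAngle, softness) {
    if (angleToAxis > cutoffAngle) {
        return 0; // outside the cone: not lit
    }
    var innerAngle = cutoffAngle * (1 - softness); // full illumination up to this angle
    if (angleToAxis <= innerAngle) {
        return 1;
    }
    // ramp down from full to no illumination across the soft region of the cone
    return (cutoffAngle - angleToAxis) / (cutoffAngle - innerAngle);
}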
URN: urn:xml3d:view:perspective
| Name | Type | Default | Description |
|---|---|---|---|
| fovVertical | float | Math.PI / 4 | The vertical field of view of the view frustum in radians |
| fovHorizontal | float | - | The horizontal field of view of the view frustum in radians |
| near | float | - | The distance of the near clipping plane (unitless) |
| far | float | - | The distance of the far clipping plane (unitless) |
The perspective view model defines a perspective view frustum based on the near and far planes and on a vertical or horizontal opening angle. A small field of view roughly corresponds to a telephoto lens; a large field of view roughly corresponds to a wide-angle lens. If fovHorizontal is given, the frustum is defined using this horizontal angle, otherwise fovVertical is used. If the far or the near clipping plane is not defined, the system will try to compute a useful value automatically based on the scene's dimensions.
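To make the mapping to a projection matrix concrete, the following sketch uses the glMatrix API that XML3D's math types are based on; the concrete near, far and aspect values, and the conversion from fovHorizontal to a vertical angle, are illustrative assumptions rather than normative behaviour:
// Assumes the gl-matrix library (global mat4) is available.
// Illustrative values; near and far would normally be taken from the view
// element or derived automatically from the scene's dimensions.
var near = 0.1, far = 1000;
var aspect = 16 / 9; // viewport width divided by height
// If fovHorizontal is given, derive the equivalent vertical angle for this aspect ratio.
var fovHorizontal = Math.PI / 3;
var fovVertical = 2 * Math.atan(Math.tan(fovHorizontal / 2) / aspect);
// glMatrix's mat4.perspective expects the vertical field of view in radians.
var projection = mat4.perspective(mat4.create(), fovVertical, aspect, near, far);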
URN: urn:xml3d:view:projective
| Name | Type | Default | Description |
|---|---|---|---|
| projectionMatrix | float4x4 | - | The view frustum described as a projection matrix. |
The projective view model defines a projective view frustum based on a projection matrix. This view model is typically used if the intrinsic camera parameters are computed, e.g. from a webcam image using computer vision algorithms.
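As an illustration (assuming the generic data model applies to view parameters in the same way it does to material parameters), a projective view could be configured like this:
<view model="urn:xml3d:view:projective">
    <float4x4 name="projectionMatrix">
        <!-- the 16 values of the projection matrix, e.g. computed from calibrated camera intrinsics -->
    </float4x4>
</view>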
XML3D provides an interface to define custom materials in addition to the predefined material models. Such materials must first be registered with XML3D:
XML3D.materials.register("my-material", { ...material definition... });
and may then be referenced through the model attribute of a material element:
<material model="urn:xml3d:material:my-material"></material>
The material definition must provide vertex and fragment shader code in the form of a string as well as definitions and default values for all uniform and sampler variables:
XML3D.materials.register("my-material", {
    vertex : "...vertex shader code...",
    fragment : "...fragment shader code...",
    //All uniform variables that appear in either shader block, with default values
    uniforms : {
        exampleFloat : 0.5,
        exampleVec3 : [1, 1, 1]
    },
    //All textures that appear in either shader block
    samplers : {
        exampleTexture : null
    },
    //All vertex attributes that appear in the vertex shader
    attributes : {
        position : {required : true},
        normal : null, //Synonymous with {required : false}
        exampleVertexAttribute : {required : false}
    },
    //Optional function to mark this material as requiring alpha blending
    hasTransparency : function(params) {
        return params.opacity && params.opacity.getValue()[0] < 1;
    },
    //Optional function to add compiler directives to shaders based on scene parameters (e.g. the number of lights)
    addDirectives : function(directives, lights, params) {
        directives.push("HAS_EXAMPLETEXTURE " + ('exampleTexture' in params ? "1" : "0"));
    }
});
As with the predefined material models, the generic data model can be used to set material parameters such as uniform variables and textures for custom materials:
<material model="urn:xml3d:material:my-material">
    <float3 name="exampleVec3">0 1 0</float3>
    <texture name="exampleTexture">
        <img src="textures/my-example-texture.png"/>
    </texture>
</material>
Custom materials may also be defined through shade.js, which is generally easier to use and is able to generate cross-platform materials that may be used outside of XML3D as well.
A mesh that references an external resource can also provide its bounding box through the generic data model:
<mesh src="myMesh.xml" material="#my-material">
    <float3 name="boundingBox">-10 -10 -10 5 5 5</float3> <!-- min and max points of the bounding box -->
</mesh>
XML3D provides a set of global options through the XML3D.options interface. These options are shared between all XML3D elements on a page.
Currently, the following options are available in xml3d.js; the default value is shown in bold:
| Key | Values | Description |
|---|---|---|
| loglevel | all, debug, info, warning, error, exception | Controls the level of logging to the console. |
| resource-crossorigin-attribute | anonymous, use-credentials | This value will be assigned to the crossOrigin field of requested resources such as img or video. |
| renderer-faceculling | back, front, both, none | Controls which faces are culled during rendering. Corresponds to WebGL's cullFace function. |
| renderer-frontface | cw, ccw | Controls the winding order of polygon faces during rendering. Corresponds to WebGL's frontFace function. |
| renderer-frustum-culling | true, false | Toggles view frustum culling during rendering. |
| renderer-mousemove-picking | true, false | Enable object picking for mousemove events. For example, the XML3D standard camera disables mousemove picking between the mousedown and mouseup events of a camera rotation. |
| renderer-movement-aware-click-handler | true, false | When true, disregard click events where the mouse has moved between the mousedown and mouseup events. |
| renderer-continuous | true, false | Toggle continuous rendering. If false a frame will only be drawn if XML3D detects a scene change that requires it. |
| renderer-ssao | true, false | Toggle screen space ambient occlusion. Note: This is an experimental feature! |
All of these options can also be set through URL query parameters by prefixing the option key with xml3d-, like so: ?xml3d-loglevel=debug
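For example, options could be changed from a script roughly as shown below; the accessor names getValue and setValue are assumptions here, so consult the xml3d.js API documentation for the exact interface:
// Assumed accessors on the global options interface; the names may differ in your xml3d.js version.
XML3D.options.setValue("loglevel", "debug");
XML3D.options.setValue("renderer-frustum-culling", false);
var culling = XML3D.options.getValue("renderer-faceculling");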
XML3D specifies the following content-types for external resources:
| Resource | Content-Type |
|---|---|
| JSON mesh file | model/vnd.xml3d.mesh+json |
| XML mesh file | model/vnd.xml3d.mesh+xml |
| XML asset file | model/vnd.xml3d.model+xml |
The response should include a Content-Type header with the same content-type or the appropriate standard type application/json, application/xml or application/octet-stream. Responses that do not include a Content-Type header at all will not be handled and will generate an error message.
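For example, a server delivering a JSON mesh might respond with headers like these (illustrative only):
HTTP/1.1 200 OK
Content-Type: model/vnd.xml3d.mesh+json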
XML3D provides a range of math types based on the glMatrix library. In general these types are immutable, i.e. their operations return new instances, with the exception of the Box and Ray types.
| Type | Default | Description |
|---|---|---|
| Vec2 | [0, 0] | A two component vector object. |
| Vec3 | [0, 0, 0] | A three component vector object. |
| Vec4 | [0, 0, 0, 0] | A four component vector object. |
| AxisAngle | [0, 0, 1, 0] | An axis-angle representation of a rotation of the form [x, y, z, angle], with the angle in radians. This type is used for all XML3D interface methods and element attributes that expect a rotation. When working with rotations mathematically, it is best to first convert the AxisAngle representation to a Quat and to convert back to AxisAngle when passing the result back to an XML3D interface. |
| Quat | [0, 0, 0, 1] | A rotation represented as a quaternion of the form [x, y, z, w]. |
| Mat2 | - | A 2x2 matrix object. |
| Mat3 | - | A 3x3 matrix object. |
| Mat4 | - | A 4x4 matrix object. |
| Box | min set to MAX_VALUE, max set to MIN_VALUE | A bounding box with a stored minimum and maximum point. Unlike the other math types, Box objects are mutable and their methods, when applicable, return the same instance of the Box. |
| Ray | origin [0, 0, 0], direction [0, 0, -1] | A ray with an origin and a direction. Unlike the other math types, Rays are mutable and methods will return the same instance, if applicable. |
Quat additionally provides an inversion that behaves like Quat.conjugate but also works on non-normalized quaternions. For matrix operations that have no valid result, null will be returned. The intersection tests of Box and Ray accept an options object: if opt.dist exists, the distance to the intersection point will be written into this field; if the ray does not intersect, this field will hold Infinity.
Event listeners can be attached to XML3D elements either declaratively through an on[eventName] attribute or through JavaScript using element.addEventListener(...). The value of an on[eventName] attribute is interpreted as JavaScript code, e.g. "myListenerFunction(event)".