The frontend GUI is asynchronous and can be controlled interactively.
The frontend uses a canvas from index.ts, which is passed to the engine. The engine supports multiple layers: one layer is responsible for representing the data, while the other holds the GUI components and is not affected by any shaders or postprocessing.
The representation and some user settings can be manipulated via the GUI to allow an interactive experience.
The GUI is separated into four parts:
Memory Information: Shows the used and available memory. Note that the browser reads no hardware information; memory usage is approximated based on the loaded data. Users can control the available memory.
Debug Information: Shown at the bottom; displays the FPS, the position of the camera, and the target (the center of the camera's orbit).
Main View: Renders the data.
User Controls: Interactive manipulation of settings. The table below describes the different settings:
All settings are also shown in the following screenshot:
All GUI elements are included in gui.ts. A scroll element allows you to scroll if the window is too small for the settings, and each component group can be collapsed and expanded. The implementation follows this pattern:
For a complete group, a control panel (currentPanel) must be added with the function createStackPanel, together with the button that toggles this group (currentButtonClient):
currentPanel = createStackPanel("Client Settings", parentStackPanel, allPanels);
let currentButtonClient = Button.CreateImageButton("Client Settings button", "Client Settings", "./../pngwing.com.png");
currentButtonClient.height = "25px";
currentButtonClient.width = "250px";
currentButtonClient.background = "lightgray";
currentButtonClient.alpha = 0.7;
currentButtonClient.overlapGroup = 1;
currentButtonClient._children[1].rotation = Math.PI / 2;
currentButtonClient.onPointerClickObservable.add(function (value) {
    // toggle everything in the group except the button itself
    allPanels[3].children.forEach(element => {
        if (element.name != currentButtonClient.name)
            element.isVisible = !element.isVisible;
    });
    // flip the arrow icon to the other precomputed rotation (collapsed/expanded)
    currentButtonClient._children[1].rotation = rotations.filter((x, i) => x != currentButtonClient._children[1].rotation)[0];
})
currentPanel.addControl(currentButtonClient);
To add a control to a group, first a text label (percentage_text) must be created and added to the group's panel; then the needed control (percentage_slider) is created and added to the panel as well:
let percentage_text = new TextBlock("Download Percentage");
percentage_text.text = "Download Percentage: " + DownloadControl.percentage;
percentage_text.height = "30px";
percentage_text.color = "lightgray";
currentPanel.addControl(percentage_text);
let percentage_slider = new Slider("percentage_slider");
percentage_slider.minimum = 0;
percentage_slider.maximum = 1;
percentage_slider.step = 0.1;
percentage_slider.value = 0.1;
percentage_slider.height = "20px";
percentage_slider.width = "200px";
percentage_slider.onValueChangedObservable.add(function (value) {
    // push the new value to the download logic and mirror it in the label
    DownloadControl.percentage = value;
    percentage_text.text = "Download Percentage: " + value.toFixed(1);
});
currentPanel.addControl(percentage_slider);
Note: The callback logic is written inline but can be adapted to your purposes. Here, it is typically used to update either the UI itself or objects that control the rendering.
sceneryWithSplines contains the initial code for the scene. It controls the camera and the scene itself. The file is separated into three main parts: the camera, the materials, and the scene.
The camera is realized by a singleton export class CameraConfig. It holds the position, the target (the position of the center of the viewbox), and the radius (the size of the viewbox). The viewbox is the calculated visible area within the simulation space; its coordinates are given in the simulation's space coordinates. An ArcRotateCamera is used (see https://doc.babylonjs.com/features/featuresDeepDive/cameras).
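A minimal sketch of this singleton pattern, with assumed field names and defaults (the actual class lives in sceneryWithSplines):

import { ArcRotateCamera, Scene, Vector3 } from "@babylonjs/core";

// Sketch only: field names and defaults are assumptions, not the actual implementation.
export class CameraConfig {
    private static _instance: CameraConfig;
    camera!: ArcRotateCamera;
    target = Vector3.Zero();   // center of the viewbox, in simulation coordinates
    radius = 100;              // size of the viewbox

    static get instance(): CameraConfig {
        return (this._instance ??= new CameraConfig());
    }

    init(scene: Scene): void {
        // alpha/beta are the initial orbit angles around the target
        this.camera = new ArcRotateCamera("camera", 0, Math.PI / 3, this.radius, this.target, scene);
        this.camera.attachControl(true);
    }
}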
The camera position is updated with the onViewMatrixChangedObservable, which is triggered automatically by the Babylon engine as soon as the camera moves (and therefore the view matrix changes). onViewMatrixChangedObservable pushes new information to the material, such as the camera position or the far plane (the maximal visible distance). Additionally, the function checks whether the camera has moved to another position and whether the new viewbox is still aligned with the current one. If the box did not move (as is possible with a pure rotation), nothing is done; but if the viewbox moves, new data may have to be loaded, so the download process is reactivated.
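A sketch of this hookup; the uniform names ("cameraPosition", "farPlane") and DownloadControl.requestData are assumptions, not the actual names:

// Sketch: push camera data into the shader and re-trigger downloads on translation.
let lastTarget = camera.target.clone();
camera.onViewMatrixChangedObservable.add(() => {
    shaderMaterial.setVector3("cameraPosition", camera.position);
    shaderMaterial.setFloat("farPlane", camera.maxZ);
    // a pure rotation keeps the target fixed; only a translated viewbox needs new data
    if (!camera.target.equalsWithEpsilon(lastTarget, 1e-3)) {
        lastTarget = camera.target.clone();
        DownloadControl.requestData(lastTarget, CameraConfig.instance.radius); // hypothetical helper
    }
});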
Finally, there is a guiCamera that has its own layer for the GUI, so that the GUI is not affected by postprocessing effects.
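A sketch of this two-camera separation; the layer mask values are arbitrary choices for illustration:

import { AdvancedDynamicTexture } from "@babylonjs/gui";
import { Camera, FreeCamera, Scene, Vector3 } from "@babylonjs/core";

// Sketch: distinct layer masks keep fullscreen postprocesses (attached to the
// main camera) away from the GUI.
function setupGuiCamera(scene: Scene, mainCamera: Camera): void {
    const guiCamera = new FreeCamera("guiCamera", Vector3.Zero(), scene);
    mainCamera.layerMask = 0x10000000; // data layer, receives postprocessing
    guiCamera.layerMask = 0x20000000;  // GUI-only layer
    scene.activeCameras = [mainCamera, guiCamera];

    const gui = AdvancedDynamicTexture.CreateFullscreenUI("gui", true, scene);
    gui.layer!.layerMask = 0x20000000; // render the GUI texture on the GUI camera only
}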
The scene creation initializes the camera, the GUI, and the postprocessing. The postprocessing is only intended for proper fluid rendering, which is currently not available.
The material part initializes the material, i.e., in our case the shader for the particles. The material is called shaderMaterial and is based on https://doc.babylonjs.com/features/featuresDeepDive/materials/shaders/shaderMaterial. The most common usage is to initialize the uniforms, the attributes, and the defines. It also creates a material that can be applied to a new PCS (points cloud system), which gives us full control over the appearance of the particles. It additionally initializes the depthShaderMaterial, which is used to calculate proper z-values (of high interest for proper fluid rendering). Most uniforms are used to push the user settings from the GUI to the shaders. In index.ts:updateMesh the material is connected to the PCS. Two shaders determine the appearance of the particles: the vertex shader splineInterpolator.vertex.fx and the fragment shader splineInterpolator.fragment.fx.
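A minimal sketch of such a setup; the listed uniform and attribute names are illustrative assumptions (Babylon resolves the "./splineInterpolator" route to the .vertex.fx/.fragment.fx pair by convention):

import { Scene, ShaderMaterial } from "@babylonjs/core";

// Sketch only: the uniforms/attributes below are assumed names, not the real list.
function createParticleMaterial(scene: Scene): ShaderMaterial {
    const shaderMaterial = new ShaderMaterial("shaderMaterial", scene, "./splineInterpolator", {
        attributes: ["position", "splineCoefficients"], // per-particle spline coefficients (assumed name)
        uniforms: ["worldViewProjection", "t", "cameraPosition", "farPlane", "pointSize"],
        defines: [],
    });
    shaderMaterial.pointsCloud = true; // render the mesh as GL points
    return shaderMaterial;
}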
The main focus is the interpolation based on t between two snapshots. The formula is based on the usual cubic Hermite spline calculation. The spline coefficients are precalculated, downloaded from the server, and used as attributes for each particle.
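For reference, the standard cubic Hermite interpolation between two snapshot positions $p_0, p_1$ with tangents $m_0, m_1$ for $t \in [0, 1]$ reads

\[
p(t) = (2t^3 - 3t^2 + 1)\,p_0 + (t^3 - 2t^2 + t)\,m_0 + (-2t^3 + 3t^2)\,p_1 + (t^3 - t^2)\,m_1 .
\]

How the precomputed per-particle coefficients encode these basis terms is determined by the preprocessing and not shown here.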
Also, gl_PointSize is calculated, which changes the point size based on the initial size and the distance scale. Finally, the normalized depth is forwarded to the fragment shader.
Depth uses the same vertex shader.
The main focus is to achieve a fluid-like rendering that can distinguish between very high and very low densities. This is achieved in several steps:
- The appearance is changed such that the GL point primitive is rendered as a circle instead of a square. All parts of the fragment that are not within this circle are discarded. (lines 19-23)
- A linear interpolation between the initial density and the final density is performed based on t. (lines 25-29)
- Based on the density, a color is calculated. (line 30)
- The fragment is colored, and the alpha is calculated based on the position within the primitive (less alpha towards the outside to simulate a blur). (line 33) This might differ for the fluid rendering. The final alpha blending is based on the alpha setting in the GUI; the default is alpha_add (see https://doc.babylonjs.com/features/featuresDeepDive/materials/using/blendModes and the sketch after this list).
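Applying the GUI-selected blend mode boils down to setting the material's alphaMode; a sketch, where the setting names on the left and the variables shaderMaterial/selectedAlphaSetting are assumptions (the constants are Babylon's):

import { Engine } from "@babylonjs/core";

// Sketch: map the GUI alpha setting onto Babylon's blend-mode constants.
const blendModes: Record<string, number> = {
    alpha_add: Engine.ALPHA_ADD,
    alpha_combine: Engine.ALPHA_COMBINE,
    alpha_multiply: Engine.ALPHA_MULTIPLY,
};
shaderMaterial.alphaMode = blendModes[selectedAlphaSetting] ?? Engine.ALPHA_ADD;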
The depth pass uses the *depth.fragment.fx shader to calculate a proper depth value, which can be logarithmic if desired. The depth buffer can be used in postprocessing, e.g., fluid rendering, and is normalized between 0 and 1 based on the far plane.
We store the preprocessed data on the filesystem under
$HOME/Documents/data/tng/webapp/. From here the backend can load the necessary
information into the cache and serve it to the frontend.
Depending on the current position and point in time, the frontend requests specific data and stores the returned data together with the current space and time information, as well as the current level of detail.
The data is loaded in batches of, e.g., 500 data elements, and pushed onto a list of to-be-rendered elements for the current time point (snapnum). With this we can render a specific point in time by iterating over the elements for that key:
private static pcsDictonary: Record<number, Array<PointsCloudSystem>> = {}
// ...
// show every point cloud that belongs to the requested snapshot
if (this.pcsDictonary[snapnum])
    this.pcsDictonary[snapnum].forEach(element => {
        if (element.mesh)
            element.mesh.visibility = 1;
    });
To remember which elements we would need to load next, we store metadata about the level of detail. This is provided to the backend when asking for new data.
private static _level_of_detail:Record<number, Record<number,number>> = {};
The first key is the snapnum (point in time); the keys of the second record are the indices of the leaves of the octree, while the values are the number of loaded elements of that leaf, e.g.:
snapnum 80 -> leaf #3: 500 loaded elements
           -> leaf #4: 123 loaded elements
snapnum 81 -> leaf #3: 500 loaded elements
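A hypothetical helper on the same class sketches how these counters would be updated when a batch arrives:

// Hypothetical helper (not in the actual code): bump the per-leaf counter for a snapshot.
private static registerBatch(snapnum: number, leaf: number, count: number): void {
    const perLeaf = this._level_of_detail[snapnum] ??= {};
    perLeaf[leaf] = (perLeaf[leaf] ?? 0) + count;
}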
We additionally support loading only a specific percentage of the data. Similar to the preprocessing pipeline, this percentage works by focusing on the higher-value data elements: the data is sorted before the wanted percentage is returned.
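In essence, a sketch under the assumption that each element carries a numeric importance value:

// Sketch: keep only the top fraction of elements, ranked by their value.
function topPercentage<T extends { value: number }>(data: T[], fraction: number): T[] {
    const sorted = [...data].sort((a, b) => b.value - a.value);
    return sorted.slice(0, Math.ceil(sorted.length * fraction));
}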
Originally written by Nicolas Bender, Marc Burg, and Johannes Maul as part of a research project at Heidelberg University.
Supervised by Dylan Nelson and Filip Sadlo.
This is the frontend/web-app. The backend/server is in a separate repository.