
**ThaOneDon** (Member)

Updated (from time to time) with tons of tips/techniques, not just for messing around in Tesseract but for graphics in general.

Resource thread for anyone who wants to experiment with stuff.

The Order:1886 Supraleiter Inside Frostbite IW Uncharted 4 Far Cry 5 Witcher 3 Titanfall Doom

jcgt.org GPU Pro(Source Code) The ryg’s The Danger Zone iquilezles.org hanecci’s

CRYENGINE(License) UNREAL ENGINE(License) UNITY(License)

SOURCE SDK(License) X-RAY(License) ID TECH 3 ID TECH 4+(Licenses/GPL)

ANKI3D TESSERACT WICKED GODOT LUMIX The Forge Diligent

**Rendering/Shading**

Reducing Driver Overhead Compute Parallel Reduction SIMD Writing fast code queue Job System, ParallelFor More

Dynamic Resources Reversed-Z Logarithmic Small G-Buffers(Packing) Accelerated Triangle Rasterization(Another)

Complex Transformative Portal Interaction Instancing Checkerboard Rendering Indirect Rendering Incremental

- Clustered, Forward+, Deferred -

Clustered Forward vs Deferred Shading Practical Clustered Shading Simple Alternative(Another)

Intel's Forward Clustered Shading(Source Code/Demo) Humus's Source Code/Demo Optimisations

Optimizing tile-based light culling Cull that Cone(Another) Spotlight Frustum vs Pyramid

(compatible/works with both Forward and Deferred, while being less prone to problems)
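The "Cull that Cone" trick linked above replaces a spotlight's cone with its minimal bounding sphere before the per-tile/cluster intersection test. A quick sketch of that bounding-sphere computation (function and parameter names are mine):

```python
import math

def cone_bounding_sphere(apex, direction, height, half_angle):
    """Minimal bounding sphere of a cone given its apex, unit axis
    direction, height and half-angle (radians).

    Wide cones (half_angle > 45 deg) are bounded by a sphere sitting on
    the base; narrow cones by the sphere through the apex and base rim."""
    if half_angle > math.pi / 4.0:
        # Base radius dominates: center the sphere on the cone base.
        center_dist = height
        radius = height * math.tan(half_angle)
    else:
        # Sphere through apex and base rim: t = h / (2 cos^2 a).
        center_dist = height / (2.0 * math.cos(half_angle) ** 2)
        radius = center_dist
    center = tuple(a + center_dist * d for a, d in zip(apex, direction))
    return center, radius
```

A cheap sphere-vs-frustum test per tile can then stand in for the exact cone test, at the cost of some over-conservative acceptance.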

Triangle Visibility Buffer Clustered Deferred(Implementation) Tiled Deferred(Sample)

Optimization to Deferred Shading Pipeline Deferred Shading of Transparent Surfaces With Shadows and Refraction

PBR

Approximate Models For Physically Based Rendering(Source Code) PBR(Source Code)

Reflectance Model Combining Reflection and Diffraction Interfaced Lambertian

Combined Approximation of Fresnel/Visibility(Source Code) Sampling, GGX Importance Sampling

Toward more realism and robustness in global illumination Unifying points, beams, and paths

Real-Time Polygonal-Light Shading with Linearly Transformed Cosines(Demo/Source Code included)

Real-time 2D manipulation of plausible 3D appearance using shading and geometry buffers

Fast 4D Sheared Filtering for Interactive Rendering of Distribution Effects

Product Integral Binding Coefficients for High-order Wavelets

optimization Convert Blinn-Phong to Beckmann distribution(and more)

There are many different explanations of what PBR exactly entails.

Here are a few papers that cover most aspects of it thoroughly, and small renderers/code to get inspiration from.

RAY MARCHING

Raymarching Distance Fields Volumetric Ray Marcher Generic

Pixel-Projected Reflections(Implementation)(Source Code) Screen Space HIZ Tracing(Reflections)

Clustered Volumetric Fog Vapor Volumetric Fog Source Code

Real-Time Rendering of Volumetric Clouds(Another) Tiny Clouds

Free Penumbra Shadows for Raymarching Distance Fields Dynamic Occlusion

Raymarching is a 3D rendering technique, praised by programming enthusiasts for both its simplicity and speed.

It has been used extensively in the demoscene, producing low-size executables and amazing visuals.

VOXELS

Cascaded Voxel Cone Tracing in The Tomorrow Children(Source Code Another) 3D textures Demo

Global Illumination with Voxel Cone Tracing using Clipmaps Using 3D Textures (Source Code) Another

Storage Infinite Sparse Volumes Isosurface Contouring More (Source Code Sandbox Terrain)

Geometry-shader-based real-time voxelization and applications

Geometry and Attribute Compression for Voxel Scenes SSVDAGs

Flying Edges: A High-Performance Scalable Isocontouring Algorithm

Memory-Efficient On-The-Fly Voxelization of Particle Data

OIT

Real Time Depth Sorting of Transparent Fragments OIT_Optimized with MSAA with Linked Lists/Shared Fragment Pool

Real-Time Deep Image Rendering and Order Independent Transparency Guarded

Memory-Efficient Order-Independent Transparency with Dynamic Fragment Buffer

Faster Transparency from Low Level Shader Optimisation (Source Code)

A Phenomenological Scattering Model for OIT Phenomenological Transparency

Recent graphics hardware features, namely atomic operations and dynamic memory location writes, now make it possible to capture and store all per-pixel fragment data from the rasterizer in a single pass in what they call a deep image. A deep image provides a state where all fragments are available and gives a more complete image-based geometry representation, providing new possibilities in image-based rendering techniques.

A core and driving application is order-independent transparency (OIT). A number of deep image sorting improvements are presented, through which an order of magnitude performance increase is achieved, significantly advancing the ability to perform transparency rendering in real time. In the broader context of image-based rendering they look at deep images as a discretized 3D geometry representation and discuss sampling techniques for raycasting and antialiasing with an implicit fragment connectivity approach.
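The sort-and-composite step of such a deep image can be sketched on the CPU; the (depth, color, alpha) fragment layout below is my assumption, not the papers' exact format:

```python
def composite_pixel(fragments, background):
    """Order-independent transparency resolve for one pixel.

    `fragments` is an unordered list of (depth, (r, g, b), alpha)
    tuples, as captured by the rasterizer in a single pass; the
    resolve sorts far-to-near and applies the 'over' operator."""
    color = list(background)
    # Far to near: larger depth first.
    for depth, (r, g, b), a in sorted(fragments, key=lambda f: -f[0]):
        color[0] = r * a + color[0] * (1.0 - a)
        color[1] = g * a + color[1] * (1.0 - a)
        color[2] = b * a + color[2] * (1.0 - a)
    return tuple(color)
```

On the GPU the fragments typically live in a per-pixel linked list or a shared fragment pool, and the per-pixel sort is the hot spot the papers above optimize.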

PROCEDURAL

Procedural Rendering of Geometry-Based Grass in Real-Time (Source Code) Vertex Generation

Studying and solving visual artifacts occurring when procedural texturing Filtering

Towards Automatic Band-Limited Procedural Shaders

Dijkstra-based Terrain Generation Using Advanced Weight Functions

Procedural Terrain Generation using a Level of Detail System(Source Code)

Terrain Modelling from Feature Primitives Another Implementation

Real-Time Editing of Procedural Terrains Transition Contour Synthesis with Dynamic Patch Transitions

Procedural placement of 3D objects Methods for the Interactive Design Dynamic On-Mesh Generation

Generating, Animating, and Rendering Varied Individuals for Real-Time Crowds

Depth-fighting Aware Methods for Multi-fragment Rendering

Inclusion Test for Polyhedra Using Depth Value Comparisons on the GPU

Multi-fragment rasterization is susceptible to flickering artifacts when two or more visible fragments of the scene have identical depth values. This phenomenon is called coplanarity or Z-fighting and incurs various unpleasant and unintuitive results when rendering complex multi-layer scenes.

In this work, they develop depth-fighting aware algorithms for reducing, eliminating and/or detecting related flaws in scenes suffering from duplicate geometry. They adapt previously presented single and multi-pass rendering methods, providing alternatives for both commodity and modern graphics hardware.

A Non-linear GPU Thread Map for Triangular Domains Improving the accuracy

There is a stage in the GPU computing pipeline where a grid of thread-blocks, in parallel space, is mapped onto the problem domain, in data space. Threads that fall inside the domain perform computations while threads that fall outside are discarded at runtime.

In this work they study the case of mapping threads efficiently onto triangular domain problems and propose a block-space linear map λ(ω), based on the properties of the lower triangular matrix, that reduces the number of unnecessary threads from O(n²) to O(n).
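The mapping can be sketched on the CPU: launch n(n+1)/2 threads instead of n² and invert the triangular-number sequence with a single square root per thread. The closed form below is the standard inverse of T(i) = i(i+1)/2; the paper's λ(ω) follows the same idea:

```python
import math

def triangle_map(w):
    """Map a linear thread index w onto coordinates (i, j) of the
    lower-triangular domain (j <= i), using one sqrt per thread."""
    i = int((math.sqrt(8.0 * w + 1.0) - 1.0) / 2.0)
    # Guard against floating-point rounding near exact triangular numbers.
    if w < i * (i + 1) // 2:
        i -= 1
    elif w >= (i + 1) * (i + 2) // 2:
        i += 1
    j = w - i * (i + 1) // 2
    return i, j

def covered(n):
    """Enumerate the lower triangle of an n x n domain with
    n*(n+1)//2 'threads' instead of n*n."""
    return [triangle_map(w) for w in range(n * (n + 1) // 2)]
```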

This study is about the performance of algorithms similar in purpose to the Carmack/Lomont implementation of the fast inverse square root, using three iterations of the Newton-Raphson method and the magic number "0x5f3759df".
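For reference, a Python transliteration of that classic trick: reinterpret the float's bits as an integer for a cheap initial guess, then refine with Newton-Raphson (one iteration shown; more iterations tighten the result):

```python
import struct

def fast_inv_sqrt(x):
    """Quake III-style fast inverse square root: bit-level initial
    guess via 0x5f3759df, then one Newton-Raphson refinement."""
    i = struct.unpack('<I', struct.pack('<f', x))[0]  # float32 bits as uint32
    i = 0x5f3759df - (i >> 1)                         # magic initial guess
    y = struct.unpack('<f', struct.pack('<I', i))[0]  # bits back to float
    y = y * (1.5 - 0.5 * x * y * y)                   # one N-R iteration
    return y
```

After a single iteration the relative error stays below about 0.18%, which was good enough for per-vertex normalization on 1990s hardware.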

**Light/Scattering/Reflections/Ambient Obscurance-Occlusion/GI**

Real-Time Light Transport in Analytically Integrable Participating Media Analytic Single Scattering in Screen Space

Precomputed Atmospheric Scattering: a New Implementation Analytic Fog Density(More)

Real-time rendering of participating media, such as fog, is an important problem, because such media significantly influence the appearance of the rendered scene. A physically correct solution involves a costly simulation of a very large number of light-particle interactions, especially when considering multiple scattering.

This work briefly examines the existing solutions and then presents an improved method for real-time multiple scattering in quasi-heterogeneous media. Inherent visual artifacts are minimized with several techniques using analytically integrable density functions.

There are also several strategies to stylize volumetric single scattering, overcoming the difficulty that light shafts depend on the layout of an entire environment. These approaches are compatible with animated scenes and rely on very efficient solutions, which makes them ready to be used in real-time applications and enables quick exploration of the various settings. The techniques are applied at a global scope - i.e., for the whole scene - but can also be used to make local changes to the scattering behavior.

Image-based occluder manipulations modify the complexity of the scattering appearance and are controlled by only a few parameters. Transfer functions allow us to interactively design a general mood, and the result can even be transferred to other scenes.

Modern LightMapping(Source Code etc) artifact-free lightmaps lightmapper

Lightmap Compression Little Lightmap Tricks Bin Packing

Shading with Dynamic Lightmaps

The Baking Lab(Source Code) Spherical Gaussians Irradiance Caching Part 2 (Source Code) More1 More2 More3

Adaptive Texturing and Geometry Processing for High-Detail Real-Time Rendering

Various tests and observations to find most effective methods.

The main challenge of compressing lightmaps is that they often have a wider range than regular diffuse textures. This range is not as large as in typical HDR textures, but it's large enough that using regular LDR formats results in obvious quantization artifacts. Lightmaps don't usually have high-frequency details; they are often close to greyscale and only have smooth variations in the chrominance.
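One common answer to that extended range is an RGBM-style encoding: store a [0,1] color plus a shared multiplier, so an LDR texture can hold values above 1. A sketch (the 6.0 range is a typical but arbitrary choice, and the helper names are mine):

```python
import math

def rgbm_encode(r, g, b, max_range=6.0):
    """Pack a lightmap texel with range [0, max_range] into four
    [0,1] channels: RGB divided by a shared multiplier M."""
    m = max(r, g, b, 1e-6) / max_range
    m = min(m, 1.0)
    m = math.ceil(m * 255.0) / 255.0   # quantize M up so RGB stays <= 1
    return (r / (m * max_range), g / (m * max_range),
            b / (m * max_range), m)

def rgbm_decode(r, g, b, m, max_range=6.0):
    """Shader-side decode: one multiply per channel."""
    return (r * m * max_range, g * m * max_range, b * m * max_range)
```

Because lightmaps are nearly greyscale with smooth chroma, the shared-multiplier quantization holds up much better here than it would for arbitrary HDR content.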

Area Lights Volumetric Lights(Another) Image-Based Distant Lighting

Explanations for efficient punctual light source rendering, like point, spot and directional lights. Most games get away with using these simplistic light sources.

Inspired by Frostbite's representative point method. What this technique essentially does is keep the specular calculation but change the light vector. The light vector used to point from the light position to the surface position. But for area lights we are interested not in the reflection between the light's center and the surface, but between the light "mesh" and the surface.
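A minimal sketch of that representative-point idea for a spherical light, following the description above rather than Frostbite's exact code: pick the point on the light sphere closest to the reflection ray and aim the light vector there:

```python
import math

def representative_point(surface_pos, reflect_dir, light_center, light_radius):
    """Representative-point trick for a spherical area light: instead of
    the vector to the light's center, return the (unnormalized) vector to
    the point on the light sphere closest to the reflection ray."""
    # Vector from surface to light center.
    L = [c - p for c, p in zip(light_center, surface_pos)]
    # Closest point on the (unit) reflection ray to the light center.
    t = sum(l * r for l, r in zip(L, reflect_dir))
    closest_on_ray = [t * r for r in reflect_dir]
    # Offset from the center toward the ray, clamped to the sphere.
    center_to_ray = [cr - l for cr, l in zip(closest_on_ray, L)]
    dist = math.sqrt(sum(c * c for c in center_to_ray)) or 1e-9
    scale = min(1.0, light_radius / dist)
    return [l + c * scale for l, c in zip(L, center_to_ray)]
```

The returned vector replaces L in the specular term; normalize it for the BRDF and keep its length for attenuation.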

Screen Space Planar Reflections Stochastic SSR Local Cubemaps

Screen Space Reflections in Killing Floor 2(Source Code included) Screen Space Reflections(Source Code)

kode80's screen space reflections implementation for Unity3D 5. Features screen space ray casting, backface depth buffer for per-pixel geometry thickness, distance attenuated pixel stride, rough/smooth surface reflection blurring, fully customizable for quality/speed and more.

Separable Subsurface Scattering(Source Code) Extending Another Implementation(Source Code)

Subsurface Scattering-Based Object Rendering Techniques Directional Subsurface Scattering

Wrap Shading Extension to Energy-Conserving Wrapped Diffuse

Real-time Rendering of Subsurface Scattering according to Translucency Magnitude

Radiance Scaling for Versatile Surface Enhancement

Addressing Grazing Angle Reflections in Phong Models

Two techniques to generate separable approximations of diffuse reflectance profiles to simulate subsurface scattering for a variety of materials using just two 1D convolutions. Separable models yield state-of-the-art results in less than 0.5 millisecond per frame, which makes high-quality subsurface scattering affordable even in the most challenging real-time contexts such as games, where every desired effect may have a budget of tenths of a millisecond.
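The key property, two 1D passes standing in for one 2D convolution, holds exactly for separable kernels; a tiny pure-Python demonstration with a Gaussian (clamp addressing at the borders):

```python
import math

def gaussian_kernel(radius, sigma):
    """Normalized 1D Gaussian taps."""
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur_1d(img, k, horizontal):
    """One 1D pass over a row-major 2D list, clamping at the borders."""
    r = len(k) // 2
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for i, kv in enumerate(k):
                if horizontal:
                    xx = min(max(x + i - r, 0), w - 1)
                    acc += kv * img[y][xx]
                else:
                    yy = min(max(y + i - r, 0), h - 1)
                    acc += kv * img[yy][x]
            out[y][x] = acc
    return out

def separable_blur(img, k):
    """Two 1D passes: O(2r) taps per pixel instead of O(r^2)."""
    return blur_1d(blur_1d(img, k, True), k, False)
```

The diffusion profiles in the paper are not Gaussian, which is exactly why finding good separable approximations of them is the contribution.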

Real-Time Global Illumination using Precomputed Light Field Probes(Source Code etc) Spherical Light Fields

Spherical Illuminance Composition(indirect)(Source Code etc included) More Another Implementation

Alternative definition of Spherical Harmonics for Lighting Renderer More Converting SH Radiance to Irradiance

The concepts of light transport for the purpose of rendering are well understood, but expensive to calculate. For real-time solutions, simplification is necessary, often at the cost of visual quality.

The proposed method is fast enough to be suitable for real-time applications on contemporary consumer hardware. It is based on the radiosity technique with various adaptions to speed up the computation. An infinite number of light bounces can be calculated iteratively. Indirect light is stored in the form of spherical harmonics (SH). This directional representation increases the quality of the results and enables the use of normal mapping. The idea of irradiance volumes has been incorporated to provide indirect lighting for dynamic objects.

The proposed technique supports full dynamic lighting and works with all commonly used light source models. In addition, area and environment lighting are facilitated.

Furthermore, they present details on how this technique can be implemented on contemporary hardware. Various approaches are explained and compared to give guidelines for practical implementation.
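Storing indirect light "in the form of spherical harmonics" means projecting directional radiance onto a few basis functions and summing them back at shading time. A minimal band-0/band-1 sketch with Monte Carlo projection (sample count and helper names are mine):

```python
import math
import random

def sh_basis(d):
    """First four real SH basis functions (bands 0 and 1) for a unit
    direction d = (x, y, z)."""
    x, y, z = d
    return (0.282095,       # Y_0^0 (constant)
            0.488603 * y,   # Y_1^-1
            0.488603 * z,   # Y_1^0
            0.488603 * x)   # Y_1^1

def project_sh(f, n=100_000, seed=1):
    """Monte Carlo projection: c_i = (4*pi/N) * sum_k f(d_k) * Y_i(d_k)."""
    rng = random.Random(seed)
    coeffs = [0.0, 0.0, 0.0, 0.0]
    for _ in range(n):
        z = rng.uniform(-1.0, 1.0)             # uniform direction on sphere
        phi = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(max(0.0, 1.0 - z * z))
        d = (r * math.cos(phi), r * math.sin(phi), z)
        fd = f(d)
        for i, b in enumerate(sh_basis(d)):
            coeffs[i] += fd * b
    return [c * 4.0 * math.pi / n for c in coeffs]

def eval_sh(coeffs, d):
    """Reconstruct the stored directional signal at direction d."""
    return sum(c * b for c, b in zip(coeffs, sh_basis(d)))
```

In an engine the projection happens per probe or per radiosity patch, and the four (or nine, with band 2) coefficients per color channel are what gets stored in the irradiance volume.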

NNAO(Source Code) HBIL(Source Code) Accurate Indirect Occlusion(More) SAO with Temporal Reprojection

LSAO/FFAO MiniEngineAO DSSDO(Directional Occlusion) Volumetric SSAO(Another)

Simplifying the rendering equation, they derive an ambient transfer function that expresses the response of the surface and its neighborhood to ambient lighting, taking into account multiple reflection effects. The ambient transfer function is built on the obscurances of the point. If we assume that the material properties are locally homogeneous and incorporate a real-time obscurances algorithm, then the proposed ambient transfer can also be evaluated in real time. Their model is physically based and thus not only provides better results than empirical ambient occlusion techniques at the same cost, but also reveals where tradeoffs can be found between accuracy and efficiency.

Analyzing and Predicting Anisotropic Effects of BRDFs

Exploring Clustering Algorithms in the Appearance Modeling Problem

The majority of the materials we encounter in the real world have variable reflectance when rotated along a surface normal, an azimuthally-variable behavior known as visual anisotropy. Such behavior can be represented by a four-dimensional anisotropic BRDF that characterizes the anisotropic appearance of homogeneous materials. Unfortunately, most past research has been devoted to simplistic three-dimensional isotropic BRDFs.

In this paper, they analyze and categorize basic types of BRDF anisotropy and use a psychophysical study to assess under which conditions isotropic appearance can be used without loss of detail in material appearance. To this end, they tested the human impression of material anisotropy on various shapes and under two illuminations, concluding that subjects' sensitivity to anisotropy declines with increasing complexity of 3D geometry and increasing uniformity of the illumination environment.

Next, they proposed two anisotropy measures; the first is based on the entire BRDF, while the second requires only a sparse subset of reflectance values. Both measures have similar performance on the tested dataset and show a positive correlation with the results of the psychophysical study. The achieved results demonstrate that the proposed anisotropy measures can be considered a promising approximation of human perception of real-world visual anisotropy.

Real-time Rendering of Translucent Material by Contrast-Reversing Procedure

The conventional method of rendering the translucence of an object is difficult to implement in real time, since the translucency is accompanied by complicated light behavior such as scattering and absorption. To simplify this rendering process, they focus on the contrast-reversing stimulus property in vision science. This property is based on the perception that we recognize luminance histograms as compatible between scattering and absorption. According to this property, they propose a simple rendering method that reverses the light path between reflection and transmission.

Their method adopts an additional function for selecting a front or back scattering process in the calculation of each pixel value. Because this improvement makes only slight alterations in the conventional reflection model, it can reproduce a translucent appearance in real time while inheriting the advantages of various reflection models.

**Shadows/Shadow Mapping-Volumes**

Shadows (Source Code) More (Also Source Code) Fast PCSS using Temporal Coherence

GEARS (Optimized Soft Shadows) Soft bilateral filtering volumetric shadows using cube shadow maps(Code)

Realistic Local Lighting in Dynamic Height Fields Multi-Pass Gaussian Contact-Hardening Soft Shadows

This method improves the rendering performance of the Percentage Closer Soft Shadows method by exploiting the temporal coherence between individual frames: the costly soft shadow recalculation is skipped whenever possible by storing the old shadow values in a screen-space History Buffer. By extending the shadow map algorithm with a so-called Movement Map, they can not only identify regions disoccluded by camera movement, but also robustly detect and update shadows cast by moving objects: only the shadows in disoccluded areas or areas affected by moving objects have to be re-evaluated. This saves rendering time and doubles the soft shadow rendering performance in real-time 3D scenes with both static and dynamic objects.

Optimized Visibility Functions for Revectorization-Based Shadow Mapping(RBSM)(Source Code included)

Revectorization-based shadow mapping minimizes shadow aliasing by revectorizing the jagged shadow edges generated with shadow mapping, keeping low memory footprint and real-time performance for the shadow computation. However, the current implementation of RBSM is not so well optimized because its visibility functions are composed of a set of 43 cases, each one of them handling a specific revectorization scenario and being implemented as a specific branch in the shader.

Here, they take advantage of the shadow shape patterns to reformulate the RBSM visibility functions, simplifying the implementation of the technique and further providing an optimized version. Results indicate that this implementation runs faster than the original, while keeping the same visual quality and memory consumption.

Non-Linearly Quantized Moment Shadow Maps(Source Code included) Moment-Based Methods

Moment Shadow Maps enable direct filtering to accomplish proper antialiasing of dynamic hard shadows. For each texel, the moment shadow map stores four powers of the depth in either 64 or 128 bits. After filtering, this information enables a heuristic reconstruction. However, the rounding errors introduced at 64 bits per texel necessitate a bias that strengthens light leaking artifacts noticeably. In this paper, they propose a non-linear transform which maps the four moments to four quantities describing the depth distribution more directly.

As a prerequisite for the use of its quantization schemes, they propose a compute shader that applies a resolve for a multisampled shadow map and a 9² two-pass Gaussian filter in shared memory. The quantized moments are written back to device memory only once at the very end. This approach makes the technique roughly as fast as Variance Shadow Mapping without any of its drawbacks.

Since hardware-accelerated bilinear filtering is incompatible with non-linear quantization, they employ blue noise dithering as an inexpensive alternative to manual bilinear filtering.
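For orientation: moment shadow maps generalize the two-moment variance shadow map, which stores E[z] and E[z²] per texel and bounds visibility with Chebyshev's inequality. A sketch of that simpler ancestor (the four-moment reconstruction in the paper above is considerably more involved):

```python
def filtered_moments(depths):
    """What filtering a moment map computes: averages of z and z^2
    over the filter footprint."""
    n = len(depths)
    return (sum(depths) / n, sum(z * z for z in depths) / n)

def chebyshev_visibility(mean, mean_sq, receiver_depth, min_variance=1e-5):
    """Two-moment (VSM) shadow test: an upper bound on the fraction of
    filtered depth samples at or beyond receiver_depth."""
    if receiver_depth <= mean:
        return 1.0  # receiver in front of the average occluder: lit
    variance = max(mean_sq - mean * mean, min_variance)
    d = receiver_depth - mean
    return variance / (variance + d * d)
```

Because Chebyshev only gives an upper bound, visibility can leak light where depth distributions overlap; the four non-linearly quantized moments exist precisely to tighten that reconstruction.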

Irregular Adaptive Shadow Maps

A tile based partitioning scheme is provided to facilitate dynamic customizability and allow for adaptive distribution of resources at run time which leads to more efficient use of memory resources.

Efficient High-Quality Shadow Maps

This thesis provides an efficient GPU implementation of various optimizations to basic shadow mapping. The optimizations, which echo the idea of making full use of the available resolution and precision, are simple to implement, provide a great deal of improvement and allow for some amount of dynamic refinement of shadows with change in the camera view.

An evaluation of moving shadow detection techniques

Adaptive Depth Bias for Soft Shadows Perception-Based Filtering

Resolution Estimation for Shadow Mapping

Shadows of moving objects may cause serious problems in many computer vision applications, including object tracking and object recognition. In common object detection systems, due to having similar characteristics, shadows can be easily misclassified as either part of moving objects or independent moving objects. To deal with the problem of misclassifying shadows as foreground, various methods have been introduced. This paper addresses the main problematic situations associated with shadows and provides a comprehensive performance comparison on up-to-date methods that have been proposed to tackle these problems.

**Terrain Rendering/Level-of-Detail(LOD)/Occlusion Culling/Caching**

GPU-Based Occlusion Culling Hierarchical-Z with Compute Shaders/SSBOs Frustum Culling with Voxels

Software Occlusion Culling Optimizing Culling(a lot of info)

Vertex Discard Occlusion Culling Clip Space Sample Culling Stochastic Light Culling

Performing visibility determination in densely occluded environments is essential to avoid rendering unnecessary objects and achieve high frame rates. In this implementation, the image-space Occlusion Culling algorithm is done completely on the GPU, avoiding the latency introduced by returning the visibility results to the CPU. It utilizes the GPU rendering power to construct the Occlusion Map and then performs the image-space visibility test by splitting the region of the screen-space occludees into parallelizable blocks. This implementation is especially applicable to low-end graphics hardware, and the visibility results are accessible by GPU shaders. It can be applied with excellent results in scenes where pixel shaders alter the depth values of the pixels, without interfering with hardware Early-Z culling methods. They demonstrate the benefits and show the results of this method in real-time densely occluded scenes.
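The image-space test boils down to: rasterize occluders into a depth buffer, build a conservative coarse version, then test each occludee's screen rect against it. A CPU sketch (the block size and max-depth reduction are my simplification of the block-splitting scheme):

```python
def build_hiz(depth, block):
    """Coarse occlusion map: the max (farthest) depth per block of the
    full-resolution buffer, so the test stays conservative."""
    h, w = len(depth), len(depth[0])
    return [[max(depth[y][x]
                 for y in range(by, min(by + block, h))
                 for x in range(bx, min(bx + block, w)))
             for bx in range(0, w, block)]
            for by in range(0, h, block)]

def is_occluded(hiz, block, rect, nearest_z):
    """rect = (x0, y0, x1, y1) in pixels, nearest_z = the occludee's
    closest depth. Occluded only if every covered block's farthest
    occluder is still nearer than the occludee."""
    x0, y0, x1, y1 = rect
    for by in range(y0 // block, y1 // block + 1):
        for bx in range(x0 // block, x1 // block + 1):
            if hiz[by][bx] >= nearest_z:
                return False  # some pixel there might see the occludee
    return True
```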

Sphere Projection

An Adaptive and Hybrid Approach to Revisiting the Visibility Pipeline

In computer graphics you very often want to know how big an object looks on screen, probably measured in pixels, or at least an upper bound of the pixel coverage, because that allows you to perform intelligent Level of Detail (LOD) for that object. For example, if a character or a tree is only a couple of pixels on screen, you probably want to render it with less detail. One easy way to get an upper bound of the pixel coverage is to embed your object in a bounding box or sphere, then rasterize the sphere or box and count the pixels. This requires complexity in your engine, and probably some delayed processing, as the result of that rasterization won't be immediately ready.

It would be cool if a tessellation shader or a geometry shader could tessellate or kill geometry on the fly based on the pixel coverage of the object, immediately.

The pixel coverage of a (bounding) sphere happens to have an analytic expression that can be evaluated with no more than one square root, so it's very compact.
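A transliteration of one such analytic expression (iq derives an equivalent one), returning the silhouette ellipse area measured on the image plane at distance fl; multiply by pixels-per-image-plane-unit squared to get pixels:

```python
import math

def projected_sphere_area(o, r, fl):
    """Exact screen-space area of a sphere's perspective silhouette
    (an ellipse). o = sphere center in camera space (camera looks down
    +z), r = sphere radius, fl = focal length. One sqrt, as advertised."""
    z2 = o[2] * o[2]
    l2 = o[0] * o[0] + o[1] * o[1] + z2   # squared distance to center
    r2 = r * r
    return (math.pi * fl * fl * r2 *
            math.sqrt(abs((l2 - r2) / (r2 - z2))) / abs(r2 - z2))
```

For an on-axis sphere this collapses to the classic pi*(fl*r)² / (d² - r²); off-axis, perspective stretches the silhouette into a larger ellipse, which the formula accounts for exactly.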

Height map compression techniques

In practice, many applications handle real-time rendering well with LOD schemes tailored to their needs. In such cases, a compression method tied to another concrete LOD scheme is not practical.

This method handles only the compression, so it can be used as a plug & play component in an existing real-time renderer. Its only job is to compress a block of terrain height samples sized 2ⁿ×2ⁿ and to provide fast progressive decompression of its mip-maps, while respecting the maximum error bound at every mip-map. The source code of the method is written modularly, so that any representation of the height samples can be compressed - doubles, floats or even arbitrary structures. It is inspired by C-BDAM - the compression method is extracted from the LOD scheme and simplified.

This approach introduces heavy redundancy of the data - a block corresponding to a certain quadtree node contains simplified blocks of its children and all these blocks are stored separately. The reason why this approach is used is that the user can navigate to any area almost immediately - only the data needed for the scene has to be fetched, without having to reconstruct it by traversing from the root. Moreover, this approach enables the user to flexibly extend the terrain data by high-resolution insets.

This algorithm should be able to compress a regular square block of height samples and progressively decompress it in real time, from the smallest mip-map to the largest one. Apart from this, the algorithm should not in any way interfere with the rendering pipeline of the application.
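The progressive-mip idea can be sketched as a residual pyramid: store the coarsest sample plus, per level, quantized corrections to the prediction from the coarser level, so the quantization step directly bounds the error at every mip. This is my simplification; C-BDAM's wavelet coder is far more sophisticated:

```python
def build_mips(block):
    """Mip chain of a square 2^n-sized height block via 2x2 averaging."""
    mips = [block]
    while len(mips[-1]) > 1:
        s = mips[-1]
        n = len(s) // 2
        mips.append([[(s[2*y][2*x] + s[2*y][2*x+1] +
                       s[2*y+1][2*x] + s[2*y+1][2*x+1]) / 4.0
                      for x in range(n)] for y in range(n)])
    return mips

def compress(block, step):
    """Keep the exact coarsest sample; per finer level, store residuals
    against the *reconstructed* coarser level, quantized to `step`
    (closed loop, so errors never accumulate across mips)."""
    mips = build_mips(block)
    recon = [[mips[-1][0][0]]]
    residuals = []
    for level in range(len(mips) - 2, -1, -1):
        fine = mips[level]
        n = len(fine)
        res = [[round((fine[y][x] - recon[y // 2][x // 2]) / step)
                for x in range(n)] for y in range(n)]
        residuals.append(res)
        recon = [[recon[y // 2][x // 2] + res[y][x] * step
                  for x in range(n)] for y in range(n)]
    return mips[-1][0][0], residuals

def decompress(coarsest, residuals, step):
    """Progressively rebuild all mips, coarse to fine."""
    cur = [[coarsest]]
    levels = [cur]
    for res in residuals:
        n = len(res)
        cur = [[cur[y // 2][x // 2] + res[y][x] * step
                for x in range(n)] for y in range(n)]
        levels.append(cur)
    return levels
```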

Fast Terrain Rendering with Continuous Detail on a Modern GPU Real-Time LOD (Algorithms, Code etc)

Applying Tessellation to Clipmap Terrain Rendering More(solutions for clipmap/heightmap issues)

Implementation Details of Sample App Using Hybrid Terrain Representation(voxels+heightmap, and deformation)

How to achieve fast terrain rendering using a combination of the best techniques available (that they knew of) at the time this was posted.

Downsampling Scattering Parameters for Rendering Anisotropic Media

A new approach to compute scattering parameters at reduced resolutions. Many detailed appearance models involve high-resolution volumetric representations. Such level of detail leads to high storage but is usually unnecessary especially when the object is rendered at a distance. However, naïve downsampling often loses intrinsic shadowing structures and brightens resulting images.

This method computes scaled phase functions, a combined representation of single-scattering albedo and phase function, and provides significantly better accuracy while reducing the data size by almost three orders of magnitude.

They also show that modularity can be exploited to greatly reduce the amortized optimization overhead by allowing multiple synthesized models to share one set of downsampled parameters. Optimized parameters generalize well to novel lighting and viewing configurations.

A Caching System for a Dependency-aware Scene Graph(Poster)(Full Paper, Source Code included)

Efficient Management of Last-level Caches in Graphics Processors for 3D Scene Rendering Workloads

Insomniac Games’ CacheSim

This thesis proposes a scene graph caching system that automatically creates an alternative representation of selected subgraphs. This alternative representation constitutes a render cache in the form of a so-called instruction stream, which allows rendering the cached subgraph at lower CPU cost and thus more quickly than with a regular render traversal.

In order to be able to update render caches incrementally in reaction to certain scene graph changes, a dependency system was developed. This system provides a model for describing and tracking changes in the scene graph and enables the scene graph caching system to update only those parts of the render cache that need to be updated.

The actual performance characteristics of the scene graph caching system were investigated using a number of synthetic test scenes in different configurations. These tests showed that the caching system is most useful in scenes with a high structural complexity (high geometry count and/or deep scene graph hierarchies) and moderate primitive count per geometry.

**Animation/Physics**

Forward And Backward Reaching Inverse Kinematics(FABRIK)(Source Code) RBDL(Source Code)
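FABRIK itself is short enough to sketch: alternate a backward pass (pin the end effector to the target, walk to the root) and a forward pass (re-anchor the root), preserving segment lengths. A 2D version following the published algorithm (tolerances and details mine):

```python
import math

def fabrik(joints, target, tol=1e-4, max_iter=50):
    """Forward And Backward Reaching IK for a 2D chain.
    joints: list of [x, y]; consecutive segment lengths are preserved."""
    lengths = [math.dist(a, b) for a, b in zip(joints, joints[1:])]
    root = list(joints[0])
    if math.dist(root, target) > sum(lengths):
        # Target unreachable: stretch the chain straight toward it.
        for i, l in enumerate(lengths):
            d = math.dist(joints[i], target)
            t = l / d
            joints[i + 1] = [(1 - t) * joints[i][0] + t * target[0],
                             (1 - t) * joints[i][1] + t * target[1]]
        return joints
    for _ in range(max_iter):
        if math.dist(joints[-1], target) < tol:
            break
        # Backward pass: pin the end effector, walk toward the root.
        joints[-1] = list(target)
        for i in range(len(joints) - 2, -1, -1):
            d = math.dist(joints[i], joints[i + 1])
            t = lengths[i] / d
            joints[i] = [(1 - t) * joints[i + 1][0] + t * joints[i][0],
                         (1 - t) * joints[i + 1][1] + t * joints[i][1]]
        # Forward pass: re-anchor the root, walk back out.
        joints[0] = root[:]
        for i in range(len(joints) - 1):
            d = math.dist(joints[i], joints[i + 1])
            t = lengths[i] / d
            joints[i + 1] = [(1 - t) * joints[i][0] + t * joints[i + 1][0],
                             (1 - t) * joints[i][1] + t * joints[i + 1][1]]
    return joints
```

No matrices and no trigonometry per joint, which is a large part of why the method is popular for real-time rigs.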

An Efficient Energy Transfer Inverse Kinematics Solution(Source Code - page 102) Dual-Laplacian Cage-Based IK

Generalized Canonical Time Warping(Source Code)

Implicit Skinning: Real-Time Skin Deformation with Contact Modeling More

REPLACE quaternions Optimizing the Rendering Pipeline of Animated Models Dual Quaternions ACL

This mesh editing technique allows users to produce visually pleasing deformations. The linearity of the underlying objective functional makes the processing very efficient and improves the effectiveness of deformable surface computation. The method offers the possibility to encode global topological changes of the shape with respect to local influence and allows animators to re-use the estimated parameterization.

Texture Distortion(Another,Tessellation) Optimizing a Water Simulation(More) Scalable GPU Fluid Simulation

Optimization for Large-Scale Real-Time Water Simulation Real-time Interactive Water Waves Another Implementation

Real-Time Screen Space Fluid Rendering with Scene Reflections SPH-based Rendering of Fluid Transport Dynamics

To solve the singularity problem of water waves obtained with the traditional model, a hybrid deep-shallow-water model is established using an automatic coupling algorithm. It can handle arbitrary water depths and different underwater terrain. As a characteristic feature of coastal terrain, the coastline is detected with collision detection. Then, unnecessary water grid cells are simplified by an automatic simplification algorithm according to depth. Finally, the model is calculated on the CPU and the simulation is implemented on the GPU.

Animated Foliage and Cloth(gif of this) Meshfree C2-Weighting for Shape Deformation

GPU Assisted Self-Collisions of Cloths Real Time Cloth Simulation with B-spline Surfaces(Source Code)

Cloth Shading Fixed Spherical n Points Density-Based Clustering Technique for Cloth Simulation Tearing

The GPU is very effective at vector math, and the vertex program is already looping through all of the vertices to convert them to screen space and send them to the fragment program. Before this, we can simply add a value to these positions before sending them down the line.

By supplying properties for the material stiffness, wind direction and wind speed, a more realistic look can be achieved which correlates more with what actually happens in nature. A second property, describing how firmly parts of an object are attached, could be supplied through vertex color.

The second scenario relates to vegetation. By building the bending properties into the shader, you wouldn't need collision detection for each individual plant; instead, just look at the distance between vertex and player and scale the bending based on this.
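That vertex-program trick can be sketched as follows; the sway function and all constants are my invention, but the structure (offset along the wind direction, scaled by a painted attachment weight) follows the description above:

```python
import math

def bend_vertex(pos, attach, wind_dir, wind_speed, stiffness, time):
    """Cheap wind sway as applied in a vertex shader: displace the
    vertex along the wind direction before projection.

    attach: 0 = fully anchored (trunk), 1 = free to move (leaf tip),
    e.g. painted into vertex color. stiffness is the material's
    resistance; the position-dependent phase de-synchronizes plants."""
    phase = time * wind_speed + 0.5 * (pos[0] + pos[2])
    # Two sine octaves give a less mechanical sway than a single sine.
    sway = (math.sin(phase) + 0.5 * math.sin(2.3 * phase)) * 0.5
    amplitude = wind_speed * attach / stiffness
    return (pos[0] + wind_dir[0] * sway * amplitude,
            pos[1] + wind_dir[1] * sway * amplitude,
            pos[2] + wind_dir[2] * sway * amplitude)
```

For the player-interaction variant, amplitude would instead fall off with the vertex-to-player distance and point away from the player.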

Zero-byte AABB-trees Dynamic box pruning revisited OPCODE

Collision Detection between Dynamic Rigid Objects and Static Displacement Mapped Surfaces

Output Sensitive Collision Detection for Unisize Boxes

Cubemap based collision detection Generic Convex Collision Detection using Support Mapping

Rigid Body Physics for Synthetic Data Generation 6D Frictional Contact for Rigid Bodies

libccd(Source Code) - 3-clause BSD Licensed. Open Source.

The usual algorithm requires computing an octree for the scenery meshes. Collisions between the character and the scenery are then computed using a sphere-octree collision detection algorithm. The octree can either be precomputed and included in the mesh data, or computed when the application loads.

This algorithm computes physics by rendering a world-axis-aligned depth cubemap. It can work with low-end graphics devices, and computations are done mainly on the GPU.

Rubikon (Source 2 Physics)(Valve) More

Covers the entire development of Rubikon in detail.

Newton Dynamics

zlib licensed. Open-Source. Cross-platform. Written in C++.

Used in Mount and Blade/Penumbra/Amnesia games. Implements a deterministic solver, which is not based on traditional LCP or iterative methods, but possesses the stability and speed of both respectively. This feature makes Newton Dynamics a tool not only for games, but also for any real-time physics simulation.

Conformal Surface Morphing with Applications on Facial Expressions

Numerical Methods in Shape Spaces and Optimal Branching Patterns

Modeling and Compression of Motion Capture Data

Morphing is the process of changing one figure into another. Some numerical methods of 3D surface morphing by deformable modeling and conformal mapping are shown in this study. It is well known that, by the Riemann mapping theorem, there exists a unique Riemann conformal mapping from a simply connected surface onto the unit disk. A 3D deformable surface model can be built via various approaches, such as mutual parameterization from direct interpolation or surface matching using landmarks. In this paper, they take advantage of the unique representation of 3D surfaces by the mean curvatures and the conformal factors associated with the Riemann mapping.

In 3D surface morphing, as in traditional morphing approaches based on boundary representation, a warp has to be created via feature correspondence, and interpolation between shapes based on the warp is employed to generate the morphing sequence. By taking advantage of the conformal parameterization and the unique surface representation by conformal factor and mean curvature, the warp can easily be obtained by composing deformations from the Möbius transformation and the thin-plate matching function. To mitigate the non-isomorphism risk that usually occurs when matching largely deformed surfaces, a single mesh based on a geodesic frame is employed. As a result, the correspondence of the whole surface, including geometric and texture information, can be defined, and interpolation between the original and target surfaces can be computed by the usual cubic spline homotopy in a disk parametric domain. This non-linear iterative surface reconstruction algorithm can be accelerated with the multigrid method on a uniform mesh, from which multi-resolution surfaces can also be obtained.

Several numerical experiments on face morphing are presented to demonstrate the robustness of this approach.

A Cracking Algorithm for Destructible 3D Objects

Simulating Rigid Body Fracture with Surface Meshes

A Novel GPU-Based Deformation Pipeline

Fracturable Surface Model Fast Algorithm to Split and Reconstruct Triangular Meshes

Parallel explicit FEM algorithms using GPU's

By combining an indirect boundary integral formulation, explicit surface tracking, and a kernel-independent fast multipole method, the presented method handles rigid-body brittle fracture using only the boundary surface mesh.

Existing explicit mesh tracking methods are modified to support evolving cracks directly in the triangle mesh representation, giving highly detailed fractures with sharp features, independent of any volumetric sampling (unlike tetrahedral mesh or level set approaches) and avoiding the need for such volumetric calculations; the triangle mesh representation also allows simple integration into rigid body engines.

It is accurate, and at the same time computationally economical, and it successfully resolves crack evolution in various settings.

Fast Collision Culling in Large-Scale Environments Using GPU Mapping Function

Efficient and Reliable Self-Collision Culling using Unprojected Normal Cones

In order to take advantage of the high number of cores, a new mapping function is defined that enables GPU threads to determine the object pair to compute without any global memory access.

These optimized GPU kernel functions take the thread indexes and turn them into a unique pair of objects to test. A square-root approximation technique based on Newton's estimation is used, enabling the threads to perform only a few atomic operations.

A first characterization of the approximation errors is presented, enabling incorrect computations to be fixed. The I/O GPU streams are optimized using binary masks.
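The index-to-pair mapping can be illustrated by inverting triangular numbers; a Python sketch of the idea (the enumeration order is assumed; the paper's kernel replaces `math.sqrt` with a fast approximation refined by a Newton step, which is why the approximation errors need characterizing):

```python
import math

def pair_from_index(k):
    """Map a linear GPU thread index k to a unique object pair (i, j), i > j.

    Pairs are enumerated (1,0), (2,0), (2,1), (3,0), ...  The larger index is
    recovered by inverting the triangular number i*(i-1)/2 <= k; here we use
    math.sqrt and correct any floating-point rounding afterwards.
    """
    i = int((math.sqrt(8.0 * k + 1.0) + 1.0) / 2.0)
    # Guard against under/overshoot near exact triangular numbers.
    while i * (i - 1) // 2 > k:
        i -= 1
    while (i + 1) * i // 2 <= k:
        i += 1
    j = k - i * (i - 1) // 2
    return i, j
```

Each thread derives its pair from its own index alone, so no pair list ever has to be read from global memory.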

Defending Continuous Collision Detection against Errors

Numerical errors and rounding errors in continuous collision detection (CCD) can easily cause collision detection failures if they are not handled properly.

This paper demonstrates a set of simple modifications to make a basic CCD implementation failure-proof. Using error analysis, they prove the safety of these methods and they formulate suggested tolerance values to reduce false positives.

**Audio**

TinyOAL

Apache License v2.0. Open-Source. Cross-platform. Written for C/C++ and .NET.

A minimalist OpenAL Soft audio engine for quick implementation.

Steam Audio - (Requires a License but Free - Valve Corporation) - HRTF etc

Sound occlusion for virtual 3D environments(Source Code)

Efficient HRTF-based Spatial Audio for Area and Volumetric Sources

Efficient Approximation of HRTF in Subbands for Accurate Sound Localization(Source Code)

Geometric-based reverberator using acoustic rendering networks

Prioritized Computation for Numerical Sound Propagation

HRTF for XAudio2 + X3DAudio(Source Code)

Results indicate that the proposed algorithms preserve the salience of spatial cues, even for relatively high approximation tolerances, yielding computationally very efficient implementations.

**Anti-Aliasing**

Temporal Reprojection Anti-Aliasing(TRAA)(Source Code) TSCMAA

Source Code/Paper for TRAA used in Playdead's INSIDE.

Enhanced Subpixel Morphological Antialiasing(SMAA)(Detail) CMAA2

A very efficient GPU-based MLAA implementation, capable of handling subpixel features seamlessly, and featuring an improved and advanced pattern detection & handling mechanism.

Amortized Supersampling

Real-time rendering scheme that reuses shading samples from earlier time frames to achieve practical antialiasing of procedural shaders. Using a reprojection strategy, they maintain several sets of shading estimates at subpixel precision, and incrementally update these such that for most pixels only one new shaded sample is evaluated per frame.
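The incremental update at the heart of the scheme is a per-pixel history blend; a scalar Python sketch (parameter names and the rejection policy are assumptions of this illustration):

```python
def temporal_accumulate(history, current, alpha=0.1, valid=True):
    """Blend a reprojected history value with the newly shaded sample.

    A small alpha keeps many effective samples per pixel (roughly 1/alpha),
    which is where the amortization comes from; when reprojection fails
    (disocclusion), the stale history is rejected and we restart from the
    new sample.  Values are scalars here; a renderer does this per channel.
    """
    if not valid:
        return current                  # disocclusion: drop stale history
    return history + alpha * (current - history)
```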

Alpha to Coverage (Alpha Mipmaps) Anti-aliased Alpha Test

Efficient Dithering(Another) Improved box/triangle filtering

They solve the problems with alpha-tested edges by sampling the alpha and interpreting how much it covers the pixel, then dithering and distributing the result to an appropriate number of multisample samples.
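A possible sketch of that coverage step in Python (the sample count, dither source, and bitmask layout are assumptions of this illustration):

```python
def alpha_to_coverage(alpha, num_samples=4, dither=0.0):
    """Turn an alpha value into an MSAA coverage bitmask.

    alpha is interpreted as the fraction of the pixel the surface covers; a
    per-pixel dither offset (e.g. from a Bayer matrix) breaks up the visible
    banding between the discrete coverage levels, as the text describes.
    """
    covered = int(alpha * num_samples + dither)
    covered = max(0, min(num_samples, covered))
    mask = (1 << covered) - 1            # set the low 'covered' sample bits
    return mask
```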

Controlling and Sampling Visibility Information on the Image Plane

Visibility-induced aliasing can be reduced substantially by, first, choosing a suitable function space that admits a sampling theorem for the given locations; second, determining the pre-filtering of the step function for this space; third, constructing a sampling theorem with the given locations; and fourth, deriving the quadrature weights from the sampling theorem.

They applied their methodology to the classical setting of bandlimited functions, but also considered shift-invariant spaces, and demonstrated that the better spatial localization of the kernel functions in the latter setting, compared to the sinc function, also yields lower error rates.

Variance reduction using interframe coherence for animated scenes

In an animated scene, geometry and lighting often change in unpredictable ways. Rendering algorithms based on various methods are usually employed to precisely capture all features of an animated scene; however, these methods typically take a long time to produce a noise-free image.

In this paper, they propose a variance reduction technique which exploits coherence between frames.

Firstly, they introduce a dual cone model to measure the incident coherence intersecting camera rays in object space. Secondly, they allocate multiple frame buffers to store image samples from consecutive frames. Finally, the color of a pixel in one frame is computed by borrowing samples from neighboring pixels in current, previous, and subsequent frames. Experiments show that noise is greatly reduced by this method since the number of effective samples is increased by use of borrowed samples.
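That borrowing step can be sketched with plain buffers; a toy Python version (uniform weights instead of the paper's dual-cone coherence weighting; names and the window shape are assumptions):

```python
def filtered_pixel(frames, frame_idx, x, y, radius=1):
    """Estimate a pixel by borrowing samples from neighboring pixels in the
    current, previous, and next frame buffers, as the paper proposes.

    frames: list of 2D grids of scalar samples (one grid per frame).  A real
    implementation would weight borrowed samples by the coherence measure;
    this sketch averages uniformly inside the window.
    """
    total, count = 0.0, 0
    for f in range(max(0, frame_idx - 1), min(len(frames), frame_idx + 2)):
        grid = frames[f]
        for yy in range(max(0, y - radius), min(len(grid), y + radius + 1)):
            for xx in range(max(0, x - radius), min(len(grid[0]), x + radius + 1)):
                total += grid[yy][xx]
                count += 1
    return total / count
```

With a 3x3 window across three frames, up to 27 samples contribute to one pixel for the cost of one newly shaded sample.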

**Shaders/Effects**

ReShade (Shaders)

Shader Minifier(Source Code)(compiled exe) More

Shader Live-Reloading

Vertex Shader Tricks Compute Shaders(More) Low-level Shader Optimization(More)

Particle Systems Using 3D Vector Fields with Compute Shaders(Source Code - last pages)

GPU-based particle simulation(Source Code included) Practical Particle Lighting

Particle Simulation with GPUs GParticles(Source Code)

Particle systems and particle effects are used to simulate a realistic and appealing atmosphere in many virtual environments. However, they occupy a significant amount of computational resources. The demand for more advanced graphics increases with each generation, and likewise particle systems need to become increasingly detailed.

This thesis proposes a texture-based 3D vector field particle system, computed on the GPU, and compares it to an equation-based particle system.

Several tests were conducted comparing different situations and parameters. All of the tests measured the computational time needed to execute the different methods.
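The per-particle update such a compute shader performs can be sketched on the CPU; a toy Python version with a 2D field and nearest-texel sampling (all names and the sampling scheme are assumptions of this sketch):

```python
def sample_field(field, x, y):
    """Nearest-texel lookup into a 2D vector-field 'texture' (a grid of
    (vx, vy) tuples).  A GPU version would sample a 3D texture with
    hardware filtering; nearest sampling keeps the sketch short."""
    xi = max(0, min(len(field[0]) - 1, int(x)))
    yi = max(0, min(len(field) - 1, int(y)))
    return field[yi][xi]

def step_particles(particles, field, dt):
    """Advance each (x, y) particle along the field, as a compute shader
    would do with one particle per invocation."""
    out = []
    for x, y in particles:
        vx, vy = sample_field(field, x, y)
        out.append((x + vx * dt, y + vy * dt))
    return out
```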

A System for Rapid, Automatic Shader Level-of-Detail

Geometry-Aware Framebuffer Level of Detail

Using an optimized greedy search algorithm, adding parameter-binding-time processing capabilities (parameter shaders), and applying a simple but general simplification rule (ACSE) yields a system that can process complex game-style shaders into policies featuring simplified shaders similar to those created by hand.

http://graphics.cs.cmu.edu/projects/lodgen/

Deep Shading: Convolutional Neural Networks for Screen-Space Shading

Deep Shading leverages deep learning to turn attributes of virtual 3D scenes into appearance. CNNs can actually model any screen-space shading effect such as ambient occlusion, indirect light, scattering, depth-of-field, motion blur, or anti-aliasing as well as arbitrary combinations of them at competitive quality and speed.

High Dynamic Range Imaging Pipeline on the GPU

In this article they aim to fill a gap by providing a detailed description of how the HDRI pipeline, from HDR image assembly to tone mapping, can be implemented exclusively on the GPU. They also explain the trade-offs that need to be made to improve efficiency, and show timing comparisons for CPU vs. GPU implementations.

Another goal of this paper is to demonstrate how both the global and local versions of this operator can be efficiently implemented by using fragment shaders. Different from previous work, they show that the implementation of this operator neither requires expensive convolution nor Fourier transform operations to compute local adaptation luminances.

Extending the Graphics Pipeline with Adaptive, Multi-Rate Shading

Due to complex shaders and high-resolution displays (particularly on mobile graphics platforms), fragment shading often dominates the cost of rendering in games. To improve the efficiency of shading on GPUs, they extend the graphics pipeline to natively support techniques that adaptively sample components of the shading function more sparsely than per-pixel rates.

They perform an extensive study of the challenges of integrating adaptive, multi-rate shading into the graphics pipeline, and evaluate two- and three-rate implementations that they believe are practical evolutions of modern GPU designs.

Unified Surface Shaders

($count+1)th attempt to avoid writing redundant and hard to maintain shader code.

Implementing surface shaders that follow the same rules but flow through another render path usually requires writing a lot of tedious, repetitive code.

But since we have a full C preprocessor at our disposal, we can unify our different shader versions into a single file and split out the differences via preprocessor conditionals.
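The idea can be mimicked outside the C preprocessor too; a toy Python splicer over a single shader source (the `#ifdef` evaluator and the GLSL snippet are illustrative, not any engine's actual code):

```python
SHADER_SOURCE = """\
vec4 shade() {
#ifdef DEFERRED
    vec4 albedo = texture(gbufferAlbedo, uv);
#else
    vec4 albedo = texture(diffuseMap, uv);
#endif
    return albedo * lighting();
}
"""

def preprocess(source, defines):
    """Minimal #ifdef/#else/#endif evaluator: one source file, many shader
    variants, which is the unification the post describes.  (A real build
    would hand this to the C preprocessor or the shader compiler.)"""
    out, stack = [], [True]
    for line in source.splitlines():
        token = line.strip().split(" ")[0]
        if token == "#ifdef":
            stack.append(stack[-1] and line.split()[1] in defines)
        elif token == "#else":
            stack[-1] = not stack[-1] and stack[-2]
        elif token == "#endif":
            stack.pop()
        elif stack[-1]:
            out.append(line)
    return "\n".join(out)
```

One source file, `preprocess(SHADER_SOURCE, {"DEFERRED"})` for the deferred path and `preprocess(SHADER_SOURCE, set())` for forward: the differences live in one place instead of two drifting copies.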

Genetic Programming for Shader Simplification

Fast multi-resolution shading of acquired reflectance using bandwidth prediction

Precision Selection for Energy-Efficient Pixel Shaders

This approach computes a series of increasingly simplified shaders that expose the inherent trade-off between speed and accuracy. Compared to existing automatic methods for pixel shader simplification [Olano et al. 2003; Pellacini 2005], this approach considers a wider space of code transformations and produces faster and more faithful results. They further demonstrate how the cost function can be rapidly evaluated using graphics hardware, which allows tens of thousands of shader variants to be considered during the optimization process. The approach is also applicable to multi-pass shaders and perceptually based error metrics.

A Fresh Look at Generalized Sampling Path Space Filtering

Integration with Stochastic Point Processes Forced Random Sampling(Code)

Low-Discrepancy Blue Noise Sampling(More) Advancing Front Animating Noise For Integration Over Time

Filtering Non-Linear Transfer Functions on Surfaces Supplemental(Algorithms, Notes, Code etc, ALL included)

Discovering New Monte Carlo Filters BRDF Sampling etc Multiple Importance Sampling Bayesian WLR Generalized

Combining Reprojection and Adaptive Sampling Pixel Cache Light Tracing

The framework decomposes a filter into two parts: a compactly supported continuous-domain function and a digital filter. The work broadly summarizes the key aspects of the framework before delving into specific applications in graphics, and, using new notation, concisely presents and extends several key techniques.

In addition, it demonstrates benefits for prefiltering in image downscaling and supersample-based rendering, and analyzes the effect that generalized sampling has on noise.

**Meshes**

Adaptive GPU Tessellation with Compute Shaders(Source Code) Feature-Adaptive Catmull-Clark Adaptive Quadtrees

Simplified and Tessellated Mesh for Realtime High Quality Rendering 3D Mesh Simplification meshoptimizer

Evaluating the visibility threshold for a local geometric distortion on a 3D mesh and its applications

A GPU-based refinement scheme that is free from the limitations incurred by tessellation shaders. Specifically, the scheme allows arbitrary subdivision levels at constant memory cost. This is achieved by manipulating an implicit (triangle-based) subdivision scheme for each polygon of the scene in a dedicated compute shader that reads from and writes to a compact, double-buffered array. Performance of the implementation is both fast and stable. Naturally, the average GPU rendering time depends on how the terrain is shaded.

Multi-Resolution Meshes for Feature-Aware Hardware Tessellation More(Source Code - pages 34-37)

Feature-Adaptive Rendering of Loop Subdivision Surfaces on Modern GPUs

A general framework for the construction and rendering of non-uniform LODs suitable for hardware tessellation.

Its key component is a novel hierarchical representation of multiresolution meshes that allows finely controlling the topological locations of vertex splits and merges. They thus managed to relax the regularity of fractional tessellation while retaining the efficiency of the GPU's tessellation units.

Within the framework, they presented a dedicated mesh decimation scheme that can be driven by any edge-based error metric. In particular, by applying it with a feature-preserving geometric error, they leveraged hardware tessellation for feature-aware LOD rendering of meshes.

Fast Screen Space Curvature Estimation on GPU(Source-Shader Code/Demo etc)

Efficient Pixel-accurate Rendering of Animated Curved Surfaces Bilateral and Mean Shift Filtering

Quadratic Error Metric Mesh Simplification Algorithm Based on Discrete Curvature

Curvature is an important geometric property in computer graphics that provides information about the behavior of object surfaces. Exact curvature can only be calculated for a limited set of surface descriptions. Most of the time we are dealing with triangles, point sets, or some other discrete representation of the surface, for which curvature computation is problematic. Moreover, most existing algorithms were developed for static geometry and can be slow for interactive modeling.

This paper proposes a screen space method which estimates the mean and Gaussian curvature at interactive rates. The algorithm uses positions and normals to estimate the curvature from the second fundamental form matrix. Using the screen space has advantages over the classical approach: low-poly geometry can be used and additional detail can be added with normal and bump maps.

Convex Hull Problems(Streaming Geometry)(Source Code)

Reducing the number of points on the convex hull calculation using the polar space subdivision in E2

The convex hull is a well-studied problem with a large body of results and algorithms in a variety of contexts.

They consider three contexts: when only an approximate convex hull is required, when the input points come from a (potentially unbounded) data stream, and when layers of concentric convex hulls are required.

Existing algorithms for these problems either do not achieve optimal runtime and linear space, or are overly complex and difficult to implement and use in practice. This thesis remedies this situation by proposing novel algorithms that are both simple and optimal. The simplicity is achieved by independently computing four sets of monotone convex layers in O(n log n) time and linear space, which are then merged together in O(n log n) time.
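The monotone-chain construction such approaches build on fits in a few lines; a standard Python version (this is Andrew's algorithm, not necessarily the thesis's exact variant):

```python
def convex_hull(points):
    """Andrew's monotone chain: sort once, then build lower and upper hulls
    with a linear scan -- O(n log n) time and linear space, matching the
    bounds discussed above.  Returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and (
                (h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0]) <= 0):
                h.pop()                 # drop points that make a right turn
            h.append(p)
        return h

    lower = half(pts)
    upper = half(reversed(pts))
    return lower[:-1] + upper[:-1]      # endpoints are shared; drop duplicates
```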

A General Framework for Constrained Mesh Parameterization Volume Encoded UV-Maps BFF

Real Time Rendering of Parametric Surfaces on the GPU (Algorithms, Notes, Code etc) Möbius Registration

Parameterizing or flattening a triangle mesh is necessary for many applications in computer graphics and geometry. Certain downstream applications require adherence to more general, geometric constraints, possibly at the cost of higher distortion. By means of this method, various geometric features, such as lines, circular arcs, and subregions, can be constrained while the energy is also minimized, providing a more general solution than previous approaches. The presented framework is motivated by the As-Rigid-As-Possible parameterization method, and its effectiveness is demonstrated through several examples. The method can easily be adapted to parameterization methods that minimize alternative distortion measures.

Geometry Batching Using Texture-Arrays

Batching can be used to group and sort geometric primitives into batches to reduce the number of required state changes, whereas the size of the batches determines the number of required draw-calls, and therefore, is critical for rendering performance.

For example, in the case of texture atlases, which provide an approach for efficient texture management, the batch size is limited by the efficiency of the texture-packing algorithm and the texture resolution itself.

This paper presents a pre-processing approach and rendering technique that overcomes these limitations by further grouping textures or texture atlases and thus enables the creation of larger geometry batches. It is based on texture arrays in combination with an additional indexing schema that is evaluated at run-time using shader programs.

Basically, it facilitates a flexible partitioning of geometry.

**Textures**

MinLod Mipmap(Source Code) Decals Texture tiling/swizzling ESRGAN

Bindless(Source Code) Virtual Texturing(VT)(Source Code) GPU Driven Adaptive VT ARB_sparse_texture Real VT

The Implementation of a Scalable Texture Cache(Source Code) Incremental loading of terrain textures min-max mip

Virtual texturing is a solution to the problem of real-time rendering of scenes with vast amounts of texture data which does not fit into graphics or main memory. Virtual texturing works by preprocessing the aggregate texture data into equally-sized tiles and determining the necessary tiles for rendering before each frame. These tiles are then streamed to the graphics card and rendering is performed with a special virtual texturing fragment shader that does texture coordinate adjustments to sample from the tile storage texture.
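The page-determination step can be sketched as a pure function from texture coordinates to a tile ID; a hypothetical Python version (the sizes and the mip clamping policy are illustrative, not any implementation's actual values):

```python
def tile_request(u, v, mip, vt_size=16384, tile_size=128):
    """Map a texture coordinate and mip level to the (tile_x, tile_y, mip)
    page that must be resident before rendering -- the per-frame tile
    determination step described above."""
    level_size = max(tile_size, vt_size >> mip)       # texels at this mip
    tiles = level_size // tile_size
    tx = min(tiles - 1, int(u * tiles))
    ty = min(tiles - 1, int(v * tiles))
    return (tx, ty, mip)
```

Requested IDs are deduplicated and streamed in; the fragment shader then remaps UVs into wherever each tile landed in the physical tile-cache texture.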

GST(GPU-decodable Supercompressed Textures)(Source Code)

Real-time BC6H Compression on GPU(Source Code) bc7enc16 ASTC(Codec) RGBV

Modern GPUs supporting compressed textures allow interactive application developers to save scarce GPU resources such as VRAM and bandwidth. Compressed textures use fixed compression ratios whose lossy representations are significantly poorer quality than traditional image compression formats such as JPEG. They present a new method in the class of supercompressed textures that provides an additional layer of compression to already compressed textures. Texture representation is designed for endpoint compressed formats such as DXT and PVRTC and decoding on commodity GPUs. They apply this algorithm to commonly used formats by separating their representation into two parts that are processed independently and then entropy encoded. Method preserves the CPU-GPU bandwidth during the decoding phase and exploits the parallelism of GPUs to provide up to 3X faster decode compared to prior texture supercompression algorithms. Along with the gains in decoding speed, the method maintains both the compression size and quality of current state of the art supercompressed texture representations.

Normal Mapping For Triplanar Shader Normal Mapping Without Precomputed Tangents

Horizon Occlusion for Normal Mapped Reflections

Semi-Calibrated Near-Light Photometric Stereo

Guided Robust Matte-Model Fitting for Accelerating Multi-light Reflectance Processing Techniques

Optimisation of photometric stereo methods by non-convex variational minimisation

More efficient forms of Normal Maps.

**AI/Scripting**

Compromise-free Pathfinding on a Navigation Mesh(Source Code)

Adaptive Layered Goal Oriented Action Planning(GOAP)(Source Code)

Dynamic and Robust Local Clearance Triangulations

An optimization of the A* algorithm to make it closer to human pathfinding behavior

Time-Bounded Best-First Search for Reversible and Non-reversible Search Graphs

Refers to a simplified STRIPS-like planning architecture specifically designed for real-time control of autonomous character behavior in games, aiming to create highly dynamic AI.

**State-of-the-Art/Comparisons/Roundups/Surveys/Analysis**

Optimization Techniques for 3D Graphics Deployment More Debris: Opening the box

Rendering massive 3D scenes in real-time

A Catalog of Stream Processing Optimizations

Continuity and Interpolation Techniques More in Detail(Code/Samples/Algorithms)

Feature Aware Sampling and Reconstruction

Recent Advances in Adaptive Sampling and Reconstruction for Monte Carlo Rendering

Anti-Aliased Low Discrepancy Samplers for Monte Carlo Estimators in Physically Based Rendering Another

A survey of photon mapping state-of-the-art research and future challenges

Theory and Numerical Integration of Subsurface Light Transport

Real-Time Rendering Fourth Edition, Real-Time Ray Tracing

Light Shafts Rendering for Indoor Scenes

Global Illumination in Participating Media

Scalable Algorithms for Height Field Illumination

Ambient Occlusion on Mobile: an empirical comparison (Source Code - last pages)

Temporal Coherence Methods in Real-Time Rendering(Warping)

Transparency and Anti-Aliasing Techniques for Real-Time Rendering

Filtering Approaches for Real-Time Anti-Aliasing

Algorithms for Efficient Computation of Convolution

Kernel optimization by layout restructuring

A Bigger Mathematical Picture for Computer Graphics Intersection

3D mesh compression: survey, comparisons and emerging trends

On Some Interactive Mesh Deformations

Adaptive Physically Based Models in Computer Graphics

Efficient encoding of texture coordinates guided by mesh geometry

Methods for Avoiding Round-off Errors on 2D and 3D Geometric Simplification

Fundamental computational geometry on the GPU

Real-time Rendering Techniques with Hardware Tessellation

Combining displacement mapping methods on the GPU for real-time terrain visualization

Comparison of spherical cube map projections used in planet-sized terrain rendering

Courses/Books/Presentations that provide useful info/analyses of most of the most useful Shadow Map/Shadow Volume and Hard/Soft/Volumetric Shadow Techniques Shadow Mapping Algorithms

A Comprehensive Study on Pathfinding Techniques for Robotics and Video Games

**??? ... hmm**

Smarter Screen Space Shading

A general approach, called deep screen space, with which a variety of light transport aspects can be simulated.

This approach is then further extended to additionally handle scenes containing participating media like clouds.

Shows how to improve the correctness of screen space and related algorithms by accounting for the mutual visibility of points in a scene. After that, they take a completely different point of view on image generation, using a learning-based approach to approximate a rendering function: neural networks can hallucinate shading effects which otherwise have to be computed using costly analytic computations. Finally, they present a holistic framework for dealing with phosphorescent materials in computer graphics, covering all aspects from acquisition of real materials, to easy editing, to image synthesis.

Parallel Computing and Optimization for Radiosity(Source Code etc)

Radiosity for Real-Time Simulations of Highly Tessellated Models

Real-Time Dynamic Radiosity for High Quality Global Illumination

Perspective-Driven Radiosity Image-Space Radiosity

GPU Enhanced Algorithms for Radiosity and Shadow Volume Rendering

Techniques based around Radiosity that provide unique advantages.

Shadows and Reflections Part2(Reflections Shadows) Denoising Surfaces and Light Scattering Fire and Smoke

Annotated Realtime Raytracing(Source Code) Q2 Realtime Pathtracer(Project) GPU RT(Code) DIRT(Demo Shaders)

Ray Tracing Gems Gideon NanoRT for Mobile Stochastic Path Tracer

For practical pipelines À-Trous PSVGF Ray-Triangle Intersection(Source) Another More Batching(Packets)

Texture LOD GPU Ray Tracer Using Ray Marching and Distance Fields 3D grid

Ray stream techniques augment the fast single-ray traversal with increased utilization of vector units and leverage memory bandwidth for batches of rays. Despite their success, the proposed implementations suffer from high bookkeeping cost and batch fragmentation, especially for small batch sizes.

Various contributions here make it all the way to real-time.

A Temporal Stable Distance To Edge Anti-aliasing

Improved Geometry Buffer Anti-Aliasing(GBAA+)(Source Code)

Triangle-based Geometry Anti-Aliasing(TGAA)

The implementation can, without any sub-pixel information, prevent temporal instability and solve aliasing artifacts during a post-render pass by storing extra geometric data in a pre-render pass. It is thus a real alternative to state-of-the-art post-processing anti-aliasing solutions, in terms of both performance and quality, in high-end game engines and systems.

The reliance on hardware features for resolving triangle edges can easily be removed from the solution, making it implementable on a large variety of hardware. In that case, prototype 1 can be an excellent complement to anti-aliasing solutions such as multisampling, which cannot solve alpha-clipped edges.

Light Propagation Volumes

Stores lighting information from a light in a 3D grid. Every light stores which points in the world it lights up.

These points have coordinates in the world, which means you can stratify those coordinates in a grid. That way you save lit points (Virtual Point Lights) in a 3D grid and can use those initial points to spread light across the scene.
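The stratification step can be sketched as simple grid binning; a toy Python version (the VPL layout and intensity accumulation are assumptions of this sketch; the propagation passes between cells are not shown):

```python
def bin_vpls(vpls, grid_size, cell_size, origin=(0.0, 0.0, 0.0)):
    """Stratify Virtual Point Lights into a 3D grid, the first step of a
    Light Propagation Volume.  Each VPL is (position, intensity); each cell
    accumulates total intensity, which later propagation passes would
    spread to neighboring cells."""
    grid = {}
    for pos, intensity in vpls:
        cell = tuple(
            min(grid_size - 1, max(0, int((p - o) / cell_size)))
            for p, o in zip(pos, origin))
        grid[cell] = grid.get(cell, 0.0) + intensity
    return grid
```

A real LPV stores a low-order spherical-harmonics approximation of directional radiance per cell rather than a single scalar.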

BVH BVH splitting More TSS BVH Another Hashed Shading Octree Quadtree

Generic Hybrid CPU-GPU Parallelization Dynamic Data Structures for Scheduling

Fast Data Parallel Radix Sort Implementation by Avoiding Zero Bits Based on Divide and Conquer Technique Another

The algorithms implement several optimization techniques to take advantage of the HW architecture such as:

taking advantage of kernel fusion; using the synchronous execution of threads in a warp/wavefront to eliminate the need for barrier synchronization; using shared memory across threads within a group; managing bank conflicts; eliminating divergence by avoiding branch conditions and completely unrolling loops; using adequate group/thread dimensions to increase HW occupancy; and applying highly data-parallel algorithms to accelerate the scan operations.
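The histogram/scan/scatter skeleton those kernels optimize looks like this in a sequential Python sketch (no shared memory or warp-level tricks, just the algorithmic structure):

```python
def radix_sort(keys, bits_per_pass=8, key_bits=32):
    """LSD radix sort over fixed-width unsigned keys: the counting/scan
    structure mirrors the GPU version (histogram, prefix sum, scatter),
    minus the hardware-specific optimizations listed above."""
    buckets = 1 << bits_per_pass
    for shift in range(0, key_bits, bits_per_pass):
        count = [0] * buckets
        for k in keys:                       # histogram pass
            count[(k >> shift) & (buckets - 1)] += 1
        total = 0
        for i in range(buckets):             # exclusive prefix sum (scan)
            count[i], total = total, total + count[i]
        out = [0] * len(keys)
        for k in keys:                       # stable scatter
            digit = (k >> shift) & (buckets - 1)
            out[count[digit]] = k
            count[digit] += 1
        keys = out
    return keys
```

On the GPU, each of the three passes becomes a data-parallel kernel, and the scan is where most of the listed optimizations pay off.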

Revised fast convolution Efficient FFT Algorithms

Convolution is a mathematical tool used in filtering, correlation, compression, and many other applications. Although the concept of convolution is not new, efficient computation of convolution is still an open topic. As the burden of data is constantly increasing, there is a growing demand for fast manipulation of large data.

Fast convolution methods have been proposed that recursively update the result whenever one new signal sample, or a new small portion of samples, in the given period N of a realization x(n) replaces an old sample or portion of samples. The number of operations is substantially reduced by the recursive expression in comparison with the ordinary FFT procedure, which can only be used when the sample values are fixed.
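The FFT route those recursive schemes compete with is the convolution theorem; a small pure-Python sketch (power-of-two lengths only; a real implementation would use an optimized FFT library):

```python
import cmath

def fft(x, inverse=False):
    """Radix-2 Cooley-Tukey FFT (len(x) must be a power of two)."""
    n = len(x)
    if n == 1:
        return list(x)
    sign = 1.0 if inverse else -1.0
    even = fft(x[0::2], inverse)
    odd = fft(x[1::2], inverse)
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def circular_convolve(a, b):
    """Convolution theorem: pointwise-multiply the spectra, then invert.
    O(N log N) versus O(N^2) for the direct sum -- the fixed-samples
    baseline the recursive update schemes above improve on."""
    fa, fb = fft(a), fft(b)
    prod = [x * y for x, y in zip(fa, fb)]
    inv = fft(prod, inverse=True)
    return [v.real / len(a) for v in inv]
```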

*Last edited by ThaOneDon (Today 13:46:29)*

Offline

**ImNotQ009****Moderator**

We already have parallax occlusion mapping

Offline

**ThaOneDon****Member**

I guess i should always take a look at Cube engine's docs first as well before making suggestions?

:)

Still SSDO is quite interesting...

Offline

**ThaOneDon****Member**

Update 7

Offline

**Calinou****Moderator**

Now, add all this by yourself.

*Last edited by Calinou (2014-11-09 10:35:52)*

Offline

**ThaOneDon****Member**

Wish it was that easy, still there's tons of work to look forward to.

I hope all of this is useful.

*Last edited by ThaOneDon (2016-01-29 16:23:36)*

Offline

**ThaOneDon****Member**

Update 9

Offline

**ThaOneDon****Member**

I'm absorbing more research papers at the moment.

UPDATE: Added UPDATE 10

*Last edited by ThaOneDon (2016-08-29 05:51:46)*

Offline

**eihrul****Administrator**

Unless you've actually read all the stuff in here and can give actual descriptions of why each of these papers is individually interesting, I am going to have to delete this thread as spam.

Offline

**ThaOneDon****Member**

OK. It's going to take some time though, there's a lot to cover.

*Last edited by ThaOneDon (2016-08-29 05:51:17)*

Offline

**noman222****Member**

Tesseract is a great game, and bots are fun, but what's an online based game without a good sized player base?

A game like this would have a good chance to fly if it got into steam greenlight. Add the facts: that it's free, include steam workshop support for sharing maps and making mods, it's a tribute to old school gaming (a bit), and there'll be an awesome player base for lots of fun.

What's more, we'll be tapping into a huge source of ideas and good map designers, and, if it's not too hard, let tesseract use steam servers for multiplayer.

What do you think? Is it worth a try?

_______________________

Noman

Offline

**spikeymikey0196****Member**

noman222 wrote:

Tesseract is a great game, and bots are fun, but what's an online based game without a good sized player base?

A game like this would have a good chance to fly if it got into steam greenlight. Add the facts: that it's free, include steam workshop support for sharing maps and making mods, it's a tribute to old school gaming (a bit), and there'll be an awesome player base for lots of fun.

What's more, we'll be tapping into a huge source of ideas and good map designers, and, if it's not too hard, let tesseract use steam servers for multiplayer.

What do you think? Is it worth a try?

_______________________

Noman

As much as I get what you're saying, you have to realise that Tesseract isn't anywhere near ready to be put onto Greenlight.. The developers know this, otherwise it would be on there.

You're also missing the fact that this is in very early stages and it would take ages to get it to actually be successful on Greenlight.. there's just too little at the moment to work with.. although the userbase does need to grow, it won't happen just yet :3

Offline

**ThaOneDon****Member**

There are a few conditions needed for that to happen.

Greenlight also implies a short timeframe to finish the game, and that of course would put stress on development.

Right now the engine/game is progressing at a steady and deliberate pace. That's what I, and I'm sure the team, want: to make small but meaningful changes.

If anyone is willing to use the engine to make something interesting for Steam Greenlight, license-wise it shouldn't be a problem. Don't use the stuff from "media"; everything else is A-OK.

:)

Offline

**ThaOneDon****Member**

Massive Updates 18/11/2014

Offline

**ThaOneDon****Member**

More Updates 19/11/2014

Tech

*Line Space Gathering for Single Scattering in Large Scenes

*ManyLoDs: Parallel Many-View Level-of-Detail Selection for Real-Time Global Illumination

*Improving Performance and Accuracy of Local PCA

Performance saving

*Importance Caching for Complex Illumination

*Fast Parallel GPU-Sorting Using a Hybrid Algorithm

Shaders

*3D Unsharp Masking for Scene Coherent Enhancement

*Precision Selection for Energy-Efficient Pixel Shaders

*Bidirectional Light Transport with Vertex Merging

Offline

**ThaOneDon****Member**

Updates 20/11/2014

Tech

*Sample Distribution Shadow Maps

*Depth Interval Grid Displacement Mapping

*Frostbite Engine Tech (incredibly advanced and performance friendly)

Performance Saving

*Parallel View-Dependent Level-of-Detail Control

*Efficient Interactive Rendering of Detailed Models with Hierarchical Levels of Detail

*Last edited by ThaOneDon (2014-11-20 16:57:07)*

Offline

**spikeymikey0196****Member**

Gonna add slightly to the list with this:

DOT Engine AI: https://github.com/MatrixCompSci/DOT

Offline

**ThaOneDon****Member**

Updates 21/11/2014

Tech

*Deep Opacity Maps

Performance Saving

*Frame Sequential Interpolation for Discrete Level-of-Detail Rendering

Shaders

*An Optimizing Compiler for Automatic Shader Bounding

*Last edited by ThaOneDon (2014-11-21 06:47:16)*

Offline

**ThaOneDon****Member**

Updates 22/11/2014

Tech

*PMAO (Photometric Ambient Occlusion)

*C-BDAM - Compressed Batched Dynamic Adaptive Meshes for Terrain Rendering

*Tile-Trees

Performance Saving

*Tuning Catmull-Clark Subdivision Surfaces (OpenSubdiv is based on these)

*An Interactive Perceptual Rendering Pipeline using Contrast and Spatial Masking

Shaders

*Implementing the Render Cache and the Edge-and-Point Image

*Last edited by ThaOneDon (2014-11-22 07:09:15)*

Offline

**RaZgRiZ****Moderator**

Maybe you should spend some time categorizing all of them and making the list more readable. It's a total mess.

Offline

**ThaOneDon****Member**

I'll see what I can do with the stuff that's related, but a lot of it isn't, so there's really no good way of categorizing it.

Offline

**RaZgRiZ****Moderator**

ThaOneDon wrote:

I'll see what I can do with the stuff that's related, but a lot of it isn't, so there's really no good way of categorizing it.

At least make it prettier. Hide the links inside the URL tag and use just the text instead. That's one way to do it.

*Last edited by RaZgRiZ (2014-11-22 21:55:38)*

Offline

**ThaOneDon****Member**

Working on it

DONE

*Last edited by ThaOneDon (2014-11-23 08:31:51)*

Offline

**RaZgRiZ****Moderator**

ThaOneDon wrote:

Working on it

DONE

Err, add a little spacing too and some title sizing. It's not mentioned, but I think this forum supports basic BBCode, so it should be possible.
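For example, a minimal sketch of that markup, assuming this forum supports the common `[url]`, `[b]`, and `[size]` BBCode tags (the exact tag set and size values depend on the forum software and aren't confirmed here):

```
[size=150][b]Rendering/Shading[/b][/size]

[url=http://jcgt.org]Journal of Computer Graphics Techniques[/url]
```

This hides the raw link behind readable text and gives section titles visible weight, which is all the "URL tag plus title sizing" suggestion amounts to.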

Offline

**ThaOneDon****Member**

Done

Updates 24/11/2014

Tech

*Highlight Microdisparity for Improved Gloss Depiction

*Implicit Skinning: Real-Time Skin Deformation with Contact Modeling

*Last edited by ThaOneDon (2014-11-24 01:57:35)*

Offline