GRAPP 2022 Abstracts


Area 1 - Geometry and Modeling

Full Papers
Paper Nr: 9
Title:

Visually Improved Erosion Algorithm for the Procedural Generation of Tile-based Terrain

Authors:

Fong Y. Lim, Yu W. Tan and Anand Bhojan

Abstract: Procedural terrain generation is the process of generating a digital representation of terrain using a computer program or procedure, with little to no human guidance. This paper proposes a procedural terrain generation algorithm based on a graph representation of fluvial erosion that offers several novel improvements over existing algorithms. These are: the use of a height constraint map with two types of locally defined constraint strengths; the ability to specify a realistic erosion strength via the level of rainfall; and the ability to carve realistic gorges. These novelties allow it to generate more varied and realistic terrain by integrating additional parameters and simulation processes, while being faster and offering more flexibility and ease of use to terrain designers due to the nature and intuitiveness of these new parameters and processes. This paper additionally reviews some common metrics used to evaluate terrain generators, and suggests a completely new one that contributes to a more holistic evaluation.

Short Papers
Paper Nr: 2
Title:

Opportunities with Slippy Maps for Terrain Visualization in Virtual and Augmented Reality

Authors:

Shaun Bangay, Adam A. Cardilini, Nyree L. Raabe, Kelly K. Miller, Jordan Vincent, Greg Bowtell, Daniel Ierodiaconou and Tanya King

Abstract: Map tile servers using the slippy map conventions provide interactive map visualizations for web applications. This investigation describes and evaluates a viewpoint-sensitive level-of-detail algorithm that mixes slippy map tiles across zoom levels to generate landscape visualizations for tabletop VR/AR presentation. Elevation tiles across multiple zoom levels are combined to provide a continuous terrain mesh overlaid with image data sourced from additional tiles. The resulting application robustly deals with delays in loading high-resolution tiles, and integrates unobtrusively with the game loop of a VR platform. Analysis of the process questions the assumptions behind slippy map conventions and recommends refinements that are both backward compatible and would further advance the use of these map tiles for VR experiences. These refinements include: introducing tiles addressed by resolution, ensuring consistency between tiles at adjacent zoom levels, utilizing zoom values between the current integer levels and extending tile representations beyond the current raster and vector formats.
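
For readers unfamiliar with the slippy map conventions referenced above, the minimal Python sketch below shows the standard OpenStreetMap mapping from a WGS84 coordinate and zoom level to tile indices; it illustrates the well-known convention the paper builds on, not code from the paper itself.

import math

def latlon_to_tile(lat_deg: float, lon_deg: float, zoom: int) -> tuple[int, int]:
    """Standard OSM slippy-map tile indices (x, y) for a WGS84 coordinate.

    At zoom z the world is a 2^z by 2^z grid of Web Mercator tiles.
    """
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Example: the same location addressed at two adjacent zoom levels,
# as a level-of-detail scheme mixing tiles across zooms would request.
print(latlon_to_tile(48.8584, 2.2945, 15))  # coarser tile
print(latlon_to_tile(48.8584, 2.2945, 16))  # finer tile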

Paper Nr: 8
Title:

Incremental Online Reconstruction of Locally Quadric Surfaces

Authors:

Josua Bloeß and Dominik Henrich

Abstract: Representing the surfaces of a digital 3D model using high-level geometric information is key to many geometric processing tasks and other use cases. Various scanning methods have been proposed to obtain these 3D models. We contribute a method for the incremental reconstruction of surfaces from a series of point clouds. For this, we use a robust over-segmentation technique on a point cloud and build a memory-efficient graph structure upon it. Over time, we expand this graph structure as a global representation. We also propose a fast algorithm for fitting quadric surface patches to the graph structure. We validate the overall performance in our experiments.
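
The abstract does not specify the fitting algorithm; as a generic point of reference, the sketch below shows the standard algebraic least-squares fit of an implicit quadric to a point set (smallest right singular vector of the design matrix). It illustrates quadric fitting in general, not the authors' method.

import numpy as np

def fit_quadric(points: np.ndarray) -> np.ndarray:
    """Algebraic least-squares fit of an implicit quadric
    a x^2 + b y^2 + c z^2 + d xy + e xz + f yz + g x + h y + i z + j = 0
    to an (N, 3) point array. Returns the 10 coefficients minimising ||A c||
    subject to ||c|| = 1 (smallest right singular vector of the design matrix)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z, np.ones_like(x)])
    _, _, vt = np.linalg.svd(A, full_matrices=False)
    return vt[-1]

# Example: points sampled from the unit sphere recover (up to sign and scale)
# the quadric x^2 + y^2 + z^2 - 1 = 0.
rng = np.random.default_rng(0)
p = rng.normal(size=(500, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)
print(np.round(fit_quadric(p), 3))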

Paper Nr: 33
Title:

Geometry Compression of Triangle Meshes using a Reference Shape

Authors:

Eliška Mourycová and Libor Váša

Abstract: Triangle mesh compression is an established area; however, some of its special cases are yet to be investigated. This paper deals with the lossy geometry compression of manifold triangle meshes based on the EdgeBreaker algorithm, using a reference shape known to both the encoder and the decoder. It is assumed that the shape of the reference object is similar to the shape of the mesh to be encoded. The predictions of vertex positions are made extrinsically, i.e. outside the reference shape, and then orthogonally projected onto its surface. The corrections are encoded by two integer numbers, denoting the layer order and the index of a hexagon in a hexagonal grid generated on the surface of the reference shape, centered at the prediction point. The availability of a reference mesh results in a smaller bitrate needed for a comparable error when compared to a state-of-the-art static mesh compression algorithm using weighted parallelogram prediction.
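
For context, the baseline mentioned at the end of the abstract builds on parallelogram prediction. In its classical unweighted form, a vertex v across the gate edge (v_a, v_b) of an already decoded triangle (v_a, v_b, v_c) is predicted as

\hat{v} = v_a + v_b - v_c,

and only the correction v - \hat{v} is encoded; the scheme described above instead projects its extrinsic predictions onto the reference surface and encodes the correction as two integers on a hexagonal grid.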

Paper Nr: 46
Title:

Estimating Body Shapes from Measurements

Authors:

Margarida Lima, Joaquim Jorge and João Pereira

Abstract: e-Commerce now represents more than a third of apparel sales in the USA and accounts for most sales growth year on year. However, it is still hard for people to buy clothes online because they cannot see how the clothes will look on them. Thus, we present an approach to model an approximation of a human body shape from a given set of body measurements in order to fit virtual clothes. To estimate a new body shape from body measurements, we developed two different models, using linear transformations and PCA weights, respectively. Additionally, we selected the minimum number of body measurements required to estimate a shape similar to the ground truth. Finally, we evaluated our approach by comparing our results with estimations obtained from pictures and measurements taken from real people, as well as through visual evaluation. Results show that we can approximate human shape through measurements with sufficient fidelity to simulate garment fitting.
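
The abstract's PCA-weight model is not specified in detail; the sketch below is a generic illustration of the idea under the assumption that a linear map from measurements to PCA weights is learned by least squares over a training set of registered body meshes. The arrays named measurements and vertices are hypothetical placeholders, not the authors' data or code.

import numpy as np

# Hypothetical training data: M body measurements and flattened mesh vertices
# for N subjects. In practice these would come from a registered body-scan dataset.
rng = np.random.default_rng(1)
N, M, V = 200, 8, 3 * 1000           # subjects, measurements, vertex coordinates
measurements = rng.normal(size=(N, M))
vertices = rng.normal(size=(N, V))

# PCA of the body shapes: mean shape plus a low-dimensional basis.
mean_shape = vertices.mean(axis=0)
_, _, Vt = np.linalg.svd(vertices - mean_shape, full_matrices=False)
k = 10                                # number of retained shape components
basis = Vt[:k]                        # (k, V)
weights = (vertices - mean_shape) @ basis.T   # per-subject PCA weights (N, k)

# Linear regression (with bias) from measurements to PCA weights.
X = np.hstack([measurements, np.ones((N, 1))])
W, *_ = np.linalg.lstsq(X, weights, rcond=None)

def estimate_shape(new_measurements: np.ndarray) -> np.ndarray:
    """Predict a body shape (flattened vertices) from a measurement vector."""
    x = np.append(new_measurements, 1.0)
    return mean_shape + (x @ W) @ basis

print(estimate_shape(measurements[0]).shape)   # (3000,)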

Paper Nr: 23
Title:

Multi-view NURBS Volume

Authors:

Wanwan Li

Abstract: Non-Uniform Rational B-Spline (NURBS) curves and surfaces are widely used in modern geometric modeling systems. NURBS volumes, also called volumetric NURBS, are a powerful NURBS representation for volumetric modeling. However, due to the complex nature of NURBS volumes, it is challenging for users to fine-tune a NURBS volume design manually. In this paper, we present a novel approach to multi-view NURBS volume geometric modeling. Given a user's conceptual designs for several different views of a 3D model, we devise an optimization algorithm that automatically reconstructs a 3D NURBS volume matching these designs when projected along the different view directions. Finally, we discuss the results generated with our approach through a series of numerical experiments.
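
For readers unfamiliar with the representation, a trivariate NURBS volume is the tensor-product generalisation of NURBS surfaces (standard definition, not specific to this paper):

V(u,v,w) = \frac{\sum_{i=0}^{n}\sum_{j=0}^{m}\sum_{k=0}^{l} N_{i,p}(u)\, N_{j,q}(v)\, N_{k,r}(w)\, w_{ijk}\, P_{ijk}}{\sum_{i=0}^{n}\sum_{j=0}^{m}\sum_{k=0}^{l} N_{i,p}(u)\, N_{j,q}(v)\, N_{k,r}(w)\, w_{ijk}}

where P_{ijk} are the control points, w_{ijk} their weights, and N_{i,p}, N_{j,q}, N_{k,r} the B-spline basis functions of degrees p, q and r in the three parametric directions.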

Paper Nr: 26
Title:

Non-planar Surface Shape Reconstruction from a Point Cloud in the Context of Muscles Attachments Estimation

Authors:

Josef Kohout and Martin Cervenka

Abstract: Knowledge of muscle attachments on bones is essential for musculoskeletal modelling. A muscle attachment is often represented by points (in 3D) obtained by a manual digitisation system during dissection. Although this representation suffices for many purposes, sophisticated musculoskeletal models commonly require representing a muscle attachment by a surface patch or at least by a closed boundary curve. In this paper, therefore, we propose an approach to automatic shape reconstruction from such point sets. It is based on iso-contour extraction from a scalar field of distances to geodesics connecting pairs of points (from the input set) identified by a state-of-the-art algorithm for 2D curve reconstruction running on the input points transformed to 2D. We investigated the performance of 15 existing state-of-the-art algorithms with public implementations on the TLEM 2.0 data set of muscle attachments. The best results were obtained for the lenz algorithm, with just one unacceptable reconstruction, when standard projection onto a best-fit plane was used to transform the input 3D points to 2D. The second-best algorithm was α-shape, with three unacceptable reconstructions; in this case, the multidimensional scaling technique was used to transform the points.

Area 2 - Rendering

Full Papers
Paper Nr: 3
Title:

Real-time Rendering of Indirectly Visible Caustics

Authors:

Pierre Moreau and Michael Doggett

Abstract: Caustics are a challenging light transport phenomenon to render in real-time, and most previous approaches have used screen-space accumulation based on the fast rasterization hardware of GPUs. This limits the position of photon collection points to first hit screen space locations, and leads to missing caustics that should be visible in a mirror’s reflection. In this paper we propose an algorithm that can render caustics visible via specular bounces in real-time. The algorithm takes advantage of hardware-accelerated ray tracing support found in modern GPUs. By constructing an acceleration structure around multiple bounce view ray hit points in world space, and tracing multiple bounce light rays through the scene, we ensure caustics can be created anywhere in the scene, not just in screen space. We analyze the performance and image quality of our approach, and show that we can produce indirectly visible caustics at real-time rates.

Paper Nr: 20
Title:

A Hybrid System for Real-time Rendering of Depth of Field Effect in Games

Authors:

Yu Wei Tan, Nicholas Chua, Nathan Biette and Anand Bhojan

Abstract: Real-time depth of field in game cinematics tends to approximate the semi-transparent silhouettes of out-of-focus objects through post-processing techniques. We leverage ray tracing hardware acceleration and spatio-temporal reconstruction to improve the realism of such semi-transparent regions through hybrid rendering, while maintaining interactive frame rates for immersive gaming. This paper extends our previous work with a complete presentation of our technique and details on its design, implementation, and future work.

Short Papers
Paper Nr: 12
Title:

TauBench: Dynamic Benchmark for Graphics Rendering

Authors:

Joel Alanko, Markku Mäkitalo and Pekka Jääskeläinen

Abstract: Many graphics rendering algorithms used in both real-time games and virtual reality applications can get performance boosts by reusing previous computations. However, temporal reuse based algorithms are typically measured using trivial benchmarks with very limited dynamic features. To this end, we present two new benchmarks that stress temporal reuse algorithms: EternalValleyVR and EternalValleyFPS. These datasets represent scenarios that are common contexts for temporal methods: EternalValleyFPS represents a typical interactive multiplayer game scenario with dynamically changing lighting conditions and geometry animations, while EternalValleyVR adds the rapid camera motion caused by the head-mounted displays popular in virtual reality applications. In order to systematically assess the quality of the proposed benchmarks for reuse algorithm stress testing, we identify common input features used in state-of-the-art reuse algorithms and propose metrics that quantify changes in the temporally interesting features. Cameras in the proposed benchmarks rotate on average 18.5× more per frame than in the popular NVIDIA ORCA datasets, which results in 51× more pixels introduced each frame. In addition to camera activity, we compare the number of low-confidence pixels. We show that the proposed datasets have 1.6× fewer pixel reuse opportunities, as measured by changes in pixels' world positions, and a 3.5× higher direct radiance discard rate.

Paper Nr: 21
Title:

Area Lights Voxelization for Light Propagation Volumes

Authors:

Cristian Lambru, Florica Moldoveanu, Anca Morar and Victor Asavei

Abstract: Simulating the direct illumination from area light sources is a topic of interest in the field of Computer Graphics. In the real world, all light sources have a surface from which light is emitted. Thus, for a physically correct simulation of light transport in graphical applications, area light sources are required. In addition, there are complex lighting effects that can only be simulated with such light sources. In this paper, we present an improvement of the direct lighting simulation for area light sources within the real-time global illumination technique called light propagation volumes. Our method is based on a voxelization of the area light source geometry into a voxel volume of the same resolution as the light propagation volume used in the global illumination technique. With one sample for every voxel that intersects a triangle, for every triangle of the mesh, we obtain an optimal distribution of the samples needed to approximate the direct illumination of the area light source for the light propagation volumes technique.

Paper Nr: 22
Title:

A Non-Photorealistic Rendering Technique for Art-directed Hatching of 3D Point Clouds

Authors:

Ronja Wagner, Ole Wegen, Daniel Limberger, Jürgen Döllner and Matthias Trapp

Abstract: Point clouds or point-based geometry of varying density can nowadays be easily acquired using LiDAR cameras or modern smartphones with LiDAR sensors. We demonstrate how this data can be used directly to create novel artistic digital content using Non-Photorealistic Rendering (NPR) techniques. We introduce a GPU-based technique for art-directable NPR rendering of 3D point clouds at interactive frame rates. The technique uses either a subset or all of the points to generate oriented, sketchy strokes by taking local curvature and normal information into account. It uses X-Toon textures as part of its parameterization, supports hatching and cross hatching, and is inherently temporally coherent with respect to virtual camera movements. This introduces significant artistic freedom that is underlined by our results, which show that a variety of different sketchy styles such as colored crayons, pencil, pointillism, wax crayons, blue print, and chalk drawings can be achieved on a wide spectrum of point clouds, i.e., covering 3D polygonal meshes as well as iPad-based LiDAR scans.

Paper Nr: 44
Title:

RTSDF: Real-time Signed Distance Fields for Soft Shadow Approximation in Games

Authors:

Yu W. Tan, Nicholas Chua, Clarence Koh and Anand Bhojan

Abstract: Signed distance fields (SDFs) are a form of surface representation widely used in computer graphics, having applications in rendering, collision detection and modelling. In interactive media such as games, high-resolution SDFs are commonly produced offline and subsequently loaded into the application, representing rigid meshes only. This work develops a novel technique that combines jump flooding and ray tracing to generate approximate SDFs in real-time. Our approach can produce a relatively accurate scene representation for rendering soft shadows while maintaining interactive frame rates. We extend our previous work with details on the design and implementation as well as a visual quality and performance evaluation of the technique.
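
The jump flooding algorithm (JFA) mentioned above is a standard GPU-friendly technique for propagating nearest-seed information in O(log n) passes. The sketch below is a CPU-side, 2D, unsigned illustration of the pass structure only, under the assumption of a binary seed mask; it is not the paper's GPU implementation, which works in 3D and produces signed distances.

import numpy as np

def jump_flood_distance(seed_mask: np.ndarray) -> np.ndarray:
    """Unsigned distance field via the Jump Flooding Algorithm on a 2D grid.

    seed_mask: boolean (H, W) array marking seed (surface) cells.
    Returns per-cell distance to the (approximately) nearest seed.
    A GPU version runs each pass as one dispatch over all cells.
    """
    h, w = seed_mask.shape
    nearest = {}                                 # cell -> best seed found so far
    for y, x in zip(*np.nonzero(seed_mask)):
        nearest[(y, x)] = (y, x)

    step = 1
    while step < max(h, w):
        step *= 2
    step //= 2                                   # first pass uses half the grid size

    while step >= 1:
        updated = dict(nearest)
        for y in range(h):
            for x in range(w):
                best = updated.get((y, x))
                for dy in (-step, 0, step):      # examine 9 cells at the current step
                    for dx in (-step, 0, step):
                        cand = nearest.get((y + dy, x + dx))
                        if cand is None:
                            continue
                        if best is None or ((cand[0] - y) ** 2 + (cand[1] - x) ** 2
                                            < (best[0] - y) ** 2 + (best[1] - x) ** 2):
                            best = cand
                if best is not None:
                    updated[(y, x)] = best
        nearest = updated
        step //= 2

    dist = np.full((h, w), np.inf)
    for (y, x), (sy, sx) in nearest.items():
        dist[y, x] = np.hypot(y - sy, x - sx)
    return dist

# Example: a single seed in the middle of a small grid.
mask = np.zeros((16, 16), dtype=bool)
mask[8, 8] = True
print(jump_flood_distance(mask)[0, 0])           # about 11.3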

Paper Nr: 5
Title:

Development of a Platform-independent Renderer for the Rendering of OpenStreetMap Indoor Maps in Flutter

Authors:

Julia Richter, Robin Thomas, David Lange, Thomas Graichen and Ulrich Heinkel

Abstract: With the development and spread of new localisation technologies, the construction of bigger buildings, and the continuous growth of digitalisation, the importance of indoor maps rises. However, developers who want to make use of indoor maps face several obstacles. Among them are the often costly data acquisition, the increased development overhead for diverse platforms, and the missing support for indoor rendering in many libraries. In this work, we present the development of a free solution for the platform-independent rendering of indoor data, based on public geodata provided by OpenStreetMap. For this goal, existing open-source technologies such as the Flutter toolkit and the Mapsforge library were used to develop a flexible and freely accessible rendering engine that integrates directly into outdoor maps and leads to a seamless rendering of both indoor and outdoor data in one map view. To prove platform independence and to measure performance, a prototype app was developed in Flutter. Finally, the possibilities and limitations of this approach were examined in more detail.

Paper Nr: 17
Title:

A Lightweight Photon Tracing Method for Visualising Caustics

Authors:

Adrian De Barro, Keith Bugeja, Sandro Spina, Mark Magro and Kevin Napoli

Abstract: In this paper, we present a biased, lightweight photon tracing method for the visualisation of caustics. The caustics volume bounds reflective and transmissive media and regulates the propagation of photons within these media. The volume uses partitioning and refinement to control tracing accuracy; this is modulated at runtime using a level-of-detail approach to improve performance without visible loss of accuracy. The system schedules traced photons for projection via a controllable number of face projectors tied to the volume. A straightforward splatting algorithm was implemented for this paper; however, more advanced splatting algorithms may be employed for improved visual quality.

Paper Nr: 18
Title:

Anvil: A Tool for Visual Debugging of Rendering Pipelines

Authors:

Kevin Napoli, Keith Bugeja, Sandro Spina, Mark Magro and Adrian De Barro

Abstract: Debugging software can be challenging, and numerous tools are used to aid in this task. Moreover, inspecting and debugging software of a certain nature, such as that found in the subdomain of physically based rendering, where stochastic methods are often utilised, can be even more challenging. Traditional debugging in these cases is not ideal and in many cases not sufficient to help pinpoint certain issues, such as finding defects in the distribution of reflected rays in a ray-based rendering scenario. To address these issues we propose Anvil, a visual debugging tool that aims to integrate seamlessly within user applications, adhering to the C++ zero-overhead principle: what you don't use, you don't pay for. Anvil is meant to be flexible, reusable, and extensible while maintaining a low memory footprint. To achieve its goals, Anvil makes use of reflection-like techniques, adopts in situ analysis, and provides event hooks to communicate with the user application.

Area 3 - Animation and Simulation

Full Papers
Paper Nr: 13
Title:

Animating and Adjusting 3D Orthodontic Treatment Objectives

Authors:

Maxime Chapuis, Mathieu Lafourcade, William Puech, Gérard Guillerm and Noura Faraj

Abstract: In this paper, we present an interactive system to adjust and animate 3D orthodontic treatment objectives; the main goal is to improve the communication tools orthodontists use to exchange with their patients. Given a 3D pathological patient model and a treatment objective, we propose to automatically generate intermediate steps using script-like treatment scenarios. The intermediate steps can then be adjusted using intuitive manipulators and used to produce an animation of the treatment. The resulting animation is a useful tool to help patients visualize the potential evolution of their dentition and accept the treatment. The proposed system relies on the registration of a reference model on both the treatment objective and the patient-specific 3D segmented mesh to automatically position key feature points and create control curves. These primitives are used both to guide teeth movements during the animation and to provide manipulators that allow for user interactions. The key contributions of this work are (a) the use of a registered reference model to position and create intuitive control primitives, and (b) the introduction of script-like treatment scenarios to facilitate and minimize user interactions during the creation of intermediate treatment steps.

Short Papers
Paper Nr: 11
Title:

Pan-zoom Motion Capture in Wide Scenes using Panoramic Background

Authors:

Masanobu Yamamoto

Abstract: When measuring a subject three-dimensionally with multiple cameras, the measurable area is the cameras' common field of view. When the subject leaves this field of view, the cameras must follow the subject. In this research, the viewpoint of each camera remains fixed, and the pan and zoom functions are used to operate the cameras so that the subject's body is always shot near the center of the image. The problem is camera calibration. Our approach is to use a panoramic image. Each camera pans in advance to take background images and create a panoramic image of the background. Then, the background image around the subject's body is matched against the panoramic image, the pan rotation angle and the zoom ratio are obtained from the matching position, and the camera is calibrated. The body motion is captured from the multi-view motion images using the camera parameters obtained in this way. Since the viewpoint of each camera is fixed, the shooting range is not very wide, but it is still possible to capture an athlete's floor exercise in a gymnasium.
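
The matching step could be realised in several ways; the sketch below is one plausible, generic implementation using multi-scale template matching in OpenCV, not necessarily the paper's method. The angular resolution of the panorama (deg_per_px) and the grayscale inputs are assumptions of this sketch.

import cv2
import numpy as np

def estimate_pan_zoom(panorama, frame_bg, deg_per_px, zooms=np.linspace(1.0, 3.0, 21)):
    """Estimate pan angle and zoom ratio by matching a background patch from the
    current frame against a pre-captured panoramic background.

    panorama   : grayscale panoramic background image
    frame_bg   : grayscale background region from the current camera frame
    deg_per_px : angular resolution of the panorama (degrees per pixel), assumed known
    """
    best = (-1.0, None, None)
    for z in zooms:
        # At zoom ratio z the frame covers 1/z of the base field of view, so the
        # frame is scaled down by 1/z before matching against the panorama.
        templ = cv2.resize(frame_bg, None, fx=1.0 / z, fy=1.0 / z)
        if templ.shape[0] >= panorama.shape[0] or templ.shape[1] >= panorama.shape[1]:
            continue
        res = cv2.matchTemplate(panorama, templ, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score > best[0]:
            best = (score, loc, z)
    _, (x, _y), zoom = best
    pan_deg = x * deg_per_px        # horizontal match position maps to pan angle
    return pan_deg, zoom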

Paper Nr: 36
Title:

Towards Lightweight Neural Animation: Exploration of Neural Network Pruning in Mixture of Experts-based Animation Models

Authors:

Antoine Maiorca, Nathan Hubens, Sohaib Laraba and Thierry Dutoit

Abstract: In the past few years, neural character animation has emerged and offered an automatic method for animating virtual characters, whose motion is synthesized by a neural network. Controlling this movement in real time with a user-defined control signal is also an important task, in video games for example. Solutions based on fully-connected layers (MLPs) and Mixture-of-Experts (MoE) have given impressive results in generating and controlling various movements with close-range interactions between the environment and the virtual character. However, a major shortcoming of fully-connected layers is their computational and memory cost, which may lead to sub-optimal solutions. In this work, we apply pruning algorithms to compress an MLP-MoE neural network in the context of interactive character animation, which reduces its number of parameters and accelerates its computation time, with a trade-off between this acceleration and the synthesized motion quality. This work demonstrates that, with the same number of experts and parameters, the pruned model produces fewer motion artifacts than the dense model, and the learned high-level motion features are similar for both.
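
The specific pruning algorithms evaluated are not detailed in the abstract; as a generic illustration, the sketch below applies PyTorch's built-in magnitude pruning to the fully connected layers of a small placeholder MLP. The network and the 60% sparsity level are assumptions of this sketch, not the authors' MLP-MoE animation model or settings.

import torch.nn as nn
import torch.nn.utils.prune as prune

# Placeholder fully connected network standing in for one expert of an MoE model.
mlp = nn.Sequential(
    nn.Linear(128, 256), nn.ELU(),
    nn.Linear(256, 256), nn.ELU(),
    nn.Linear(256, 64),
)

# Unstructured L1 (magnitude) pruning: zero out the 60% smallest weights per layer.
for module in mlp.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.6)
        prune.remove(module, "weight")   # make the sparsity permanent

total = sum(p.numel() for p in mlp.parameters())
zeros = sum((p == 0).sum().item() for p in mlp.parameters())
print(f"overall sparsity: {zeros / total:.2%}")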

Paper Nr: 30
Title:

Neuranimation: Reactive Character Animations with Deep Neural Networks

Authors:

Sebastian Silva, Sergio Sugahara and Willy Ugarte

Abstract: The increasing need for more realistic animations has resulted in the implementation of various systems that try to address it by controlling the character at a base level using complex techniques. In our work, we use a Phase-Functioned Neural Network to generate the next pose of the character in real time, and compare it with a modified version of the model. The basic model lacks the ability to produce reactive animations with surrounding objects and only reacts to the terrain the character is standing on. Therefore, adding a layer of Rigs with Inverse Kinematics and Blending Trees allows us to switch between actions depending on the object and adjust the character to fit properly. Our results show that our proposal significantly improves on previous results and that inverse kinematics is essential for this improvement.

Paper Nr: 34
Title:

Accelerated Airborne Virus Spread Simulation: Coupling Agent-based Modeling with GPU-accelerated Computational Fluid Dynamics

Authors:

Christoph Schinko, Lin Shao, Johannes Mueller-Roemer, Daniel Weber, Xingzi Zhang, Eugene Lee, Bastian Sander, Alexander Steinhardt, Volker Settgast, Kan Chen, Marius Erdt and Eva Eggeling

Abstract: The Coronavirus Disease 2019 (COVID-19) has shown us the necessity of understanding its transmission mechanisms in detail in order to establish practices for controlling such infectious diseases. Mathematical models are an important instrument in doing so. However, they do not account for the spatiotemporal heterogeneity introduced by the movement and interaction of individuals with their surroundings. Computational fluid dynamics (CFD) simulations can be used to analyze transmission at the micro- and mesostructure level, but become infeasible in larger-scale scenarios. Agent-based modeling (ABM), on the other hand, lacks the means to simulate airborne virus transmission at a micro- and mesostructure level. Therefore, we present a system that combines CFD simulations with the dynamics given by trajectories from an ABM to form a basis for producing deeper insights. The proposed system is still work in progress; thus we focus on the system architecture and show preliminary results.

Paper Nr: 45
Title:

Reconstruction of the Face Shape using the Motion Capture System in the Blender Environment

Authors:

Joanna Smolska and Dariusz Sawicki

Abstract: Motion Capture systems have been significantly improved during the last few years, mainly because they have found use in the entertainment industry and medicine. The main role of Motion Capture systems is to control and record the positions of a set of selected points in the scene. The project described in this article aimed at developing a method for reconstructing the shape of a specific face, with the possibility of controlling its movement for the purposes of computer animation. We conducted experiments in a Motion Capture laboratory using Qualisys Miqus M1 and M3 cameras and created a low-poly face model. Moreover, we proposed a shape reconstruction algorithm that analyses the positions of a set of points (markers) applied to the surface of the face, using Blender software. Finally, we analyzed the advantages and disadvantages of both approaches to facial motion capture. Taking into account the publications of recent years, a brief analysis of the trends in the development of facial reconstruction methods has also been carried out.

Area 4 - Interactive Environments

Full Papers
Paper Nr: 1
Title:

Prototyping Context-aware Augmented Reality Applications for Smart Environments inside Virtual Reality

Authors:

Jérémy Lacoche and Éric Villain

Abstract: Prototyping context-aware augmented reality applications is a difficult task that often requires programming skills and is therefore not accessible to everyone. We aim to simplify this process with a new virtual reality authoring tool for the creation of augmented reality applications for smart environments that can adapt to an evolving context of use. To do so, this tool introduces two main novelties with respect to previous work. First, it proposes a prototyping step in the digital twin of the target environment, where the author can create multiple versions of the content (visual aspect, modality, layout, area of visibility, etc.) and define for each of them which context of use is targeted, thanks to a dedicated diegetic user interface. Second, it includes the possibility to create new context variables with a visual programming approach that can leverage the smart environment's sensors and actuators. The created application can then be deployed on various augmented reality devices and can support dynamic adaptations. We illustrate this tool with the creation of an application for smart buildings that can fit the needs of its various occupants. Through a user study, we also present usability feedback on this tool to assess its relevance and to provide guidelines for the future of this field of research.

Paper Nr: 15
Title:

A VR Application for the Analysis of Human Responses to Collaborative Robots

Authors:

Ricardo Matias and Paulo Menezes

Abstract: The increasing number of robots performing tasks in our society, especially in industrial environments, introduces more scenarios where a human must collaborate with a robot to achieve a common goal, which, in turn, raises the need to study how safe and natural this interaction is and how it can be improved. Virtual reality is an excellent tool to simulate these interactions, as it allows the user to be fully immersed in the world while being safe from a possible robot malfunction. In this work, a simulation was created to study how effective virtual reality is in studies of human-robot interaction. It is then used in an experiment where the participants must collaborate with a simulated Baxter robot to place objects delivered by the robot in the correct place, within a time limit. During the experiment, the electrodermal activity and heart rate of the participants are measured, allowing for the analysis of reactions to events occurring within the simulation. At the end of each experiment, participants fill in a User Experience Questionnaire (UEQ) and a Flow Short Scale questionnaire to evaluate their sense of presence and the interaction with the robot.

Paper Nr: 32
Title:

Enriching the Visit to an Historical Botanic Garden with Augmented Reality

Authors:

Rafael Torres, Stefan Postolache, Maria Beatriz Carmo, Ana Paula Cláudio, Ana Paula Afonso, António Ferreira and Dulce Domingos

Abstract: The use of Augmented Reality in mobile guide applications for natural parks and gardens enables compelling and memorable experiences that enrich visits. But the creation of these experiences is still riddled with several challenges concerning technology and content production. This paper presents guidelines for the development of AR experiences in mobile applications that support visits to gardens or natural parks, providing a list of technological and multimedia content elements that should be considered. We applied these guidelines in the development of a mobile application for a Botanical Garden, implemented for Android and iOS. We conducted a study with volunteers during visits to the garden; the results revealed high levels of perceived app usability and strong agreement about app features, which allows us to conclude that the app was evaluated positively.

Paper Nr: 37
Title:

An Efficient Workflow for Representing Real-world Urban Environments in Game Engines using Open-source Software and Data

Authors:

Arash Shahbaz Badr and Raffaele De Amicis

Abstract: Game engines (GEs) constitute a powerful platform for visualizing real geographies in immersive virtual space, and in the last two years, remarkable strides have been made by the leading providers of Geographic Information System (GIS) software and services, including Esri and Cesium, toward integrating their products in GEs. Notwithstanding the strengths of GEs, they lack support for many common GIS file formats, and there exist only limited georeferencing possibilities. Visualizing large-scale geolocations involves high authoring costs, and the shortcomings of GEs further complicate the workflow. In this paper, we present a workflow and its implementation for creating large immersive virtual environments that accurately represent real-world urban areas. The benefits of the presented development are threefold. First, it makes the process more efficient by automating multiple steps and incorporating a large portion of the workflow inside the GE. Second, it facilitates an interactive framework by allowing the developer to efficiently extend the scene components with functionalities and interactions. Third, it entirely relies on open-source software and data, making it suitable for many non-commercial domains. To showcase the effectiveness of the tool, we created a virtual replica of an actual city consisting of the terrain, the streets, and the buildings.

Paper Nr: 39
Title:

Hybrid MBlur: A Systematic Approach to Augment Rasterization with Ray Tracing for Rendering Motion Blur in Games

Authors:

Yu Wei Tan, Xiaohan Cui and Anand Bhojan

Abstract: Motion blur is commonly used in game cinematics to achieve photorealism by modelling the behaviour of the camera shutter and simulating its effect associated with the relative motion of scene objects. A common real-time post-process approach is spatial sampling, where the directional blur of a moving object is rendered by integrating its colour based on velocity information within a single frame. However, such screen space approaches typically cannot produce accurate partial occlusion semi-transparencies. Our real-time hybrid rendering technique leverages hardware-accelerated ray tracing to correct post-process partial occlusion artifacts by advancing rays recursively into the scene to retrieve background information for motion-blurred regions, with reasonable additional performance cost for rendering game contents. We extend our previous work with details on the design, implementation, and future work of the technique as well as performance comparisons with post-processing.
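
The post-process spatial sampling the abstract contrasts with can be summarised by a small sketch: each output pixel averages colour samples taken along its screen-space velocity vector. The code below is a naive CPU illustration of that generic approach under assumed image and velocity-buffer inputs, not the paper's hybrid ray-traced technique.

import numpy as np

def motion_blur_post_process(color, velocity, num_samples=8):
    """Naive screen-space motion blur by spatial sampling.

    color    : (H, W, 3) float image
    velocity : (H, W, 2) per-pixel screen-space motion in pixels (dx, dy)
    """
    h, w, _ = color.shape
    ys, xs = np.mgrid[0:h, 0:w]
    out = np.zeros_like(color)
    for i in range(num_samples):
        t = i / (num_samples - 1) - 0.5          # samples centred on the pixel
        sx = np.clip((xs + velocity[..., 0] * t).round().astype(int), 0, w - 1)
        sy = np.clip((ys + velocity[..., 1] * t).round().astype(int), 0, h - 1)
        out += color[sy, sx]
    return out / num_samples

Because such filtering only ever reads the single visible surface per pixel, it cannot recover the background behind a moving object, which is the partial occlusion limitation the hybrid technique above addresses with ray tracing.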

Paper Nr: 47
Title:

Teaching and Learning 3D Transformations in Introductory Computer Graphics: A User Study

Authors:

Thomas Suselo, Burkhard C. Wünsche and Andrew Luxton-Reilly

Abstract: Three-dimensional (3D) transformations are fundamental in computer graphics and hence an important component of introductory courses in this field. So far there has been no research investigating the learning challenges and whether they are predominantly related to the underlying mathematics, problem-solving skills, programming issues, or a lack of visuospatial abilities. In this paper we present a user study investigating which 3D transformation concepts students struggle with and why. Our results suggest that most students understand primitive transformations, but often make errors with sequences of transformations, e.g., due to not understanding how transformations affect each other or what the correct order of operations is when expressed in English, in OpenGL code, or as a matrix product. Other frequent errors are misunderstanding the rotation direction (i.e., clockwise vs. anti-clockwise) and misinterpreting scaling factors. In addition, many students seem to lack the spatial reasoning skills to interpret images of 3D transformations and to build mental models of their effect. Our results illustrate common misconceptions and problems, and we discuss strategies for educators to improve the teaching of 3D transformations in computer graphics.
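
One of the misconceptions discussed, the order of operations in a matrix product, can be made concrete with a small numeric example. The sketch below is illustrative only and uses the column-vector convention, in which the matrix written rightmost is applied first.

import numpy as np

def translation(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def rotation_z(deg):
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    m = np.eye(4)
    m[:2, :2] = [[c, -s], [s, c]]
    return m

p = np.array([1.0, 0.0, 0.0, 1.0])                 # homogeneous point

# Column-vector convention: the matrix written last is applied first.
rotate_then_translate = translation(2, 0, 0) @ rotation_z(90)
translate_then_rotate = rotation_z(90) @ translation(2, 0, 0)

print(rotate_then_translate @ p)    # approximately [2, 1, 0, 1]
print(translate_then_rotate @ p)    # approximately [0, 3, 0, 1]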

Short Papers
Paper Nr: 16
Title:

ReforestAR: An Augmented Reality Mobile Application for Reforest Purposes

Authors:

Matias Luna, Enrico Gomes, Alexandrino Gonçalves, Nuno Rodrigues, Anabela Marto and Rita Ascenso

Abstract: Forest areas are essential to life on planet Earth and, without them, human society would not be able to sustain itself. Yet, they are under constant threat of reduction and damage, and many of them have been lost worldwide. Besides preventing their deforestation in the first place, awareness of the need to maintain, replant, and restore these areas should be raised so that we can live in a greener and healthier environment. Information in modern times is mostly passed on through technology and, with the rise of Augmented Reality (AR), coupling smartphones (devices that most people use nowadays) with nature seems like an opportunity to raise consciousness for the cause. In this paper, we present an application that connects Augmented Reality and reforestation to raise awareness of environmental damage by allowing users to virtually place 3D models of trees on a real surface, using their own mobile phones, helping to plan and visualize the replanting process over previously destroyed areas.

Paper Nr: 31
Title:

A Preliminary Development of the Morris Maze Procedure in Virtual Reality

Authors:

Jesús Moreno, Juan M. Jurado, José E. Callejas-Aguilera and J. R. Jiménez-Pérez

Abstract: The Morris Water Maze (MWM) has become one of the most widely used laboratory tools in behavioural neuroscience. It has been used in some of the most sophisticated experiments in the study of spatial learning and memory with animals. However, human-based studies have been very limited due to the use of unrealistic scenarios, usually presented on a computer screen, where participants' attention is poorly controlled. Recent advances in virtual reality (VR) enable the generation of 3D environments with a high level of realism and user immersion. The user's attention plays a key role in spatial learning, and current VR systems integrate eye-tracking devices to measure the user's attention to virtual entities. In this paper, we present an easy-to-use game-based simulator of the MWM, using eye-tracking VR technology to extract information about the user's attention. This research, still in progress, has yielded important insights regarding the design of the virtual scenario, user interaction and experimentation. The study conducted in this paper validates the technology as a novel way to perform the MWM focused on spatial learning and memory with human participants.

Paper Nr: 41
Title:

A Controlled Virtual Reality Exposure Therapy Application for Smartphones

Authors:

Joana Teixeira, Bruno Patrão and Paulo Menezes

Abstract: Exposure therapy (ET) is often used as a therapeutic process for the treatment of psychological disorders. Usually, this type of therapy is challenging to apply traditionally, as the therapist must expose the patient safely to the cause of the disorder. To help overcome this problem, a virtual reality (VR) application was developed to support exposure therapy. As these therapies are based on a gradual and repetitive process, with this application the patient can be exposed to the phobic element at different levels of anxiety intensity, as prescribed by the therapist. This application was designed to be used either during the therapeutic sessions or at home. When used in therapeutic sessions, it allows the therapist to include the analysis of physiological signals, escape movements, or other reactions during the exposure. At home, as homework for the therapy sessions, it allows the patient to keep practising what was learned during therapy. It is being developed as a serious game for smartphones, and users will only need a cardboard-like VR headset.

Paper Nr: 19
Title:

A Natural Interaction Paradigm to Facilitate Cardiac Anatomy Education using Augmented Reality and a Surgical Metaphor

Authors:

Dmitry Resnyansky, Nurullah Okumus, Mark Billinghurst, Emin Ibili, Tolga Ertekin, Düriye Öztürk and Taha Erdogan

Abstract: This paper presents a design approach to creating a learning experience for cardiac anatomy by providing an interactive visualisation environment that uses Head Mounted Display (HMD)-based Augmented Reality (AR). Computed tomography imaging techniques were used to obtain accurate model geometry that was optimised in a 3D modelling software package, followed by photo-realistic texture mapping using 3D painting software. This method simplifies the process of modelling complex, organic geometry. Animation, rendering techniques, and AR capability were added using the Unity game engine. The system’s design and development maximises immersion, supports natural gesture interaction within a real-world learning setting, and represents complex learning content. Hand input was used with a surgical-dissection metaphor to show cross-section rendering in AR in an intuitive manner. Lessons learned from the modelling process are discussed as well as directions for future research.

Paper Nr: 29
Title:

Influence of Texture Fidelity on Spatial Perception in Virtual Reality

Authors:

Andrei I. Lucaci, Morten T. Bach, Poul A. Jensen and Claus B. Madsen

Abstract: In this paper, we investigate the influence of texture fidelity on spatial perception in a standalone virtual reality application. To investigate this, we implemented a detailed virtual representation of an actual physical environment, namely a small one-bedroom apartment. The virtual apartment was tested in two different visual styles: high-fidelity realistic textures and a "paper model" texture. Some test subjects experienced the virtual models using the actual physical apartment as a transitional environment, while other subjects experienced the model at an unrelated physical location. The environments were evaluated with 20 participants aged 20 to 61, and the results indicated a systematic overestimation of distances in virtual reality for all conditions. The results showed that higher texture fidelity had a positive influence on precision but no significant influence on accuracy. It was also shown that transitional environments negatively influenced precision but had no significant influence on accuracy. Self-assessments of presence from the experiment supported previous claims about a correlation between the level of detail in an environment and presence, but not a correlation between presence and distance perception.