To use music therapy (the administration of specific, research-backed beat frequencies) combined with visual parallel realities to create an experience aimed at calming the viewer. Given the large body of research on the effects of music therapy on anxiety, depressive moods, and related conditions, we believe our product can deliver immersive virtual environments, built in Virtual Reality, that administer effective music therapy to the end user.
Solutions we found to the problems faced in the previous scene:
- Occlusion Culling: Don't render what you don't see. The image below shows the occlusion map generated for the current scene. We have designed the scene to maximize how much geometry is occluded at any time (an editor sketch follows this list).
- LOD (Level of Detail): Unity supports a mechanism to divide a mesh into several simpler versions. The farther the renderer (the camera) is from the mesh, the simpler the version used, with fewer vertices and hence less computation (see the LODGroup sketch after this list).
- Static/Dynamic Batching: Similar objects are drawn to the screen together, ensuring their textures, materials, etc. are loaded only once (see the batching sketch after this list).
- Baked lighting: Unity's system for pre-calculating the lighting in the scene and baking (merging the lights and shadows) directly onto the texture of the mesh in question (see the lightmap sketch after this list).
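As a rough illustration of the occlusion setup, here is a minimal editor-script sketch in C#. The `Tools/Bake Occlusion` menu item, the `OcclusionBaker` class, and the `Environment` root object are hypothetical names of ours, not things from the project; the bake itself is the script equivalent of the editor's Occlusion Culling Bake button.

```csharp
// Editor-only sketch: flag static scenery for occlusion, then bake the data.
using UnityEngine;
using UnityEditor;

public static class OcclusionBaker
{
    [MenuItem("Tools/Bake Occlusion")]  // hypothetical menu path
    public static void MarkAndBakeOcclusion()
    {
        // "Environment" is an assumed root object holding all static scenery.
        var root = GameObject.Find("Environment");
        if (root == null) return;

        foreach (var renderer in root.GetComponentsInChildren<MeshRenderer>())
        {
            // An object must be flagged static to participate in the bake,
            // as an occluder (it hides things) and occludee (it can be hidden).
            GameObjectUtility.SetStaticEditorFlags(
                renderer.gameObject,
                StaticEditorFlags.OccluderStatic | StaticEditorFlags.OccludeeStatic);
        }

        // Bake the occlusion data for the open scene.
        StaticOcclusionCulling.Compute();
    }
}
```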
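Next, a minimal sketch of setting up LODs from script, assuming three hand-authored detail meshes per object; the renderer fields and screen-height cutoffs below are illustrative values, not ones from our scene. The same thing can be configured on a `LODGroup` component in the Inspector.

```csharp
// Sketch: attach a LODGroup with three detail levels to an object.
using UnityEngine;

public class TreeLODSetup : MonoBehaviour
{
    public Renderer highDetail;  // full mesh, shown up close
    public Renderer midDetail;   // simplified mesh
    public Renderer lowDetail;   // very coarse mesh, shown far away

    void Start()
    {
        var lodGroup = gameObject.AddComponent<LODGroup>();

        // Each level activates when the object shrinks below the given
        // fraction of screen height; below the last cutoff it is culled.
        var lods = new[]
        {
            new LOD(0.60f, new[] { highDetail }),
            new LOD(0.25f, new[] { midDetail }),
            new LOD(0.05f, new[] { lowDetail }),
        };

        lodGroup.SetLODs(lods);
        lodGroup.RecalculateBounds();
    }
}
```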
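Static batching is normally just a Player Settings checkbox, but for geometry that never moves it can also be triggered at runtime via Unity's `StaticBatchingUtility`; a short sketch, again assuming a single root object for the static scenery:

```csharp
// Sketch: combine all child meshes under this root into static batches,
// so objects sharing a material are drawn with far fewer draw calls.
using UnityEngine;

public class BatchEnvironment : MonoBehaviour
{
    void Start()
    {
        // After combining, the meshes under this root must not move.
        StaticBatchingUtility.Combine(gameObject);
    }
}
```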
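Finally, a sketch of driving a lightmap bake from an editor script; in practice this is done through the Lighting window's Generate Lighting button, and the menu item here is again a hypothetical name.

```csharp
// Editor-only sketch: switch every light to Baked mode, then bake lightmaps.
using UnityEngine;
using UnityEditor;

public static class LightBaker
{
    [MenuItem("Tools/Bake Lighting")]  // hypothetical menu path
    public static void BakeSceneLighting()
    {
        foreach (var light in Object.FindObjectsOfType<Light>())
        {
            // Baked lights contribute only to lightmaps, so they cost
            // nothing per frame on the device.
            light.lightmapBakeType = LightmapBakeType.Baked;
        }

        Lightmapping.Bake(); // synchronous; blocks until the bake finishes
    }
}
```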
The above optimizations (along with several others) allow Unity to deploy the scene on Android.
Development begins for a new scene.
The first four build tests FAILED! The build APKs are available in 'product/Forest Scene', along with instructions for downloading and extracting the Unity project.
The basic scene got built, and it looked like the following:
Running the application on a Google Pixel reveals the main problem with deploying high-end graphics on Android devices: the number of rendering requests (draw calls) the application makes to the GPU. After some searching online, we found the problems to be the following:
- The above scene is, simply put, intensive: it has a lot of vertices (and polygons) for the GPU to render.
- Lighting and shadows take a great deal of computation to render.
- Imagine a highly detailed object (~1,000 vertices) in the scene. When the player sees that object from a considerable distance, they cannot make out the details (only the general shape and some texture), yet Unity renders the entire mesh, textures, and lighting as if the object were being viewed up close. This wastes GPU resources.
We look to minimize these in our next commit. At the core of these issues is the fact that mobile GPUs must perform multi-pass stereo rendering (drawing the scene twice per frame, once for each eye) to run a VR application on Android. The increased load leads to FPS drops, latency, and distorted graphics.
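One mitigation worth noting here, sketched below, is switching the project to Single-Pass Stereo rendering, which traverses the scene once per frame and renders both eye views from that single pass. This is only a sketch of the editor-side setting, not something we have verified; whether it is available depends on the Unity version and the VR SDK in use on Android.

```csharp
// Editor sketch: switch from Multi Pass (scene drawn twice, once per eye)
// to Single Pass stereo (scene traversed once, both eyes filled together).
using UnityEditor;

public static class StereoSettings
{
    [MenuItem("Tools/Use Single Pass Stereo")]  // hypothetical menu path
    public static void EnableSinglePass()
    {
        PlayerSettings.stereoRenderingPath = StereoRenderingPath.SinglePass;
    }
}
```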
- Initializing a basic Unity 3D project and beginning development of a simple VR scene.
- Collecting research articles that may help us generate effective virtual environments.
- Testing the deployment of a simple Flutter application on ZEIT.