Experiential Design / Final Project

07.07.2025 - 03.08.2025 (Week 12 - Week 15)
LIEW XIAO HUI / 0353121
BACHELOR OF DESIGN (HONOURS) IN CREATIVE MEDIA / EXPERIENTIAL DESIGN
Final Project: Completed Experience

JUMPLINK
Instructions
Final Project: Completed Experience
Feedback
Reflections


INSTRUCTIONS


Final Project: Completed Experience
In this task, we are required to further develop the prototype application and complete it as a fully working, functional product experience.

Progress
After submitting the Task 03 prototype, I continued developing the app by completing the remaining features, including the "Display Correct Step" function. I also worked on fixing existing bugs, such as the distorted and blurry scenario environment caused by using a large sphere in the prototype. Additionally, I planned to keep exploring ways to position the AR camera inside the car at the driver’s eye level and, if possible, implement a fix. If time permitted, I intended to further polish the app and resolve any remaining issues.
During Week 12's Monday online class, I had a consultation session with Mr. Razif. Last week, I had sent him my project file and asked for assistance in fixing the AR camera position so that it stays at the driver’s eye level instead of spawning the car based on the detected plane. In this session, I followed up with Mr. Razif on whether he had found a solution. Unfortunately, he had not tested it yet, so he planned to use the class session to try out potential fixes using my project file.
I also informed Mr. Razif about the issues I faced with the environment setup. I had used a large sphere to simulate the surroundings, but the view from inside the car, especially through the windows, appeared distorted and blurry. Additionally, due to the sphere’s limited internal space, animating the car model to simulate movement was challenging. Although I tried enlarging the sphere, it only made the environment appear more unrealistic. Mr. Razif suggested trying a skybox for the environment. I explained that I had tested this before, but since we are using the Vuforia Engine and AR Camera, the skybox only works in Unity's editor view and not when exported to mobile. As a workaround, Mr. Razif recommended including a Unity camera in the scene positioned at (0, 0, 0), then applying the skybox environment to see if that setup would function properly during mobile export.

After the consultation session, I proceeded to test the solution suggested by Mr. Razif. Following his advice, I added a Unity camera to the scene and applied the skybox I had previously created. Although the skybox appeared correctly in the Unity editor, it still did not show up after exporting the project to my phone, indicating that the Unity camera wasn’t functioning as intended within the AR setup.


Figure 1.1.1, 1.1.2, 1.1.3 Progress screenshot.

To find an alternative, I considered using a dome model instead. Since 360° images are curved and the road needs to appear flat, I thought a dome might help achieve the effect I wanted: combining a realistic curved environment with a flat driving surface. I searched for dome models in the Unity Asset Store, downloaded a suitable one, and imported it into my project. However, the dome didn’t display any images after applying the texture, and I couldn’t find any useful tutorials or documentation on how to make it work. As a result, I decided to look for another approach.


Figure 1.2.1, 1.2.2, 1.2.3 Progress screenshot.

Next, I explored importing images of trees, buildings, barriers, and lights, assigning them to planes for easier animation without distortion. For the static sky, I continued using a large sphere with a sky texture. This approach helped avoid blurriness and provided a clearer, more stable AR environment.
To test whether this new approach would work effectively, I started by using Google Maps to capture building perspectives that would align well with the driver's eye-level view. I adjusted the viewpoint to get a clear, frontal angle of the buildings and used the print screen feature to download the image. Then, I imported the image into Adobe Photoshop to remove the background elements like the sky, barriers, and any overlapping buildings.


Figure 1.3.1, 1.3.2, 1.3.3 Progress screenshot.

In Unity, I created a quad and adjusted its rotation and position to simulate the buildings from the driver’s view. I had consulted AI on whether to use a quad or a plane, and the recommendation was to use a quad, since it uses far fewer vertices than Unity’s default plane, making it lighter and more suitable for AR.
To apply the transparent PNG image to the quad, I created a new material using the Unlit Universal Render Pipeline shader. I set the surface type to “Transparent” and the blending mode to “Alpha,” then assigned the image to the Base Map. After applying this material to the quad, I tested a few barrier objects to check whether the visual looked correct from the driver’s perspective, and it worked well.


Figure 1.3.4, 1.3.5, 1.3.6 Progress screenshot.

Once this was confirmed, I extended the barriers and added them to the right side of the car to create an oncoming lane.


Figure 1.3.7, 1.3.8 Progress screenshot.

Initially, I reused the same material and plane for the road, adjusting its tiling to switch from two to three lanes. However, the result didn’t look visually accurate. So, I duplicated the road image, edited it in Photoshop, and imported it back into Unity as a new texture. I then assigned this to a new plane to create the opposite lane and duplicated the barriers to match.


Figure 1.3.9, 1.3.10, 1.3.11 Progress screenshot.

For the background buildings, I used the same quad and material setup, applying the cleaned-up building images using the unlit transparent material approach.


Figure 1.3.12 Progress screenshot.

Next, I moved on to creating the streetlights. I avoided using quads here, as a flat image would look unrealistic when animated, especially due to the lack of angle change during movement. Instead, I searched the Unity Asset Store for 3D streetlight models that matched the visual style of my scene. I initially tried one model, but it appeared pink due to unassigned materials. 


Figure 1.4.1, 1.4.2, 1.4.3 Progress screenshot.

After troubleshooting, I decided to switch to a second free asset pack that included several streetlight variations with better material setup. I imported this new pack and placed the streetlights in the scene, adjusting their positions accordingly to enhance realism. 


Figure 1.4.4, 1.4.5, 1.4.6 Progress screenshot.

Since the scene felt a bit empty and didn’t reflect a typical Malaysian road environment, I decided to add a highway guide sign. Initially, I planned to use a 3D model, but after searching online sources and the Unity Asset Store, I couldn’t find any Malaysian-specific highway signs. 


Figure 1.5.1, 1.5.2 Progress screenshot.

I also tried using Malaysia’s highway sign image in Tripo AI to generate a 3D model, but the result was unsatisfactory. 


Figure 1.5.3 Progress screenshot.

So instead, I used an image of the highway guide sign, removed its background, created a quad in Unity, and assigned the image to it.


Figure 1.5.4, 1.5.5, 1.5.6 Progress screenshot.

After animating all the environmental elements, such as barriers, streetlights, and buildings, I tested the AR experience on my phone to observe how everything worked from the user’s perspective.


Figure 1.6.1, 1.6.2 Progress screenshot.

During testing, I noticed that the current environment still lacked realism. Specifically, the front view appeared empty, showing only the sky, which created an unnatural feeling as if the road suddenly disappeared ahead.


Figure 1.7.1, 1.7.2, 1.7.3 Progress screenshot.

To improve this, I searched for a more suitable 360° image on Google Maps and found a highway view surrounded by trees and mountains. I downloaded the 360° image and imported it into Unity. When applied to the large sphere used for the background, this new image provided a better sense of depth and visual continuity from the driver’s eye level, especially since the mountains and trees filled the forward view effectively.


Figure 1.7.4, 1.7.5 Progress screenshot.

I attempted to animate the entire sphere to simulate the environment moving backward as the car moved forward. However, this approach didn’t work well: the further the sphere moved, the less usable space remained inside it, eventually causing the road visuals to clip awkwardly through the sphere’s surface. To resolve this, I returned to using individual environmental elements on flat surfaces (quads) with transparent tree images. I edited the tree images in Adobe Photoshop to remove the background and applied them as materials to quads in Unity.


Figure 1.7.1 - 1.7.4 Progress screenshot.

After assigning the tree images to the quads, I proceeded to animate them to simulate the effect of trees moving backward as the car drives forward.


Figure 1.7.5, 1.7.6 Progress screenshot.

When testing on the phone, I encountered an issue where the animations started immediately after navigating from the home page, rather than triggering after the car was spawned using the plane finder. To fix this, I added a script to the plane finder so that the environment animation only begins after the user taps to place the car. I also configured the animation to play only once by disabling the loop setting.
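Below is a simplified sketch of how this placement-triggered start can be wired up. The class and field names are illustrative rather than my exact script, and the placement event it hooks into depends on the Vuforia plane finder version being used.

```csharp
using UnityEngine;

// Simplified sketch: the environment Animators stay disabled until the plane
// finder's placement event fires, then each one plays its non-looping clip.
public class EnvironmentAnimationStarter : MonoBehaviour
{
    [SerializeField] private Animator[] environmentAnimators; // barriers, trees, buildings, lights

    private void Start()
    {
        // Keep everything frozen until the car has been placed.
        foreach (var animator in environmentAnimators)
            animator.enabled = false;
    }

    // Wired up in the Inspector to the plane finder's "content placed" event
    // (the exact event name varies between Vuforia versions).
    public void OnCarPlaced()
    {
        foreach (var animator in environmentAnimators)
            animator.enabled = true; // plays the default, non-looping state from its first frame
    }
}
```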


Figure 1.8.1, 1.8.2, 1.8.3 Progress screenshot.

After exporting the updated version to my phone, I noticed that the environment still appeared to end too abruptly. To improve the illusion of depth and continuity, I duplicated the tree quads, extended their length, and applied the same animation, making the simulated movement more seamless and immersive. Throughout this process, I frequently checked the scene from the driver’s eye-level perspective to ensure the visuals looked natural. Initially, I kept the mountain backdrop in the tree image, thinking it would add visual depth. However, it ended up looking awkward from the interior view, so I removed the mountains and kept only the tree elements.


Figure 1.9.1, 1.9.2 Progress screenshot.

After completing the scenario environment, I proceeded to develop the "Display Correct Step" feature. I began by searching for a shader that could outline specific car parts for highlighting, along with descriptive labels. I initially downloaded a free shader from the Unity Asset Store, but it didn’t work due to incompatibility with the Universal Render Pipeline (URP). I then tried several other URP-compatible shaders, but none of them worked or had proper documentation.


Figure 2.1.1, 2.1.2, 2.1.3 Progress screenshot.

Since I spent too much time searching for a free shader in the Asset Store, I decided to follow a YouTube tutorial to create one myself using Shader Graph. I looked for a shader with the visual style I wanted and followed the tutorial step by step to build a forcefield effect. 


Figure 2.2.1 Progress screenshot.

The process was quite complex, involving multiple nodes and connections. 


Figure 2.2.2 - 2.2.6 Progress screenshot.

Initially, everything worked well and matched the tutorial, but towards the end, the shader’s output began to differ. Despite double-checking for any missed or incorrect steps, the final result, especially how the forcefield wraps around the sphere, didn’t match the tutorial. I spent time troubleshooting but couldn’t pinpoint the exact issue.


Figure 2.2.7 - 2.2.10 Progress screenshot.

After troubleshooting without success, I switched to another tutorial designed for URP. This time, I faced an issue locating procedural patterns in Shader Graph, but eventually found them in the Assets folder and dragged them in manually. 


Figure 2.3.1, 2.3.2, 2.3.3 Progress screenshot.

I followed the YouTube tutorial carefully, as I didn’t want to spend more time recreating the forcefield Shader Graph and was really hoping it would work. Fortunately, when I tested it using a cube in my scene with the shader material applied, it worked well.


Figure 2.3.4 - 2.3.9 Progress screenshot.

Next, I applied the shader to individual car parts. To avoid modifying the original model, I created new empty GameObjects inside the car model with mesh filters and renderers that used the shader. This gave the desired glowing effect. 


Figure 2.3.10 - 2.3.14 Progress screenshot.

I continued by changing the shader pattern to a hex lattice for a more tech-inspired look that matches the AR application's theme. I also fine-tuned some settings in the Shader Graph to achieve the visual style I wanted.


Figure 2.4.1 Progress screenshot.

Then, I applied the completed shader material to all relevant car parts based on the correct interaction steps. It worked well on the pedals and side mirrors. 


Figure 2.4.2, 2.4.3 Progress screenshot.

However, when applied to the steering wheel, it appeared completely white. I tried adjusting the parameters in the Shader Graph but realized any changes would also affect the other parts using the same material. To solve this, I created a new material using the same Shader Graph and adjusted its parameters specifically for the steering wheel. Although the final result wasn’t exactly the same as the pedals and side mirror due to the steering model affecting how the hex lattice displays, I managed to make it visually similar enough to maintain consistency.


Figure 2.4.4, 2.4.5 Progress screenshot.

Then I created 3D TextMeshPro objects beside each highlighted part for explanation. Positioning the text was tricky; I spent time adjusting each one carefully and verified its visibility from the driver's perspective.


Figure 2.5.1- 2.5.4 Progress screenshot.

To synchronize the display with user interaction, I implemented a button that, when clicked, activates the current step’s car part (with force field effect) and its corresponding text. The animation uses a fade-in and fade-out effect by adjusting the alpha value of the shader through a float parameter. 
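A minimal sketch of the fade idea, assuming the Shader Graph exposes a float alpha property (named "_Alpha" here purely for illustration):

```csharp
using System.Collections;
using UnityEngine;

// Sketch of the fade: lerp an exposed Shader Graph float on the highlight
// material so the force field fades in and out over a short duration.
public class HighlightFader : MonoBehaviour
{
    [SerializeField] private Renderer highlightRenderer;
    [SerializeField] private float fadeDuration = 0.5f;

    private static readonly int AlphaId = Shader.PropertyToID("_Alpha"); // illustrative property name

    public IEnumerator Fade(float from, float to)
    {
        Material mat = highlightRenderer.material; // runtime instance of the highlight material
        float elapsed = 0f;
        while (elapsed < fadeDuration)
        {
            elapsed += Time.deltaTime;
            mat.SetFloat(AlphaId, Mathf.Lerp(from, to, elapsed / fadeDuration));
            yield return null;
        }
        mat.SetFloat(AlphaId, to);
    }
}
```

The button's click handler would then start the coroutine, e.g. StartCoroutine(Fade(0f, 1f)) to fade the current step's highlight in.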


Figure 2.5.5 - 2.5.10 Progress screenshot.

I also wrote a script to deactivate the current step and activate the next one automatically at the end of each animation. After confirming the script worked, I implemented all steps. 
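The step-advancing logic can be summarised roughly as below; the field names are placeholders, and in my project the advance is triggered at the end of each step's animation (for example via an Animation Event).

```csharp
using UnityEngine;

// Sketch of the step sequencing, assuming each step is a GameObject grouping
// the highlighted car part and its TextMeshPro label.
public class StepSequencer : MonoBehaviour
{
    [SerializeField] private GameObject[] steps;        // pedals, side mirrors, steering wheel, ...
    [SerializeField] private GameObject completePanel;  // replay / proceed panel

    private int currentStep = -1;

    // Called when the current step's animation finishes.
    public void AdvanceStep()
    {
        if (currentStep >= 0)
            steps[currentStep].SetActive(false);   // hide the finished step

        currentStep++;
        if (currentStep < steps.Length)
            steps[currentStep].SetActive(true);    // show the next highlight and label
        else
            completePanel.SetActive(true);         // all steps done: offer replay or proceed
    }
}
```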


Figure 2.5.11 - 2.5.15 Progress screenshot.

Upon completion of the final step, a panel appears giving the user the option to replay or proceed. 


Figure 3.1.1 Progress screenshot.

I scripted the replay button to reset all relevant animations. Initially, I attempted to access the animators from a list of GameObjects, but this approach didn’t work on mobile.


Figure 3.1.2, 3.1.3, 3.1.4 Progress screenshot.

I then modified the script to directly reset each animator’s state, which resolved the issue.
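Conceptually, the working reset looks something like the sketch below; the Rebind/Update pair is a common Unity recipe for rewinding an Animator, though the exact script in my project differs.

```csharp
using UnityEngine;

// Sketch of the replay reset: hold direct references to the Animators and
// rewind each one to the first frame of its default state.
public class ReplayController : MonoBehaviour
{
    [SerializeField] private Animator[] animatorsToReset; // environment + step animations

    public void OnReplayClicked()
    {
        foreach (var animator in animatorsToReset)
        {
            animator.Rebind();    // reset parameters and return to the default state
            animator.Update(0f);  // apply the rewound pose immediately
        }
    }
}
```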


Figure 3.1.5, 3.1.6, 3.1.7 Progress screenshot.

At this point, the MVP feature of the app was complete. I continued polishing the experience and addressing remaining issues. One key problem was that the environment animation triggered immediately after spawning the car, which could confuse new users. To fix this, I delayed the animation start by 3 seconds. 


Figure 4.1.1 Progress screenshot.

I created a countdown panel with an animated “3-2-1” scale-up and fade-out effect before the scene begins. I also ensured the countdown was included in the replay and spawn sequences.
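A rough sketch of how the three-second delay and the countdown fit together is shown below. The scale-up and fade-out of each digit is its own animation clip in my project; here the coroutine simply paces the digits one second apart, and the field names are illustrative.

```csharp
using System.Collections;
using TMPro;
using UnityEngine;

// Sketch of the 3-2-1 countdown shown before the environment starts moving.
public class CountdownSequence : MonoBehaviour
{
    [SerializeField] private GameObject countdownPanel;               // panel holding the digit text
    [SerializeField] private TextMeshProUGUI countdownText;
    [SerializeField] private EnvironmentAnimationStarter environment; // from the earlier sketch

    public IEnumerator RunCountdown()
    {
        countdownPanel.SetActive(true);
        for (int i = 3; i >= 1; i--)
        {
            countdownText.text = i.ToString();
            yield return new WaitForSeconds(1f); // one digit per second = 3-second delay
        }
        environment.OnCarPlaced();   // only now do the barriers, trees, etc. start moving
        countdownPanel.SetActive(false);
    }
}
```

Both the spawn and replay sequences would start this coroutine (StartCoroutine(RunCountdown())), so the countdown always precedes the moving environment.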


Figure 4.1.2 - 4.1.5 Progress screenshot.

I also continued working on fixing the issue of keeping the AR camera consistently positioned at the driver's eye level, using a new solution suggested by AI. Unfortunately, the method still didn’t work as expected.


Figure 4.2.1, 4.2.3 Progress screenshot.

Later, I had an online consultation with Mr. Razif, where I presented the completed application and shared a recorded video demonstrating the user experience. I also asked if there were any updates regarding the camera position fix, since I had sent him my project file three weeks ago. Mr. Razif responded that he had tested the file I previously submitted. While the solution partially worked, the car model still appeared slightly offset from the camera. He asked me to send him the latest project version, especially now that I had implemented the new environment, so he could test it again. He also commended my progress and acknowledged that the application was coming together well.


Figure 4.3.1 Completed application.

Since the submission deadline was extended to the Week 15 weekend, I used the additional time to continue working on fixing the camera position to match the driver’s eye level. I sought help from one of my game development groupmates, a computer science student familiar with Unity. He suggested making the car model a child of the AR Camera and setting it to static so it would stay fixed at the driver’s eye position. However, this approach didn’t work, as it caused problems with the scenario animation: the car needed to remain stationary on the road to align with the animation, but making it static would break this setup.
Next, I experimented with creating prefabs for the car model and the environment, intending to control their spawning via script so they wouldn’t rely on the ground plane detection stage.
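The spawning experiment looked roughly like the sketch below; the offset values and field names are illustrative rather than my final setup.

```csharp
using UnityEngine;

// Experimental sketch: place the car and environment at a fixed offset
// relative to the AR camera instead of waiting for plane detection.
public class DriverSeatSpawner : MonoBehaviour
{
    [SerializeField] private Camera arCamera;
    [SerializeField] private GameObject carPrefab;
    [SerializeField] private GameObject environmentPrefab;
    [SerializeField] private Vector3 seatOffset = new Vector3(0f, -1.1f, 0.3f); // rough eye-to-seat offset

    public void Spawn()
    {
        // Keep only the camera's yaw so the car sits level in front of the user.
        Quaternion yawOnly = Quaternion.Euler(0f, arCamera.transform.eulerAngles.y, 0f);
        Vector3 origin = arCamera.transform.position + yawOnly * seatOffset;

        Instantiate(carPrefab, origin, yawOnly);
        Instantiate(environmentPrefab, origin, yawOnly);
    }
}
```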


Figure 5.1.1 - 5.1.4 Progress screenshot.

Although the script successfully spawned the car model, the environment and other models were missing. Furthermore, the spawned car model appeared upside down, and the camera view ended up facing from the gear lever towards the back seat, which was not intended.


Figure 5.2.1, 5.2.2, 5.2.3 Progress screenshot.

I suspected this might be due to the model’s rotation settings in Unity, so I imported the model into Blender, adjusted its rotation, exported it as an FBX file, and re-imported it into Unity. Unfortunately, the issue persisted: the car spawned upside down, and the environment assets were still missing.


Figure 5.3.1, 5.3.2 Progress screenshot.

I spent nearly a week experimenting with different approaches and scripts to fix the camera to the driver’s eye position, aiming to provide the best user experience. Despite my efforts, none of the solutions worked. I also consulted Mr. Razif for assistance, as I had sent him my project file the previous week. However, he was occupied with Application Design 02 submissions and was unable to look into it. In the end, the camera could not be fixed to the driver’s eye position.


Figure 5.4.1 Progress screenshot.

Given the time I had already spent on this issue, and concerned that the application lacked sufficient content, I decided to proceed with developing a second scenario to ensure the project met the submission requirements. I duplicated the GameObject from Scenario 01, since its object positioning was already arranged, and disabled the components that weren’t needed for Scenario 02.
Scenario 02 involves a flooded road section, so I researched how to create water in Unity. After downloading the appropriate package and watching tutorial videos, I imported the assets and applied shaders to cubes to represent water. 


Figure 6.1.1 - 6.1.4 Progress screenshot.

To make the flooded area appear more realistic, I slightly tilted the road to simulate a sloped surface where water would logically collect after rain. 


Figure 6.2.1 Progress screenshot.

I also changed the water color to a dark brown to mimic muddy rainwater, as clear blue water wouldn’t fit the setting. I adjusted the flooded area’s location to match the driver’s eye view perspective.


Figure 6.3.1 - 6.3.4 Progress screenshot.

For environmental context, I searched for highway bridge models online, as floods often occur beneath bridges. 


Figure 7.1.1, 7.1.2 Progress screenshot.

After importing a suitable bridge model into Unity, I used Google Maps to study real-world road environments beneath highways, which gave me ideas for the surrounding elements I needed to create. 


Figure 7.2.1 Progress screenshot.

I downloaded road fences from the Unity Asset Store, imported them into Unity, and arranged them along both sides of the road. Since the road had a slope, I had to separate the fence models into multiple parts and manually adjust their angles to align them with the road's tilt. I repeated the same process for the opposite direction lanes.


Figure 7.3.1 - 7.3.7 Progress screenshot.

When creating the grassy areas alongside the road, I encountered issues with a downloaded asset package that didn’t work properly. 


Figure 7.4.1, 7.4.2 Progress screenshot.

As an alternative, I created my own grass using a texture image. I applied the image as a base map to a material, assigned the material to a cube, and resized it to form a rectangle to act as the grass surface. I also adjusted the texture tiling to prevent distortion and placed the grass alongside the fences.
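The tiling adjustment can be expressed roughly like this, assuming a URP material with the grass texture assigned to the Base Map slot; the density value is illustrative.

```csharp
using UnityEngine;

// Small sketch: repeat the grass texture in proportion to the cube's scale
// so it doesn't smear when the cube is stretched into a long strip.
public class GrassTiling : MonoBehaviour
{
    [SerializeField] private Renderer grassRenderer;
    [SerializeField] private float tilesPerUnit = 1f; // illustrative tiling density

    private void Start()
    {
        Vector3 scale = grassRenderer.transform.localScale;
        // "_BaseMap" is the URP Lit/Unlit base texture property.
        grassRenderer.material.SetTextureScale(
            "_BaseMap",
            new Vector2(scale.x * tilesPerUnit, scale.z * tilesPerUnit));
    }
}
```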


Figure 7.5.1 - 7.5.7 Progress screenshot.

After completing the basic environment setup, I added a car model to the opposite lane to prevent the scene from appearing empty.


Figure 7.6.1 Progress screenshot.

However, the environment still looked sparse, so I looked for a field model online to place on both sides of the road. Unfortunately, the FBX file I downloaded didn’t include textures. I solved this by creating new materials in Unity and assigning colors to the ground and grass areas, ensuring they matched the grass texture I had used previously for consistency.


Figure 7.8.1, 7.8.2 Progress screenshot.

I noticed that the bridge model lacked textures and appeared as a smooth light grey, which didn’t resemble a typical concrete highway bridge. I searched for a concrete texture image online, imported it into Unity, and assigned it as a material to the bridge.


Figure 7.9.1, 7.9.2, 7.9.3 Progress screenshot.

The fence models had the same issue, so I applied a metal texture to them to enhance their realism.


Figure 7.9.4, 7.9.5, 7.9.6 Progress screenshot.

To maintain visual consistency, I reused the 360-degree background image from Scenario 01. Despite using the same image, the addition of 3D models around the car in Scenario 02 provided enough variation to differentiate the scenes. After completing the environment setup, I built the app on my phone to test the user experience. I noticed the scene lagged due to the large field models, so I reduced their size to improve performance.


Figure 7.10.1, 7.10.2, 7.10.3 Progress screenshot.

For Scenario 02 animations, I encountered challenges with the tilted road. When animating, the car models appeared to float unnaturally above the road. To fix this, I reverted the road and fences to a flat 0° angle. I then tilted the water object slightly to maintain the visual depth of the flooded area. 


Figure 8.1.1, 8.1.2 Progress screenshot.

I continued animating other elements, ensuring they all moved within a consistent range to maintain realism. 


Figure 8.2.1 - 8.2.5 Progress screenshot.

For car models, I varied their movement speeds to reflect real-world traffic behavior, where some vehicles move faster than others.
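For reference, a scripted alternative that produces the same kind of variation (if the cars shared one movement clip) would look like the sketch below; in my project the speeds were varied in the animations themselves, so this is only an illustration.

```csharp
using UnityEngine;

// Sketch of a scripted speed variation: scale each car Animator's playback
// speed by a random factor so traffic doesn't move in lockstep.
public class TrafficSpeedVariation : MonoBehaviour
{
    [SerializeField] private Animator[] carAnimators;
    [SerializeField] private Vector2 speedRange = new Vector2(0.8f, 1.4f); // illustrative range

    private void Start()
    {
        foreach (var car in carAnimators)
            car.speed = Random.Range(speedRange.x, speedRange.y);
    }
}
```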


Figure 8.3.1 - 8.3.4 Progress screenshot.

Once animations were done, I created scenario pop-ups, buttons, feedback panels, and step-by-step instruction displays, using duplicates of the panels from Scenario 01. I updated the content to suit Scenario 02 and renamed the panels to avoid confusion. I also reassigned OnClick events to ensure the panels and animations functioned correctly.


Figure 9.1.1 - 9.1.6 Progress screenshot.

I updated the step-by-step guide for Scenario 02 and searched through the car model’s components to apply a force field shader that highlights parts during instructions. 


Figure 9.2.1, 9.2.2, 9.2.3 Progress screenshot.

I initially assigned the shader to the hazard light button’s mesh, but since it was grouped with other elements like the airbag and horn, the shader affected the entire dashboard. To fix this, I created a small cube, positioned and sized it to overlay the hazard light button, and applied a custom shader material with adjusted masking power to ensure it didn’t obscure the button’s details.


Figure 9.3.1 - 9.3.6 Progress screenshot.

I then animated the cube to fade in and out alongside the instructional steps. Using scripts, I controlled the sequence of enabling and disabling these animations to synchronize with user interactions.


Figure 9.4.1, 9.4.2, 9.4.3 Progress screenshot.

Next, I created a script for the skip button, allowing users to jump from Scenario 01 to Scenario 02 seamlessly. The script ensured all Scenario 01 animations stopped before Scenario 02 animations began. I also re-applied the replay script from Scenario 01 to allow users to replay the countdown animation.
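Stripped down, the skip logic amounts to toggling the two scenario roots; the names here are illustrative, and in my project Scenario 02's own countdown and intro take over once it is enabled.

```csharp
using UnityEngine;

// Sketch of the skip button: shut down everything under Scenario 01 before
// enabling Scenario 02, so no Scenario 01 animation keeps playing underneath.
public class ScenarioSkipper : MonoBehaviour
{
    [SerializeField] private GameObject scenario01Root;
    [SerializeField] private GameObject scenario02Root;

    public void OnSkipClicked()
    {
        // Disabling the root stops all of Scenario 01's Animators and audio at once.
        scenario01Root.SetActive(false);
        scenario02Root.SetActive(true);
    }
}
```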


Figure 9.5.1 - 9.5.5 Progress screenshot.

Throughout this process, I frequently built the app on my phone to test functionality, fixing issues like inactive animations or improperly linked UI panels as they arose. 


Figure 10.1.1, 10.1.2, 10.1.3 Progress screenshot.

To enhance immersion, I searched for sound effects such as ambient noise, braking sounds, and rain. I learned how to integrate sound effects into Unity, using scripts to control their playback through animation events, ensuring that sounds played only when necessary.
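A simplified sketch of the animation-event sound hookup is shown below, assuming each sounding object carries an AudioSource; the method names are illustrative.

```csharp
using UnityEngine;

// Sketch: an Animation Event placed on a clip (e.g. at the moment the car
// brakes) calls PlayClip with the matching AudioClip, so sounds only fire
// when the animation reaches that point.
[RequireComponent(typeof(AudioSource))]
public class AnimationSoundPlayer : MonoBehaviour
{
    private AudioSource source;

    private void Awake()
    {
        source = GetComponent<AudioSource>();
    }

    // Called from an Animation Event; the clip is assigned in the event's Object field.
    public void PlayClip(AudioClip clip)
    {
        source.PlayOneShot(clip);
    }

    public void StopSound()
    {
        source.Stop();
    }
}
```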


Figure 11.1.1 - 11.1.7 Progress screenshot.

Additionally, I improved the user feedback panel by changing its background color to red or green based on user responses. This visual feedback provided immediate clarity, allowing users to quickly identify correct or incorrect choices without needing to read the text.
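The colour feedback boils down to something like this sketch; the colour values and field names are illustrative.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch of the answer feedback: tint the panel background green for a correct
// choice and red for an incorrect one, on top of the existing feedback text.
public class FeedbackPanel : MonoBehaviour
{
    [SerializeField] private Image panelBackground;
    [SerializeField] private Color correctColor = new Color(0.2f, 0.7f, 0.3f);
    [SerializeField] private Color wrongColor = new Color(0.8f, 0.25f, 0.25f);

    public void Show(bool isCorrect)
    {
        panelBackground.color = isCorrect ? correctColor : wrongColor;
        gameObject.SetActive(true);
    }
}
```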


Figure 12.1.1, 12.1.2 Progress screenshot.

Finally, I uploaded a custom app icon into Unity to replace the default Unity logo, ensuring a polished presentation. After thoroughly testing the final build on my phone to confirm everything worked smoothly, I recorded the video presentation and walkthrough video for submission.


Figure 13.1.1, 13.1.2 Progress screenshot.

Final Project Completed Experience
Unity Project File -
Google Drive
Video Presentation


Figure 14.1 Video presentation.

Walkthrough Video


Figure 14.2 Walkthrough video.


FEEDBACK
Week 12
Last week, I sent my project file to Mr. Razif for help fixing the AR camera to stay at the driver’s eye level instead of spawning based on the detected plane. Since Mr. Razif hadn’t tested it yet, he planned to use class time to try potential fixes.
I also shared the issues I faced with the environment. Mr. Razif suggested using a skybox, but I explained it doesn’t work with Vuforia on mobile. He then recommended adding a Unity camera at (0, 0, 0) and applying the skybox to test if it works during export.
Week 14
I presented the completed application along with a recorded video showcasing the user experience. I also followed up with Mr. Razif about any updates on fixing the camera position, as I had sent my project file three weeks ago. Mr. Razif shared that he had tested the previous version, and while the fix partially worked, the car model still appeared slightly misaligned. He requested the latest version of my project, especially with the updated environment, to test it again. He also praised my progress and noted that the app is coming together nicely.
Week 15
I privately messaged Mr. Razif via WhatsApp on Friday to follow up on any updates regarding fixing the camera to the driver’s eye position, as I had sent him the latest project file the week before. Despite trying several methods on my own, I was still unable to find a working solution, so I reached out to see if he had any insights or suggestions. However, he mentioned that he had not yet had time to test it due to his busy schedule.


REFLECTIONS
Throughout this project, I’ve developed my problem-solving skills significantly. Creating an application is not an easy task, as unexpected issues can arise due to carelessness, software limitations, or other unforeseen factors. I spent a lot of time trying to achieve the desired outcome, especially in fixing the camera to align with the driver’s eye position, but ultimately, I was unsuccessful. In the end, I realized this might be due to the limitations of using the Vuforia engine. I also began to think that keeping the camera movable and spawning 3D models based on plane detection might provide a more authentic AR experience, as users may want to observe the virtual environment from different angles. To allow users to experience the view from the driver’s perspective, it would be advisable to detect a flat surface near the user or directly beneath the seat to anchor the AR content accordingly.
Another challenge I faced was the uncertainty of whether the content I had developed was sufficient. Therefore, with the extended time, I chose to focus on enhancing the quality of the first scenario before moving on to create a second one, ensuring that the overall application quality was not compromised.
This module has taught me a lot, from starting as a complete beginner in Unity to being able to create an AR mobile application independently. I believe that with more practice and familiarity with Unity, I’ll eventually be able to develop additional app features, including marker-based AR experiences. I’m satisfied with the outcome of my project, but I’m also motivated to continue learning and improving my skills in the future.
