LIEW XIAO HUI / 0353121
BACHELOR OF DESIGN (HONOURS) IN CREATIVE MEDIA / EXPERIENTIAL DESIGN
Task 03: Project MVP Prototype
JUMPLINK
Instructions
Week 08 - Exporting Unity Project to Mobile Devices
Week 09 - Importing 3D Models into Unity (Markerless AR)
Task 03: Project MVP Prototype
Feedback
Reflections
Week 08 - Exporting Unity Project to Mobile Devices
In this week’s class, we learned how to export a Unity project to a mobile device. For iPhone builds, a Mac is required, while Android builds can be done using a Windows laptop.
Before exporting to a phone, several settings in Unity need to be configured. First, ensure the correct platform is selected, either Android or iOS. Then, connect the phone to the laptop using a USB cable. The phone must have Developer Mode switched on and USB debugging enabled. Additionally, set the phone’s USB connection mode to either File Transfer or USB Tethering.
Once connected, refresh the Run Device list in Unity, and the phone model should appear. Select the device to prepare it for exporting.
Next, in Player Settings, update the Company Name to our own name (since there’s no official company) and set the Product Name to our app’s name. The Version number can be customized, but typically version 1.0.0 is used for the final published version.
Key settings to adjust:
- Under Other Settings, uncheck Auto Graphics API and remove Vulkan from the list.
- Uncheck Override Default Package Name.
- Set the Version to match the one entered earlier.
- Set Minimum API Level to Android 12L (API Level 32).
- Under Target Architectures, enable ARM64.
Figure 1.1.3 - 1.1.7 Progress screenshot.
Once these settings were applied, we created a simple canvas with a panel and centered text to test the export process.
Afterward, we imported the Vuforia Engine package and set up an AR Camera and Image Target, allowing us to export the project to a phone and test AR functionality.
Figure 1.2.1, 1.2.2 Progress screenshot.
We also learned how to create a markerless AR experience using Vuforia’s Plane Finder and Ground Plane Stage. By placing the Ground Plane Stage object inside the Plane Finder’s Content Positioning Behaviour, we could make a 3D object like a cube appear on any detected flat surface.
To test the Plane Finder before exporting to a phone, we used the Vuforia Emulator, which comes with the Vuforia Engine package. This allowed us to use a webcam to simulate plane detection and check if everything worked properly.
Additionally, we practiced using events triggered when an image target is found. For example, when an image target is detected, both the Plane Finder and Ground Plane Stage would become active, while the image target itself would be disabled.
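For reference, here is a minimal sketch of the kind of switcher this involves, assuming the method is wired to the Image Target’s On Target Found event in the Inspector (the class and field names below are placeholders, not the exact setup used in class):

```csharp
using UnityEngine;

// Hypothetical sketch: called from the Image Target's
// Default Observer Event Handler > On Target Found event.
public class TargetFoundSwitcher : MonoBehaviour
{
    public GameObject planeFinder;       // Vuforia Plane Finder in the scene
    public GameObject groundPlaneStage;  // Ground Plane Stage holding the 3D content
    public GameObject imageTarget;       // the Image Target itself

    public void OnImageTargetFound()
    {
        // Switch to markerless placement and hide the marker once it has done its job.
        planeFinder.SetActive(true);
        groundPlaneStage.SetActive(true);
        imageTarget.SetActive(false);
    }
}
```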
Lastly, we explored adding a button that appears with an animation once the image target is found. When clicked, this button would activate the Plane Finder and Ground Plane Stage.
Week 09 - Importing 3D Models into Unity (Markerless AR)
During Week 09’s class, we learned how to download 3D models from the Unity Asset Store and import them into Unity. After importing, we dragged the model into the Hierarchy, created an empty GameObject, renamed it, and moved all components of the 3D model under this new GameObject. Once organized, we deleted the original imported file and saved the renamed GameObject back into the Assets folder.
Following last week’s lesson, we imported Vuforia into the project, then set up the Plane Finder and Ground Plane Stage in Unity. We placed the 3D model under the Ground Plane Stage, adjusted its position and scale, and tested the functionality using the Vuforia emulator and a webcam to confirm the Plane Finder and 3D model spawning worked properly.
Since the model appeared too small when spawned, we created a "Live View" button on the canvas and added an OnClick event. By assigning a script to the button, we allowed the 3D model to scale up to a realistic, real-world size when clicked.
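A rough sketch of what that button script might look like, assuming the button’s OnClick event calls ScaleToRealSize() (the field names and scale value are placeholders, not the exact script from class):

```csharp
using UnityEngine;

// Hypothetical "Live View" behaviour: snaps the spawned model to an
// approximate real-world scale when the button is clicked.
public class LiveViewScaler : MonoBehaviour
{
    public Transform carModel;                    // the spawned 3D model
    public Vector3 realWorldScale = Vector3.one;  // placeholder 1:1 scale

    public void ScaleToRealSize()
    {
        carModel.localScale = realWorldScale;
    }
}
```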
Additionally, Mr. Razif introduced us to ProBuilder, a Unity tool that lets us create custom rooms or building structures directly within Unity, which is especially useful for prototyping and environment design.
Task 03: Project MVP Prototype
In Task 03, we were required to develop the AR mobile application proposed earlier using Unity. The focus of this project was to ensure that the MVP (Minimum Viable Product) functions effectively while exploring solutions to achieve the desired outcomes for the application.
Progress
To begin developing the AR mobile application in Unity, I started by reviewing the video tutorials provided by Mr. Razif. Since I wasn’t sure how to enable navigation between different pages in the app when a button is clicked, I focused first on the Screen Navigation tutorial to understand the proper workflow.
After going through the tutorial, I learned that different pages within the application can be managed by separating them into individual scenes. With that understanding, I proceeded to set up the project by switching the build platform to Android and making the necessary adjustments in the Player Settings to prepare for mobile development.
After that, I created several different scenes, renamed them appropriately, and saved them inside the Assets/Scenes folder for better organization.
I then proceeded to set up a Canvas for the UI, changed its UI Scale Mode to Scale With Screen Size, and adjusted the reference resolution to match the resolution set in the Game view for consistency. Next, I created a Panel as a child of the Canvas and changed its color to white. However, when previewing it in Game mode, it appeared greyish instead of pure white. After troubleshooting online, I discovered it might be due to a material being assigned to the panel. I resolved this by removing the material and setting it to None, which restored the proper white color.
Figure 3.4.1 Progress screenshot.
Moving on, I began building the home page layout. When adding text elements, I realized the WorkSans font wasn’t available in Unity by default. I searched for a solution and followed a tutorial to download the WorkSans font from Google Fonts. After importing the font files into Unity, I used the Font Asset Creator to generate a font atlas, saved it, and was then able to apply WorkSans to my text components.
For the home page’s top section, I imported the image I wanted to use and resized it accordingly. Since this section was designed to be divided into three parts, I attempted to use a mask to crop the image into separate areas.
However, when I tried to add a border radius to the masked areas, I realized Unity doesn’t support border-radius properties like Figma does. I looked for solutions online, but none worked for mask objects.
As an alternative, I tried to attach a white PNG with rounded corners onto the mask, but that didn’t give the desired result either.
To resolve this, I decided to directly export the images with border radius applied from Figma, ensuring the correct position and shape were preserved. After importing these images into Unity and setting them as Sprites (2D and UI), I was able to drag and drop them into the Image game objects easily. However, I noticed that the images I imported into Unity appeared more transparent than expected compared to how they looked in Figma and on my local files.
After some research and consulting ChatGPT, I couldn’t find a reliable fix. Eventually, I decided to export the images in JPG format instead of PNG, which resolved the issue and restored the correct opacity and appearance.
I then continued by creating the buttons for the application. However, after adding text to the buttons, I noticed an issue: a white background appeared behind each letter.
Figure 3.9.1 Progress screenshot.
After researching the problem, I discovered it was likely caused by how the font assets were generated. The recommended fix was to increase the padding when generating the font asset. I applied this solution by adding padding and regenerating the font asset, which successfully resolved the issue.
To achieve rounded corners on the buttons, I applied the same method as before by attaching a white circle image to the button, allowing it to visually create the effect of rounded corners.
When it came to adding an outline effect around the button, I initially tried using Unity’s built-in Outline component. However, I quickly realized the outline wasn’t properly matching the button’s rounded shape. It appeared more squarish because Unity’s outline essentially duplicates the shape and offsets it, which works fine for rectangles but not for rounded buttons. Increasing the outline thickness also caused gaps and inconsistent spacing around the button.
To solve this, I created a new UI Image object, gave it the same shape and size as the button but scaled it up slightly, placed it behind the button, and used it as a custom outline. This gave a clean, consistent border around the button, matching its rounded shape.
Next, I moved on to creating the bottom navigation bar for the app. I exported the navigation icons from Figma as PNG files and imported them into Unity, adjusting their sizes and positions as needed to fit the layout.
Following the video tutorial on screen navigation, I created a Scripts folder in the Assets directory and wrote a new script following the structure shown in the tutorial. Additionally, I went to the Build Settings, added all the scenes I had created, and ensured they were checked so they could be included when building and exporting the app to my Android phone for testing.
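As a reference, here is a minimal sketch of a screen-navigation script in the spirit of the tutorial, assuming each page is its own scene listed in Build Settings (the scene names are placeholders, not the exact ones in my project):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Hypothetical navigation helper attached to the Canvas; methods are hooked
// to Button OnClick events in the Inspector.
public class ScreenNavigation : MonoBehaviour
{
    // Generic navigation: the target scene name is passed from the Inspector.
    public void LoadScreen(string sceneName)
    {
        SceneManager.LoadScene(sceneName);
    }

    // Convenience method for the back button.
    public void LoadHome()
    {
        SceneManager.LoadScene("Home");
    }
}
```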
When the user taps the Get Started button, I planned to show a screen reminding them to rotate their phone for a better AR experience. Initially, I debated whether to create this as a separate scene or as part of the existing home page scene. After getting suggestions from ChatGPT, I decided to build it within the same scene using a new panel instead of a separate scene, since the rotate reminder doesn’t require complex functionality and can be quickly dismissed before transitioning to the next scene. I created a new panel, added a gradient background, a rotate phone icon, and some instructional text.
To enhance interactivity and make the instruction clearer, I added a simple animation to the rotate icon, showing it turning as a visual cue. I also looked into ways for the page to automatically proceed to the next scene after the animation finishes playing. I discovered that Unity allows events to be added to animations, so I inserted an event at the end of the animation timeline to trigger a scene change.
After that, I moved on to developing the app’s MVP feature, simulating road scenarios. Using the same setup steps as in earlier scenes, I created a canvas and panel, added guiding text, and positioned the back button with an icon exported from Figma. I attached the screen navigation script from the video tutorial to the canvas, allowing me to assign an onClick event to the back button, letting users navigate back to the home page.
For the on-screen instructions prompting users to scan a flat surface, I added a fade in and fade out animation to make the guidance feel more natural and unobtrusive. At the end of the animation, I included an event to disable the panel entirely, as simply fading it out would leave it active and block interaction with other elements.
Next, I created the scenario popup card, which appears in the middle of the screen to inform users about an event happening and give them five seconds to respond. I imported icons, resized and positioned them, and added descriptive text. One challenge I faced was that the canvas size in Unity (1080×1920) differed from the iPhone 14 Pro frame I designed in Figma, so I needed to scale up and adjust all the UI elements accordingly.
To style the response button with a gradient outline, I found that Unity doesn’t natively support gradient strokes. As a solution, I exported a white circle with a gradient outline from Figma and imported it into Unity as an image. I then placed this image behind the response button to achieve the desired visual effect.
Following a similar process, I created the correct and incorrect response popups, changing the icons, button placements, and texts based on whether the user responded correctly or not. All elements were laid out according to the design I prepared earlier in Figma.
After that, I moved on to developing the most crucial feature of the entire application, the markerless AR driving simulation. I began by searching for suitable car models online, as I had no experience creating 3D models from scratch and knew it would be unrealistic to learn modeling from the ground up within the project’s time frame. I first browsed the Unity Asset Store, recalling how Mr. Razif used models from there during class, and the quality seemed decent. Unfortunately, none of the available free car models met my needs. Nearly all had the steering wheel positioned on the left side, while my application required a right-hand drive car to simulate driving in Malaysia.
I then expanded my search to other online 3D model sources. While I did come across a few models with the steering on the right, they tended to be overly luxurious and didn’t suit the purpose of a standard learner’s car for a training simulation.
After spending quite a bit of time searching, I finally found a suitable car model.
Initially, my plan was to open the car model in Blender, animate the scenario there, and then import it into Unity. However, when I loaded the model in Blender, I ran into a major issue where the car’s colors completely changed, with a red car turning green. I had no idea why this happened and tried following some online tutorials and guides about reassigning materials in Blender, but none of the solutions worked. Since I wasn’t familiar with Blender, troubleshooting this issue became time-consuming and unproductive.
As a result, I decided to import the car model directly into Unity and see what would happen. The same color issue appeared initially, with the car rendering in green. Thankfully, it was much easier to fix in Unity. I reassigned the materials, changing the green parts back to red, and most of the other elements retained their correct colors without much adjustment, which was a relief.
With the model corrected, I proceeded to install the Vuforia package and input my license key as instructed by Mr. Razif. I then set up the Plane Finder and Ground Plane Stage, making the car model a child of the Ground Plane Stage. I tested it in Game mode using my webcam to ensure the basic spawning function worked and confirmed that I hadn’t missed any steps.
Since my app focused on simulating driving scenarios, it was essential for the car to spawn with the viewpoint positioned at the driver’s eye level. I searched for solutions and, based on suggestions from ChatGPT, attempted to create an empty GameObject inside the car model to act as a viewpoint and attached a recommended script to the Plane Finder while disabling the Content Positioning Behaviour. Unfortunately, even after several attempts and adjustments, this setup didn’t work when I exported and tested it on my phone. Since I was running out of time and the issue remained unresolved, I decided to pause this part and continue developing other parts of the application first, while planning to seek consultation from Mr. Razif for a proper solution later.
After that, I worked on designing the onboarding page. Like before, I exported icons and images from Figma, imported them into Unity, and adjusted their sizes and spacing for the layout. I then created a simple animation where the scan icon zooms in slightly while the app’s logo and tagline gradually fade in. Once the animation played, I set it to automatically navigate to the home page by attaching the navigation script I had previously prepared.
After completing the onboarding page, I moved on to work on the bottom section of the home page. Initially, I wasn’t sure how to develop this section since it resembled a card layout containing an icon, text, and a call-to-action icon in the top-right corner. After considering the user experience, I decided to design it as a button for easier interaction. However, I wasn’t certain how to combine text and icons within a button in Unity. I consulted ChatGPT and learned that it’s possible by creating an image object as a child of the button object. Following that, I added a button, attached the white circle with a gradient outline to it, and adjusted its size and text. Then I created an image object inside the button and imported the icon exported from Figma.
Later, I realized I had overlooked adding the scan icon at the top section above the image, so I went back and inserted it. I also discovered that the home page wasn’t scrollable, which caused some of the buttons at the bottom to be hidden behind the navigation bar, making them inaccessible. After doing some research, I learned that I needed to create a Scroll View component and move the existing panel into the Content section of the Scroll View. By adjusting the panel size and the Viewport of the Scroll View, I managed to make the home page scrollable as intended.
When I exported the app to my Android phone for testing, I noticed that the layout on the home page appeared with excessive spacing and misaligned elements, very different from how it looked within Unity’s Edit and Game modes. Through a series of trial-and-error tests, I discovered that the issue was caused by incorrect anchor settings. I carefully experimented with different anchor configurations for each component on the home page to achieve proper positioning and layout alignment. After adjusting all the anchors accordingly, the arrangement finally matched the layout I designed in Figma, eliminating the awkward spacing and alignment issues.
I also noticed I had forgotten to apply drop shadows to the buttons in the bottom section. When I explored Unity’s shadow options, I found they differed from Figma. In Figma, I could control blur and spread to create a natural, blended drop shadow. Unity’s shadow settings, however, only allowed adjustments to color and effect distance, resulting in a stiff, less refined look. I searched for solutions, and ChatGPT suggested layering multiple shadows with slight color and distance variations to simulate a softer, spread-out shadow effect.
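The stacking idea could also be sketched in code, although in practice I added and tuned the Shadow components in the Inspector; the offsets and alpha values below are placeholders:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical sketch of a layered drop shadow: several Shadow components with
// increasing offsets and decreasing alpha approximate a soft, spread-out shadow.
// Attach to the button's Image (a Graphic is required for UI shadow effects).
public class SoftShadow : MonoBehaviour
{
    void Awake()
    {
        AddLayer(new Vector2(0f, -2f), 0.25f);
        AddLayer(new Vector2(0f, -4f), 0.15f);
        AddLayer(new Vector2(0f, -6f), 0.08f);
    }

    void AddLayer(Vector2 distance, float alpha)
    {
        Shadow shadow = gameObject.AddComponent<Shadow>();
        shadow.effectDistance = distance;
        shadow.effectColor = new Color(0f, 0f, 0f, alpha);
    }
}
```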
Next, I proceeded to create animations for the scenario popup and the response feedback cards. I set up the scenario card to fade in and out, adding an event at the end of the animation to disable the current panel and activate another panel containing the response buttons.
For the response buttons, I assigned OnClick events so that when a user selects an option, it automatically disables the current panel and activates the corresponding feedback panel, either correct or incorrect, while displaying a brief explanation for the outcome.
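A small sketch of how one OnClick handler can cover both outcomes, assuming the panels are assigned in the Inspector and each button passes true or false (the names are placeholders, not my actual script):

```csharp
using UnityEngine;

// Hypothetical response handler: each response button calls SelectResponse
// with a bool set in its OnClick event.
public class ScenarioResponse : MonoBehaviour
{
    public GameObject responsePanel;   // panel holding the response buttons
    public GameObject correctPanel;    // feedback panel for a correct response
    public GameObject incorrectPanel;  // feedback panel for an incorrect response

    public void SelectResponse(bool isCorrect)
    {
        responsePanel.SetActive(false);
        (isCorrect ? correctPanel : incorrectPanel).SetActive(true);
    }
}
```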
In class, I explained to Mr. Razif the issue I was facing: I couldn’t fix the camera to the driver’s eye position, and the car would spawn anywhere based on the detected plane, with a new spawn possible each time the user tapped a new plane. He wrote a custom script to help solve this problem. The intention was to lock the car’s spawn position and allow me to manually adjust the camera to the ideal driver-eye viewpoint.
However, even after attaching the script and disabling the Content Positioning Behaviour component as instructed, it still didn’t work as expected. I attempted to troubleshoot it myself by modifying the script, trying to prevent additional spawns once the car appeared and ensuring the model wouldn’t relocate when new planes were detected.
I also experimented with different scripting approaches to lock the camera’s position relative to the car’s interior, but none of these attempts successfully achieved the intended result.
During this process, while testing whether the script worked, I encountered an issue where two car models would spawn simultaneously when I exported the app to my phone. I initially suspected there might be a problem with my project file or the script itself. Despite refining and checking the code multiple times, the issue persisted. Eventually, I decided to restart Unity and re-export the project, and after doing that, the problem was resolved.
Figure 8.3.12 Progress screenshot.
Since this issue was taking up a lot of time, I decided to move forward with developing the scenario environment. I referred to Mr. Razif’s tutorial on YouTube for creating a skybox. Initially, I tried downloading 360° Malaysia road images online, but options were very limited. I chose an example image to test the workflow and followed the tutorial to set up the skybox.
I explored ways to obtain suitable 360° images. I discovered a YouTube tutorial on using Street View Download 360. At first, the website appeared to require a purchase, but after scrolling down, I found a free download link. After installing the software, I browsed Google Maps for suitable Malaysian street views, copied the map link, and used the tool to generate and download a 360° image.
However, I noticed that although the image displayed correctly in Editor mode, it didn’t appear in Game mode when using Vuforia’s AR Camera. After consulting ChatGPT, I learned that skyboxes don’t render with Vuforia’s AR Camera. A few solutions were suggested, but none of them worked when tested.
I then followed the second tutorial from Mr. Razif, which involved creating a large inverted sphere, applying an external shader, and mapping the 360° image onto it. This approach worked smoothly, both in Unity and when exported to my Android device. When testing on my phone, I realized the driver’s eye viewpoint still wasn’t aligned properly — the road wasn’t visible from that position. To fix this, I repositioned the sphere upward to achieve the desired perspective.
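For completeness, the same inside-out effect can also be approximated without an external shader by flipping the sphere mesh’s normals and triangle winding in a script; this is only an alternative sketch, not the shader approach I actually used:

```csharp
using UnityEngine;

// Hypothetical alternative to the external shader: turn the sphere inside-out
// so the 360° texture is visible from within. Attach to the sphere GameObject.
[RequireComponent(typeof(MeshFilter))]
public class InvertSphereMesh : MonoBehaviour
{
    void Start()
    {
        // .mesh returns an instance, so the shared sphere asset is not modified.
        Mesh mesh = GetComponent<MeshFilter>().mesh;

        // Flip normals so the inside is treated as the front face.
        Vector3[] normals = mesh.normals;
        for (int i = 0; i < normals.Length; i++)
            normals[i] = -normals[i];
        mesh.normals = normals;

        // Reverse triangle winding so back-face culling hides the outside instead.
        for (int sub = 0; sub < mesh.subMeshCount; sub++)
        {
            int[] tris = mesh.GetTriangles(sub);
            for (int i = 0; i < tris.Length; i += 3)
            {
                int temp = tris[i];
                tris[i] = tris[i + 2];
                tris[i + 2] = temp;
            }
            mesh.SetTriangles(tris, sub);
        }
    }
}
```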
During Week 11’s class, I updated Mr. Razif on my progress and remaining issues, including the partially fixed car spawning script. Although the car no longer spawned repeatedly after the first tap, the plane detection square remained visible. Mr. Razif advised me to add a couple more lines to the script to disable the Plane Finder after the car was placed.
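I won’t reproduce Mr. Razif’s script here, but the idea behind those extra lines is roughly the following sketch, assuming a reference to the Plane Finder is assigned in the Inspector (the structure and event wiring are illustrative, not the actual code):

```csharp
using UnityEngine;
using Vuforia;

// Hypothetical "place once" sketch: after the first successful placement,
// the Plane Finder is disabled so later taps or newly detected planes
// can no longer respawn or relocate the car.
public class PlaceOnce : MonoBehaviour
{
    public PlaneFinderBehaviour planeFinder;  // the Plane Finder in the scene
    public GameObject car;                    // content under the Ground Plane Stage

    private bool placed;

    // Assumed to be called once the content has been positioned
    // (e.g. from the Plane Finder's hit-test event).
    public void OnPlaced()
    {
        if (placed) return;
        placed = true;

        car.SetActive(true);
        planeFinder.gameObject.SetActive(false);
    }
}
```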
Mr. Razif also tested the AR experience on my Android device and confirmed that the car’s spawn distance varied based on plane detection range, which wasn’t ideal for this simulation. Since troubleshooting this in class wasn’t practical, Mr. Razif requested that I send him my full project for testing. Due to the file size being around 10GB, I compressed the project folder and uploaded it to my Google Drive, then shared the link with him for further debugging and assistance.
I then continued searching for more car models from various online sources to use for simulating the road scenarios. I imported several suitable car models into Unity and tested whether their materials were applied correctly, as some models would lose their assigned materials upon import.
After gathering all the car models with proper materials, I began creating animations to simulate their movement on the road. However, during this process, I realized an issue: because I was using a sphere as the environment, the road surface in front of and behind the sphere’s center point was curved. When animating the car models to move forward, they would end up moving off the sphere’s surface. To address this, I tilted the car models slightly upward to give the visual impression of moving along the road from the driver’s eye view.
This workaround, however, introduced other limitations: the available space for car movement was very restricted due to the sphere’s size. Additionally, if I wanted to lock the camera position at the driver’s eye level inside the car, the car itself couldn’t move. The only feasible solution was to move the sphere environment backwards, simulating forward car movement. But increasing the sphere’s size to extend the movement distance caused the 360° image texture to stretch and blur, making the environment look unrealistic.
Another issue I encountered was with 360° images that already included parked or moving vehicles. These static vehicles clashed with the animated car models, breaking immersion. I tried switching to cleaner road images without other vehicles and re-animating the car models (with a slight upward tilt), but the same movement space limitation problem remained.
To solve this, I experimented with creating a flat plane in Unity, assigning a road texture to it, and animating it as the moving road surface.
I reanimated the car models moving forward without any tilt, while the player’s car (anchored to the camera for the driver’s perspective) remained stationary. The road plane was then animated to move backwards, creating the illusion of motion from inside the car, and this worked well when viewed from the driver’s eye level. I attempted combining the plane with the sphere environment so the road and scenery would work together. It aligned properly in Unity’s Game view, but when exported to my phone, the alignment broke and the road and background wouldn’t stack as intended.
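As a side note, the same backwards-moving road illusion could also be achieved by scrolling the road texture’s UV offset in a script rather than keyframing the plane; this is only a sketch, assuming the road material exposes a main texture (the speed value is a placeholder):

```csharp
using UnityEngine;

// Hypothetical moving-road sketch: the plane stays still while its road texture
// slides backwards beneath the stationary driver's car.
public class RoadScroller : MonoBehaviour
{
    public float scrollSpeed = 0.5f;  // UV units per second (placeholder)

    private Renderer roadRenderer;

    void Start()
    {
        roadRenderer = GetComponent<Renderer>();
    }

    void Update()
    {
        Vector2 offset = roadRenderer.material.mainTextureOffset;
        offset.y += scrollSpeed * Time.deltaTime;
        roadRenderer.material.mainTextureOffset = offset;
    }
}
```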
I also tried experimenting with a cylinder as an alternative, hoping it would work like a hybrid between a plane and a sphere by combining flat and curved surfaces. However, after testing with image textures and applying wrapping scripts, it didn’t produce the expected result: the texture mapped incorrectly and didn’t achieve the visual I needed.
Since resolving the environment simulation issue was taking too much time without a clear solution, I decided to move on and complete the animations first.
I also looked for 3D models of tyre fragments to use as road obstacles, but most online sources only provided new tyre models. To solve this, I found a suitable image of a tyre fragment online and used TripoAI to generate a 3D model from it. I then imported the model into Unity and applied a tyre material. Additionally, I downloaded an extra tyre model to place alongside the fragment for more variation on-screen.
Hoping to find a better environment solution, I explored Unity’s Asset Store for any plugins that could generate real-world environments. I discovered Cesium, which can generate photorealistic 3D tiles based on specific coordinates. Unfortunately, I found it only supported selected locations such as New York and California, not regions in Asia such as Malaysia or Korea, so I’ll need to explore other solutions later for the environmental simulation.
Finally, I added events at the end of each car animation so that, once the animation completes, a scenario card will automatically appear on screen to prompt the user’s response. I also worked on scripting the animations so they wouldn’t start automatically when the scene loads but would only play once the Ground Plane Stage is detected and active.
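A minimal sketch of how that gating and end-of-animation event can fit together, assuming the car sits under the Ground Plane Stage so it only becomes active after placement (the trigger and field names are placeholders, not my exact setup):

```csharp
using UnityEngine;

// Hypothetical controller attached to the animated scenario car.
public class ScenarioAnimationController : MonoBehaviour
{
    public Animator carAnimator;          // Animator on the scenario car
    public GameObject scenarioCardPanel;  // UI card shown after the animation ends

    void OnEnable()
    {
        // The car is a child of the Ground Plane Stage, so OnEnable fires once the
        // stage content is activated after plane placement; only then start the clip.
        carAnimator.SetTrigger("StartScenario");  // placeholder trigger name
    }

    // Called by an Animation Event placed on the last frame of the car's clip.
    public void OnScenarioAnimationFinished()
    {
        scenarioCardPanel.SetActive(true);
    }
}
```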
Figure 8.14.1, 8.14.2 Progress screenshot.
What Has Been Completed
- Onboarding Page: Designed and implemented with animations and screen navigation.
- Home Page: Completed overall layout, navigation buttons, and UI elements.
- Markerless AR: Car model successfully spawns on the detected plane. Animations now play automatically when the Ground Plane Stage appears.
- Scenario Cards & Buttons: Scenario prompts with interactive response buttons added, functioning as intended.
What's Still Pending
- Fixed Camera at Driver’s Eye Position: Need to properly anchor the camera inside the car model at the correct driver’s perspective.
- Scenario Environment: Finalize and implement a suitable environment setup for realistic road scenario simulations in AR.
- Display Correct Step Feature: Develop the feature to display the correct driving step or action after the user responds to a scenario.
Final Project MVP Prototype
Video Presentation
Walkthrough Video
Feedback
Week 10
To control the camera’s position and anchor it at the driver’s eye level, Mr. Razif provided a script for testing.
Reflections
Through this project, I realized that many features and layouts I initially envisioned couldn’t be created as smoothly as expected. As someone new to Unity, I was unfamiliar with many of its tools and workflows. Unlike designing interfaces in Figma, creating layouts and interactions in Unity proved to be much more complex and required a lot of adjustments. It was quite challenging to learn a new software platform within a short time while also handling a heavy workload to complete a functional mobile application. Another difficulty I encountered was the limited availability of video tutorials or resources specifically for using Vuforia. Whenever I ran into problems or wanted to search for solutions, it was often time-consuming and frustrating.
A lot of time was spent on trial and error, testing different methods, adjusting settings, and experimenting with ideas. However, seeing the application successfully exported and running on my phone was truly satisfying and something I felt proud of. I also realized there are limitations when it comes to achieving exactly what I had imagined. While I believe it’s possible to develop the app to meet my full expectations, it would require much more experience with Unity and additional time for exploration and experimentation.
Overall, despite the difficulties, this project has been a valuable learning experience for me. From knowing nothing to being able to create a functional AR mobile application, the journey of encountering problems and actively finding solutions helped me grow and gain practical skills I wouldn’t have otherwise acquired.