Experiential Design / Task 01

21.04.2025 - 12.05.2025 (Week 01 - Week 04)
LIEW XIAO HUI / 0353121
BACHELOR OF DESIGN (HONOURS) IN CREATIVE MEDIA / EXPERIENTIAL DESIGN
Task 01: Trending Experience

JUMPLINK
Instructions
Task 01: Trending Experience
Week 02 - Experience Design & Marker-Based AR Experience
Week 03 - Marker-Based AR Experience
Week 04 - AR Button and Simple Animation
Research
AR Project Ideas Proposal
Feedback
Reflections


INSTRUCTIONS


Task 01: Trending Experience
In this task, we are required to complete weekly tutorials and practical exercises, submit a short reflective report on our findings, and propose three potential ideas for our AR project.

Progress

Exercises
Week 02 - Experience Design & Marker-Based AR Experience
In Week 02, Mr. Razif briefed us on experiential design, focusing on user journey mapping. A journey map helps us understand how users interact with a product or service, allowing designers to view the experience from the user’s perspective.
An empathy map explores the user’s thoughts, feelings, and actions, while a journey map covers the entire user experience, including pain points, gain points, and touchpoints to help develop solutions. The current journey map reflects the present experience, while the future journey map shows the ideal experience.


Figure 1.1, 1.2, 1.3 Week 02 lecture.

Then, we were divided into groups and tasked with choosing a physical place as our topic and completing a current journey map. Mr. Razif also provided some examples for us to review.
Examples: 
https://miro.com/app/board/o9J_kmK3TZo=/
https://miro.com/app/board/o9J_lLpvx5s=/
My team suggested two ideas: a concert at Axiata Arena and a car service center. We agreed on the concert, as most of us had prior experience and could relate to the user’s perspective. After deciding, we worked smoothly, with each member adding points to a Miro board created by one teammate. We discussed the journey map’s flow, phases, pain points, and gain points, then proposed AR-based solutions for the identified pain points, aligning with our upcoming AR project.


Figure 1.4, 1.5 Progress screenshot.

After completing the journey map, we presented it to Mr. Razif and our classmates and received feedback. For the 'queue for entry' phase, we had proposed a fast-track AR check-in with facial recognition for VIPs. Mr. Razif suggested adding scheduled entry times based on seating areas to further reduce queues, even with the AR check-in system in place.


Figure 1.6, 1.7 Group presentation.

After class, we were reminded to complete the future journey map as well. We refined the current state journey map based on the feedback received, added further details, and completed the future state journey map.


Figure 1.8, 1.9 Progress screenshot.

Final Journey Map
Current state

Figure 1.10 Final current state journey map.

Future state

Figure 1.11 Final future state journey map.

This week, I learned the basic concepts of experiential design, along with how to create empathy maps and journey maps to better understand users. Through these tools, we can propose solutions or designs that address users’ needs and improve their experiences. I’m quite familiar with creating journey maps, since I’ve done them in several previous modules’ assignments. However, I discovered that there are different types of journey maps: the current journey map and the future journey map. The future journey map is used to envision the ideal experience and imagine what the customer experience could be in the future.

Week 03 - Marker-Based AR Experience
During the Week 03 lecture class, Mr. Razif explained the differences between AR, MR, and VR.
AR overlays digital information onto the real world, typically viewed through devices like smartphones or tablets, with interactions happening through the screen itself.

Figure 2.1 Week 03 lecture.

MR, on the other hand, uses headsets such as smart glasses to provide a shared experience, allowing users to interact freely with digital elements in their environment. Microsoft’s HoloLens is one example.

Figure 2.2 HoloLens.

After the explanation, we tested AR interactions on our smartphones using Google.com. We searched for animals like tigers, cats, and dogs to view them in AR. This helped us check whether our devices could support AR experiences, which would be important for testing our future project content. 

Figure 2.3 Week 03 lecture.

Initially, I tried launching the AR through Safari, but it only displayed the 3D model. I then attempted it through Chrome, but it still didn’t work. Eventually, we discovered that the AR experience could only be launched using the Google app.


Figure 2.4, 2.5, 2.6 AR experience.

Following that, we carried out a group exercise to brainstorm a problem statement and propose an AR solution. 


Figure 2.7, 2.8 Class exercise.

We chose a gym as our scenario, imagining ourselves as first-time or introverted gym users who might feel overwhelmed by the variety of equipment and unsure how to use them safely. From there, we crafted a problem statement and designed an AR-based solution to guide and support these users. 
I helped produce the mockups for the proposed AR experience. Since, as Mr. Razif mentioned, it’s hard to visualize AR interactions through words alone, we created two key mockups: one showing the scanning of a QR code on a piece of gym equipment, and another displaying a virtual avatar demonstrating how to use the equipment with different exercise options.
Initially, I tried collaging images in Photoshop, but it was time-consuming and the results fell short of expectations in the time we had. So, we opted for AI-generated visuals that better matched our concept.

Figure 2.9, 2.10, 2.11 Progress screenshot.

After preparing the slides, we presented our solution to the class and Mr. Razif.


Figure 2.12 Final presentation slides.

Finally, we had a tutorial on creating a simple marker-based AR using Unity and Vuforia. Mr. Razif guided us step-by-step through account registration, setting up the license key, and adding images to the target database.
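Out of curiosity, I also sketched how the content under an image target could be shown and hidden from a script instead of purely through the editor. This is just my own minimal illustration, assuming a Vuforia 10+ scene where the ImageTarget’s Default Observer Event Handler exposes On Target Found / On Target Lost events in the Inspector; the class and field names below are hypothetical:

```csharp
using UnityEngine;

// Attach to the ImageTarget GameObject, then in the Default Observer Event
// Handler component wire ShowContent() to "On Target Found" and
// HideContent() to "On Target Lost" (Vuforia 10+ naming).
public class MarkerContentToggle : MonoBehaviour
{
    [SerializeField] private GameObject arContent; // overlay model or animation

    public void ShowContent() => arContent.SetActive(true);  // marker detected
    public void HideContent() => arContent.SetActive(false); // marker lost
}
```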


Figure 2.13 - 2.18 Progress screenshot.

Since I had already reviewed the tutorial video provided by Mr. Razif last week and tried creating a marker-based AR at home, this in-class session gave me another opportunity to practice the process and reinforce my understanding.


Figure 2.19, 2.20, 2.21 Video tutorial attempt.

During this week, I gained a better understanding of the differences between AR (Augmented Reality), MR (Mixed Reality), and VR (Virtual Reality). I also had the chance to try out AR provided by Google, which was quite interesting. We could place animals anywhere in the real world, zoom in and out, and rotate them 360 degrees.
In the group class exercise, we explored the use of AR in a gym room. I found that AR has both pros and cons in this context. On the positive side, AR can help new users or introverted individuals who may not know how to use the gym equipment properly. It could provide interactive guidance and show different workout options using the equipment. However, one of the main drawbacks is that users would need to hold their smartphones to access the AR experience, which isn't ideal since many exercises require the use of both hands. In this case, headsets or glasses would offer a much better user experience in my opinion.
As for the Unity tutorial, I found it relatively easy to follow for creating a simple marker-based AR experience. Using Vuforia to generate a license, importing images into the database, and setting everything up in Unity was straightforward. As someone new to Unity and AR development, I found it exciting and engaging. The tutorial also deepened my understanding of marker-based AR, where an image from the database is scanned and a visual or information appears overlaid on the screen.

Week 04 - AR Button and Simple Animation
During Week 04, there was no class due to a public holiday. However, Mr. Razif shared a recorded video tutorial from the Wednesday class. I watched the video and followed along to complete the exercise.

Figure 3.1 Recorded tutorial.

Using the same Unity project file from last week (the image target AR), I first set the game’s aspect ratio to 9:16. Then, I created a Canvas, which automatically generated an EventSystem. An EventSystem is required for buttons to function in Unity, so if a button isn’t working, check whether an EventSystem exists in the scene. If it’s missing, you can add one manually by right-clicking in the Hierarchy and selecting UI > EventSystem.
For the Canvas, set the UI Scale Mode to Scale With Screen Size and update the Reference Resolution to match the aspect ratio set for the game (1080 x 1920).
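To double-check my understanding, I wrote a small sketch of the same setup done from code rather than through the Inspector. The UiBootstrap class and its fields are my own invention, assuming the built-in uGUI EventSystem and Canvas Scaler components:

```csharp
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.EventSystems;

// UI buttons silently do nothing without an EventSystem, so this creates
// one at startup if the scene is missing it, then applies the Canvas
// Scaler settings from the tutorial.
public class UiBootstrap : MonoBehaviour
{
    [SerializeField] private CanvasScaler scaler; // the Canvas Scaler on our Canvas

    private void Awake()
    {
        if (FindObjectOfType<EventSystem>() == null)
        {
            var es = new GameObject("EventSystem");
            es.AddComponent<EventSystem>();
            es.AddComponent<StandaloneInputModule>(); // handles click/touch input
        }

        // Scale With Screen Size at the 1080 x 1920 reference resolution.
        scaler.uiScaleMode = CanvasScaler.ScaleMode.ScaleWithScreenSize;
        scaler.referenceResolution = new Vector2(1080, 1920);
    }
}
```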
Next, under the Canvas, I created a Button, renamed it, and updated the button’s text.

Figure 3.2 - 3.6 Progress screenshot.

To assign a function to the button, I selected it and, in the Inspector, scrolled down to the On Click() section, dragged the target object (such as the cube or sphere) into the slot, and selected GameObject > SetActive(bool). If the checkbox is ticked, the object will appear when the button is clicked; if unticked, it will be hidden.
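The same wiring can also be done from a script. This ShowHideButton component is my own hypothetical equivalent of the On Click() > GameObject.SetActive(bool) setup described above:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Script equivalent of the Inspector's On Click() > SetActive(bool) wiring.
// Assign the button and target object in the Inspector.
public class ShowHideButton : MonoBehaviour
{
    [SerializeField] private Button button;      // the UI button
    [SerializeField] private GameObject target;  // e.g. the cube or sphere
    [SerializeField] private bool show = true;   // ticked checkbox = show

    private void Start()
    {
        // Mirrors the ticked/unticked checkbox in the On Click() slot.
        button.onClick.AddListener(() => target.SetActive(show));
    }
}
```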


Figure 3.7, 3.8 Hide and show buttons.

Next, I learned how to create a simple animation in Unity. First, go to Window > Animation and drag the Animation window to the bottom panel. This opens a timeline panel, similar to what we’d see in Premiere Pro or After Effects, where you can create keyframes for your animation.
Then, click Create, make a new folder named Animation inside the Assets folder, and save the animation file there. Select the 3D object you want to animate, click the Record button in the Animation window, and start moving the object. Unity will automatically generate keyframes on the timeline as you make changes. Adjust the keyframes and object positions to complete the simple animation.

Figure 3.9 Progress screenshot.

After that, I learned how to control the play and pause of an animation using a button. In the On Click() section of the button’s Inspector, I dragged the animated 3D object into the slot. Then, from the No Function dropdown, I selected Animator > bool enabled. This allows the animation to be toggled on or off using the button’s checkbox, working similarly to the SetActive function.
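As a rough sketch of my own, this is approximately what that Inspector wiring does in code. One difference: instead of a fixed checkbox value, this version flips Animator.enabled on every click, so a single button acts as play/pause:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical script version of wiring a button to Animator > bool enabled.
// Disabling the Animator stops the clip from playing; enabling it resumes.
public class AnimationPlayPause : MonoBehaviour
{
    [SerializeField] private Button button;     // the play/pause button
    [SerializeField] private Animator animator; // Animator on the animated object

    private void Start()
    {
        // Flip the Animator on/off each click instead of using two buttons.
        button.onClick.AddListener(() => animator.enabled = !animator.enabled);
    }
}
```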


Figure 3.10, 3.11, 3.12 Progress screenshot.


Figure 3.13 Play and pause animation.

Lastly, as mentioned in the tutorial, I created separate buttons to control the animations of the cube and the sphere individually. Each button can now independently start or stop the animation of its respective 3D object: the left button controls the 3D cube’s animation, while the right button controls the 3D sphere’s.
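In script form, the same independent control could look like this hypothetical manager, with one listener per button/Animator pair:

```csharp
using UnityEngine;
using UnityEngine.UI;

// One manager that wires each button to its own object's Animator, so the
// cube and sphere animations can be toggled independently of each other.
public class PerObjectAnimationButtons : MonoBehaviour
{
    [SerializeField] private Button cubeButton;
    [SerializeField] private Button sphereButton;
    [SerializeField] private Animator cubeAnimator;
    [SerializeField] private Animator sphereAnimator;

    private void Start()
    {
        cubeButton.onClick.AddListener(() => cubeAnimator.enabled = !cubeAnimator.enabled);
        sphereButton.onClick.AddListener(() => sphereAnimator.enabled = !sphereAnimator.enabled);
    }
}
```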


Figure 3.14 Separate button to control animation.

This week, we didn’t have a physical class, but Mr. Razif provided video tutorials for Unity. Through these tutorials, I gained a clearer understanding of how to create the layout for an AR application, even though for this exercise, it mainly involved setting up buttons to control visuals and animations.
I did start to wonder whether the entire layout design for an AR application would be created solely within Unity, or if it might involve integrating other software as well. I’m also looking forward to learning how to import 3D models into Unity so that they can appear in a markerless AR setup.
For now, we’re still focusing on marker-based AR, where visuals or animations overlay onto an image or object when scanned. As more buttons and functions are added, the steps to manage everything are getting a bit more complex, but at this stage, I’m still able to keep up and stay on track.

Research
Before proposing ideas for our AR project, I took some time to research online to better understand how AR works. I wanted to have a clearer grasp of the technology so I could come up with ideas that are both creative and manageable within my own skillset.
I started by learning about the different types of augmented reality. From there, I realized that each type of AR operates differently and offers a unique user experience. It’s important to choose the right type of AR for a project because each one has its own advantages and limitations that can affect the final outcome and user experience.


Figure 4.1 - 4.6 Research.

Besides the more commonly known marker-based and markerless AR, I also learned about projection-based AR, superimposition-based AR, and user-defined markerless AR. I felt that projection-based AR might not be suitable for this module since it requires a physical projector and supporting camera to track and interact with the projected surface, something difficult to achieve for this assignment.


Figure 4.7, 4.8, 4.9 Progress screenshot.

This research also made me realize how much AR is already integrated into everyday applications we use without even noticing it. It helped me connect what I was learning with real-life examples and gave me new ideas about how AR can be applied in different situations.


Figure 4.10 Research - pdf.

AR Project Ideas Proposal
From the research I did on AR, I was able to come up with a few ideas and inspirations for my proposed AR project. I listed down several initial concepts that I thought could be both practical and creative:
First Idea:
An AR application that allows users to view furniture in actual size within their real-world environment. Users would be able to change the furniture’s color and paint options in real-time. This concept aims to help people visualize how a piece of furniture would look in their space before making a purchase, offering a clearer and more accurate experience than relying on imagination alone.
Second Idea:
An AR recipe assistant. The idea is for users to scan an ingredient with their phone, and the app would suggest recipes that use that ingredient. Users can then choose a cuisine they prefer and follow step-by-step instructions presented in AR. This would be especially helpful for people today who might not have much cooking experience but still want to prepare meals at home.
Third Idea:
A creative AR portfolio gallery, inspired by my own situation as I prepare for an upcoming internship. As a design student, I know how important it is to showcase my work in a unique and engaging way. This idea involves creating an AR gallery where students or designers can display their artworks and projects. It would offer a more immersive and memorable presentation experience, helping their portfolios stand out during interviews or presentations.

Figure 5.1 Initial ideas.

After listing down these initial ideas, I continued my research online to find out whether similar designs or applications already exist in the market. I believe it’s important to avoid creating something too similar to existing apps, as it wouldn’t bring anything new to the table. By checking what’s already available, I can also identify what gaps or weaknesses those apps have, and explore opportunities to improve or offer a different user experience. Through this process, I aimed to not just avoid repetition but to challenge myself to think more creatively and develop an idea that stands out.
For my first idea, AR interior design, I discovered through my research that IKEA had already launched a similar application called IKEA Place about 7 years ago. Since the concept was already well-established, I decided it would be better to leave this idea aside and focus on exploring fresh, more unique concepts.

Figure 5.2 Ikea Place.

As for the second idea, AR recipe assistance, I came across several articles and examples of similar concepts that already exist. Interestingly, most of them tend to lean more towards Mixed Reality (MR) technology rather than pure AR. This is because using MR through a headset or smart glasses offers a more practical and hands-free experience for cooking. In comparison, AR applications that require users to hold a phone or tablet during the cooking process can be inconvenient and disrupt the workflow. Considering this, I also decided to rethink this idea and look for alternative directions.

Figure 5.3 AR cooking assistance.

For the third idea, I’m thinking of combining elements from existing AR designs with a creative twist.

Figure 5.4 Business card.

The concept starts with a marker-based AR experience where scanning a business card triggers a visual or animation related to the owner’s identity or branding.
After this initial interaction, the experience would transition into a markerless AR mode. In this phase, the user could explore the owner’s portfolio or artworks in a 3D, immersive environment where the pieces are displayed around the user in their physical space.
This idea addresses a limitation I noticed in many current AR business card applications, where the interaction often stops at a simple animation overlay or a link to social media. By adding an interactive markerless AR gallery, it gives the audience a deeper, more memorable, and more engaging way to experience the owner’s work, effectively turning a basic business card into an immersive virtual showcase.

Figure 5.5 Progress screenshot.

The next idea is inspired by my own experience developing a board game in the Game Design module. I realized that players often feel reluctant to refer to the rulebook because of its lengthy, text-heavy instructions, which can be boring or difficult to understand without visual aid. As a result, players sometimes end up creating their own house rules when they’re unsure about the official ones.
To address this, I propose an AR application where players can scan specific board game components (cards, dice, tokens, or spinners) to access interactive, step-by-step visual guides. The app would use animations and visuals to explain how to set up the game, gameplay mechanics, and rules, making it easier for players to learn and follow. Additionally, I’m considering adding a mini-game feature within the app. This would allow players to practice game mechanics and familiarize themselves with the rules in a playful, guided environment before starting the actual game.

Figure 5.6 Progress screenshot.

The next idea is inspired by current road conditions and my own experience during driving lessons and tests. Often, learners only encounter a limited range of scenarios within the controlled environment of a driving school. As a result, once they’ve obtained their license and face unexpected or complex situations in real-life traffic, they might feel unprepared or even panicked.
To tackle this issue, I propose developing an AR application that allows users to simulate driving scenarios anywhere, by scanning an empty space (like a desk or floor). The app would then display a virtual car interior view on the screen, giving the user a first-person driving perspective.
Various real-world traffic situations would appear on the screen, such as emergency vehicles approaching, pedestrians crossing unexpectedly, or road obstacles, and the user would have to respond appropriately. 

Figure 5.7 Progress screenshot.

After finalizing three proposed AR project ideas that I was satisfied with, I booked an online consultation session with Mr. Razif to get feedback on the feasibility and potential improvements for each idea. I wanted to make sure the concepts I had in mind weren’t too ambitious for this module’s scope and to hear any suggestions for refinement.

Figure 5.8 Online consultation.

During the consultation, I presented all three ideas in detail. Mr. Razif provided constructive feedback on each of them, pointing out their strengths and challenges. Based on his advice, I refined the proposed ideas and decided to proceed with the AR Driving Tutorial idea.
Following his lecture, I also added a problem statement for each idea, which made the proposed concepts more complete and clearer about the issues they aim to address.


Figure 5.9 Final AR project ideas proposal.


FEEDBACK
Week 02
Exercises:
Mr. Razif suggested adding scheduled entry times based on seating areas to further reduce queues, even with the AR check-in system in place. (This feedback was for the 'queue for entry' phase, where we proposed a fast-track AR check-in with facial recognition for VIPs.)

Week 03
A reflective report should cover the things you’ve learned through your research and the exercises completed in class. It can also include reflections related to the ideas you’ve proposed during the project.
Proposed Ideas:
Create something where AR functions as a component of the board game. For example, using an AR dice so players can still play if the physical dice goes missing. Avoid focusing on mini-games or tutorials. Instead, think about tasks within the board game that an AR app could replace or enhance. Consider how AR can add value to the board game experience.
Out of the three proposed ideas, all are doable. The second idea is a bit more complex, while the third idea is the most interesting and currently no one else is doing it. 3D models can be sourced from online resources.


REFLECTIONS
Through the research I’ve done on both marker-based and markerless AR, I realized that many applications and activities in our daily lives are already using AR without me noticing it before. For example, social media platforms like Instagram and Facebook offer face filters when taking photos. The system detects facial features, and the filters, sometimes with animations, overlay onto the user’s face.
From this research and by reviewing the work done by our seniors, I discovered that relying solely on marker-based AR can feel a little boring, as it typically involves scanning an image or object and then displaying digital visuals or information over it. I also noticed that many of the seniors' projects tend to focus on education-related AR applications. Based on these findings, I wanted to explore and propose an idea that makes use of markerless AR, and if possible, combine it with marker-based AR to offer a more engaging and dynamic user experience.
By following the tutorials taught by Mr. Razif, I gradually learned how to create marker-based AR using Unity. As someone who has never used Unity before, it was a completely new experience for me. There were quite a number of steps involved in setting it up, such as obtaining a Vuforia license, importing images into the database, and configuring the settings. However, the process turned out to be fairly straightforward; it just requires remembering the correct sequence of steps.
For my final project idea, I decided to create an AR driving tutorial using markerless AR, which involves placing a 3D car interior model into an empty space and animating a few driving scenarios. I’m looking forward to learning more about Unity and how to build this AR application effectively.
