GAN YI KIAN / 0374572
Bachelor of Design (Hons) in Creative Media
Experiential Design
Task 3 / Project MVP Prototype
Note
Week 8
Plane Finder - automatically detects the ground
Ground Plane Stage - where your item will appear
Week 9
Today we learned how to bring a 3D interior design into Unity. First, go to the Asset Store and select a bedroom design to import into Unity.
After importing, the interior design appears on the scanned target. We then learned how to add a script to a button so that, when the button is clicked, the interior items scale up to their actual real-world size.
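A minimal sketch of that button script in Unity C#. The field names (`scaleButton`, `interiorModel`, `realWorldScale`) are my own placeholders, not from the class files; it assumes 1 Unity unit = 1 metre:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical sketch: scales the imported interior model up to
// its real-world size when the button is clicked.
public class ScaleToRealSize : MonoBehaviour
{
    public Button scaleButton;      // assigned in the Inspector
    public Transform interiorModel; // the imported bedroom model
    public Vector3 realWorldScale = Vector3.one; // 1 unit = 1 metre

    void Start()
    {
        // Register the click handler once at startup
        scaleButton.onClick.AddListener(() =>
        {
            interiorModel.localScale = realWorldScale;
        });
    }
}
```

Attach it to any GameObject in the scene and drag the button and model into the Inspector fields.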
INSTRUCTIONS
Task 3 / Project MVP Prototype
According to the proposal I adjusted in Task 2, the technology I need goes beyond what was covered in class, so I had to explore how to create my prototype on my own. Among the previous students' examples provided by Mr. Razif, Ivy's blog helped me a lot, including how to scan 3D models, bring them into Unity, and other more in-depth operations.
First I need to download Polycam to scan the object, and then upload the scanned data to the model target generator provided by Vuforia.
The items I need to scan are water bottles, tables, chairs and pens.
I tried to use Polycam to scan 3D objects, but when I scanned a water bottle I found it difficult to achieve a clean scan; my results always had distorted shapes. After trying for a day, I decided to abandon my original idea of scanning 3D objects and scan 2D photos instead.
The result, shown in the picture below, could not faithfully capture the true shape of the object, so I changed my approach: instead of scanning objects, the app scans pictures and displays the labels on them. I think this is more flexible. Although it loses some user experience compared to scanning objects directly, it is a big step for someone who has just started this subject.
This is the scanned photo I plan to print, and users can collect other words by matching the pictures on the app.
In the Week 10 class today, I was very happy to hear Mr. Razif announce that the Task 3 submission date would be postponed to next week. This was great news for me because my progress felt slow, and I needed more time to complete the MVP.
Then I imported the fonts, styled the label card to match the proposal, and added a script to the Speaker Icon so the sound plays reliably. Below the label card I added two buttons, Language and Save, which can be clicked to go to their respective interfaces.
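The Speaker Icon script can be sketched like this in Unity C#. The names (`SpeakerButton`, `speakerButton`, `audioSource`) are my own placeholders for illustration:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical sketch: plays the word's pronunciation clip when the
// Speaker Icon on the label card is tapped.
public class SpeakerButton : MonoBehaviour
{
    public Button speakerButton;    // the Speaker Icon on the label card
    public AudioSource audioSource; // holds the pronunciation clip

    void Start()
    {
        // Play the clip from the start on every tap
        speakerButton.onClick.AddListener(() => audioSource.Play());
    }
}
```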
These are the interfaces I want to create. I don't know whether they can all be realised in the Task 3 MVP; if not, they will need to be completed for the Final Presentation.
1. Homepage
2. Page before scanning (before opening camera)
3. Page after scanning (after opening camera)
4. Language page
5. Collection page
6. Save animation
I spent a lot of time on the AR camera before/after pages. I wanted the AR button in the upper-right corner to toggle on and off, with different instructions appearing on screen before and after the switch, and I chatted with the AI about how to achieve the effect I wanted.
At first, I wanted the camera to be off when entering this page. I tried many methods, but ChatGPT explained that in my newer Vuforia version the camera is on by default, and turning it off requires a complicated workaround. So I thought of using a panel to block the camera instead: the camera is actually already running, and after clicking the button the blocker disappears and the camera view for scanning is revealed.
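The panel-blocker workaround can be sketched as a small toggle script. All the field names here (`arToggleButton`, `blockerPanel`, `beforeHint`, `afterHint`) are my own assumptions; the panel starts active so the camera feed is hidden at first:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical sketch: the Vuforia camera runs the whole time, but an
// opaque UI panel covers it until the AR toggle button is pressed.
public class CameraBlockerToggle : MonoBehaviour
{
    public Button arToggleButton;   // AR button in the upper-right corner
    public GameObject blockerPanel; // opaque panel over the camera feed
    public GameObject beforeHint;   // instructions shown before scanning
    public GameObject afterHint;    // instructions shown after scanning

    void Start()
    {
        arToggleButton.onClick.AddListener(() =>
        {
            // If the blocker is visible, this click reveals the camera
            bool showCamera = blockerPanel.activeSelf;
            blockerPanel.SetActive(!showCamera);
            beforeHint.SetActive(!showCamera);
            afterHint.SetActive(showCamera);
        });
    }
}
```

This avoids touching Vuforia's camera lifecycle at all; only UI visibility changes.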
Another difficulty was that the Back button on each scene did not work properly due to script settings, so I had to repeatedly check whether each button and script was correct, and with ChatGPT's help I successfully set up the steps for the Back button.
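A reusable Back button can be sketched like this. It assumes the target scene name is set per scene in the Inspector and that every scene is added to Build Settings (a common cause of buttons "not working"); `BackButton` and its fields are my placeholder names:

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;
using UnityEngine.UI;

// Hypothetical sketch: one shared Back button script that loads a
// target scene by name. The scene must be listed in Build Settings.
public class BackButton : MonoBehaviour
{
    public Button backButton;
    public string targetScene = "MainMenu"; // scene to return to

    void Start()
    {
        backButton.onClick.AddListener(
            () => SceneManager.LoadScene(targetScene));
    }
}
```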
1. Homepage ☑
2. Page before scanning (before opening camera) ☑
3. Page after scanning (after opening camera) ☑
4. Language page
5. Collection page ☑
6. Save animation
What haven't I completed yet?
Language page
Save animation
The remaining label cards (chair, table and pen)
Detailed information on each object on the collection page
All Chinese label cards (bottle, chair, table and pen)
Final Presentation
REFLECTION
Experience
In this assignment, I created an AR app using Unity and Vuforia. I initially tried 3D object scanning, but it didn't work well, so I switched to scanning 2D images. I also designed the main pages of the app, such as the main menu (MainMenu), the AR scene, and the collection scene. Some features were difficult to implement, but I gained experience by trying things and solving problems one by one.
Observations
I noticed that 3D scans often had breakpoints or were not clear, while 2D image scans were more stable. The app's user interface needed careful setup - sometimes buttons wouldn't respond properly unless they were laid out correctly. I also found the new version of Vuforia to have some limitations, such as not being able to switch the camera off easily. Timing was also a challenge, as progress was slower than I had planned.
Findings
I’ve found that using simple techniques like 2D image tracking can help make my app more reliable. A clear flow helps users know what to do next. Regularly testing my app helps me catch issues early. I’ve also found that small details — like icons, labels, or animations — can really improve an app’s user experience.