Banau is a free, fun prototyping application
developed natively for the Apple Vision Pro.
[Personal Project] Role: Apple Vision Pro Swift engineer, interaction designer, product designer, animator, shader engineer.
01 | Problem
____
Designing mixed reality experiences presents two major challenges. First, translating 2D designs into 3D spaces is complex because tools like Figma are optimized for flat screens, making it difficult to predict how designs will function in 3D environments.
Second, adapting user experiences for immersive interactions requires careful consideration of depth, scale, and user movement, as 2D mockups may feel disjointed in a 3D setting. Mixed reality UI design resembles theater set design, where initial sketches are just the starting point, and the real test comes when elements are built and used interactively. Similarly, effective 3D UI design must be iterated and refined in the immersive space itself.
02 | Banau [Personal Fix]
____
My objective was to develop a straightforward Apple Vision Pro app using native Swift and Reality Composer Pro, designed specifically for blocking out concepts in three-dimensional space.
The concept of blocking out reality to prioritize digital content draws inspiration from “white boxing” or “grey boxing” techniques in traditional 3D design. By removing extraneous details, this approach allows a sharper focus on core spatial relationships and interaction flows. This rapid prototyping method is invaluable for refining foundational UX and UI elements before incorporating high-fidelity visuals.
The aim was to create a fast and fluid tool for prototyping and iterating on mixed-reality interfaces. By seamlessly bridging 2D design, 3D layouts, and on-device testing, the process shortens the design cycle, enabling the creation of polished and intuitive user experiences more efficiently.
Video highlights of the app
Features:
____
1. Immersive 3D Workspace: Users can enter an immersive 3D space where they can create and interact with virtual objects seamlessly.
2. Object Creation: Add, delete, rotate, lock, and modify 3D objects like cubes with control over dimensions and colors.
3. Undo: Experiment freely with the ability to easily undo actions.
4. Save/Load: Manage your projects for continuity across sessions. Files are stored locally on your device.
5. Customizable Finger Controls: Assign unique actions to finger combinations for efficient design workflows.
6. Shadows: Enable shadows to enhance realism and depth.
7. Export USD: Users can export the scene project as a USD file and bring it into Blender for further development.
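The customizable finger controls above can be sketched as a simple mapping from finger-combination identifiers to design actions. The names here (`FingerCombo`, `DesignAction`, `FingerControlMap`) are illustrative assumptions, not the app's actual API; in the real app the combinations would be detected via visionOS hand tracking.

```swift
import Foundation

// Hypothetical finger-combination identifiers (detected via hand tracking).
enum FingerCombo: Hashable {
    case thumbIndex, thumbMiddle, thumbRing, thumbPinky
}

// Hypothetical design actions a user can bind to a combination.
enum DesignAction {
    case addCube, deleteSelected, undo, toggleLock
}

// A user-editable mapping: each finger combination triggers one action.
struct FingerControlMap {
    private var bindings: [FingerCombo: DesignAction] = [
        .thumbIndex: .addCube,
        .thumbMiddle: .undo
    ]

    mutating func assign(_ action: DesignAction, to combo: FingerCombo) {
        bindings[combo] = action
    }

    func action(for combo: FingerCombo) -> DesignAction? {
        bindings[combo]
    }
}
```

Keeping the mapping in a small value type like this makes it easy to persist a user's custom bindings alongside the rest of the project data.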
Target Audience:
____
The application is designed for students, technologists, designers, 3D artists, and anyone interested in creating and exploring 3D content in an immersive mixed reality environment. It provides a simple toolset for bringing ideas to life and prototyping 3D designs directly in the Vision Pro headset.
FAQs:
Future Goals
____
While the core functionality is complete, several advanced features could enhance the project. These include implementing Boolean operations for 3D modifications (similar to Blender) and dimension controls through interactive handles.
Initial attempts at cube dimension modification faced technical challenges with transformation persistence, dimensional accuracy, and eye-tracking conflicts during scaling operations. These features remain under development for future integration.
Will I continue to develop this?
____
This project was primarily developed as a prototyping tool for another spatial application in development (Bitsharm). While I will maintain it and address any critical bugs, active development will be limited. Any relevant insights or improvements discovered during the development of the main application may be incorporated here, but this will not be the primary focus.
Challenges:
____
This project came with many challenges. I originally started by building upon Apple's Sample project:
From there, I implemented:
1. Hand gestures
2. Functionality to add another cube
3. Delete/Reset
4. Finger gestures
5. Shader Material
6. Overall UI enhancements
7. Grid implementation [Still needs refinement]
As development progressed, I realized this could be a standalone application. Putting myself in the user's shoes, I identified two essential features: project saving/storage and model exporting for further refinement in 3D software. This is where the real challenges began—integrating Swift Data and USD export proved particularly difficult.
Challenge #1: Magnify Gesture and Cube Scaling
When implementing the MagnifyGesture(), storing and loading the scale values for cubes using SwiftData proved challenging. The cube’s dimensions were not persisting correctly, leading to inconsistencies in saving and loading states. To resolve this, I implemented a mechanism to accurately track and store the scale value, ensuring compatibility with SwiftData’s storage system.
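One way to make a cube's scale compatible with SwiftData's storage is to persist it as three plain `Float` properties and expose a `SIMD3<Float>` view, since SIMD types are not directly persistable. This is a minimal sketch under that assumption; the `BoxState` model shown here is illustrative, not the app's actual source.

```swift
import SwiftData

// Sketch: persist scale as three Floats, expose it as SIMD3<Float>.
@Model
final class BoxState {
    var scaleX: Float
    var scaleY: Float
    var scaleZ: Float

    init(scale: SIMD3<Float> = [1, 1, 1]) {
        scaleX = scale.x
        scaleY = scale.y
        scaleZ = scale.z
    }

    // Convenience accessor for RealityKit code that works in SIMD types.
    var scale: SIMD3<Float> {
        get { [scaleX, scaleY, scaleZ] }
        set { scaleX = newValue.x; scaleY = newValue.y; scaleZ = newValue.z }
    }
}
```

With this shape, a `MagnifyGesture`'s `onEnded` handler can write the entity's final scale into `boxState.scale` once, so save and load round-trip the same values.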
Challenge #2: Model Jumping During Drag
Dragging entities caused positional inaccuracies. This issue was mitigated by creating a DragOffsetComponent to maintain the relative offset between the touch point and the entity’s position. The component:
• Stored the offset when the drag began.
• Used this offset to calculate the correct target position throughout the gesture. Smooth animations were introduced using .interactiveSpring to provide natural movement. Additionally, cleanup processes were implemented to remove the component when dragging ended while preserving undo functionality.
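The offset-preserving drag described above can be sketched with a small RealityKit component. The function names (`beginDrag`, `updateDrag`, `endDrag`) are illustrative; the real gesture handlers would also trigger the `.interactiveSpring` animation and undo bookkeeping mentioned in the text.

```swift
import RealityKit

// Records where the drag started relative to the entity, so the model
// doesn't "jump" to the fingertip when the gesture begins.
// Call DragOffsetComponent.registerComponent() once at app launch.
struct DragOffsetComponent: Component {
    var offset: SIMD3<Float>   // entity position minus initial touch point
}

func beginDrag(entity: Entity, touchPoint: SIMD3<Float>) {
    let offset = entity.position(relativeTo: nil) - touchPoint
    entity.components.set(DragOffsetComponent(offset: offset))
}

func updateDrag(entity: Entity, touchPoint: SIMD3<Float>) {
    guard let drag = entity.components[DragOffsetComponent.self] else { return }
    // The stored offset keeps the grab point fixed under the finger.
    entity.setPosition(touchPoint + drag.offset, relativeTo: nil)
}

func endDrag(entity: Entity) {
    // Cleanup: remove the transient component when the gesture ends.
    entity.components.remove(DragOffsetComponent.self)
}
```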
Challenge #3: Rotational Dimension Changes
Rotating a cube resulted in unexpected dimension changes upon releasing the gesture, though the rotation angle remained accurate. This issue was traced to improper scale handling in the updateBox method, which reset scale values due to mismatches between the entity and BoxState scales.
To address this:
• Scale handling in updateBox was modified to preserve the current scale during rotation.
• A threshold check was added to detect significant changes.
• Access to updateBoxScale was corrected by referencing self.boxState (part of BoxStateManager) instead of directly using boxState.
This solution ensured that scale values were preserved during rotation and maintained compatibility with save, load, and export functionalities.
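The scale-preservation fix can be sketched as follows. The `BoxState` stub and the `0.001` threshold are illustrative assumptions standing in for the app's actual persistence model and tuning.

```swift
import RealityKit
import simd

// Minimal stand-in for the BoxState mentioned in the text.
final class BoxState {
    var scale: SIMD3<Float> = [1, 1, 1]
}

// Apply a rotation while explicitly carrying the entity's current scale
// forward, instead of letting a stale stored scale overwrite it.
func applyRotation(_ rotation: simd_quatf, to entity: Entity, boxState: BoxState) {
    var transform = entity.transform
    let currentScale = transform.scale       // preserve, don't reset

    transform.rotation = rotation
    transform.scale = currentScale
    entity.transform = transform

    // Threshold check: only push the scale into persistent state on a
    // significant change, avoiding redundant writes during the gesture.
    if length(currentScale - boxState.scale) > 0.001 {
        boxState.scale = currentScale
    }
}
```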
Challenge 4: Shader Delays
Shader-related operations caused noticeable delays. I optimized the implementation by batching related operations, including transforms, dimensions, and shaders, to minimize redundant updates. The code was reorganized to calculate dimensions once at the start and reuse them throughout, which should eliminate duplicate computations and improve performance.
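The batching idea can be sketched like this: compute the dimensions once, then apply the transform, collision shape, and material in a single pass rather than triggering a separate update for each. The function and parameter names are illustrative, not the app's actual code.

```swift
import RealityKit

// One pass over all pending visual updates for a cube entity.
func applyPendingUpdates(to entity: ModelEntity,
                         dimensions: SIMD3<Float>,
                         rotation: simd_quatf,
                         material: any Material) {
    var transform = entity.transform
    transform.rotation = rotation
    transform.scale = dimensions          // unit cube scaled to size
    entity.transform = transform          // single transform write

    // Reuse the same dimensions for collision instead of recomputing.
    entity.collision = CollisionComponent(
        shapes: [.generateBox(size: dimensions)]
    )

    // Shader/material applied in the same pass as the transform.
    entity.model?.materials = [material]
}
```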
Challenge 5: USD - Implementation & Rotation Issue
Implementing the USD export feature was difficult. Getting the USD format exactly right is vital, and rotation-related inconsistencies in the export process then had to be resolved. After optimization and debugging, the rotation issue was corrected to match the expected behavior.
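To illustrate why the format matters, here is a sketch of hand-writing a minimal `.usda` description for a single cube. The ordered transform ops (translate, orient, scale) are exactly the kind of detail that is easy to get wrong and can produce rotation inconsistencies; this is an assumption-laden illustration, not the app's actual exporter.

```swift
import Foundation

// Minimal .usda text for one cube. USD quaternions are authored with
// the real part first: (w, x, y, z).
func usdaForCube(position: SIMD3<Float>,
                 orientation: simd_quatf,
                 scale: SIMD3<Float>) -> String {
    let q = orientation.vector   // simd stores (ix, iy, iz, r)
    return """
    #usda 1.0

    def Cube "Box_0"
    {
        double size = 1
        float3 xformOp:translate = (\(position.x), \(position.y), \(position.z))
        quatf xformOp:orient = (\(q.w), \(q.x), \(q.y), \(q.z))
        float3 xformOp:scale = (\(scale.x), \(scale.y), \(scale.z))
        uniform token[] xformOpOrder = ["xformOp:translate", "xformOp:orient", "xformOp:scale"]
    }
    """
}
```

If the `xformOpOrder` or the quaternion component order is wrong, the cube will import into Blender with an unexpected rotation even though each individual value looks correct.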
Learning Lessons:
____
Custom Finger Gesture is incredibly enjoyable to use... however, it requires practice to master. My favorite aspect of this application is integrating Custom Finger Gesture functionality. My goal was to design in 3D space without relying on visible UI elements, allowing my focus and concentration to remain fully on the interaction between the user and the 3D environment. Implementing finger gestures as hot cues was an exciting challenge. However, I realized how unaware I am of my finger movements, as I often found myself unintentionally triggering actions. This can only mean that voice control is the next phase...
Building this app taught me the importance of a robust state tracking system for managing user flow and ensuring predictable application behavior. It highlighted the need to align user actions with backend processes, such as saving states and handling data persistence, to provide a seamless experience. I gained insights into managing events, capturing changes, and implementing undo/redo functionality, which are critical for iterative workflows. The process also emphasized systematic debugging and optimization, particularly in addressing performance and state-handling issues. Designing without auto-save reinforced the value of intentional user actions while safeguarding against data loss. Overall, this experience improved my technical skills and ability to think holistically about user experience and app reliability.
Final Thoughts
____
This project was driven by multiple objectives: improving my Swift skills, creating something useful for others in the tech and design space, and exploring new dimensions for prototyping and project development. It’s designed for students, technologists, and designers to unlock new creative potential and experiment with 3D spaces. Personally, I plan to use this application in my own projects.
If you’d like to stay updated on this and other projects, follow Sharmscript, where I share progress and documentation of my work.
I began this application on December 1, 2024, and completed it in approximately 25 days, often working over 8 hours a day, including weekends and nights. If you enjoy using it, thank you. If you find this application helpful and would like to express your gratitude, please consider supporting a cause that is dear to me:
The Bahini Education Project is a non-profit organization committed to empowering girls and women in Nepal through education and vocational training. Their work offers scholarships, safe housing, and skill-building opportunities for at-risk individuals, fostering sustainable change and breaking cycles of poverty and vulnerability.
Development Updates
____
Jan 1, 2025.
Issue: When rotating objects, sometimes the rotation reverts back to its previous position instead of maintaining the new rotation, likely due to asynchronous BoxState updates conflicting with entity transforms. The root cause appears to be in how rotation updates are handled in `updateBox` and `rotateGesture`, where the stored rotation state might overwrite the new rotation during update cycles.
The solution was to ensure rotation state is updated immediately during the gesture and only apply stored rotations when necessary in `updateBox`, while also adding proper rotation persistence checks in `BoxStateManager`. By implementing these fixes, the rotation should persist correctly and prevent unwanted reversions to previous positions.
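The fix described above can be sketched as follows. The names mirror the text (`rotateGesture`, `updateBox`, `BoxStateManager`), but the bodies are illustrative, and the `0.001` threshold is an assumed tuning value.

```swift
import RealityKit
import simd

final class BoxStateManager {
    var storedRotation = simd_quatf(angle: 0, axis: [0, 1, 0])

    // Commit the rotation to state immediately during the gesture, so a
    // later update cycle cannot overwrite it with a stale value.
    func rotateGesture(entity: Entity, to rotation: simd_quatf) {
        entity.transform.rotation = rotation
        storedRotation = rotation
    }

    // Persistence check: only re-apply the stored rotation if the entity
    // has actually drifted from it, preventing unwanted reversions.
    func updateBox(entity: Entity) {
        let current = entity.transform.rotation
        let delta = (current.inverse * storedRotation).angle
        if abs(delta) > 0.001 {
            entity.transform.rotation = storedRotation
        }
    }
}
```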