Positioning Prototype


Context

The product is oriented towards artists who aren't necessarily tech-literate. Because of this, we try to avoid turning the virtual curation of ready-mades into a technical exercise full of frustrating encounters with foreign digital interfaces. Instead, we want to encourage artists to engage with the real physical environments they are working within and to create work in situ, with sensitivity to the site's specificities.

To achieve this goal, we felt it was important for the positioning system to be as intuitive and seamless as possible. All spatial transformations (scaling, rotating, translating, undoing and redoing actions) are possible via single gestures or diegetic interfaces. If artists don't require numeric precision, they shouldn't have to open UI panels that could take them out of the real environment surrounding them.

Rotations

Thinking of rotations in terms of global Euler angles doesn't correspond naturally to how the non-digitally-versed among us think about and use rotation in everyday life. We thought a joint-oriented rather than axis-oriented metaphor, a ball and ring, represented a more intuitive alternative that would feel more at home in a real-world AR view. The typical Euler gizmo also tends to become rather finicky on a touch-based device; one of its axes can even become practically impossible to manipulate at certain angles due to foreshortening. We solved this problem by making the ring-ball gizmo view-dependent (billboarded). This makes the gizmo less mathematically precise, but in our case the tradeoff is worth it, especially since artists are always free to use non-diegetic UI should they require such precision.
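As a rough illustration of what a billboarded ring buys you on touch (a sketch of the screen-space math only, not the project's actual code; all names here are hypothetical): a finger drag is reduced to a signed angle swept around the gizmo's on-screen centre, which can then be applied as a rotation about the camera's forward axis, so the ring never foreshortens into an unusable sliver.

```python
import math

def signed_angle(ax, ay, bx, by):
    """Signed angle (radians) from screen-space vector a to vector b."""
    return math.atan2(ax * by - ay * bx, ax * bx + ay * by)

def ring_drag_rotation(center, prev_touch, curr_touch):
    """Angle to rotate the selected object around the camera's forward
    axis, derived from a finger dragging along the billboarded ring."""
    ax, ay = prev_touch[0] - center[0], prev_touch[1] - center[1]
    bx, by = curr_touch[0] - center[0], curr_touch[1] - center[1]
    return signed_angle(ax, ay, bx, by)

# A quarter-turn drag around the gizmo centre yields pi/2 radians.
angle = ring_drag_rotation((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
```

Because the ring always faces the camera, this 2D angle is well defined for any viewing direction; the precision lost is exactly what the non-diegetic UI is there to recover.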

Custom Shaders

All the gizmos and selection effects were created procedurally using custom shaders in Unity's Universal Render Pipeline (URP). Artists can drag an object perpendicularly to a given AR plane. If the object is lifted far off the ground, we display a shadow to indicate where it sits relative to the plane it is attached to. URP only supports a single light source for shadow mapping, so instead of appropriating the scene's dynamic shadows for the sake of a single indicator, I created a shader which projects the selected object's geometry onto its attached plane, creating a dynamic custom shadow effect.
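The per-vertex flattening such a projection shader performs can be sketched in plain math (a hypothetical CPU-side equivalent for clarity; the real work happens per vertex on the GPU): each vertex is pushed along the plane's unit normal until it lies on the plane.

```python
def project_onto_plane(vertex, plane_point, plane_normal):
    """Flatten a world-space vertex onto an AR plane along the plane's
    unit normal -- the core step of a planar-projection shadow."""
    # Signed distance from the vertex to the plane, measured along the normal.
    d = sum((v - p) * n for v, p, n in zip(vertex, plane_point, plane_normal))
    # Move the vertex back along the normal by that distance.
    return tuple(v - d * n for v, n in zip(vertex, plane_normal))

# A vertex floating 2 units above a ground plane lands directly beneath itself.
shadow_vertex = project_onto_plane((1.0, 2.0, 3.0), (0.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```

Rendering the flattened geometry in a dark, translucent material directly into the main pass is what lets the effect skip the shadow-map round trip entirely.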

Used on individual objects, this ends up costing about the same as traditional shadow-mapping techniques. It never suffers from resolution problems or “shadow acne” (it is rendered directly into the main render texture rather than sampled from a shadow texture), and it doesn't carry the added memory cost of an extra frame buffer behind the scenes.

The caveat, however, is that it can only use planes (or other primitive shapes with simple projection calculations) as shadow receivers. Arbitrary meshes are not realistically compatible with the technique, as it would require raytracing and become prohibitively expensive. You could probably approximate arbitrary receivers by converting them into some sort of SDF asset and sampling that in the shader, but this would massively increase the complexity of these systems.
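To make the SDF idea concrete (purely a sketch of the approach, not code from the project), each shadow-casting vertex would be sphere-traced along the shadow direction against the receiver's distance field until it reaches the zero level set, instead of using a closed-form plane projection.

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 0.0), radius=1.0):
    """Signed distance to a unit sphere: a stand-in for a baked receiver SDF."""
    return math.sqrt(sum((a - c) ** 2 for a, c in zip(p, center))) - radius

def march_to_receiver(origin, direction, sdf, max_steps=64, eps=1e-4):
    """Sphere-trace from a vertex along the (unit) shadow direction until
    the receiver surface is hit; returns None if nothing is reached."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        dist = sdf(p)
        if dist < eps:
            return p  # landed on the receiver surface
        t += dist  # the SDF guarantees this step is collision-free
    return None

# A vertex 3 units above a unit-sphere receiver, projected straight down,
# lands on the sphere's north pole.
hit = march_to_receiver((0.0, 3.0, 0.0), (0.0, -1.0, 0.0), sphere_sdf)
```

Even in this toy form, the per-vertex loop hints at the cost: dozens of distance evaluations replace a single dot product, which is why the plane-only restriction was the pragmatic choice.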
