Research
To begin with, I performed a deep dive into the current live solution myself. It was made up of 4 components:
Blue Path: A stack of blue circles mimicking the path on the map, directing the user forward in AR.
Waypoint Sign: This was a 2D blue circle with an arrow, pointing the direction the user needed to go in worldspace at the junction of a turn.
Heading Correction Pointer: A 2D UI element that pointed at the edge of the screen when the user wasn't looking in the correct direction.
Arrival Pointer: Another 2D worldspace element which would sit in front of the user's destination.
The combination and interplay of these components, hiding and showing depending on specific states and logic, works well most of the time. There are, however, a number of problems with it, which can have a big impact on users.
Split Screen: When AR is active, it appears in a window at the top of the screen, with the map still present below. This creates a busy, distracting hierarchy and adds cognitive load.
Conflicting States: Sometimes states would overlap, leaving multiple components on screen at once presenting conflicting information, which can be confusing for users.
Reliance on Show/Hide: The whole system relied on showing and hiding different elements based on proximity and navigation logic. This can lead to some components getting very close to the screen before disappearing, and requires the user to constantly switch their attention between components.
Out of these discoveries, I also had a realisation:
All 4 components are just pointing in a certain direction… so why couldn't they be 1 single component?
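To make the realisation concrete: each component ultimately answers the same question, "which way should the user turn?" A minimal sketch of that unification might look like the following (the names, `Vec2` type, and context labels are my own illustration, not the actual implementation):

```typescript
// Each of the 4 components reduces to "point the user toward a target".
// A single pointer component could take a world-space target and render
// differently depending on context (path, turn, off-heading, arrival).

type Vec2 = { x: number; y: number };

type PointerContext = "path" | "turn" | "offHeading" | "arrival";

// Signed angle (degrees) the user must rotate to face the target.
// heading is a compass bearing: 0 = +y ("north"), 90 = +x ("east").
function headingTo(user: Vec2, heading: number, target: Vec2): number {
  const bearing =
    (Math.atan2(target.x - user.x, target.y - user.y) * 180) / Math.PI;
  // Normalise to (-180, 180] so left/right turns are symmetric.
  let delta = bearing - heading;
  while (delta <= -180) delta += 360;
  while (delta > 180) delta -= 360;
  return delta;
}

// One component, one job: report the turn, let rendering vary by context.
function pointerFor(
  user: Vec2,
  heading: number,
  target: Vec2,
  ctx: PointerContext
) {
  return { context: ctx, turnDegrees: headingTo(user, heading, target) };
}
```

The point of the sketch is that the four elements differ only in which target they track and how they are drawn; the underlying direction maths is shared, which is what makes a single unified component plausible.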