• https://dl.acm.org/doi/10.1145/3290605.3300458

Brief Summary of the Paper:

  • This paper explores collaboration between a “giant” and a “miniature” user in Mixed Reality (MR). It investigates different ways of representing and controlling the miniature, and examines how the interaction dynamics between the two scales can be managed.

Novel Contributions of the Paper:

  • While previous studies on MR collaboration have touched on multi-scale collaboration, they have mainly focused on purely virtual environments. Some studies have explored real-world scenarios, but none have combined 360-degree video sharing with a handheld tangible camera as this paper does.

What I like about it (and why):

  • I appreciate that the paper conducts four studies, each varying different parameters. This maps out a broad design space for giant-miniature interaction and allows a more comprehensive understanding of the topic.

What I don’t like about it (and why):

  • One drawback is that although the system centers on interaction between a giant and a miniature, each study evaluates the experience from only one of the two perspectives. The choice of virtual representation and of miniature control could also affect the miniature’s experience, while the use of 360-degree video and the camera placements could affect the giant’s experience. For example, giving the miniature independent control might cause motion sickness, since it becomes difficult for the giant to monitor and account for the miniature’s movements; yet these aspects were not evaluated in the studies.

What could have been done differently (and why):

  • The researchers could have studied the correlation between different interaction designs and specific tasks. For instance, certain tasks might require the giant to know the miniature’s Field of View (FoV) accurately, in which case users might prefer the Frustum visualization (see the sketch below), while an “Avatars Only” approach might be more suitable for other tasks. This would provide insight into which interaction design works best for which scenario.
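To make the Frustum idea concrete, here is a minimal geometric sketch of how the miniature’s FoV could be visualized for the giant. This is my own illustration, not code from the paper; the function name, axis conventions, and FoV values are all assumptions.

```python
# Hedged sketch: compute the far-plane corners of the miniature camera's
# view frustum so the giant's AR view can draw a wireframe of exactly
# what the miniature sees. Names and values are illustrative assumptions.
import numpy as np

def frustum_corners(cam_pos, cam_rot, h_fov_deg, v_fov_deg, far=2.0):
    """Return the four far-plane corners of a view frustum in world space.

    cam_pos: (3,) camera position.
    cam_rot: (3, 3) rotation matrix whose columns are the camera's
             right, up, and forward axes.
    """
    half_w = np.tan(np.radians(h_fov_deg) / 2.0) * far  # half-width at far plane
    half_h = np.tan(np.radians(v_fov_deg) / 2.0) * far  # half-height at far plane
    right, up, fwd = cam_rot[:, 0], cam_rot[:, 1], cam_rot[:, 2]
    center = cam_pos + fwd * far  # center of the far plane
    return [center + sx * half_w * right + sy * half_h * up
            for sx, sy in ((-1, -1), (1, -1), (1, 1), (-1, 1))]

# Example: a miniature looking along +z with a 90x60 degree FoV.
# Drawing lines from cam_pos to each corner gives the frustum wireframe.
corners = frustum_corners(np.zeros(3), np.eye(3), h_fov_deg=90, v_fov_deg=60)
```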

What should be done next (and why):

  • Introducing a feature that lets the miniature tell the giant where to position the camera could enhance collaboration. If the miniature wants to view an object from a specific angle, they should be able to indicate their preferred viewpoint so the giant can adjust the camera accordingly (a sketch of such a request follows below). Additionally, systems that enable real-time 3D reconstruction, such as Remixed Reality, could let the miniature move around the scene and change viewpoints autonomously. These features would improve the miniature’s autonomy and independence while still supporting Giant-Miniature collaboration in MR remote collaboration scenarios.
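As a concrete illustration of this viewpoint-request idea, here is a hedged sketch of a message the miniature could send and of how the system might compute where the giant should hold the tangible camera. This is my own proposal, not part of the paper’s system; the ViewpointRequest type, the camera_placement function, and the standoff distance are all hypothetical.

```python
# Hedged sketch of a hypothetical viewpoint request from miniature to giant.
# Nothing here comes from the paper's implementation.
import numpy as np
from dataclasses import dataclass

@dataclass
class ViewpointRequest:
    target: np.ndarray      # world-space point the miniature wants to inspect
    direction: np.ndarray   # desired viewing direction, pointing toward the target
    distance: float = 0.3   # desired standoff from the target, in meters (assumed)

def camera_placement(req: ViewpointRequest) -> np.ndarray:
    """Where the giant should hold the tangible camera: back off from the
    target along the requested viewing direction."""
    d = req.direction / np.linalg.norm(req.direction)
    return req.target - d * req.distance

# Example: the miniature asks to view the point (1, 0.5, 0) from the front-left;
# the result could be shown to the giant as an AR marker to move the camera to.
req = ViewpointRequest(target=np.array([1.0, 0.5, 0.0]),
                       direction=np.array([1.0, 0.0, 1.0]))
print(camera_placement(req))
```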

  • I wonder if I can build upon this research in the context of MR remote collaboration. (blu3mo)