Brief Overview of the Paper: The paper introduces a new method for selecting objects in virtual reality (VR) with gaze assistance. The technique proceeds in three main steps: 1) casting a pointing cone, 2) visualizing the borders of occluded objects, and 3) confirming the target through gaze pursuit. This enables users to select objects that are partially hidden. The evaluation also shows that selection time and accuracy are less affected by heavy occlusion than they are with traditional controller-based methods.
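
To make the first step concrete, here is a minimal Python sketch of a generic cone-cast candidate filter: it keeps objects whose angular offset from the pointing direction falls inside a cone and orders them by that offset. The function name, the 10-degree half-angle, and the object positions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cone_cast(origin, direction, object_positions, half_angle_deg=10.0):
    """Return indices of objects inside a selection cone, nearest-to-axis first.

    origin: (3,) apex of the cone (e.g. the controller or head position)
    direction: (3,) pointing direction
    object_positions: (N, 3) object centers
    half_angle_deg: cone half-angle in degrees (assumed value for illustration)
    """
    direction = direction / np.linalg.norm(direction)
    offsets = object_positions - origin                  # apex -> object vectors
    dists = np.maximum(np.linalg.norm(offsets, axis=1), 1e-9)
    cos_angles = (offsets @ direction) / dists           # cosine of angular offset
    inside = np.nonzero(cos_angles >= np.cos(np.radians(half_angle_deg)))[0]
    return inside[np.argsort(-cos_angles[inside])]       # smallest angle first

# Illustrative scene: the first two objects sit near the pointing axis.
objects = np.array([[0.0, 0.0, 2.0], [0.3, 0.0, 2.0], [1.0, 1.0, 2.0]])
print(cone_cast(np.zeros(3), np.array([0.0, 0.0, 1.0]), objects))  # -> [0 1]
```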

Contributions Beyond Previous Work: While previous studies focused on smooth-pursuit eye movements toward moving stimuli and their application to target selection, this paper extends the concept to gaze-based selection of static objects in three dimensions.
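
For background, pursuit-based selection techniques in prior work commonly match the gaze trajectory against each moving target's trajectory using a correlation measure over a short time window. The Python sketch below illustrates that general idea on synthetic 2D trajectories; the function name, the 0.8 threshold, and the averaging of x/y correlations are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def pursuit_match(gaze_xy, target_trajs, threshold=0.8):
    """Pick the target whose on-screen motion best correlates with the gaze.

    gaze_xy: (T, 2) gaze samples over a time window
    target_trajs: dict mapping name -> (T, 2) target positions over the same window
    Returns the best-matching target name, or None if correlation is too low.
    """
    best_name, best_score = None, -1.0
    for name, traj in target_trajs.items():
        # Correlate x and y components separately, then average (a simple heuristic).
        rx = np.corrcoef(gaze_xy[:, 0], traj[:, 0])[0, 1]
        ry = np.corrcoef(gaze_xy[:, 1], traj[:, 1])[0, 1]
        score = (rx + ry) / 2.0
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Synthetic example: the gaze roughly follows a circular target, not a linear one.
t = np.linspace(0, 2 * np.pi, 60)
circle = np.column_stack([np.cos(t), np.sin(t)])
line = np.column_stack([t, 0.5 * t])
gaze = circle + np.random.normal(scale=0.05, size=circle.shape)
print(pursuit_match(gaze, {"circle": circle, "line": line}))  # -> "circle"
```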

Positive Aspects: The approach departs from traditional controller-based selection methods and performs better when multiple objects are located close together. Leveraging users' natural gaze behavior for selection is a smart and efficient design choice.

Areas for Improvement: The cone-casting step may not feel engaging, and accessibility deserves more consideration, especially for users with varying abilities. Testing the system under more extreme conditions, such as higher object densities or longer target distances, could reveal its limits.

Suggestions for Improvement: The study could have included more extreme scenarios to probe the system's performance limits. Additionally, a detailed analysis of the learning curve as users adapt to this interaction style could offer valuable insights.
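
If such a learning-curve analysis were added, one simple option would be to fit the power law of practice, T(n) = a * n^(-b), to per-block selection times. The sketch below uses made-up numbers purely to show the fitting step; it is not data from the study.

```python
import numpy as np

# Hypothetical per-block mean selection times (seconds) for one participant.
blocks = np.arange(1, 9)
times = np.array([2.8, 2.3, 2.0, 1.9, 1.75, 1.7, 1.65, 1.62])

# Power law of practice: T(n) = a * n**(-b), which is linear in log-log space.
slope, intercept = np.polyfit(np.log(blocks), np.log(times), 1)
a, b = np.exp(intercept), -slope
print(f"T(n) ~= {a:.2f} * n^(-{b:.2f})")  # larger b means faster adaptation
```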

Future Directions: Exploring this technique for estimating focal depth could be a potential next step. By utilizing cues and detection techniques similar to those used for object selection, it might be feasible to estimate the user's focus distance, which could lead to a 3 Degrees of Freedom (DoF) gaze selection system.
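
As a rough illustration of how focus distance might be estimated, one common approach outside this paper is binocular vergence: take the point of closest approach between the left and right gaze rays as the fixation point. The sketch below assumes hypothetical eye origins and gaze directions and is not based on the paper's system.

```python
import numpy as np

def vergence_depth(left_origin, left_dir, right_origin, right_dir):
    """Estimate the 3D fixation point as the midpoint of closest approach
    between the two gaze rays (a standard vergence-based depth estimate)."""
    d1 = left_dir / np.linalg.norm(left_dir)
    d2 = right_dir / np.linalg.norm(right_dir)
    w0 = left_origin - right_origin
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:          # rays (almost) parallel: depth is unreliable
        return None
    s = (b * e - c * d) / denom    # parameter along the left ray
    t = (a * e - b * d) / denom    # parameter along the right ray
    p1 = left_origin + s * d1
    p2 = right_origin + t * d2
    return (p1 + p2) / 2.0

# Example: eyes 6.4 cm apart, both converging on a point 1.5 m ahead.
fixation = np.array([0.0, 0.0, 1.5])
le, re = np.array([-0.032, 0.0, 0.0]), np.array([0.032, 0.0, 0.0])
print(vergence_depth(le, fixation - le, re, fixation - re))  # ~ [0, 0, 1.5]
```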