• I will talk about three points: progress, thoughts, and plans.

  • After that, I will discuss future plans and directions.

  • Synchronization among students

    • First, although it is not concrete progress, I would like to introduce a way I came up with while thinking things through to express the difference between the desired state and traditional live streaming.

    • This is a two-dimensional graph where the x-axis represents real (wall-clock) time and the y-axis represents the playback position in the video being watched.

      • For example, traditional TV and some streaming apps would look like this.
        • Each line represents a viewer, and in the case of TV, all lines are straight and overlapping, indicating synchronization.
      • On platforms like YouTube, each viewer’s line is scattered and asynchronous because viewers pause and skip around while watching, so the shape of each line varies.
        • The slope of the graph represents the playback speed. For example, in this figure, the blue user fast-forwarded at this point.
      • Nico Nico Douga has a similar shape to YouTube, but with pseudo-synchronous information flow added, it would look like this, with arrows indicating the flow of information.
    • Compared to these existing services, the desired state would look like this.

      • In the existing services each line is independent, but in the desired state the lines gradually come closer and merge, indicating synchronization.
      • The highlighted yellow section represents the synchronized part, where two-way interaction is possible.
        • If we add arrows for pseudo-synchronization, the graph would look like this.
    • Next, I will present the progress on this aspect.

      • To achieve this merging behavior, each device autonomously adjusts its own playback speed while observing the playback positions of the other students’ devices.

      • Therefore, we need a function that, given the others’ playback positions at a certain time, determines the slope, i.e., our own playback speed.

      • As a very simple example, I tried computing how far I am from the average playback position of all devices; the farther away I am, the greater the change in slope, i.e., in playback speed.

        • The graph on the left is the same as the hand-drawn figure from earlier, and the graph on the right shows the slope, i.e., the playback speed.
        • The slope is capped at defined maximum and minimum values, limiting the playback speed to 0.9-1.3 times normal, a range at which the lecture should still be understandable.
      • I also tried another approach in which viewers closer to me are weighted more strongly, so the change in slope becomes larger as we approach each other.

        • (Please ignore the shaky parts here.)
        • However, because I implemented the proximity weighting with a reciprocal function of the form 1/t, the slope ends up changing very steeply.
      • Ideally, the playback speed should change in a stable, unobtrusive way.

        • Specifically, I want to minimize the number and amount of changes in slope.
        • One reason is that when the playback rate changes in AVPlayer, the audio volume briefly dips, and I want to avoid that.
          • I am also looking for ways to fix this at the implementation level, but regardless, I want to keep changes in playback speed gentle so they do not bother viewers.
    • If you have any ideas or topics to research related to this method, please let me know.

    • I have also actually implemented this, as shown in the video here.

      • There are two simulators.
      • In this demo only these two viewers are watching, and their playback positions gradually converge and synchronize.
      • For now, I am using the function mentioned earlier that simply moves toward the average value (a minimal sketch of such a function follows below).
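
    • For reference, below is a minimal sketch of how such an average-seeking rate function could look in Swift with AVPlayer. The names and the gain value are illustrative assumptions, not the app’s actual code, and whether audioTimePitchAlgorithm really softens the volume dip during rate changes is something that still needs to be verified.

```swift
import AVFoundation

/// Minimal sketch of the "move toward the average" rule: how far am I from the
/// average position of all viewers, scaled linearly and clamped to 0.9x-1.3x.
/// Function and parameter names are illustrative, not the app's actual code.
func adjustedRate(mine: Double, others: [Double],
                  gain: Double = 0.02,
                  minRate: Double = 0.9,
                  maxRate: Double = 1.3) -> Float {
    guard !others.isEmpty else { return 1.0 }
    let average = others.reduce(0, +) / Double(others.count)
    // Positive offset means I am behind the group and should speed up.
    let offset = average - mine
    // Change grows with distance from the average, then is clamped to the bounds.
    return Float(min(max(1.0 + gain * offset, minRate), maxRate))
}

/// Applying the result to AVPlayer (re-evaluated periodically elsewhere).
func applyRate(to player: AVPlayer, mine: Double, others: [Double]) {
    // .timeDomain keeps pitch stable when the rate changes; whether it also
    // softens the brief volume dip mentioned above still needs checking.
    player.currentItem?.audioTimePitchAlgorithm = .timeDomain
    player.rate = adjustedRate(mine: mine, others: others)
}
```
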
  • Now, let’s talk about the progress and thoughts on the UI.

    • When considering the UI, this app can be seen as both a “synchronous, interactive YouTube” and a “time-manipulating Zoom”.

    • I would like users to perceive it as the latter, a “time-manipulating Zoom”.
    • The reason is that it should be more enjoyable to attend class while thinking only about one’s own timeline, without worrying about whether one is in sync with others or where others are currently watching.
  • In that case, I also suspect that the traditional YouTube-style seek bar may not be the right UI.

  • Currently, I am experimenting with a UI for manipulating the relative playback position.

    • It involves dragging the bottom of the screen to move the playback position.
    • The demo I have implemented with minimal features looks like this. (video) A rough sketch of the gesture handling appears below.
  • Personally, I think it works quite well, but I may be biased, so I want to have others try it as well.
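
  • For reference, here is a rough sketch of how the drag-to-seek interaction could be handled in SwiftUI; the view name, the points-to-seconds mapping, and the layout are assumptions for illustration, not the app’s actual implementation.

```swift
import SwiftUI
import AVFoundation

/// Rough sketch of a "drag to shift the playback position" control placed at
/// the bottom of the player. The view name, points-to-seconds mapping, and
/// strip height are illustrative assumptions.
struct RelativeSeekBar: View {
    let player: AVPlayer
    let secondsPerPoint: Double = 0.1   // one point of horizontal drag = 0.1 s

    @State private var dragStartTime: CMTime?

    var body: some View {
        Rectangle()
            .fill(Color.black.opacity(0.2))
            .frame(height: 60)           // the draggable strip at the bottom
            .gesture(
                DragGesture()
                    .onChanged { value in
                        // Remember where playback was when the drag began.
                        if dragStartTime == nil {
                            dragStartTime = player.currentTime()
                        }
                        guard let start = dragStartTime else { return }
                        // Dragging right moves forward, dragging left moves back.
                        let delta = Double(value.translation.width) * secondsPerPoint
                        let target = max(0, start.seconds + delta)
                        player.seek(to: CMTime(seconds: target, preferredTimescale: 600))
                    }
                    .onEnded { _ in dragStartTime = nil }
            )
    }
}
```

  • In practice, seeking on every drag update can stutter, so throttling the seeks or allowing a seek tolerance would probably be needed.
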

  • Next, I will talk about the idea of introducing pagination, i.e., the concept of pages.

    • The current implementation shows comments only within a 5-second window.
    • As I expected, many user-test participants found this difficult because the text they write disappears almost immediately.
    • One of the participants suggested incorporating the concept of pages.
  • The idea is to divide the video into units of about 10 seconds and treat each unit as a page.

    • By using the familiar concept of pages, it should become a more understandable system.
  • I plan to implement this and try out the user experience (a small sketch of the page mapping follows below).
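
  • As a starting point, here is a small sketch of the page mapping; the 10-second page length comes from the idea above, while the Comment type and function names are placeholders of my own.

```swift
/// Sketch of the "page" idea: the video is split into fixed-length pages and
/// comments are grouped by page, so a page's comments can stay on screen
/// instead of vanishing after a few seconds. Types and names are placeholders.
struct Comment {
    let text: String
    let timestamp: Double   // seconds from the start of the video
}

let pageLength: Double = 10.0   // roughly 10-second pages, as proposed

/// Which page a given playback position belongs to.
func pageIndex(for time: Double) -> Int {
    max(0, Int(time / pageLength))
}

/// Comments grouped by page index.
func commentsByPage(_ comments: [Comment]) -> [Int: [Comment]] {
    Dictionary(grouping: comments) { pageIndex(for: $0.timestamp) }
}
```
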

  • Lastly, I will discuss the future direction.

    • As our goals and directions, I am considering the following three points:
    • In the final presentation, I want to demonstrate that this app can facilitate communication and have a positive impact on understanding and enjoyment of classes.
    • Since we have put effort into building an actual product, I would like to distribute the app and use it during the presentation itself, if possible.
    • Additionally, I want to present the concept of a world where synchronous and pseudo-synchronous interactions are integrated, not limited to just classes.
      • I am still brainstorming about this future expansion, but for example, I think it could be applied to yoga lessons or musical sessions.
  • Feedback

    • Sigmoid function (a possible smoother alternative to the 1/t weighting; see the sketch at the end of these notes).
    • It might be good to demonstrate with a running video.
      • The scenery represents the teacher.
    • Algorithm
      • Cluster the playback positions and take the average within each cluster.
    • Ghost function
    • Apart from lectures,
    • The graph was praised, which made me happy.
      • It turned into a conversation about an ability that becomes important in a doctoral program, with comments like “It’s the crystallization of your thoughts.”
      • Boosting self-esteem.
    • Overall feedback
      • The reaction at the final presentation will greatly depend on whether people are impressed or not, so keep up the good work for the next three months.
      • Progress tends to pick up significantly toward the end, so I believe you can keep pushing from here.
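
    • To make the algorithm-related feedback concrete for myself, here is a rough sketch combining the two suggestions (clustering the positions, and a sigmoid-shaped speed curve instead of 1/t); all of this is my own interpretation of the feedback, and the thresholds and gains are arbitrary.

```swift
import Foundation

/// Split the sorted positions into clusters wherever the gap between neighbours
/// exceeds `gapThreshold` seconds, then return the average of the cluster that
/// contains my own position. Gap-based 1-D clustering is just one simple option.
func clusterTarget(positions: [Double], mine: Double, gapThreshold: Double = 30) -> Double {
    let sorted = (positions + [mine]).sorted()
    var clusters: [[Double]] = [[sorted[0]]]
    for p in sorted.dropFirst() {
        if p - clusters[clusters.count - 1].last! > gapThreshold {
            clusters.append([p])           // start a new cluster after a big gap
        } else {
            clusters[clusters.count - 1].append(p)
        }
    }
    let ownCluster = clusters.first { $0.contains(mine) } ?? [mine]
    return ownCluster.reduce(0, +) / Double(ownCluster.count)
}

/// Sigmoid-shaped (tanh) speed curve: exactly 1.0 at offset 0, approaching
/// 1.3x when far behind and 0.9x when far ahead, with a smooth transition
/// instead of the steep change produced by the 1/t weighting.
func sigmoidRate(offset: Double, gain: Double = 0.05,
                 minRate: Double = 0.9, maxRate: Double = 1.3) -> Double {
    let s = tanh(gain * offset)            // in (-1, 1), 0 when offset == 0
    if s >= 0 {
        return 1.0 + (maxRate - 1.0) * s   // behind the group: speed up toward 1.3x
    } else {
        return 1.0 + (1.0 - minRate) * s   // ahead of the group: slow down toward 0.9x
    }
}

// Example: let target = clusterTarget(positions: otherPositions, mine: myPosition)
//          player.rate = Float(sigmoidRate(offset: target - myPosition))
```
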