Future Plans

  • For now, by the end of winter break:
    • Introduce the Honk-style input
    • Fix the AVPlayer
    • Start working on the teacher’s side as well
    • Basically, bring it to a level where we can release it

Feedback

  • In this presentation, it wasn’t clear what kind of experience/value the partially synchronous experience creates

    • For example, it should have been clear why the “desired form” (elastic synchronization) is important
    • I later showed a summary in “Asynchronous Dialogue Scenes in School Classes,” and they seemed to understand it somewhat
      • I think the expression “a feeling of talking to the person next to you” is the most accurate
    • However, I still want to talk about it based on user tests
  • Next time, I want to talk about more specific things (e.g. examples of user tests) (reflection)

  • Social gathering

    • drinami and PM Fujii are apparently strict PMs (more so than last year’s)
    • I heard that they have become more flexible this year, though

  • I’m Aoyama, an 11th-year student at Gunma International High School.

  • After explaining the overview, I will talk about the progress and things I would like to discuss.

  • The project name is like this, but there are hardly any notebook elements.

  • Overview

    • In a nutshell, we are developing an iPad app that allows students to watch class videos in a form that combines synchronous and asynchronous dialogue between students.

    • First, as background,

      • In synchronous classes like Zoom, you cannot individually control the playback position of the video.
      • In asynchronous classes like on-demand, you can do that, but it is difficult to maintain student-to-student dialogue.
      • There is a trade-off between the two.
    • This project aims to break this trade-off by integrating synchronous and asynchronous interaction among students.

    • First, let’s talk about what synchronization means. I think it can be divided into two elements: “synchronization of information” and “sense of time sharing.”

      • Synchronization of information means that information is shared bidirectionally. It involves sharing opinions and emotions among students.
      • As for the “sense of synchronization,” I haven’t come up with a good expression yet, but I think the feeling of sharing time is something you can intuitively grasp.
        • This is a hypothesis, but I think that having a high resolution in the time dimension increases this sense.
    • In practice, how can we maintain “synchronization of information” and the “sense of synchronization” while enabling asynchronous manipulation of the video’s timeline?

      • Regarding synchronization of information, ((explained with a graph))
      • As for the sense of synchronization, it is of course not a problem in parts where playback is actually synchronized. But in parts like this, where no one is synchronized with anyone in particular, we implemented a mechanism that gives a pseudo sense of synchronization, similar to the comments on Nico Nico Douga: the process of handwritten dialogue is shared, and that process serves as a high-resolution element in the time dimension. I believe that sharing the act of writing, not just its result, can generate a sense of synchronization.
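The idea of sharing the writing process rather than only the finished result can be sketched as follows. This is a minimal illustration, not the actual implementation: the message shapes and the `send` callback are hypothetical.

```python
import time
from dataclasses import dataclass

@dataclass
class StrokeEvent:
    """One sampled pen point of a handwritten stroke (hypothetical shape)."""
    stroke_id: int
    x: float
    y: float
    t: float  # capture timestamp

def share_result(stroke_id, points, send):
    """Result-sharing: a single message once the stroke is finished."""
    send({"stroke": stroke_id, "points": points})

def share_process(stroke_id, points, send):
    """Process-sharing: one message per sampled pen point, so the
    receiver can replay the act of writing, not just its outcome."""
    for x, y in points:
        send(StrokeEvent(stroke_id, x, y, time.time()))
```

The difference is the temporal resolution of what the other viewers receive: result-sharing emits one event per stroke, process-sharing emits one per pen sample.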
  • Now let’s move on to the progress report.

  • Student-to-student synchronization

    • Each device adjusts its own speed by looking at the playback position of other devices.
    • As for how to adjust it, I showed a simple model at the previous meeting, but
    • After trying various methods, the one currently closest to the ideal uses clustering similar to DBSCAN
      • Each playback position is strongly attracted to positions in the same cluster and weakly attracted to those in other clusters.
    • These are examples with randomly generated starting points, and
    • It comes close to where it should be without forcefully connecting to parts that are too far away, so I think it’s quite close to the ideal.
    • Also, the graph of the playback speed here is gradually changing towards synchronization, so it looks good.
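The adjustment rule described above can be sketched as a toy simulation. This is a minimal sketch under assumed parameters: the actual clustering, attraction gains, and speed limits in Kineto may differ.

```python
def cluster_1d(positions, eps=5.0):
    """DBSCAN-like clustering in one dimension: playback positions
    (seconds) within `eps` of their sorted neighbor share a label."""
    order = sorted(range(len(positions)), key=positions.__getitem__)
    labels = [0] * len(positions)
    label = 0
    labels[order[0]] = 0
    for prev, cur in zip(order, order[1:]):
        if positions[cur] - positions[prev] > eps:
            label += 1
        labels[cur] = label
    return labels

def playback_rate(me, positions, labels, strong=0.02, weak=0.002):
    """Playback rate for device `me`: 1.0 plus a pull toward every
    other device, strong within the same cluster, weak across clusters.
    The gains here are made up for illustration."""
    rate = 1.0
    for other, pos in enumerate(positions):
        if other == me:
            continue
        gain = strong if labels[other] == labels[me] else weak
        rate += gain * (pos - positions[me])
    return max(0.5, min(2.0, rate))  # clamp to a watchable speed range

# Toy run: two nearby viewers converge; a distant one is only weakly pulled.
positions = [0.0, 3.0, 60.0]
for _ in range(120):  # one step per second of wall-clock time
    labels = cluster_1d(positions)
    rates = [playback_rate(i, positions, labels) for i in range(len(positions))]
    positions = [p + r for p, r in zip(positions, rates)]
```

With these placeholder gains, the two viewers who start 3 s apart synchronize within a couple of minutes, while the viewer 60 s ahead is never forcibly reeled in, matching the "does not forcefully connect to parts that are too far away" behavior.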
  • Next, about user testing: I realized that, caught up in development, I was honestly losing my sense of urgency about running experiments, and Professor Inami’s message the other day made me anxious.

    • So, I tested it once yesterday, and I’ll talk about that.
  • I tried the DBSCAN-based playback control again.

  • Before the user test, I built a tool for monitoring each viewer’s playback position, so here is a graph drawn from actual user test data.

  • Similar to the simulation, you can see that people who were originally watching asynchronously gradually come closer and synchronize.

  • The second graph is a macroscopic view of the same data; the traces that start out separate in the first graph converge after about a minute.

  • I set the clustering parameters and such based on my intuition, but it worked better than I expected.

  • One thing that caught my attention is that there were fewer asynchronous playback manipulations, such as pausing or rewinding the video, than I had anticipated.

    • There could be several reasons for this.
    • It would be fine if the reason is simply that this was a user test and the participants weren’t very invested in the class content.
    • However, if it’s because the UI for manipulating the relative position of the video is difficult to use, as I mentioned in the previous meeting, it would be a problem.
    • I’m waiting for feedback on this.
  • Also, regarding the “startling” sensation that Professor Inami mentioned: from the user’s point of view, you are just watching in sync as usual, so I don’t think that emotion would arise.

    • However, since we can provide a sense of synchronization without causing discomfort while enabling asynchronous operations, I think it’s good.
    • Also, if users realize after using it that the graph, for example, was actually synchronized, it could create a sense of surprise. (At least for me, I was watching the graph growing in real-time, and I had quite a bit of that kind of emotion.)
  • As for other progress,

  • I have been working on fixing various minor issues with the UI and video player, and it’s gradually becoming a usable app for watching videos.

    • However, there was one person in yesterday’s user test who couldn’t watch the stream, so I think video streaming on iOS is difficult.
      • If there is anyone who is knowledgeable about this, I would like to consult with them.
  • Also, although it’s not exactly video processing, I have been working on a feature that detects animation in the lecture slides and warns users not to write on them.
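The animation check could be approximated with simple frame differencing. This is only a sketch of the general idea; I am not describing the actual detection method, and the thresholds here are placeholders.

```python
def changed_fraction(frame_a, frame_b, pixel_threshold=10):
    """Fraction of pixels whose grayscale value changed by more than
    `pixel_threshold` between two frames (flat lists of equal length)."""
    changed = sum(1 for a, b in zip(frame_a, frame_b)
                  if abs(a - b) > pixel_threshold)
    return changed / len(frame_a)

def is_animated(frames, frame_threshold=0.02):
    """Heuristic: a slide region counts as 'animated' if any pair of
    consecutive frames differs in more than `frame_threshold` of pixels."""
    return any(changed_fraction(a, b) > frame_threshold
               for a, b in zip(frames, frames[1:]))
```

A static slide produces near-zero change between frames, while an animated region trips the per-frame threshold, which is enough to trigger the "don't write here" warning.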

That’s all for the progress, and as for other points I want to discuss,

First, it’s about when users who don’t have an Apple Pencil use this app.

  • I think the app would be less likely to be used if it only works for Apple Pencil owners, so I want it to be usable on iPads without a Pencil as well.
    • So, I was thinking about how to achieve the “sense of synchronization” that I mentioned earlier without using handwritten text.
    • Recently, I came across an app called Honk, which seems to be very helpful.
      • It’s an app where the content being typed is always displayed to the other person, and the input process is shared.
    • I think the way they create this “sense of synchronization” is really good, so I’m thinking of imitating it.
  • It’s a form where you can type in your speech bubble while watching the video, and it is shared synchronously or pseudo-synchronously with others.
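A Honk-style bubble essentially means broadcasting the bubble's full text on every edit rather than only when a message is sent. A minimal sketch of that idea (Honk's actual protocol is unknown to me, and the `send` callback is hypothetical):

```python
class LiveBubble:
    """Honk-style input: the bubble's full text is broadcast on every
    edit, so other viewers watch the typing process, not just the result."""

    def __init__(self, send):
        self.text = ""
        self.send = send  # callback delivering the current state to others

    def type(self, ch):
        self.text += ch
        self.send(self.text)

    def backspace(self):
        self.text = self.text[:-1]
        self.send(self.text)
```

As with the handwriting case, the point is that intermediate states (including deletions) are shared, which is what carries the high time-resolution "sense of synchronization."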

🤔 Macro discussion

  • I think this form of synchronization could be applied to things other than classes in the future.

    • So, for the sake of clarity in the discussion, I want to give it a name like “pseudo-synchronization” or something like that, and for now, I think “elastic synchronization” is a good word. If you have any other suggestions, please let me know.
  • Also, regarding this “elastic synchronization” (?),

  • There are still points where I haven’t finished my reading or lack knowledge, but looking at discussions on the theory of time and so on: in the Middle Ages, when clocks were not widely available, time varied depending on the location, and synchronization between distant places was not possible.

  • However, as time went on, trains were invented, telephones became available, and platforms like Zoom were created, which made the world smaller and the need for synchronization increased, leading to a more synchronized world.

  • With Kineto, it seems possible to return to the asynchronous sense of the past while maintaining synchronization.

    • So, while it may not seem significant, I think it can be positioned as an interesting topic of conversation.

Future plans:

  • User testing
    • I plan to contact my teacher and try using Kineto in my school’s classes by the end of winter break, even if the students don’t have access to an Apple Pencil.
    • Once I confirm that it works in my own school, I plan to reach out to teachers in other schools as well.
    • TestFlight
  • Final presentation
    • I haven’t finalized the specific details yet, but I’ll share what I’m currently considering.
    • First, I want to make the app available on the App Store or TestFlight so that part or all of the presentation can be viewed through the app.
    • As for the flow, I plan to start by explaining the project’s overview and usage examples, and then discuss the future expansion of elastic synchronization.

Points for discussion:

  • Name for XX synchronization
  • Introduction of Honk-like communication methods