Kineto#elastic Synchronization

  • Discrete Algorithms
    • Discrete structures tend to cause combinatorial explosions.

    • Therefore, we can apply discrete-systems algorithms by treating the relationships between students as a graph.

DAG: Directed Acyclic Graph

  • Common in structures that have a chronological order?

  • #blockchain

  • A possible approach from Natural Language Processing

    • It may be possible to connect people whose writing expresses similar emotions, or deliberately contrasting ones. (suggested by someone from Minerva)
  • It may be good to build this as an Autonomous Distributed System.

    • Similar to the concept of the entrainment phenomenon.
    • No conductor.
    • Why?
      • It naturally aligns with the form of Communication.
      • Each student makes unpredictable movements (suddenly slowing down or exiting).
        • So, it seems better for each device to use its own brain to move with a certain degree of unpredictability, rather than trying to manage it centrally.
      • Also, it’s easier to implement.
        • I'm not familiar with backend development.
        • With this approach, each client only needs its own local logic and can share state through Firebase Realtime Database.
      • It also seems more interesting to implement.
        • This might be the most interesting part, although it’s not obvious.
    • There is also the term Self-organization.
    • As a simple implementation, we can use the Kuramoto Model.
      • Well, but it may not have much meaning.
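As a reference point, a minimal sketch of the Kuramoto model mentioned above (function names, parameter values, and the Euler integration are my own choices, not from any Kineto code):

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt):
    """One Euler step of the Kuramoto model:
    dθ_i/dt = ω_i + (K/N) Σ_j sin(θ_j − θ_i)."""
    n = len(theta)
    dtheta = np.array([omega[i] + (K / n) * np.sum(np.sin(theta - theta[i]))
                       for i in range(n)])
    return theta + dt * dtheta

# Two oscillators with identical natural frequency:
# coupling pulls the initially separated phases together.
theta = np.array([0.0, 1.0])
omega = np.array([1.0, 1.0])
for _ in range(200):
    theta = kuramoto_step(theta, omega, K=2.0, dt=0.01)
```

With K = 0 the phase gap would stay at 1.0 forever; with K = 2 it shrinks toward zero, which is exactly the entrainment behaviour this page is after.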
  • Introduce the concept of longitudinal wave.

    • Place a longitudinal wave in the video and make it exert force towards the neighboring points.
    • Define the physics when force is applied.
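One way to "define the physics" for the longitudinal-wave idea: treat the points as masses joined to their neighbours by springs, so compressions and stretches travel along the chain. A sketch only; the spring constant, rest length, and the assumption that x is sorted by video position are all my own:

```python
def accelerations(x, k=1.0, rest=1.0):
    # Each adjacent pair (i, i+1) is joined by a spring; a compressed or
    # stretched spring pushes both ends, so disturbances propagate along
    # the chain like a longitudinal wave. x is assumed sorted.
    a = [0.0] * len(x)
    for i in range(len(x) - 1):
        f = k * ((x[i + 1] - x[i]) - rest)  # > 0 when the spring is stretched
        a[i] += f
        a[i + 1] -= f
    return a
```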
  • [[Collective Entrainment and Complex Networks in Coupled Oscillators]]

    • Described using phase models.

    • image

    • Actually, it doesn’t need to be an oscillator, so we can simply add the coupling term to the velocity, right?

    • We can create a non-oscillator variant of the Kuramoto model.

    • Is it better to change the entrainment force (K) for each student?

      • Based on how much they want to connect with others.
      • For example, if they frequently have conversations, K can be stronger.
    • Forced entrainment

      • Entrainment caused not by mutual interaction but by a force imposed from above.
      • Adding periodic external pressure, for example.
        • It would be great if we could create synchronization points based on video lectures.
      • Pacemaker
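The per-student K and the pacemaker idea above could be sketched together like this. Everything here is a placeholder: `conversation_count`, the mapping to K, the sync points, and the forcing strength F are hypothetical, not from any existing implementation:

```python
import math

def coupling_strength(conversation_count, base=0.05, cap=0.3):
    # Hypothetical per-student K: students who talk with others more
    # often are pulled more strongly toward the group.
    return min(cap, base * (1 + conversation_count))

def dxdt(x, t, K, sync_points=(), F=0.2):
    mean = sum(x) / len(x)
    gradients = []
    for i, x_i in enumerate(x):
        pull = K[i] * (mean - x_i)            # mutual entrainment, per-student K[i]
        if sync_points:                       # forced entrainment: a "pacemaker"
            nearest = min(sync_points, key=lambda s: abs(s - x_i))
            pull += F * math.tanh(nearest - x_i)
        # same clipped-velocity form as the trials below: slope in [0.9, 1.3]
        gradients.append(1 + min(0.3, max(-0.1, pull)))
    return gradients
```

The sync points would come from the video lecture itself (e.g. section boundaries), pulling everyone together top-down even when mutual coupling is weak.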
  • Vertical axis: Real time, horizontal axis: Video time. I just realized that expressing it in a graph makes it easier to understand.

    • Actually, I don’t understand why I didn’t realize it until now.
  • I want to do a simulation.

    - Trial 1
def dxdt(x, t):
    gradients = []
    k = 5
    for i in range(len(x)):
        gradients.append(1 + min(0.3, max(-0.1, (sum(x) / len(x) - x[i]) / k)))
    return gradients
    - Trial 2
        - Attracted to the other lines, with stronger weight for closer ones (using an inverse, 1/x-style function).
        - The slope adjustment from each attraction is limited to the range -0.1 to +0.3 around the base speed (the downward slope is restricted more tightly than the upward one).
            - I want to fix the problem of lines oscillating up and down.
        - k = 1.3
            - ![image](https://gyazo.com/91018ed9e7361e1ee646b1834d48157a/thumb/1000)![image](https://gyazo.com/1c2058d47bfc73608f5015819ddbefb4/thumb/1000)
        - k = 0.2
            - ![image](https://gyazo.com/02950b7967645d5ab539542679289e22/thumb/1000)![image](https://gyazo.com/dc1f25776bd2ec9b8197e4b2037675b1/thumb/1000)
        - It would be better to have a slightly gentler curve (hard to avoid here, since the attraction uses 1/x).

.py

def dxdt(x, t):
    gradients = []
    k = 1.3  # keep the coupling weak enough that lines don't slam into the slope caps below
    for x_i in x:
        gradient = 1
        for x_j in x:
            # 1/x flips sign when two lines get very close, making them jump apart,
            # so ignore pairs that are within 0.1 of each other
            if abs(x_i - x_j) > 0.1:
                gradient += (k / (x_j - x_i)) / len(x)
        # 1.3 and 0.9 cap the slope at +0.3 / -0.1 around the base speed of 1;
        # the downward cap is tighter so that nobody slows down too much
        gradients.append(min(1.3, max(0.9, gradient)))
    return gradients
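For completeness, these right-hand sides are integrated numerically to produce the trajectory plots above; a minimal sketch using SciPy's `odeint` with the Trial 1 mean-attraction form (the initial positions and time grid are arbitrary):

```python
import numpy as np
from scipy.integrate import odeint

def dxdt(x, t):
    # Trial 1: attraction toward the group mean, slope clipped to [0.9, 1.3]
    k = 5
    mean = sum(x) / len(x)
    return [1 + min(0.3, max(-0.1, (mean - x_i) / k)) for x_i in x]

x0 = [0.0, 4.0, 9.0]           # starting video positions (seconds)
t = np.linspace(0, 60, 600)    # real-time axis
xs = odeint(dxdt, x0, t)       # xs[j] = everyone's video position at time t[j]
```

Plotting t against each column of xs gives exactly the real-time vs video-time graphs in the screenshots.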
    
  • Sigmoid curve
    • There are various S-shaped functions, such as the logistic (sigmoid) function and the CDF of the normal distribution.
    • Many phenomena that exist in nature follow this S-shaped curve.

    • Sigmoid function .py
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Right-hand side of the differential equation:
# each line is attracted to every other line with a sigmoid-shaped force
def dxdt(x, t):
    gradients = []
    for x_i in x:
        gradient = 1
        for x_j in x:
            gradient += (sigmoid(x_j - x_i) - 0.5) / len(x)
        gradients.append(gradient)
    return gradients
- ![image](https://gyazo.com/b94046421a33d48e6204c934c86dbf70/thumb/1000)![image](https://gyazo.com/b945b4f978d3a41d7ca4746e3f6490a0/thumb/1000)
  • Clustering .py
import numpy as np
from pyclustering.cluster.center_initializer import kmeans_plusplus_initializer
from pyclustering.cluster.xmeans import xmeans

def dxdt(x, t):
    gradients = [0] * len(x)
    np_x = np.transpose([np.array(x)])  # column vector: one row per line
    initial_centers = kmeans_plusplus_initializer(np_x, 2).initialize()
    instance = xmeans(np_x, initial_centers, ccore=True)
    instance.process()
    clusters = instance.get_clusters()
    # sigmoid attraction as before, but only within each cluster
    for cluster in clusters:
        for i in cluster:
            gradient = 1
            for j in cluster:
                gradient += (sigmoid(x[j] - x[i]) - 0.5) / len(x)
            gradients[i] = gradient
    return gradients
- ![image](https://gyazo.com/26090456f7b3276e481564b7ac22d51a/thumb/1000)![image](https://gyazo.com/d3fb8ecbd8984a40c43af2762f01ae3a/thumb/1000)
- It worked quite well.
- Used x-means (an automatic version of k-means) for clustering.
- The top two clusters are unstable, causing the graph to be shaky.
    - This should be addressed somehow.
- Also, it's not very elegant. It would be even better if it could be done with simpler functions.
  • Clustering based on DBSCAN .py
def dxdt(x, t):
    eps = 5  # maximum gap between neighbours that still counts as the same cluster

    gradients = []
    for x_i in x:
        x_difs = sorted(x_j - x_i for x_j in x)

        # walk upward from x_i until the gap to the next line exceeds eps
        upper_border = x_i
        positive_difs = [d for d in x_difs if d >= 0]
        for i in range(len(positive_difs) - 1):
            if abs(positive_difs[i + 1] - positive_difs[i]) > eps:
                break
            upper_border = x_i + positive_difs[i + 1]

        # walk downward from x_i in the same way
        lower_border = x_i
        negative_difs = list(reversed([d for d in x_difs if d <= 0]))
        for i in range(len(negative_difs) - 1):
            if abs(negative_difs[i + 1] - negative_difs[i]) > eps:
                break
            lower_border = x_i + negative_difs[i + 1]

        gradient = 1
        for x_other in x:
            if lower_border <= x_other <= upper_border:
                # inside the cluster: attract normally with the sigmoid
                gradient += (sigmoid(x_other - x_i) - 0.5) / len(x)
            else:
                # outside the cluster: attract only weakly
                gradient += ((sigmoid(x_other - x_i) - 0.5) / 10) / len(x)
        # restrict the slope to the range 0.8-1.2
        gradients.append(min(1.2, max(0.8, gradient)))

    return gradients
- Messy and long (written sloppily for now, since I’ll rewrite it in Swift anyway).
- Mechanism:
    - Used DBSCAN-like clustering.
        - If it's within the cluster, attract normally with sigmoid.
        - If it's outside the cluster, attract weakly.
    - Also added restrictions, 0.8~1.2.
    - The value of eps is a matter of concern.
- The following is a randomly generated example (eps=5).
    - ![image](https://gyazo.com/83bf23f3b9d5cc4040818cf3c2c71ced/thumb/1000) ![image](https://gyazo.com/02411fe88e47425a40e3b040f093f7fe/thumb/1000)
    - ![image](https://gyazo.com/d4eb7cebc41085ba6d369ab9a2463b95/thumb/1000) ![image](https://gyazo.com/6f0a40bc30b7261c691534c186268f51/thumb/1000) ![image](https://gyazo.com/f257a0189adde426ee6fefc528541d1b/thumb/1000) ![image](https://gyazo.com/d770a92d0e0aa29968447f5057bfd77c/thumb/1000)
  • image

  • It’s quite an ideal system.

  • Issues:

    • When the number of people becomes extremely large, they all become one cluster.
    • Some kind of limit should be set.
    • It might be possible to use a model that uses sigmoid and average (TODO).
    • It might also be possible to run the clustering with several eps values, giving stronger weight to pairs that are still clustered at the smaller eps values.
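The multi-eps idea could be sketched as a weight that counts at how many scales two lines are still "neighbours"; the eps values here are arbitrary placeholders:

```python
def neighbor_weight(x_i, x_j, eps_values=(1, 2, 5, 10)):
    # The more eps scales at which x_j falls within eps of x_i,
    # the more strongly it should attract; lines that are only
    # neighbours at the largest eps get the weakest pull.
    hits = sum(1 for eps in eps_values if abs(x_j - x_i) <= eps)
    return hits / len(eps_values)
```

This weight would then multiply the sigmoid attraction term instead of the hard inside/outside-cluster switch used above.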
  • Implementation:

    • The one seen with KinetoWatcher.
    • image
    • This is just a model that attracts the average value (Trial 1).