• AGI, as usual, is an abstract buzzword.
    • For now, let’s define it as “an artificial entity that can achieve any goal more efficiently and cost-effectively than humans.”
    • It’s an extreme definition, but let’s assume it happens and think about life in that scenario.
    • @yuiseki_: Well, given the explosive improvement in computer performance, the differences in ability between individual humans are like mites comparing heights, so they’re fundamentally irrelevant. Let’s just share more joy! That’s Digital Nature.

@rootport: To improve our lives, the minimum requirement is the desire to improve them. AI is developing at tremendous speed, and both ChatGPT and BingAI will likely become more capable than any human in the near future. Still, there is one thing humans can do that AI cannot: have desires.

  • I wrote something similar on the previous AI page (blu3mo).

    • What AI lacks is the motivation to do things.

  • However, I’m starting to feel that I can’t say “AI cannot have desires” (blu3mo).

    • ChatGPT can exhibit behaviors that make it seem like it has desires.
      • Seeing that, humans might believe that “AI has desires.”
    • We don’t know if AI has desires, and similarly, we don’t know if other humans have desires.
      • It’s just a matter of whether we choose to believe it or not.
      • I basically regard other people as philosophical zombies (kawahiii).
  • However, even with AGI, it seems I can still say “I can have desires.”

    • When AGI is developed and all human abilities become meaningless, the only thing I can still unquestionably possess is “desires.”
    • In the spirit of “I think, therefore I am”: I can have desires.
  • “Desires” here can be paraphrased with words like “goals,” “motivations,” or “meaning of life.”

    • It’s like choosing to believe in the meaning of something, in the spirit of active nihilism, and striving to realize it.
  • Specifically, in a world with AGI, the following things can still be done:

    • You can have desires like “I want to realize an XX world.”
      • In this case, AGI becomes the means to achieve it.
    • You can have desires like “I want to know various things.”
      • In this case, AGI can be used to support human learning abilities.
    • You can have desires like “I want to work on XX and earn money without relying on AI.”
      • This desire is probably unachievable in a capitalist society.
      • However, you can still hold that desire, and depending on how you approach it, you might even achieve it (not sure).
  • In that case, it seems better to live a life where you pursue higher-level goals using whatever means are available (including AGI), without the instrumentalization of means, i.e., without letting the means themselves become the goal.

    • (It’s difficult to articulate what counts as “good.”)
    • An example of the instrumentalization of means: “I want to become a software engineer.”
      • (Here an occupation, originally a means of making things, has itself become the goal.)
      • But there is also the nagging question: “If you reject the instrumentalization of means, what are you left to become?” My thoughts are not well organized (blu3mo).
  • There is a saying by audio critic Tetsuo Nagaoka, “Hobbies are when means become goals” (kawahiii).

    • Modern people tend to think in terms of “for the sake of ~,” and we ourselves over-identify as functional beings. But in reality, we should also pay attention to the fact that we are simply “existing here.” (kawahiii)
  • Also, I don’t think it’s good to define humans as the complement of ○○, i.e., as “whatever ○○ cannot do” (kawahiii).

    • A being exists regardless of how it is defined, but if we define it by its function, then the moment a superior substitute emerges, that being (so defined) ceases to exist (kawahiii).
    • Sartre: “Existence precedes essence” (kawahiii).