• I wonder if it’s possible to create an anagram using a deep-learning approach.

  • I want to try it with University of Tokyo 1S Information Alpha.

  • For example, when generating human faces with a GAN, if we always feed the Discriminator both the original and an upside-down version, we might be able to create an anagram (?) of face photos.

  • If we do the same thing with a Conditional GAN, could we generate characters?
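  • A minimal sketch of the “show the Discriminator both orientations” idea above, under one possible reading where only the generated images are judged in both orientations. Everything here (the `generator`/`discriminator` models, optimizers, latent size, and the 180° rotation via `tf.image.rot90`) is an assumption for illustration, not a confirmed recipe.

```python
import tensorflow as tf

# Hedged sketch of one GAN training step: the Discriminator judges every
# generated image twice, once as-is and once rotated 180 degrees, so the
# Generator is pushed toward images that look "real" in both orientations.
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def train_step(real_images, generator, discriminator, g_opt, d_opt, latent_dim=100):
    noise = tf.random.normal([tf.shape(real_images)[0], latent_dim])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake = generator(noise, training=True)
        fake_rot = tf.image.rot90(fake, k=2)          # the upside-down copy
        real_out = discriminator(real_images, training=True)
        fake_out = discriminator(fake, training=True)
        fake_rot_out = discriminator(fake_rot, training=True)

        # The Generator wants BOTH orientations to pass as real.
        g_loss = (cross_entropy(tf.ones_like(fake_out), fake_out)
                  + cross_entropy(tf.ones_like(fake_rot_out), fake_rot_out))
        # The Discriminator sees real images vs. both fake orientations.
        d_loss = (cross_entropy(tf.ones_like(real_out), real_out)
                  + cross_entropy(tf.zeros_like(fake_out), fake_out)
                  + cross_entropy(tf.zeros_like(fake_rot_out), fake_rot_out))

    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    return g_loss, d_loss
```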

  • @nbaji9: An anagram that combines letters and pictures, like “dear / read”.

  • #anagramart https://t.co/wmuFtsGaTV

    • It would be nice to have a model that can be used to generate this kind of artwork.
  • Log 20220606

    • It seems like it will be tough unless I change the data, so let’s start by getting DCGAN to train successfully on MNIST.
    • After that, I’ll try using notMNIST, which seems promising.
    • DCGAN
      • It works fine when the images are only rotated.
      • Once the anagram constraint is added, training gets stuck in a bad state.
        • I think it’s because it hasn’t generalized well, so I’ll try increasing the Dropout.
        • What happens if it’s a simple one like 0, 1, or 2? (blu3mo)
          • 0 is successful.
          • 1 is successful once out of two attempts.
            • image
              • Unlike the MNIST digits, it isn’t always slanted to the right, which is evidence that it’s working properly (blu3mo).
      • notMNIST + DCGAN
        • image
          • 1000 iterations
          • It’s okay.
        • image
          • 5000 iterations
          • Various styles are coming out, so it was worth using notMNIST.
        • What if we do an anagram with this?
          • image
          • It fails completely.
        • What could be the problem?
          • https://gangango.com/2018/11/16/post-322/
            • The Discriminator has become too strong, and the gradient signal the Generator could learn from has vanished.
              • That seems to be it (blu3mo).
              • Maybe we should weaken the Discriminator (a few candidate tweaks are sketched below).
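              • A hedged sketch of some standard ways to weaken the Discriminator (more dropout, label smoothing, a smaller learning rate for D). The layer sizes, rates, and learning rates below are illustrative guesses, not the values used in this log.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Sketch of a "dumber" DCGAN Discriminator for 28x28x1 MNIST/notMNIST images.
def make_discriminator(dropout_rate=0.4):
    return tf.keras.Sequential([
        layers.Conv2D(64, 5, strides=2, padding="same", input_shape=(28, 28, 1)),
        layers.LeakyReLU(0.2),
        layers.Dropout(dropout_rate),   # more dropout -> weaker Discriminator
        layers.Conv2D(128, 5, strides=2, padding="same"),
        layers.LeakyReLU(0.2),
        layers.Dropout(dropout_rate),
        layers.Flatten(),
        layers.Dense(1),                # raw logits
    ])

# Label smoothing pulls the 0/1 targets toward 0.5, which keeps the
# Discriminator from becoming overconfident.
d_loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=True, label_smoothing=0.1)

# Giving the Discriminator a smaller learning rate than the Generator is
# another common way to slow it down.
d_optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4, beta_1=0.5)
g_optimizer = tf.keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5)
```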
  • Log 20220523

    • For now, let’s train a GAN to generate “9” using MNIST.
      • image (400 iterations)
      • It worked ✅.
    • Next, let’s try flipping the Generator’s output before feeding it to the Discriminator (see the sketch at the end of this log).
      • We can do this easily with the RandomFlip data-augmentation layer.
      • Add RandomFlip to the end of the Generator.
        • Huh, that’s strange.
          • image (1500 iterations)
          • Only a “9” in one orientation comes out looking normal…?
          • Did it give up on satisfying both orientations and settle for generating a one-direction “9” that only succeeds 50% of the time?
            • No, that’s not true.
            • Even if that were the case, there should still be a flipped “9” in the output.
          • https://www.tensorflow.org/api_docs/python/tf/keras/layers/RandomFlip
          • During inference time, the output will be identical to input. Call the layer with training=True to flip the input.

            • That seems to be it (blu3mo).
        • I found another way.
          • image (3500 iterations)
          • Hmm… lol
          • I appreciate the effort (blu3mo).
          • Let’s try with other numbers.
            • 5
              • image, image
        • Let’s try with DCGAN.
          • Normal 5: image
            • Adjustments seem to be necessary.
          • Anagram 5: image
            • It’s completely unsuccessful, haha.
      • Add RandomFlip to the beginning of the Discriminator.
        • This means that the flipped version of “9” will also be recognized as “9”.
        • image
          • Well, that’s what happens.
            • Since either orientation is acceptable, it just picks one.
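    • A hedged sketch of the two RandomFlip placements tried above, assuming 28×28×1 inputs; the layer sizes are placeholders. The key point is the one from the TF docs quoted earlier: RandomFlip only flips when it is called with training=True, otherwise it passes the input through unchanged.

```python
import tensorflow as tf
from tensorflow.keras import layers

flip = layers.RandomFlip(mode="vertical")

def flip_generated(generator, noise):
    """(a) Flip the Generator's output explicitly inside the training step."""
    fake = generator(noise, training=True)
    return flip(fake, training=True)   # training=True forces the flip

def make_discriminator():
    """(b) Put RandomFlip at the front of the Discriminator, so real and fake
    images are judged in a randomly flipped orientation."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28, 1)),
        layers.RandomFlip(mode="vertical"),   # flips only when training=True
        layers.Conv2D(64, 5, strides=2, padding="same"),
        layers.LeakyReLU(0.2),
        layers.Dropout(0.3),
        layers.Flatten(),
        layers.Dense(1),
    ])
```
    • With placement (b), the Discriminator accepts either orientation, which matches the result above: the Generator only needs to master one orientation to pass.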
  • This won’t work (blu3mo)

  • For now, let’s set DCGAN aside, stick with a regular GAN, and work on expanding the dataset.

  • LRDS https://paperswithcode.com/dataset/letter

    • Doesn’t seem suitable.
    • This dataset seems to contain features extracted from character images, not the character images themselves.
  • #might implement