The encoder-decoder model for sequential data was popularized as the sequence-to-sequence (seq2seq) model and later evolved into the Transformer, which retains the encoder-decoder structure but replaces recurrence with attention. The underlying idea is similar to the one discussed in the Autoencoder article.
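
To make the encoder-decoder idea concrete, here is a minimal sketch of a GRU-based seq2seq model in PyTorch; the module names, vocabulary sizes, and dimensions are illustrative assumptions rather than details from any particular paper:

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal GRU-based encoder-decoder (seq2seq) sketch.

    All sizes below are hypothetical, chosen only for illustration.
    """
    def __init__(self, src_vocab=1000, tgt_vocab=1000, emb_dim=64, hid_dim=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        # Encoder compresses the source sequence into a fixed-size hidden state.
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Decoder unrolls the target sequence starting from that state.
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, tgt_vocab)

    def forward(self, src, tgt):
        # src: (batch, src_len), tgt: (batch, tgt_len) token ids
        _, state = self.encoder(self.src_emb(src))    # keep only the final state
        dec_out, _ = self.decoder(self.tgt_emb(tgt), state)
        return self.out(dec_out)                      # (batch, tgt_len, tgt_vocab)

# Usage sketch: score a batch of 2 target sequences with teacher forcing.
model = Seq2Seq()
src = torch.randint(0, 1000, (2, 7))
tgt = torch.randint(0, 1000, (2, 5))
logits = model(src, tgt)  # shape: (2, 5, 1000)
```

The fixed-size encoder state is the bottleneck that attention (and ultimately the Transformer) was introduced to relieve: instead of a single summary vector, the decoder can look back at every encoder position.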