  * **GANs with Transformers**: Since its introduction in 2017, the Transformer architecture has revolutionized NLP machine learning models. Thanks to the scalability of self-attention-only architectures, models can now scale to trillions of parameters, allowing human-like text generation capabilities. However, they are not without shortcomings, notably due to their maximum-likelihood training mode over data that contains potentially undesirable statistical associations. An alternative approach to generative learning, Generative Adversarial Networks (GANs), performs remarkably well on images but has until recently struggled with text, whose sequential and discrete nature is incompatible with the gradient back-propagation GANs need in order to train. Some of those issues have been solved, but a major one remains: limited scalability, due to the use of RNNs instead of pure self-attention architectures. Previously, we were able to show that it is impossible to trivially replace RNN layers with Transformer layers (https://arxiv.org/abs/2108.12275, presented at RANLP 2021). This project will build on those results and attempt either to create stable Transformer-based text GANs using tricks known to stabilize Transformer training, or to theoretically demonstrate the inherent instability of Transformer-derived architectures in an adversarial regime. You will need a solid background in linear algebra, familiarity with machine learning theory (specifically neural networks), and experience with scientific computing in Python, ideally with PyTorch. Experience with NLP is desirable but not required. A minimal sketch of one possible adversarial setup is shown below.
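
The sketch below is purely illustrative, not the project's codebase: it shows one way a Transformer generator and discriminator could be wired into an adversarial training step in PyTorch, using a Gumbel-softmax relaxation as one common workaround for the discrete-token back-propagation problem mentioned above. All model sizes, the toy data, and the choice of relaxation are assumptions made for the example.

<code python>
# Minimal sketch of a Transformer-based text GAN step (illustrative assumptions only).
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, SEQ_LEN, DIM = 1000, 32, 128  # toy sizes, chosen arbitrarily

class TransformerGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.noise_to_seq = nn.Linear(DIM, SEQ_LEN * DIM)
        layer = nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.to_logits = nn.Linear(DIM, VOCAB)

    def forward(self, z, tau=1.0):
        h = self.noise_to_seq(z).view(-1, SEQ_LEN, DIM)
        h = self.encoder(h)
        logits = self.to_logits(h)
        # Gumbel-softmax gives a differentiable approximation of token sampling,
        # so discriminator gradients can reach the generator.
        return F.gumbel_softmax(logits, tau=tau, hard=False)  # (B, SEQ_LEN, VOCAB)

class TransformerDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(VOCAB, DIM)  # accepts soft one-hot token distributions
        layer = nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.score = nn.Linear(DIM, 1)

    def forward(self, soft_tokens):
        h = self.encoder(self.embed(soft_tokens))
        return self.score(h.mean(dim=1))  # one real/fake logit per sequence

G, D = TransformerGenerator(), TransformerDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

real_tokens = torch.randint(0, VOCAB, (8, SEQ_LEN))  # stand-in for a real corpus batch
real_soft = F.one_hot(real_tokens, VOCAB).float()

# Discriminator step: real sequences vs. detached generator samples.
fake_soft = G(torch.randn(8, DIM)).detach()
d_loss = bce(D(real_soft), torch.ones(8, 1)) + bce(D(fake_soft), torch.zeros(8, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: gradients flow through the Gumbel-softmax samples.
g_loss = bce(D(G(torch.randn(8, DIM))), torch.ones(8, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
</code>

Whether such a purely self-attention generator can be trained stably in this adversarial regime is exactly the open question the project addresses; the relaxation and hyperparameters above are starting points, not validated choices.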