Schedule: Asynchronous
Delivery method: Online
Credit hours: 0 (credits awarded upon completion)
Pace: Self-paced (progress at your own speed)
Estimated learning time: 9.1 hours
This course provides a practical introduction to using transformer-based models for natural language processing (NLP) applications. You will learn to build and train models for text classification using encoder-based architectures like Bidirectional Encoder Representations from Transformers (BERT), and explore core concepts such as positional encoding, word embeddings, and attention mechanisms.
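As a taste of those core concepts, here is a minimal, illustrative PyTorch sketch of the sinusoidal positional encoding from the original Transformer paper. The class name, vocabulary size, and dimensions below are placeholders for illustration, not course code.

```python
# Illustrative sketch: sinusoidal positional encoding (not course material).
import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    """Adds fixed sinusoidal position information to token embeddings."""
    def __init__(self, d_model: int, max_len: int = 512):
        super().__init__()
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(max_len).unsqueeze(1).float()
        div_term = torch.exp(
            torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model)
        )
        pe[:, 0::2] = torch.sin(position * div_term)  # even dimensions
        pe[:, 1::2] = torch.cos(position * div_term)  # odd dimensions
        self.register_buffer("pe", pe.unsqueeze(0))   # shape: (1, max_len, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) token embeddings
        return x + self.pe[:, : x.size(1)]

# Usage: embed token ids, then add positions before the attention layers.
emb = nn.Embedding(num_embeddings=30522, embedding_dim=128)  # vocab size is illustrative
pos = PositionalEncoding(d_model=128)
tokens = torch.randint(0, 30522, (2, 16))  # batch of 2 sequences, 16 tokens each
x = pos(emb(tokens))                       # (2, 16, 128)
```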
The course covers multi-head attention, self-attention, and causal language modeling with GPT for tasks like text generation and translation. You will gain hands-on experience implementing transformer models in PyTorch, including pretraining strategies such as masked language modeling (MLM) and next sentence prediction (NSP).
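To show how causal language modeling constrains self-attention, here is a hedged sketch using PyTorch's built-in nn.MultiheadAttention; the shapes and sizes are arbitrary examples, not the course's implementation.

```python
# Illustrative sketch: causal (masked) self-attention, GPT-style.
import torch
import torch.nn as nn

d_model, n_heads, seq_len = 128, 4, 16
attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=n_heads, batch_first=True)

x = torch.randn(2, seq_len, d_model)  # (batch, seq, features)

# Strictly upper-triangular mask: position i may only attend to positions <= i,
# which is what lets a GPT-style decoder generate text left to right.
causal_mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)

out, weights = attn(x, x, x, attn_mask=causal_mask)  # self-attention: Q = K = V = x
print(out.shape)  # torch.Size([2, 16, 128])
```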
Through guided labs, you’ll apply encoder and decoder models to real-world scenarios. This course is designed for learners interested in generative AI engineering and requires prior knowledge of Python, PyTorch, and machine learning. Enroll now to build your skills in NLP with transformers!
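For a sense of what the MLM pretraining mentioned above involves, here is a rough sketch of BERT-style input corruption. The 15% mask rate and the 80/10/10 split follow the original BERT recipe; the mask token id and vocabulary size are illustrative placeholders, not tied to a real tokenizer.

```python
# Rough sketch: BERT-style masked-language-modeling (MLM) input corruption.
import torch

MASK_ID = 103      # [MASK] id, illustrative placeholder
VOCAB_SIZE = 30522

def mask_tokens(input_ids: torch.Tensor, mask_prob: float = 0.15):
    """Return (corrupted inputs, labels); labels are -100 where no loss applies."""
    labels = input_ids.clone()
    # Choose which positions the model must predict.
    mask = torch.rand(input_ids.shape) < mask_prob
    labels[~mask] = -100  # ignored by nn.CrossEntropyLoss(ignore_index=-100)

    inputs = input_ids.clone()
    # Of the chosen positions: 80% -> [MASK], 10% -> random token, 10% unchanged.
    replace = mask & (torch.rand(input_ids.shape) < 0.8)
    inputs[replace] = MASK_ID
    randomize = mask & ~replace & (torch.rand(input_ids.shape) < 0.5)
    inputs[randomize] = torch.randint(VOCAB_SIZE, input_ids.shape)[randomize]
    return inputs, labels

ids = torch.randint(VOCAB_SIZE, (2, 16))  # fake batch of token ids
corrupted, labels = mask_tokens(ids)
```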
This course is part of a program and can only be purchased together with that program.
Earn the required number of credit hours by completing this content.
Skills covered include PyTorch (machine learning library), natural language processing, generative AI, large language modeling, machine learning methods, artificial neural networks, and deep learning.