Profile of the Instructor

Prof. Ramaseshan Ramachandran is a visiting professor at the Chennai Mathematical Institute (CMI), specializing in natural language processing (NLP). He served as a venture leader at Cognizant Technologies, where he led and oversaw innovative projects in NLP and artificial intelligence. He holds a Ph.D. in Computer Science from the Indian Institute of Technology Madras (IITM).
Modules of the Workshop: Day 1
Module 1: Introduction to NLP
Concepts Covered:
- Basics of natural language processing (NLP) and understanding text data.
- Preprocessing techniques: tokenization, stemming, lemmatization, word similarity.
Learning Outcomes:
Ability to process and clean text data using fundamental preprocessing techniques.
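A minimal sketch of these preprocessing steps, assuming the NLTK toolkit is installed (the workshop's own materials may use a different library):

```python
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

for pkg in ("punkt", "punkt_tab", "wordnet"):
    nltk.download(pkg, quiet=True)   # tokenizer models and lemmatizer lexicon

text = "The children were running faster than the dogs."
tokens = nltk.word_tokenize(text)                 # split the text into word tokens
stemmer, lemmatizer = PorterStemmer(), WordNetLemmatizer()
stems = [stemmer.stem(t) for t in tokens]         # crude rule-based suffix stripping
lemmas = [lemmatizer.lemmatize(t.lower(), pos="v") for t in tokens]  # dictionary base forms

print(tokens)
print(stems)    # stemming: 'running' -> 'run', but 'children' stays 'children'
print(lemmas)   # verb lemmatization: 'were' -> 'be', 'running' -> 'run'
```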
Module 2: Word Embedding Models
Concepts Covered:
- Hyperspace Analogue to Language (HAL), COALS, GloVe, and Word2Vec.
- Converting words to vectors, finding similar words, and word-vector operations.
Learning Outcomes:
Understand and apply word embeddings to represent and analyze textual data.
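As an illustration, a toy Word2Vec model can be trained with gensim (an assumption, not the workshop's code; HAL, COALS, and GloVe yield vectors usable the same way). On a corpus this small the numbers are noisy and purely illustrative:

```python
from gensim.models import Word2Vec

corpus = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["a", "dog", "chases", "a", "cat"],
    ["a", "cat", "chases", "a", "mouse"],
]
model = Word2Vec(corpus, vector_size=32, window=2, min_count=1, epochs=200, seed=1)

print(model.wv["king"][:4])                   # first entries of the learned vector
print(model.wv.similarity("king", "queen"))   # cosine similarity of two words
print(model.wv.most_similar("cat", topn=2))   # nearest neighbours in vector space
```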
Module 3: N-gram Language Models
Concepts Covered:
- Introduction to language models, context in modeling, chain rule, Markov assumption.
- n-gram language models, perplexity, and quality evaluation.
Learning Outcomes:
Build and evaluate probabilistic language models.
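The chain rule with a first-order Markov assumption reduces P(w1 … wn) to a product of bigram probabilities. The sketch below builds such a model from scratch with add-one smoothing and scores a sentence by perplexity (sentence-boundary handling omitted for brevity):

```python
import math
from collections import Counter

train = "the cat sat on the mat the dog sat on the rug".split()
unigrams = Counter(train)
bigrams = Counter(zip(train, train[1:]))
V = len(unigrams)                            # vocabulary size, used for smoothing

def p(w2, w1):
    # P(w2 | w1) with add-one (Laplace) smoothing
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + V)

test = "the dog sat on the mat".split()
log_prob = sum(math.log(p(w2, w1)) for w1, w2 in zip(test, test[1:]))
perplexity = math.exp(-log_prob / (len(test) - 1))   # inverse geometric-mean probability
print(f"perplexity = {perplexity:.2f}")              # lower means a better model fit
```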
Module 4: Neural Language Models
Concepts Covered:
- Limitations of probabilistic models and the curse of dimensionality.
- Neural models for learning word embeddings, including CBOW (continuous bag-of-words).
Learning Outcomes:
Understand neural network approaches to build language models.
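A minimal CBOW sketch in PyTorch (assumed available, not the workshop's code): the centre word is predicted from the average of its context embeddings, which is how the model learns dense word vectors:

```python
import torch
import torch.nn as nn

tokens = "the cat sat on the mat".split()
vocab = {w: i for i, w in enumerate(sorted(set(tokens)))}

# (context, centre) pairs with a window of one word on each side
data = [([tokens[i - 1], tokens[i + 1]], tokens[i]) for i in range(1, len(tokens) - 1)]

emb = nn.Embedding(len(vocab), 16)   # the word vectors being learned
out = nn.Linear(16, len(vocab))      # projection to scores over the vocabulary
opt = torch.optim.Adam(list(emb.parameters()) + list(out.parameters()), lr=0.05)

for _ in range(200):
    for ctx, centre in data:
        ctx_ids = torch.tensor([vocab[w] for w in ctx])
        logits = out(emb(ctx_ids).mean(dim=0))       # average the context vectors
        loss = nn.functional.cross_entropy(
            logits.unsqueeze(0), torch.tensor([vocab[centre]]))
        opt.zero_grad(); loss.backward(); opt.step()

print("final loss:", loss.item())
```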
Modules of the Workshop: Day 2
Module 1: Recurrent Neural Networks (RNNs)
Concepts Covered:
- Challenges traditional (fixed-input) neural networks face in NLP.
- Introduction to RNNs, variable-length sequences, training RNNs for language modeling.
Learning Outcomes:
Develop language models for handling variable-length sequences using RNNs.
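A character-level RNN language model in PyTorch (a minimal sketch under the same assumption as above): the recurrent hidden state carries context across a sequence of any length, which fixed-input networks cannot do:

```python
import torch
import torch.nn as nn

text = "hello world "
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
x = torch.tensor([idx[c] for c in text[:-1]])   # input characters
y = torch.tensor([idx[c] for c in text[1:]])    # next-character targets

emb = nn.Embedding(len(chars), 8)
rnn = nn.RNN(8, 32, batch_first=True)            # hidden state carries the context
head = nn.Linear(32, len(chars))
params = [*emb.parameters(), *rnn.parameters(), *head.parameters()]
opt = torch.optim.Adam(params, lr=0.01)

for _ in range(300):
    h, _ = rnn(emb(x).unsqueeze(0))              # (1, seq_len, 32)
    loss = nn.functional.cross_entropy(head(h).squeeze(0), y)
    opt.zero_grad(); loss.backward(); opt.step()

print("loss:", loss.item())
```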
Module 2: Deep Contextualized Models
Concepts Covered:
- Deep contextualized word representations with ELMo.
- Importance of contextualized models.
Learning Outcomes:
Learn the importance and application of contextualized language models.
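ELMo itself is usually loaded via AllenNLP or TensorFlow Hub; as a lightweight stand-in, the sketch below uses a BERT encoder from Hugging Face (an assumption, not the workshop's setup) to show the key property: the same word gets different vectors in different contexts:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased")

def word_vec(sentence, word):
    # Return the contextual vector of `word` within `sentence`.
    inputs = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        states = enc(**inputs).last_hidden_state[0]   # (seq_len, hidden_size)
    i = inputs.input_ids[0].tolist().index(tok.convert_tokens_to_ids(word))
    return states[i]

v1 = word_vec("I deposited cash at the bank.", "bank")
v2 = word_vec("We picnicked on the river bank.", "bank")
print(torch.cosine_similarity(v1, v2, dim=0).item())  # well below 1: context matters
```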
Module 3: Large Language Models
Concepts Covered:
- Motivation behind Transformers, the attention mechanism, self-attention, and multi-head attention.
Learning Outcomes:
Understand the architecture and significance of Transformers in language modeling.
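The core of the Transformer is scaled dot-product self-attention, softmax(QKᵀ/√d)V; here is a from-scratch NumPy sketch (multi-head attention runs several of these in parallel):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # Each token's output is a weighted mix of all value vectors,
    # with weights given by softmax(Q K^T / sqrt(d)).
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])              # (seq, seq) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over the keys
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                      # 4 tokens, 8-dimensional embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)       # (4, 8): one output per token
```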
Module 4: BERT and GPT
Concepts Covered:
- BERT and GPT models for language modeling.
- Overview of state-of-the-art advancements.
Learning Outcomes:
Comprehend state-of-the-art language models and their applications.
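Both model families can be tried in a few lines with Hugging Face pipelines (assumed installed; the pretrained weights download on first use). BERT fills in masked words using context on both sides, while GPT-2 generates text left to right:

```python
from transformers import pipeline

# Masked language modelling (BERT-style)
fill = pipeline("fill-mask", model="bert-base-uncased")
print(fill("The capital of France is [MASK].", top_k=2))

# Autoregressive generation (GPT-style; gpt2 stands in for larger GPT models)
gen = pipeline("text-generation", model="gpt2")
print(gen("Natural language processing is", max_new_tokens=15)[0]["generated_text"])
```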
Module 5: Bias, Ethics, and Advances in NLP
Concepts Covered:
- Ethical considerations in NLP, including bias in language models.
- Recent advances in language models and their applications.
Learning Outcomes:
Understand ethical challenges in NLP and explore the latest advancements in language modeling.
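A quick probe of the kind of bias discussed here (a minimal sketch, not a rigorous audit): compare a masked model's top completions for gendered prompts:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for prompt in ("He works as a [MASK].", "She works as a [MASK]."):
    tops = [r["token_str"] for r in fill(prompt, top_k=5)]
    print(prompt, "->", tops)   # diverging occupation lists hint at learned bias
```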
Intended Audience & Eligibility
Intended Audience:
Anyone interested in machine learning and NLP.
Eligibility:
Participants should have a basic knowledge of probability and an introductory knowledge of machine learning.
Certification Criteria
Assignment scores and attendance are mandatory for certification.