Course description
NEXT AI, Navigating Expert Topics in AI, is a short-term academic series designed to deliver focused learning on core and emerging areas of artificial intelligence. The first course in this series, Foundational Maths for Machine Learning: Linear Algebra, was conducted successfully and introduced participants to the key mathematical ideas underlying modern AI and data-driven methods.
The next course in the NEXT AI series focuses on optimization methods used in machine learning, a central component in how many AI and data science models learn from data. This course provides a concise yet rigorous introduction to convex optimization theory from a machine learning perspective. It begins with the fundamental principles of convex optimization and duality, building the mathematical framework needed to understand learning algorithms.
These concepts are then applied to supervised learning methods such as regression and Support Vector Machines. The course also introduces the "kernel trick", a powerful idea that allows linear predictors and classifiers to operate in higher-dimensional feature spaces. Through this approach, participants will develop an intuitive and mathematical understanding of how optimization techniques shape modern machine learning methods.
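As a small taste of the kernel idea (an illustrative sketch, not drawn from the course material): for the degree-2 polynomial kernel in two dimensions, the kernel value equals an ordinary inner product under an explicit feature map, so a linear method using the kernel implicitly works in that higher-dimensional space.

```python
import numpy as np

def poly2_kernel(x, z):
    """Degree-2 polynomial kernel k(x, z) = (x . z)^2 for 2-D inputs."""
    return (x @ z) ** 2

def phi(x):
    """Explicit feature map matching the kernel above:
    phi(x) = (x1^2, x2^2, sqrt(2) * x1 * x2)."""
    return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2) * x[0] * x[1]])

x = np.array([1.0, 2.0])
z = np.array([3.0, 0.5])

# The kernel computes the inner product in the 3-D feature space
# without ever forming phi(x) explicitly -- the "kernel trick".
assert np.isclose(poly2_kernel(x, z), phi(x) @ phi(z))
```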
Each course in the NEXT AI series is designed as a self-contained learning experience. Participants can join any course independently and engage with a specific topic in depth, while building strong conceptual foundations for understanding AI and machine learning.
Profile of the Instructor

Prof. Jayakrishnan U. Nair received a PhD in Electrical Engineering from the California Institute of Technology in 2012 and completed a BTech and MTech in Electrical Engineering at IIT Bombay in 2007. He is a Professor in the Department of Electrical Engineering at IIT Bombay, where he has served since June 2014, and is also an associate faculty member at CMInDS (Centre for Machine Intelligence and Data Science), IIT Bombay. Prior to joining IIT Bombay, he held postdoctoral fellow positions at Centrum Wiskunde & Informatica from June 2013 to May 2014 and at the California Institute of Technology from June 2012 to May 2013.
His primary research interests include queueing theory, communication networks, and heavy tailed phenomena, with a focus on developing analytical frameworks for understanding performance, reliability, and scalability of modern networked systems. His work contributes to both theoretical foundations and practical insights in communication and networked systems.
His research publications have received over 870 citations, with more than 550 citations since 2021. He has an h-index of 14 and an i10-index of 20, reflecting sustained contributions and growing impact in his research areas.
Topics to be covered
Convex Optimization
Duality theory
Applications to least-squares regression, ridge regression, and support vector machines (SVM)
Kernel functions; how to kernelize SVM and ridge regression
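To give a flavour of the duality theory listed above, here is a standard one-variable worked example (illustrative only, not taken from the course material): minimize x² subject to x ≥ 1, whose primal optimum is 1 at x = 1.

```latex
\begin{aligned}
L(x,\lambda) &= x^2 - \lambda (x - 1), \qquad \lambda \ge 0,\\
g(\lambda) &= \min_x L(x,\lambda) = \lambda - \tfrac{\lambda^2}{4}
  \quad (\text{minimizer } x = \lambda/2),\\
\max_{\lambda \ge 0} g(\lambda) &= g(2) = 1.
\end{aligned}
```

The dual optimum equals the primal optimum, i.e. strong duality holds, as Slater's condition guarantees for this convex problem.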
Session Details
Throughout the course, subject-related questions and conceptual doubts are addressed directly by the course instructor and teaching assistants, ensuring continuous academic support. Assistance for enrolment procedures and other non-technical queries is provided through the NPTEL+ platform. All lectures in this series are conducted online in a synchronous format, encouraging real-time interaction and active engagement between participants and instructors. High-quality recordings are also provided so that participants can revisit the material later.
Date of the Workshop: 11th and 12th April, 2026
Mode of the Workshop: Online (Live)
Course duration: 8 hrs
Timings (IST, Saturday and Sunday): 2:00 pm to 6:00 pm
Who May Benefit
These short-term courses offered by CMInDS, IIT Bombay build a strong foundation that progresses seamlessly to advanced topics in Artificial Intelligence, supporting learners at every stage of their journey. They cater to academic institutions, research centres, and industry, serving students, researchers, faculty members from other institutes, and corporate professionals seeking to enhance their technical and analytical skills. They are particularly suited to Data Science and AI professionals who wish to strengthen their fundamental understanding while gaining deeper expertise in specialized areas.
Learning outcomes
The key learning outcomes of this course include developing a rigorous and conceptually clear understanding of convex optimization from the standpoint of a machine learning researcher or practitioner. Participants will build strong intuition for the structure of convex problems, the role of optimality conditions, and the significance of duality in designing and analyzing learning algorithms. This foundation helps learners understand why many machine learning methods can be formulated and solved as optimization problems.
Pre-requisites
The course will assume familiarity and comfort with calculus at the undergraduate level. Prior exposure (even superficial) to supervised learning, including regression and SVM, would be useful but is not necessary.
Hands-on component
Participants will apply the concepts learned, particularly kernel SVM and kernel regression, to real-world supervised learning applications.
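As a minimal sketch of what kernelized regression looks like in practice (illustrative only; the actual hands-on material may differ), kernel ridge regression solves (K + λI)α = y for dual coefficients α and predicts f(x) = Σᵢ αᵢ k(xᵢ, x):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """RBF (Gaussian) kernel matrix between rows of A and rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def fit_kernel_ridge(X, y, lam=0.1, gamma=1.0):
    """Solve (K + lam * I) alpha = y for the dual coefficients."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, alpha, X_new, gamma=1.0):
    """f(x) = sum_i alpha_i * k(x_i, x)."""
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Toy 1-D regression: recover y = sin(x) from noisy samples.
rng = np.random.default_rng(0)
X = np.linspace(0, 2 * np.pi, 40)[:, None]
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(40)

alpha = fit_kernel_ridge(X, y, lam=0.1, gamma=1.0)
y_hat = predict(X, alpha, X)
print(np.max(np.abs(y_hat - np.sin(X[:, 0]))))
```

The model never constructs an explicit feature map; the entire fit is expressed through the kernel matrix, which is exactly the kernelization idea the course develops for SVM and ridge regression.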
Textbooks/References
Hastie, Trevor, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning. Springer, 2009.
Mohri, Mehryar, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of Machine Learning. MIT Press, 2018.
Certificate criteria
Attendance is mandatory for receiving the certificate.