Foundational Maths for ML: Optimization


Last date of Registration : 8th April, 2026


Course description

This course provides a short but rigorous introduction to convex optimization theory for machine learning. It includes a ground-up coverage of the basics of convex optimization and duality, which is subsequently applied to supervised learning approaches including regression and SVM. Finally, the course introduces the “kernel trick,” which is applied to kernelize regression and SVM, enabling linear predictors/separators in higher dimensions.
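To illustrate the flavour of the material, the kernel trick applied to ridge regression can be sketched as below. This is a minimal NumPy sketch, not course material: the RBF kernel choice, the function names, and the hyperparameter values (`lam`, `gamma`) are all assumptions for illustration. It uses the dual (kernelized) closed form of ridge regression, alpha = (K + lam·I)⁻¹ y.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gram matrix of the Gaussian (RBF) kernel k(a, b) = exp(-gamma * ||a - b||^2)
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def kernel_ridge_fit(X, y, lam=0.1, gamma=1.0):
    # Dual solution of ridge regression: alpha = (K + lam * I)^{-1} y
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_new, gamma=1.0):
    # Predictor f(x) = sum_i alpha_i * k(x_i, x) -- linear in the feature space
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Toy 1-D regression: recover y = sin(x) from noisy samples
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)

alpha = kernel_ridge_fit(X, y, lam=0.1, gamma=1.0)
X_test = np.linspace(-3, 3, 20).reshape(-1, 1)
pred = kernel_ridge_predict(X, alpha, X_test)
```

Note that the nonlinear fit is obtained without ever computing the (infinite-dimensional, for the RBF kernel) feature map explicitly; only kernel evaluations are needed, which is precisely the point of the kernel trick covered in the course.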

Profile of the Instructor

Prof. Jayakrishnan U. Nair received a PhD in Electrical Engineering from the California Institute of Technology in 2012, after completing a BTech and MTech in Electrical Engineering from IIT Bombay in 2007. He is a Professor in the Department of Electrical Engineering at IIT Bombay, where he has been serving since June 2014, and is also an associate faculty member at CMInDS (Centre for Machine Intelligence and Data Science), IIT Bombay. Prior to joining IIT Bombay, he held postdoctoral fellowships at Centrum Wiskunde & Informatica from June 2013 to May 2014 and at the California Institute of Technology from June 2012 to May 2013.

His primary research interests include queueing theory, communication networks, and heavy tailed phenomena, with a focus on developing analytical frameworks for understanding performance, reliability, and scalability of modern networked systems. His work contributes to both theoretical foundations and practical insights in communication and networked systems.

His research publications have received over 870 citations, with more than 550 citations since 2021. He has an h-index of 14 and an i10-index of 20, reflecting sustained contributions and growing impact in his research areas.

Topics to be covered

Convex Optimization

Duality theory

Applications to least squares regression, ridge regression, and support vector machines (SVM)

Kernel functions; kernelizing SVM and ridge regression

Session Details

Throughout the course, subject-related questions and conceptual doubts are addressed directly by the course instructor and teaching assistants, ensuring continuous academic support. Assistance for enrolment procedures and other non-technical queries is provided through the NPTEL+ platform.

Date of the Workshop : 11th and 12th April, 2026

Mode of the Workshop : Online (Live)

Course duration : 8 hrs

Timings (IST) (Saturday and Sunday) : 02:00 pm to 06:00 pm

Fee for the Workshop

Students and Postdocs: Rs. 1000 + 18% GST = Rs. 1180

Faculty: Rs. 2400 + 18% GST = Rs. 2832

Industry Professionals: Rs. 4000 + 18% GST = Rs. 4720

Who May Benefit

Researchers, students, faculty members from other institutes, and industry and corporate professionals.

Learning outcomes

A rigorous foundation in convex optimization from the standpoint of an ML researcher/practitioner.

Pre-requisites

The course will assume familiarity and comfort with calculus at the undergraduate level. Prior exposure (even superficial) to supervised learning, including regression and SVM, would be useful but is not necessary.

Hands-on component

Participants will apply the concepts learned, particularly kernel SVM and kernel regression, to real-world supervised learning applications.

Textbooks/References

Hastie, Trevor, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. 2nd ed. Springer, 2009.

Mohri, Mehryar, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of machine learning. MIT press, 2018.

Certificate criteria

Attendance and completion of the MCQ assessments are mandatory for certification.
