Ali Tajer
Professor
Electrical, Computer, and Systems Engineering
Rensselaer Polytechnic Institute
(518) 276-8237
6040 Jonsson Engineering Center (JEC)
110 8th Street, Troy, NY 12180
Trustworthy Machine Learning
Spring 2022
| Lecture | Topic | Presenter |
|---|---|---|
| Lecture 01 | Introduction to trustworthy ML | |
| Lecture 02 | ML overview (common ML models, optimization, ML procedures, SGD) | |
| Lecture 03 | ML overview (practical aspects of ML, optimization techniques, NNs, backpropagation, PyTorch) | |
| Lecture 04 | Attacks and adversaries, data inference attacks, membership inference, white-box attacks, information leakage | |
| Lecture 05 | Membership inference attacks against machine learning models | Arif Huzaifa |
| Lecture 06 | Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning | Roman Vakhrushev | 
| Lecture 07 (I) | Information Leakage in Embedding Models | Ryan Kaplan | 
| Lecture 07 (II) | CSI NN: Reverse Engineering of Neural Network Architectures Through Electromagnetic Side Channel | Alex Sidgwick | 
| Lecture 08 (I) | Exploring connections between active learning and model extraction | Anmol Dwivedi |
| Lecture 08 (II) | High accuracy and high fidelity extraction of neural networks | M. Shahid Modi |
| Lecture 09 | Introduction to privacy, differential privacy, private distributed learning, privacy evaluation | |
| Lecture 10 | Deep learning with differential privacy | Momin Abbas | 
| Lecture 11 (I) | Scalable private learning with PATE | Zehao Li | 
| Lecture 11 (II) | Differentially private fair learning | Burak Varici | 
| Lecture 12 (I) | On sampling, anonymization, and differential privacy or, k-anonymization meets differential privacy | Charlie Cook |
| Lecture 12 (II) | Evaluating differentially private machine learning in practice | Sharmishtha Dutta | 
| Lecture 13 | Robustness, robust training, certified defense, robust optimization, adversarial examples, black-box attacks | |
| Lecture 14 | Poisoning attacks against support vector machines | Arpan Mukherjee | 
| Lecture 15 | Manipulating machine learning: Poisoning attacks and countermeasures for regression learning | Dong Hu | 
| Lecture 16 | Explaining and harnessing adversarial examples | Vijay Sadashivaiah | 
| Lecture 17 (I) | Practical black-box attacks against machine learning | Alex Mankowski | 
| Lecture 17 (II) | A robust meta-algorithm for stochastic optimization | Bao Pham | 
| Lecture 18 | Mitigating unwanted biases with adversarial learning | Kara Davis | 
| Lecture 19 | Fairness, fairness measures, counterfactuals, fair representation, certified fairness, bias mitigation, fair classification | |
| Lecture 20 | Equality of opportunity in supervised learning | Matthew Youngbar | 
| Lecture 21 (I) | Fairness through awareness | Farhad Mohsin | 
| Lecture 21 (II) | Learning fair representations | Farhad Mohsin | 
| Lecture 22 (I) | Counterfactual fairness | Rhea Banerjee | 
| Lecture 22 (II) | Fairness constraints: Mechanisms for fair classification | Rhea Banerjee | 
| Lecture 23 | Transparency, explainability, trust, interpretability | |
| Lecture 24 (I) | Towards a rigorous science of interpretable machine learning | Zirui Yan | 
| Lecture 24 (II) | Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems | Zirui Yan | 
| Lecture 25 (I) | A unified approach to interpreting model predictions | Lucky Yerimah | 
| Lecture 25 (II) | "Why should I trust you?" Explaining the predictions of any classifier | Lucky Yerimah |
| Lecture 26 | Explaining explanations in AI | Andrew Nguyen | 