Posts
What does the 'R-norm' reveal about efficient neural network approximation? (COLT 2023 paper with Navid and Daniel)
How hard is it to learn an intersection of halfspaces? (COLT 2022 paper with Rocco, Daniel, and Manolis)
Books of 2021
How do SVMs and least-squares regression behave in high-dimensional settings? (NeurIPS 2021 paper with Navid and Daniel)
My candidacy exam is done!
[OPML#10] MNSBHS20: Classification vs regression in overparameterized regimes: Does the loss function matter?
[OPML#9] CL20: Finite-sample analysis of interpolating linear classifiers in the overparameterized regime
[OPML#8] FS97 & BFLS98: Benign overfitting in boosting
[OPML#7] BLN20 & BS21: Smoothness and robustness of neural net interpolators
[OPML#6] XH19: On the number of variables to use in principal component regression
How many neurons are needed to approximate smooth functions? A summary of our COLT 2021 paper
[OPML#5] BL20: Failures of model-dependent generalization bounds for least-norm interpolation
[OPML#4] HMRT19: Surprises in high-dimensional ridgeless least squares interpolation
Orthonormal function bases: what they are and why we care
[OPML#3] MVSS19: Harmless interpolation of noisy data in regression
[OPML#2] BLLT19: Benign overfitting in linear regression
[OPML#1] BHX19: Two models of double descent for weak features
[OPML#0] A series of posts on over-parameterized machine learning models