Tag: technical
- What does the 'R-norm' reveal about efficient neural network approximation? (COLT 2023 paper with Navid and Daniel) (12 Jul 2023)
- How hard is it to learn an intersection of halfspaces? (COLT 2022 paper with Rocco, Daniel, and Manolis) (30 Jul 2022)
- How do SVMs and least-squares regression behave in high-dimensional settings? (NeurIPS 2021 paper with Navid and Daniel) (07 Dec 2021)
- [OPML#10] MNSBHS20: Classification vs regression in overparameterized regimes: Does the loss function matter? (04 Nov 2021)
- [OPML#9] CL20: Finite-sample analysis of interpolating linear classifiers in the overparameterized regime (28 Oct 2021)
- [OPML#8] FS97 & BFLS98: Benign overfitting in boosting (20 Oct 2021)
- [OPML#7] BLN20 & BS21: Smoothness and robustness of neural net interpolators (22 Sep 2021)
- [OPML#6] XH19: On the number of variables to use in principal component regression (11 Sep 2021)
- How many neurons are needed to approximate smooth functions? A summary of our COLT 2021 paper (15 Aug 2021)
- [OPML#5] BL20: Failures of model-dependent generalization bounds for least-norm interpolation (30 Jul 2021)
- [OPML#4] HMRT19: Surprises in high-dimensional ridgeless least squares interpolation (23 Jul 2021)
- Orthonormal function bases: what they are and why we care (16 Jul 2021)
- [OPML#3] MVSS19: Harmless interpolation of noisy data in regression (16 Jul 2021)
- [OPML#2] BLLT19: Benign overfitting in linear regression (11 Jul 2021)
- [OPML#1] BHX19: Two models of double descent for weak features (05 Jul 2021)
- [OPML#0] A series of posts on over-parameterized machine learning models (04 Jul 2021)