Abramovich, Benjamini, Donoho, et al. 2006.
“Adapting to Unknown Sparsity by Controlling the False Discovery Rate.” The Annals of Statistics.
Aghasi, Nguyen, and Romberg. 2016.
“Net-Trim: A Layer-Wise Convex Pruning of Deep Neural Networks.” arXiv:1611.05162 [Cs, Stat].
Azadkia, and Chatterjee. 2019.
“A Simple Measure of Conditional Dependence.” arXiv:1910.12327 [Cs, Math, Stat].
Azizyan, Krishnamurthy, and Singh. 2015.
“Extreme Compressive Sampling for Covariance Estimation.” arXiv:1506.00898 [Cs, Math, Stat].
Bach, Jenatton, and Mairal. 2011.
Optimization with Sparsity-Inducing Penalties. Foundations and Trends® in Machine Learning.
Banerjee, Arindam, Chen, Fazayeli, et al. 2014.
“Estimation with Norm Regularization.” In
Advances in Neural Information Processing Systems 27.
Barber, and Candès. 2015.
“Controlling the False Discovery Rate via Knockoffs.” The Annals of Statistics.
Baron, Sarvotham, and Baraniuk. 2010.
“Bayesian Compressive Sensing via Belief Propagation.” IEEE Transactions on Signal Processing.
Barron, Cohen, Dahmen, et al. 2008.
“Approximation and Learning by Greedy Algorithms.” The Annals of Statistics.
Barron, Huang, Li, et al. 2008.
“MDL, Penalized Likelihood, and Statistical Risk.” In
Information Theory Workshop, 2008. ITW’08. IEEE.
Bayati, and Montanari. 2012.
“The LASSO Risk for Gaussian Matrices.” IEEE Transactions on Information Theory.
Berk, Brown, Buja, et al. 2013.
“Valid Post-Selection Inference.” The Annals of Statistics.
Bertin, Pennec, and Rivoirard. 2011.
“Adaptive Dantzig Density Estimation.” Annales de l’Institut Henri Poincaré, Probabilités et Statistiques.
Bien, Gaynanova, Lederer, et al. 2018.
“Non-Convex Global Minimization and False Discovery Rate Control for the TREX.” Journal of Computational and Graphical Statistics.
Bottou, Curtis, and Nocedal. 2016.
“Optimization Methods for Large-Scale Machine Learning.” arXiv:1606.04838 [Cs, Math, Stat].
Bühlmann, and van de Geer. 2011.
“Additive Models and Many Smooth Univariate Functions.” In
Statistics for High-Dimensional Data. Springer Series in Statistics.
Bunea, Tsybakov, and Wegkamp. 2007a.
“Sparsity Oracle Inequalities for the Lasso.” Electronic Journal of Statistics.
Bunea, Tsybakov, and Wegkamp. 2007b.
“Sparse Density Estimation with ℓ1 Penalties.” In
Learning Theory. Lecture Notes in Computer Science.
Candès, and Davenport. 2011.
“How Well Can We Estimate a Sparse Vector?” arXiv:1104.5246 [Cs, Math, Stat].
Candès, and Fernandez-Granda. 2013.
“Super-Resolution from Noisy Data.” Journal of Fourier Analysis and Applications.
Candès, and Plan. 2010.
“Matrix Completion With Noise.” Proceedings of the IEEE.
Candès, Romberg, and Tao. 2006.
“Stable Signal Recovery from Incomplete and Inaccurate Measurements.” Communications on Pure and Applied Mathematics.
Candès, Wakin, and Boyd. 2008.
“Enhancing Sparsity by Reweighted ℓ1 Minimization.” Journal of Fourier Analysis and Applications.
———. 2014.
“Compressive System Identification.” In
Compressed Sensing & Sparse Filtering. Signals and Communication Technology.
Cevher, Duarte, Hegde, et al. 2009.
“Sparse Signal Recovery Using Markov Random Fields.” In
Advances in Neural Information Processing Systems.
Chartrand, and Yin. 2008.
“Iteratively Reweighted Algorithms for Compressive Sensing.” In
IEEE International Conference on Acoustics, Speech and Signal Processing, 2008. ICASSP 2008.
Chatterjee. 2020.
“A New Coefficient of Correlation.” arXiv:1909.10140 [Math, Stat].
Chen, Y., and Hero. 2012.
“Recursive ℓ1,∞ Group Lasso.” IEEE Transactions on Signal Processing.
Chernozhukov, Chetverikov, Demirer, et al. 2018.
“Double/Debiased Machine Learning for Treatment and Structural Parameters.” The Econometrics Journal.
Chernozhukov, Hansen, Liao, et al. 2018.
“Inference For Heterogeneous Effects Using Low-Rank Estimations.” arXiv:1812.08089 [Math, Stat].
Chetverikov, Liao, and Chernozhukov. 2016.
“On Cross-Validated Lasso.” arXiv:1605.02214 [Math, Stat].
Diaconis, and Freedman. 1984.
“Asymptotics of Graphical Projection Pursuit.” The Annals of Statistics.
Dossal, Kachour, Fadili, et al. 2011.
“The Degrees of Freedom of the Lasso for General Design Matrix.” arXiv:1111.1162 [Cs, Math, Stat].
Efron, Hastie, Johnstone, et al. 2004.
“Least Angle Regression.” The Annals of Statistics.
Elhamifar, and Vidal. 2013.
“Sparse Subspace Clustering: Algorithm, Theory, and Applications.” IEEE Transactions on Pattern Analysis and Machine Intelligence.
Engebretsen, and Bohlin. 2019.
“Statistical Predictions with Glmnet.” Clinical Epigenetics.
Ewald, and Schneider. 2015.
“Confidence Sets Based on the Lasso Estimator.” arXiv:1507.05315 [Math, Stat].
Fan, Rong-En, Chang, Hsieh, et al. 2008.
“LIBLINEAR: A Library for Large Linear Classification.” Journal of Machine Learning Research.
Fan, Jianqing, and Li. 2001.
“Variable Selection via Nonconcave Penalized Likelihood and Its Oracle Properties.” Journal of the American Statistical Association.
Friedman, Hastie, Höfling, et al. 2007.
“Pathwise Coordinate Optimization.” The Annals of Applied Statistics.
Giryes, Sapiro, and Bronstein. 2014.
“On the Stability of Deep Networks.” arXiv:1412.5896 [Cs, Math, Stat].
Hall, and Xue. 2014.
“On Selecting Interacting Features from High-Dimensional Data.” Computational Statistics & Data Analysis.
Hallac, Leskovec, and Boyd. 2015.
“Network Lasso: Clustering and Optimization in Large Graphs.” arXiv:1507.00280 [Cs, Math, Stat].
Hawe, Kleinsteuber, and Diepold. 2013.
“Analysis Operator Learning and Its Application to Image Reconstruction.” IEEE Transactions on Image Processing.
Hebiri, and van de Geer. 2011.
“The Smooth-Lasso and Other ℓ1+ℓ2-Penalized Methods.” Electronic Journal of Statistics.
Hegde, and Baraniuk. 2012.
“Signal Recovery on Incoherent Manifolds.” IEEE Transactions on Information Theory.
Hegde, Indyk, and Schmidt. 2015.
“A Nearly-Linear Time Framework for Graph-Structured Sparsity.” In
Proceedings of the 32nd International Conference on Machine Learning (ICML-15).
He, Rish, and Parida. 2014.
“Transductive HSIC Lasso.” In
Proceedings of the 2014 SIAM International Conference on Data Mining.
Hesterberg, Choi, Meier, et al. 2008.
“Least Angle and ℓ1 Penalized Regression: A Review.” Statistics Surveys.
Hsieh, Sustik, Dhillon, et al. 2014.
“QUIC: Quadratic Approximation for Sparse Inverse Covariance Estimation.” Journal of Machine Learning Research.
Janson, Fithian, and Hastie. 2015.
“Effective Degrees of Freedom: A Flawed Metaphor.” Biometrika.
Jung. 2013.
“An RKHS Approach to Estimation with Sparsity Constraints.” In
Advances in Neural Information Processing Systems 26.
Kabán. 2014.
“New Bounds on Compressive Linear Least Squares Regression.” Journal of Machine Learning Research.
Kato. 2009.
“On the Degrees of Freedom in Shrinkage Estimation.” Journal of Multivariate Analysis.
Kim, Kwon, and Choi. 2012.
“Consistent Model Selection Criteria on High Dimensions.” Journal of Machine Learning Research.
Koltchinskii. 2011.
Oracle Inequalities in Empirical Risk Minimization and Sparse Recovery Problems. Lecture Notes in Mathematics, École d’Été de Probabilités de Saint-Flour, 2033.
Kowalski, and Torrésani. 2009.
“Structured Sparsity: From Mixed Norms to Structured Shrinkage.” In
SPARS’09: Signal Processing with Adaptive Sparse Structured Representations.
Langford, Li, and Zhang. 2009.
“Sparse Online Learning via Truncated Gradient.” In
Advances in Neural Information Processing Systems 21.
Lederer, and Vogt. 2020.
“Estimating the Lasso’s Effective Noise.” arXiv:2004.11554 [Stat].
Lee, Sun, Sun, et al. 2013.
“Exact Post-Selection Inference, with Application to the Lasso.” arXiv:1311.6238 [Math, Stat].
Lemhadri, Ruan, Abraham, et al. 2021.
“LassoNet: A Neural Network with Feature Sparsity.” Journal of Machine Learning Research.
Li, and Lederer. 2019.
“Tuning Parameter Calibration for ℓ1-Regularized Logistic Regression.” Journal of Statistical Planning and Inference.
Lockhart, Taylor, Tibshirani, et al. 2014.
“A Significance Test for the Lasso.” The Annals of Statistics.
Lundberg, and Lee. 2017.
“A Unified Approach to Interpreting Model Predictions.” In
Advances in Neural Information Processing Systems.
Mahoney. 2016.
“Lecture Notes on Spectral Graph Methods.” arXiv:1608.04845.
Meier, van de Geer, and Bühlmann. 2008.
“The Group Lasso for Logistic Regression.” Journal of the Royal Statistical Society: Series B (Statistical Methodology).
Meinshausen, and Bühlmann. 2006.
“High-Dimensional Graphs and Variable Selection with the Lasso.” The Annals of Statistics.
Molchanov, Ashukha, and Vetrov. 2017.
“Variational Dropout Sparsifies Deep Neural Networks.” In
Proceedings of the 34th International Conference on Machine Learning (ICML 2017).
Montanari. 2012.
“Graphical Models Concepts in Compressed Sensing.” Compressed Sensing: Theory and Applications.
Naik, and Tsai. 2001.
“Single-Index Model Selections.” Biometrika.
Nam, and Gribonval. 2012.
“Physics-Driven Structured Cosparse Modeling for Source Localization.” In
2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
Ngiam, Chen, Bhaskar, et al. 2011.
“Sparse Filtering.” In
Advances in Neural Information Processing Systems 24.
Nickl, and van de Geer. 2013.
“Confidence Sets in Sparse Regression.” The Annals of Statistics.
Oymak, Jalali, Fazel, et al. 2013.
“Noisy Estimation of Simultaneously Structured Models: Limitations of Convex Relaxation.” In
2013 IEEE 52nd Annual Conference on Decision and Control (CDC).
Pouget-Abadie, and Horel. 2015.
“Inferring Graphs from Cascades: A Sparse Recovery Framework.” In
Proceedings of the 32nd International Conference on Machine Learning.
Qian, and Yang. 2012.
“Model Selection via Standard Error Adjusted Adaptive Lasso.” Annals of the Institute of Statistical Mathematics.
Qin, Scheinberg, and Goldfarb. 2013.
“Efficient Block-Coordinate Descent Algorithms for the Group Lasso.” Mathematical Programming Computation.
Ribeiro, Singh, and Guestrin. 2016.
“‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier.” In
Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. KDD ’16.
Rish, and Grabarnik. 2014.
“Sparse Signal Recovery with Exponential-Family Noise.” In
Compressed Sensing & Sparse Filtering. Signals and Communication Technology.
Rish, and Grabarnik. 2015.
Sparse Modeling: Theory, Algorithms, and Applications. Chapman & Hall/CRC Machine Learning & Pattern Recognition Series.
Ročková, and George. 2018.
“The Spike-and-Slab LASSO.” Journal of the American Statistical Association.
Reddi, Sra, Póczós, et al. 2016.
“Stochastic Frank-Wolfe Methods for Nonconvex Optimization.”
Schelldorfer, Bühlmann, and van de Geer. 2011.
“Estimation for High-Dimensional Linear Mixed-Effects Models Using ℓ1-Penalization.” Scandinavian Journal of Statistics.
Shen, and Huang. 2006.
“Optimal Model Assessment, Selection, and Combination.” Journal of the American Statistical Association.
Shen, and Ye. 2002.
“Adaptive Model Selection.” Journal of the American Statistical Association.
Su, Bogdan, and Candès. 2015.
“False Discoveries Occur Early on the Lasso Path.” arXiv:1511.01957 [Cs, Math, Stat].
Thrampoulidis, Abbasi, and Hassibi. 2015.
“LASSO with Non-Linear Measurements Is Equivalent to One With Linear Measurements.” In
Advances in Neural Information Processing Systems 28.
Tibshirani, Robert. 1996.
“Regression Shrinkage and Selection via the Lasso.” Journal of the Royal Statistical Society. Series B (Methodological).
———. 2011.
“Regression Shrinkage and Selection via the Lasso: A Retrospective.” Journal of the Royal Statistical Society: Series B (Statistical Methodology).
Tibshirani, Ryan J. 2014.
“A General Framework for Fast Stagewise Algorithms.” arXiv:1408.5801 [Stat].
Trofimov, and Genkin. 2015.
“Distributed Coordinate Descent for L1-Regularized Logistic Regression.” In
Analysis of Images, Social Networks and Texts. Communications in Computer and Information Science 542.
Tschannen, and Bölcskei. 2016.
“Noisy Subspace Clustering via Matching Pursuits.” arXiv:1612.03450 [Cs, Math, Stat].
Geer, Sara van de. 2007.
“The Deterministic Lasso.”
Geer, Sara A. van de. 2008.
“High-Dimensional Generalized Linear Models and the Lasso.” The Annals of Statistics.
———. 2016.
Estimation and Testing Under Sparsity. Lecture Notes in Mathematics.
Wahba. 1990.
Spline Models for Observational Data.
Wang, L., Gordon, and Zhu. 2006.
“Regularized Least Absolute Deviations Regression and an Efficient Algorithm for Parameter Tuning.” In
Sixth International Conference on Data Mining (ICDM’06).
Wang, Hansheng, Li, and Jiang. 2007.
“Robust Regression Shrinkage and Consistent Variable Selection Through the LAD-Lasso.” Journal of Business & Economic Statistics.
Wasserman, and Roeder. 2009.
“High-Dimensional Variable Selection.” The Annals of Statistics.
Wisdom, Powers, Pitton, et al. 2016.
“Interpretable Recurrent Neural Networks Using Sequential Sparse Recovery.” In
Advances in Neural Information Processing Systems 29.
Woodworth, and Chartrand. 2015.
“Compressed Sensing Recovery via Nonconvex Shrinkage Penalties.” arXiv:1504.02923 [Cs, Math].
Wright, Nowak, and Figueiredo. 2009.
“Sparse Reconstruction by Separable Approximation.” IEEE Transactions on Signal Processing.
Xu, Caramanis, and Mannor. 2010.
“Robust Regression and Lasso.” IEEE Transactions on Information Theory.
———. 2012.
“Sparse Algorithms Are Not Stable: A No-Free-Lunch Theorem.” IEEE Transactions on Pattern Analysis and Machine Intelligence.
Yaghoobi, Nam, Gribonval, et al. 2012.
“Noise Aware Analysis Operator Learning for Approximately Cosparse Signals.” In
2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
Yuan, and Lin. 2006.
“Model Selection and Estimation in Regression with Grouped Variables.” Journal of the Royal Statistical Society: Series B (Statistical Methodology).
Zhang, Yiyun, Li, and Tsai. 2010.
“Regularization Parameter Selections via Generalized Information Criterion.” Journal of the American Statistical Association.
Zhang, Cun-Hui, and Zhang. 2014.
“Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models.” Journal of the Royal Statistical Society: Series B (Statistical Methodology).
Zhao, Peng, and Yu. 2006.
“On Model Selection Consistency of Lasso.” Journal of Machine Learning Research.
Zou. 2006.
“The Adaptive Lasso and Its Oracle Properties.” Journal of the American Statistical Association.
Zou, and Hastie. 2005.
“Regularization and Variable Selection via the Elastic Net.” Journal of the Royal Statistical Society: Series B (Statistical Methodology).
Zou, Hastie, and Tibshirani. 2007.
“On the ‘Degrees of Freedom’ of the Lasso.” The Annals of Statistics.