Desparsified lasso

The desparsified lasso is a method for constructing confidence intervals and statistical tests for single or low-dimensional components of a large parameter vector in high-dimensional models.^{[1]}
High-dimensional linear model
Consider the high-dimensional linear model

$Y = X\beta^0 + \varepsilon,$

with design matrix $X \in \mathbb{R}^{n \times p}$ (rows $X_i \in \mathbb{R}^p$ the vectors of covariables), noise $\varepsilon \sim \mathcal{N}_n(0, \sigma_\varepsilon^2 I)$ independent of $X$, and unknown regression vector $\beta^0 \in \mathbb{R}^p$.
The usual method to estimate the parameter vector is the lasso:

$\hat\beta = \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p} \left( \|Y - X\beta\|_2^2 / n + 2\lambda \|\beta\|_1 \right).$
The desparsified lasso modifies the lasso estimator $\hat\beta$, which fulfils the Karush–Kuhn–Tucker conditions,^{[2]} as follows:

$\hat b = \hat\beta + \frac{1}{n} M X^\top (Y - X\hat\beta),$

where $M$ is an arbitrary $p \times p$ matrix. The matrix $M$ is generated using a surrogate inverse covariance matrix $\hat\Theta$, obtained for example by the nodewise lasso.
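As a minimal numerical sketch of this two-stage construction, assuming scikit-learn's `Lasso` for both stages (the helper names `nodewise_lasso` and `desparsified_lasso`, the $\tau_j^2$ normalization, and the tuning-parameter choices are illustrative assumptions, not code from the cited paper):

```python
import numpy as np
from sklearn.linear_model import Lasso

def nodewise_lasso(X, lam):
    """Surrogate inverse of Sigma_hat = X^T X / n via per-column lasso regressions."""
    n, p = X.shape
    Theta = np.zeros((p, p))
    for j in range(p):
        idx = [k for k in range(p) if k != j]
        # regress column j on all other columns
        g = Lasso(alpha=lam, fit_intercept=False).fit(X[:, idx], X[:, j])
        resid = X[:, j] - X[:, idx] @ g.coef_
        tau2 = resid @ X[:, j] / n          # KKT-consistent normalization
        row = np.zeros(p)
        row[j] = 1.0
        row[idx] = -g.coef_
        Theta[j] = row / tau2
    return Theta

def desparsified_lasso(X, y, lam, lam_node):
    """Lasso fit plus de-biasing step: b = beta + Theta X^T (y - X beta) / n."""
    n, p = X.shape
    # sklearn's objective ||y - Xb||^2/(2n) + alpha*||b||_1 matches the
    # ||.||^2/n + 2*lambda*||.||_1 formulation with alpha = lambda
    beta = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_
    Theta = nodewise_lasso(X, lam_node)
    return beta + Theta @ X.T @ (y - X @ beta) / n
```

On simulated data the correction term roughly undoes the shrinkage bias of the initial lasso fit on the active coordinates.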
Generalized linear model
Desparsifying $\ell_1$-norm penalized estimators and the corresponding theory can also be applied to models with convex loss functions, such as generalized linear models.
Consider independent vectors of covariables $x_1, \dots, x_n \in \mathcal{X}$ and univariate responses $y_1, \dots, y_n \in \mathcal{Y}$. For $\beta \in \mathbb{R}^p$ we have a loss function $\rho_\beta(y, x) = \rho(y, x^\top \beta)$ which is assumed to be strictly convex in $\beta$. The $\ell_1$-norm regularized estimator is

$\hat\beta = \operatorname*{arg\,min}_{\beta} \left( \frac{1}{n} \sum_{i=1}^n \rho_\beta(y_i, x_i) + \lambda \|\beta\|_1 \right).$
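As one concrete instance of this estimator (an illustrative sketch, not from the source; the simulated data, the penalty level, and the mapping between the averaged-loss $\lambda$ and scikit-learn's `C` are all assumptions), $\ell_1$-penalized logistic regression uses the loss $\rho(y, x^\top\beta) = \log(1 + e^{x^\top\beta}) - y\,x^\top\beta$:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative simulation: sparse logistic model with two active coefficients.
rng = np.random.default_rng(1)
n, p = 500, 20
X = rng.standard_normal((n, p))
beta0 = np.zeros(p)
beta0[0], beta0[1] = 1.5, -1.0
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta0)))

lam = 0.02
# scikit-learn minimizes C * sum_i loss_i + ||beta||_1, so the averaged-loss
# formulation with penalty lam corresponds to C = 1 / (n * lam).
model = LogisticRegression(penalty="l1", C=1.0 / (n * lam),
                           solver="liblinear", fit_intercept=False).fit(X, y)
beta_hat = model.coef_.ravel()   # sparse l1-regularized estimate of beta^0
```

The resulting $\hat\beta$ is sparse and sign-consistent on the active coordinates but, like the linear-model lasso, biased toward zero, which is what the desparsifying step below addresses.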
Similarly, the lasso for nodewise regression can be defined with a matrix input. Denote by $\hat\Sigma$ a matrix which we want to approximately invert using the nodewise lasso. For $j = 1, \dots, p$,

$\hat\gamma_j = \operatorname*{arg\,min}_{\gamma \in \mathbb{R}^{p-1}} \left( \hat\Sigma_{j,j} - 2\hat\Sigma_{j,\setminus j}\,\gamma + \gamma^\top \hat\Sigma_{\setminus j,\setminus j}\,\gamma + 2\lambda_j \|\gamma\|_1 \right),$

where $\hat\Sigma_{j,\setminus j}$ denotes the $j$th row of $\hat\Sigma$ without the diagonal element $(j,j)$, and $\hat\Sigma_{\setminus j,\setminus j}$ is the submatrix of $\hat\Sigma$ without the $j$th row and $j$th column. Collecting these regressions into a surrogate inverse $\hat\Theta$, the desparsified $\ell_1$-norm regularized estimator is

$\hat b = \hat\beta - \hat\Theta\,\frac{1}{n}\sum_{i=1}^n \dot\rho_{\hat\beta}(y_i, x_i),$

where $\dot\rho_\beta$ denotes the gradient of the loss with respect to $\beta$.
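In both the linear and the GLM setting, the point of the desparsifying step is that each component of $\hat b$ becomes asymptotically normal under sparsity and design conditions, which is what yields the confidence intervals and tests mentioned in the lead. A sketch of the linear-model statement from the cited paper (regularity conditions omitted):^{[1]}

```latex
% Componentwise asymptotic normality of the desparsified lasso
% (linear model; sparsity and design conditions omitted):
\sqrt{n}\,\bigl(\hat b_j - \beta^0_j\bigr)
  \;\xrightarrow{\;d\;}\;
  \mathcal{N}\!\bigl(0,\;\sigma_\varepsilon^{2}\,
    (\hat\Theta\hat\Sigma\hat\Theta^{\top})_{jj}\bigr),
\qquad \hat\Sigma := X^{\top}X/n,
% giving the asymptotic two-sided confidence interval for beta^0_j:
\hat b_j \;\pm\; z_{1-\alpha/2}\,
  \sigma_\varepsilon\sqrt{(\hat\Theta\hat\Sigma\hat\Theta^{\top})_{jj}/n}.
```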
References
 ^ van de Geer, Sara; Bühlmann, Peter; Ritov, Ya'acov; Dezeure, Ruben (2014). "On asymptotically optimal confidence regions and tests for high-dimensional models". The Annals of Statistics. 42: 1162–1202. arXiv:1303.0518. doi:10.1214/14-AOS1221.
 ^ Tibshirani, Ryan; Gordon, Geoff. "Karush–Kuhn–Tucker conditions" (PDF).