Asymptotic theory (statistics)
In statistics, asymptotic theory, or large sample theory, is a framework for assessing properties of estimators and statistical tests. Within this framework, it is typically assumed that the sample size n grows indefinitely; the properties of estimators and tests are then evaluated in the limit as n → ∞. In practice, a limit evaluation is treated as approximately valid for large finite sample sizes as well.
Overview
Most statistical problems begin with a dataset of size n. Asymptotic theory proceeds by assuming that it is possible (in principle) to keep collecting additional data, so that the sample size grows infinitely, i.e. n → ∞. Under this assumption, many results can be obtained that are unavailable for samples of finite size. An example is the law of large numbers: for a sequence of independent and identically distributed (IID) random variables X_{1}, X_{2}, …, if one value is drawn from each random variable and the average of the first n values is denoted X̄_{n}, then X̄_{n} converges in probability to the population mean E[X_{i}] as n → ∞.
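The law of large numbers can be checked empirically by simulation. The sketch below is illustrative only (the Exponential(1) distribution and the sample sizes are arbitrary choices, not from the text): it draws IID values whose population mean is E[X_{i}] = 1 and shows the running average settling near that value as n grows.

```python
import random

# Illustrative sketch of the law of large numbers: the average of n IID
# Exponential(1) draws converges in probability to the population mean 1.
def running_mean(n, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += rng.expovariate(1.0)  # one draw of X_i ~ Exp(1)
    return total / n

for n in (10, 1_000, 100_000):
    print(n, running_mean(n))  # the averages approach E[X_i] = 1 as n grows
```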
In asymptotic theory, the standard approach is to let n → ∞. For some statistical models, slightly different asymptotic approaches may be used. For example, with panel data it is commonly assumed that one dimension of the data remains fixed while the other grows: T = constant and N → ∞, or vice versa.
Besides the standard approach to asymptotics, other alternative approaches exist:
 Within the local asymptotic normality framework, it is assumed that the value of the "true parameter" in the model varies slightly with n, such that the nth model corresponds to θ_{n} = θ + h/√n . This approach lets us study the regularity of estimators.
 When statistical tests are studied for their power to distinguish against alternatives that are close to the null hypothesis, it is done within the so-called "local alternatives" framework: the null hypothesis is H_{0}: θ = θ_{0} and the alternative is H_{1}: θ = θ_{0} + h/√n . This approach is especially popular for unit root tests.
 There are models where the dimension of the parameter space Θ_{n} slowly expands with n, reflecting the fact that the more observations there are, the more structural effects can be feasibly incorporated in the model.
 In kernel density estimation and kernel regression, an additional parameter is assumed—the bandwidth h. In those models, it is typically taken that h → 0 as n → ∞. The rate of convergence must be chosen carefully, though, usually h ∝ n^{−1/5}.
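As an illustration of the bandwidth condition in the last point, the following sketch implements a minimal Gaussian kernel density estimator with a bandwidth shrinking at the rate h ∝ n^{−1/5}. The constant 1.06·σ̂ follows Silverman's rule of thumb for roughly normal data; the sample and evaluation point are arbitrary choices, not from the text.

```python
import math
import random

# Minimal Gaussian kernel density estimator (a sketch, not a library API).
# The bandwidth shrinks at the usual rate h ∝ n^(−1/5) as n → ∞.
def kde(data, x):
    n = len(data)
    mean = sum(data) / n
    sd = (sum((v - mean) ** 2 for v in data) / n) ** 0.5
    h = 1.06 * sd * n ** (-1 / 5)  # Silverman-style bandwidth, h → 0 as n → ∞
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data) \
        / (n * h * math.sqrt(2 * math.pi))

rng = random.Random(1)
sample = [rng.gauss(0, 1) for _ in range(2000)]
print(kde(sample, 0.0))  # should be near the N(0,1) density 1/√(2π) ≈ 0.399
```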
In many cases, highly accurate results for finite samples can be obtained via numerical methods (i.e. computers); even in such cases, though, asymptotic analysis can be useful. This point was made by Small (2010, §1.4), as follows.
A primary goal of asymptotic analysis is to obtain a deeper qualitative understanding of quantitative tools. The conclusions of an asymptotic analysis often supplement the conclusions which can be obtained by numerical methods.
Modes of convergence of random variables
Asymptotic properties
Estimators
Consistency
A sequence of estimates is said to be consistent if it converges in probability to the true value of the parameter being estimated:

    θ̂_{n} →^{p} θ_{0}.

That is, roughly speaking, with an infinite amount of data the estimator (the formula for generating the estimates) would almost surely give the correct result for the parameter being estimated.
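Consistency can be illustrated by Monte Carlo: for a consistent estimator, the probability P(|θ̂_{n} − θ_{0}| > ε) shrinks to zero as n grows, for any fixed ε > 0. The sketch below uses illustrative choices throughout (Uniform(0, 1) data, the sample mean as estimator, ε = 0.05) and estimates this miss probability at several sample sizes.

```python
import random

# Illustrative check of consistency (a sketch, not a formal proof):
# estimate P(|θ̂_n − θ_0| > ε) by Monte Carlo for the sample mean of
# Uniform(0, 1) draws, whose true mean is θ_0 = 0.5.
def miss_rate(n, eps=0.05, reps=500, seed=42):
    rng = random.Random(seed)
    misses = 0
    for _ in range(reps):
        est = sum(rng.random() for _ in range(n)) / n
        if abs(est - 0.5) > eps:
            misses += 1
    return misses / reps

for n in (10, 100, 1000):
    print(n, miss_rate(n))  # the miss rate shrinks toward 0 as n grows
```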
Efficiency
Asymptotic distribution
If it is possible to find sequences of non-random constants {a_{n}}, {b_{n}} (possibly depending on the value of θ_{0}), and a non-degenerate distribution G such that

    b_{n}(θ̂_{n} − a_{n}) →^{d} G,

then the sequence of estimators θ̂_{n} is said to have the asymptotic distribution G.
Most often, the estimators encountered in practice are asymptotically normal, meaning their asymptotic distribution is the normal distribution, with a_{n} = θ_{0}, b_{n} = √n, and G = N(0, V):

    √n(θ̂_{n} − θ_{0}) →^{d} N(0, V).
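A simulated sanity check of asymptotic normality (with illustrative choices: Exponential(1) data, so θ_{0} = 1 and V = 1, and the sample mean as estimator) counts how often the scaled error √n(θ̂_{n} − θ_{0}) lands within ±1.96√V; for large n the proportion should be near 95%, the central probability of N(0, V).

```python
import random

# Sketch (not from the article): for IID Exponential(1) data the sample
# mean has θ_0 = 1 and asymptotic variance V = 1, so the scaled error
# √n (θ̂_n − θ_0) is approximately N(0, 1) for large n.
def coverage(n=1000, reps=1000, seed=7):
    rng = random.Random(seed)
    inside = 0
    for _ in range(reps):
        mean = sum(rng.expovariate(1.0) for _ in range(n)) / n
        z = (n ** 0.5) * (mean - 1.0)  # √n (θ̂_n − θ_0), with V = 1
        if abs(z) <= 1.96:             # central 95% interval of N(0, 1)
            inside += 1
    return inside / reps

print(coverage())  # close to 0.95 for large n
```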
Asymptotic confidence regions
Regularity
Asymptotic theorems
 Central limit theorem
 Continuous mapping theorem
 Glivenko–Cantelli theorem
 Law of large numbers
 Law of the iterated logarithm
 Slutsky’s theorem
See also
Notes
References
 Balakrishnan, N.; Ibragimov, I. A.; Nevzorov, V. B., eds. (2001), Asymptotic Methods in Probability and Statistics with Applications, Birkhäuser
 Borovkov, A. A.; Borovkov, K. A. (2010), Asymptotic Analysis of Random Walks, Cambridge University Press
 Buldygin, V. V.; Solntsev, S. (1997), Asymptotic Behaviour of Linearly Transformed Sums of Random Variables, Springer
 Le Cam, Lucien; Yang, Grace Lo (2000), Asymptotics in Statistics (2nd ed.), Springer
 DasGupta, A. (2008), Asymptotic Theory of Statistics and Probability, Springer
 Dawson, D.; Kulik, R.; Ould Haye, M.; Szyszkowicz, B.; Zhao, Y., eds. (2015), Asymptotic Laws and Methods in Stochastics, Springer-Verlag
 Höpfner, R. (2014), Asymptotic Statistics, Walter de Gruyter
 Lin'kov, Yu. N. (2001), Asymptotic Statistical Methods for Stochastic Processes, American Mathematical Society
 Oliveira, P. E. (2012), Asymptotics for Associated Random Variables, Springer
 Petrov, V. V. (1995), Limit Theorems of Probability Theory, Oxford University Press
 Sen, P. K.; Singer, J. M.; Pedroso de Lima, A. C. (2009), From Finite Sample to Asymptotic Methods in Statistics, Cambridge University Press
 Shiryaev, A. N.; Spokoiny, V. G. (2000), Statistical Experiments and Decisions: Asymptotic theory, World Scientific
 Small, C. G. (2010), Expansions and Asymptotics for Statistics, Chapman & Hall
 van der Vaart, A. W. (1998), Asymptotic Statistics, Cambridge University Press