Integration by substitution
In calculus, integration by substitution, also known as u-substitution, is a method for finding integrals. Using the fundamental theorem of calculus often requires finding an antiderivative; for this and other reasons, integration by substitution is an important tool in mathematics. It is the counterpart to the chain rule for differentiation.
Substitution for a single variable
Proposition
Let I ⊆ R be an interval and φ : [a, b] → I be a differentiable function with integrable derivative. Suppose that f : I → R is a continuous function. Then

    ∫_{φ(a)}^{φ(b)} f(u) du = ∫_{a}^{b} f(φ(x)) φ′(x) dx.
In Leibniz notation, the substitution u = φ(x) yields

    du/dx = φ′(x).
Working heuristically with infinitesimals yields the equation

    du = φ′(x) dx,
which suggests the substitution formula above. (This equation may be put on a rigorous foundation by interpreting it as a statement about differential forms.) One may view the method of integration by substitution as a partial justification of Leibniz's notation for integrals and derivatives.
The formula is used to transform one integral into another integral that is easier to compute. Thus, the formula can be applied from left to right or from right to left in order to simplify a given integral. When used in the latter manner, it is sometimes known as u-substitution or w-substitution.
Proof
Integration by substitution can be derived from the fundamental theorem of calculus as follows. Let f and φ be two functions satisfying the above hypotheses: f is continuous on I and φ′ is integrable on the closed interval [a, b]. Then the function f(φ(x))φ′(x) is also integrable on [a, b]. Hence the integrals

    ∫_{φ(a)}^{φ(b)} f(u) du

and

    ∫_{a}^{b} f(φ(x)) φ′(x) dx

in fact exist, and it remains to show that they are equal.
Since f is continuous, it has an antiderivative F. The composite function F ∘ φ is then defined. Since φ is differentiable, combining the chain rule and the definition of an antiderivative gives

    (F ∘ φ)′(x) = F′(φ(x)) φ′(x) = f(φ(x)) φ′(x).
Applying the fundamental theorem of calculus twice gives

    ∫_{a}^{b} f(φ(x)) φ′(x) dx = (F ∘ φ)(b) − (F ∘ φ)(a) = F(φ(b)) − F(φ(a)) = ∫_{φ(a)}^{φ(b)} f(u) du,
which is the substitution rule.
Examples
Example 1: from right to left
Consider the integral

    ∫_{0}^{2} x cos(x^{2} + 1) dx.

If we apply the formula from right to left and make the substitution u = φ(x) = x^{2} + 1, we obtain du = 2x dx and hence x dx = ½ du. Therefore

    ∫_{0}^{2} x cos(x^{2} + 1) dx = ½ ∫_{1}^{5} cos(u) du = ½ (sin 5 − sin 1).

Since the lower limit x = 0 was replaced with u = 0^{2} + 1 = 1, and the upper limit x = 2 with u = 2^{2} + 1 = 5, a transformation back into terms of x was unnecessary.
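The substitution u = x^{2} + 1 above can be sanity-checked numerically. The sketch below assumes the integrand x cos(x^{2} + 1) (an illustrative choice consistent with that substitution and with the limits u = 1 to u = 5) and compares both sides with a midpoint rule:

```python
import math

def midpoint_integral(f, a, b, n=100_000):
    """Approximate the definite integral of f over [a, b] with the midpoint rule."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Left-hand side: integral of x * cos(x^2 + 1) over [0, 2].
lhs = midpoint_integral(lambda x: x * math.cos(x**2 + 1), 0.0, 2.0)

# Right-hand side after u = x^2 + 1, x dx = du/2: (1/2) * integral of cos(u) over [1, 5].
rhs = 0.5 * midpoint_integral(math.cos, 1.0, 5.0)

print(lhs, rhs)  # both ≈ (1/2)(sin 5 − sin 1)
```

Both approximations agree with the closed form ½(sin 5 − sin 1) to high accuracy.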
Example 2: from left to right
For the integral

    ∫_{0}^{1} √(1 − x^{2}) dx,

the formula needs to be used from left to right. The substitution x = sin(u), dx = cos(u) du is useful because √(1 − sin^{2}(u)) = cos(u):

    ∫_{0}^{1} √(1 − x^{2}) dx = ∫_{0}^{π/2} cos^{2}(u) du.

The resulting integral can be computed using integration by parts or a double-angle formula, followed by one more substitution. One can also note that the function being integrated is the upper right quarter of a circle with a radius of one, and hence integrating it from zero to one is the geometric equivalent of the area of one quarter of the unit circle, or π/4.
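This example can also be checked numerically. The sketch below integrates the quarter-circle function √(1 − x^{2}) over [0, 1] and the substituted integrand cos^{2}(u) over [0, π/2], and compares both to π/4:

```python
import math

def midpoint_integral(f, a, b, n=200_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Original integral: quarter of the unit circle over [0, 1].
original = midpoint_integral(lambda x: math.sqrt(1 - x * x), 0.0, 1.0)

# After x = sin(u), dx = cos(u) du: integral of cos^2(u) over [0, pi/2].
substituted = midpoint_integral(lambda u: math.cos(u) ** 2, 0.0, math.pi / 2)

print(original, substituted, math.pi / 4)  # all three ≈ 0.7853981...
```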
Example 3: antiderivatives
Substitution can be used to determine antiderivatives. One chooses a relation between x and u, determines the corresponding relation between dx and du by differentiating, and performs the substitutions. An antiderivative for the substituted function can hopefully be determined; the original substitution between u and x is then undone.
Similar to our first example above, we can determine the following antiderivative with this method:

    ∫ x cos(x^{2} + 1) dx = ½ ∫ cos(u) du = ½ sin(u) + C = ½ sin(x^{2} + 1) + C,

where C is an arbitrary constant of integration.

Note that there were no integral boundaries to transform, but in the last step we had to revert the original substitution u = x^{2} + 1.
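An antiderivative obtained this way is easy to verify by differentiating. The sketch below takes (1/2) sin(x^{2} + 1) as the candidate antiderivative (an illustrative choice consistent with the substitution u = x^{2} + 1) and checks its numerical derivative against x cos(x^{2} + 1):

```python
import math

def F(x):
    """Candidate antiderivative (from u = x^2 + 1): (1/2) sin(x^2 + 1)."""
    return 0.5 * math.sin(x**2 + 1)

def f(x):
    """Original integrand: x * cos(x^2 + 1)."""
    return x * math.cos(x**2 + 1)

# The central-difference derivative of F should reproduce f at sample points.
h = 1e-6
errors = []
for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    dF = (F(x + h) - F(x - h)) / (2 * h)
    errors.append(abs(dF - f(x)))

print(max(errors))  # ≈ 0
```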
Substitution for multiple variables
One may also use substitution when integrating functions of several variables. Here the substitution function (v_{1}, ..., v_{n}) = φ(u_{1}, ..., u_{n}) needs to be injective and continuously differentiable, and the differentials transform as

    dv_{1} ⋯ dv_{n} = |det(Dφ)(u_{1}, ..., u_{n})| du_{1} ⋯ du_{n},
where det(Dφ)(u_{1}, ..., u_{n}) denotes the determinant of the Jacobian matrix of partial derivatives of φ at the point (u_{1}, ..., u_{n}). This formula expresses the fact that the absolute value of the determinant of a matrix equals the volume of the parallelotope spanned by its columns or rows.
More precisely, the change of variables formula is stated in the next theorem:
Theorem. Let U be an open set in R^{n} and φ : U → R^{n} an injective differentiable function with continuous partial derivatives, the Jacobian of which is nonzero for every x in U. Then for any real-valued, compactly supported, continuous function f, with support contained in φ(U),

    ∫_{φ(U)} f(v) dv = ∫_{U} f(φ(u)) |det(Dφ)(u)| du.
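A standard concrete instance of this change-of-variables theorem is the transition to polar coordinates, φ(r, θ) = (r cos θ, r sin θ), whose Jacobian determinant is r. The sketch below (an illustration, not part of the article) uses the substituted integral to recover the area π of the unit disk:

```python
import math

# Change of variables to polar coordinates: (x, y) = φ(r, θ) = (r cos θ, r sin θ).
# The Jacobian matrix of φ has determinant r, so dx dy = r dr dθ.

def disk_area_polar(n=1000):
    """Integrate f ≡ 1 over the unit disk using the substituted (r, θ) integral."""
    total = 0.0
    dr = 1.0 / n
    dth = 2 * math.pi / n
    for i in range(n):
        r = (i + 0.5) * dr          # midpoint in r
        for j in range(n):
            # Integrand f(φ(r, θ)) * |det Dφ| = 1 * r
            total += r * dr * dth
    return total

area = disk_area_polar()
print(area)  # ≈ π
```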
The conditions of the theorem can be weakened in various ways. First, the requirement that φ be continuously differentiable can be replaced by the weaker assumption that φ be merely differentiable and have a continuous inverse (Rudin 1987, Theorem 7.26); by the inverse function theorem, this is guaranteed to hold whenever φ is continuously differentiable. Alternatively, the requirement that det(Dφ) ≠ 0 can be eliminated by applying Sard's theorem (Spivak 1965).
For Lebesgue measurable functions, the theorem can be stated in the following form (Fremlin 2010, Theorem 263D):
Theorem. Let U be a measurable subset of R^{n} and φ : U → R^{n} an injective function, and suppose for every x in U there exists φ′(x) in R^{n×n} such that φ(y) = φ(x) + φ′(x)(y − x) + o(‖y − x‖) as y → x (here o is little-o notation). Then φ(U) is measurable, and for any real-valued function f defined on φ(U),

    ∫_{φ(U)} f(v) dv = ∫_{U} f(φ(x)) |det φ′(x)| dx,
in the sense that if either integral exists (including the possibility of being properly infinite), then so does the other one, and they have the same value.
Another very general version in measure theory is the following (Hewitt & Stromberg 1965, Theorem 20.3):
Theorem. Let X be a locally compact Hausdorff space equipped with a finite Radon measure μ, and let Y be a σ-compact Hausdorff space with a σ-finite Radon measure ρ. Let φ : X → Y be a continuous and absolutely continuous function (where the latter means that ρ(φ(E)) = 0 whenever μ(E) = 0). Then there exists a real-valued Borel measurable function w on X such that for every Lebesgue integrable function f : Y → R, the function (f ∘ φ) ⋅ w is Lebesgue integrable on X, and

    ∫_{Y} f(y) dρ(y) = ∫_{X} (f ∘ φ)(x) w(x) dμ(x).

Furthermore, it is possible to write

    w = g ∘ φ

for some Borel measurable function g on Y.
In geometric measure theory, integration by substitution is used with Lipschitz functions. A bi-Lipschitz function is a Lipschitz function φ : U → R^{n} which is injective and whose inverse function φ^{−1} : φ(U) → U is also Lipschitz. By Rademacher's theorem, a bi-Lipschitz mapping is differentiable almost everywhere. In particular, the Jacobian determinant det Dφ of a bi-Lipschitz mapping is well-defined almost everywhere. The following result then holds:
Theorem. Let U be an open subset of R^{n} and φ : U → R^{n} be a bi-Lipschitz mapping. Let f : φ(U) → R be measurable. Then

    ∫_{U} (f ∘ φ)(x) |det Dφ(x)| dx = ∫_{φ(U)} f(y) dy,
in the sense that if either integral exists (or is properly infinite), then so does the other one, and they have the same value.
The above theorem was first proposed by Euler when he developed the notion of double integrals in 1769. Although generalized to triple integrals by Lagrange in 1773, used by Legendre, Laplace, and Gauss, and first generalized to n variables by Mikhail Ostrogradski in 1836, it resisted a fully rigorous proof for a surprisingly long time. It was first satisfactorily resolved 125 years later by Élie Cartan, in a series of papers beginning in the mid-1890s (Katz 1982; Ferzola 1994).
Application in probability
Substitution can be used to answer the following important question in probability: given a random variable X with probability density p_X and another random variable Y related to X by the equation Y = φ(X), with φ injective, what is the probability density for Y?

It is easiest to answer this question by first answering a slightly different question: what is the probability that Y takes a value in some particular subset S? Denote this probability P(Y ∈ S). Of course, if Y has probability density p_Y, then the answer is

    P(Y ∈ S) = ∫_{S} p_Y(y) dy,

but this is not really useful because we do not know p_Y; it is what we are trying to find. We can make progress by considering the problem in the variable X. Y takes a value in S whenever X takes a value in φ^{−1}(S), so

    P(Y ∈ S) = ∫_{φ^{−1}(S)} p_X(x) dx.

Changing from the variable x to y = φ(x) gives

    P(Y ∈ S) = ∫_{φ^{−1}(S)} p_X(x) dx = ∫_{S} p_X(φ^{−1}(y)) |(φ^{−1})′(y)| dy.

Combining this with our first equation gives

    ∫_{S} p_Y(y) dy = ∫_{S} p_X(φ^{−1}(y)) |(φ^{−1})′(y)| dy,

so

    p_Y(y) = p_X(φ^{−1}(y)) |(φ^{−1})′(y)|.

In the case where X and Y depend on several uncorrelated variables, i.e. p_X = p_X(x_{1}, ..., x_{n}) and y = φ(x), p_Y can be found by substitution in several variables as discussed above. The result is

    p_Y(y) = p_X(φ^{−1}(y)) |det D(φ^{−1})(y)|.
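For a single variable, the density-transformation formula p_Y(y) = p_X(φ^{−1}(y)) |(φ^{−1})′(y)| can be checked numerically. The sketch below assumes X is standard normal and takes φ(x) = e^x (an illustrative choice, making Y log-normal, with φ^{−1}(y) = ln y and derivative 1/y), then verifies that the transformed density integrates to approximately 1:

```python
import math

def p_X(x):
    """Standard normal density for X."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def p_Y(y):
    """Density of Y = exp(X) via p_Y(y) = p_X(ln y) * |d(ln y)/dy| = p_X(ln y) / y."""
    return p_X(math.log(y)) / y

# The transformed density should be a probability density: total mass ≈ 1.
n = 200_000
a, b = 1e-6, 50.0          # truncation covers all but a ~5e-5 tail
h = (b - a) / n
mass = sum(p_Y(a + (i + 0.5) * h) for i in range(n)) * h
print(mass)  # ≈ 1
```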
See also
 Probability density function
 Substitution of variables
 Tangent halfangle substitution
 Trigonometric substitution
References
 Ferzola, Anthony P. (1994), "Euler and differentials", The College Mathematics Journal, 25 (2): 102–111, doi:10.2307/2687130
 Fremlin, D.H. (2010), Measure Theory, Volume 2, Torres Fremlin, ISBN 9780953812974.
 Hewitt, Edwin; Stromberg, Karl (1965), Real and Abstract Analysis, SpringerVerlag, ISBN 9780387045597.
 Katz, V. (1982), "Change of variables in multiple integrals: Euler to Cartan", Mathematics Magazine, 55 (1): 3–11, doi:10.2307/2689856
 Rudin, Walter (1987), Real and Complex Analysis, McGrawHill, ISBN 9780070542341.
 Spivak, Michael (1965), Calculus on Manifolds, Westview Press, ISBN 9780805390216.
External links
The Wikibook Calculus has a page on the topic of: The Substitution Rule 
Wikiversity has learning resources about Integration by Substitution 
 Integration by substitution at Encyclopedia of Mathematics
 Area formula at Encyclopedia of Mathematics