Most functions are hard to work with algebraically. Sine, cosine, the exponential — you can evaluate them at specific points with a calculator, but you cannot divide them, factor them, or slot them easily into algebraic manipulations. Taylor series fix this by replacing any smooth function with an infinite polynomial. Polynomials are easy to differentiate, integrate, and manipulate term by term.

The trade-off is that the polynomial is infinite. But for practical purposes — approximations, physics calculations, error bounds — a few terms are usually enough, and the full infinite series gives the exact function wherever it converges.

The Big Idea

A Taylor series expresses a function as an infinite sum of polynomial terms centred at a point a: f(x) = Σ [f⁽ⁿ⁾(a)/n!]·(x−a)ⁿ. If centred at a=0, it is called a Maclaurin series. The coefficients are determined by the derivatives of f at a.

The Core Formula

f(x) = Σ_{n=0}^∞ [f⁽ⁿ⁾(a)/n!]·(x−a)ⁿ = f(a) + f'(a)(x−a) + f''(a)(x−a)²/2! + ···. Each coefficient is determined by the nth derivative of f at the centre point a.

Where Taylor Series Come From

Suppose we want to approximate f(x) near x = a by a polynomial p(x) = c₀ + c₁(x−a) + c₂(x−a)² + ···. For p to match f as closely as possible at x = a, we require p(a) = f(a), p'(a) = f'(a), p''(a) = f''(a), and so on — all derivatives must match. Substituting x = a into p and its derivatives: c₀ = f(a), c₁ = f'(a), 2c₂ = f''(a) → c₂ = f''(a)/2!, and in general cₙ = f⁽ⁿ⁾(a)/n!. This derivation shows why the factorial denominators appear — they cancel the constant factors that appear when differentiating xⁿ repeatedly.
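The coefficient rule cₙ = f⁽ⁿ⁾(a)/n! is easy to check numerically. The sketch below (the helper name `taylor_poly` is my own, not from the text) builds a Taylor polynomial from a list of derivative values at a and compares it against eˣ, whose derivatives at 0 are all 1:

```python
import math

def taylor_poly(derivs_at_a, a, x):
    """Evaluate sum_n f^(n)(a)/n! * (x - a)^n given the derivative values at a."""
    return sum(d / math.factorial(n) * (x - a) ** n
               for n, d in enumerate(derivs_at_a))

# f(x) = e^x at a = 0: every derivative is e^0 = 1.
print(taylor_poly([1.0] * 8, 0.0, 0.5))   # close to...
print(math.exp(0.5))                       # ...the true value
```

With eight derivative values the two printed numbers agree to several decimal places, exactly as the matching-derivatives derivation predicts.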

Why Taylor Series Are So Powerful

Once you express a function as a power series, you can: differentiate term by term (getting the derivative's Taylor series), integrate term by term (getting the antiderivative's series), evaluate at specific points to arbitrary precision, solve differential equations whose solutions have no closed form, prove transcendental identities like Euler's formula eⁱˣ = cos x + i sin x, and compute limits of indeterminate forms more precisely than L'Hôpital's Rule.

Deriving the Five Essential Series

eˣ at a=0: f⁽ⁿ⁾(x) = eˣ for all n, so f⁽ⁿ⁾(0) = 1. Series: 1 + x + x²/2! + x³/3! + ··· = Σxⁿ/n!.

sin x at a=0: Derivatives cycle sin→cos→−sin→−cos. At 0: 0, 1, 0, −1, 0, 1, ... Series: x − x³/3! + x⁵/5! − ···. Only odd powers, alternating signs.

cos x at a=0: Derivatives at 0: 1, 0, −1, 0, 1, ... Series: 1 − x²/2! + x⁴/4! − ···. Only even powers, alternating signs.

1/(1−x) at a=0: f⁽ⁿ⁾(x) = n!/(1−x)^(n+1), so f⁽ⁿ⁾(0) = n!. Series: Σxⁿ = 1+x+x²+··· (geometric series, converges for |x|<1).

ln(1+x) at a=0: Integrate the geometric series 1/(1+x) = Σ(−x)ⁿ term by term: ln(1+x) = x − x²/2 + x³/3 − ··· (converges for |x|≤1, x≠−1).
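All five series can be verified by summing a few terms and comparing with the library functions. A minimal sketch (the helper `partial_sum` and the coefficient names are mine): each lambda returns the coefficient of xⁿ in the corresponding series above.

```python
import math

def partial_sum(coef, x, n_terms):
    """Sum the first n_terms of the power series with coefficient function coef(n)."""
    return sum(coef(n) * x ** n for n in range(n_terms))

exp_c = lambda n: 1 / math.factorial(n)                      # e^x
sin_c = lambda n: 0 if n % 2 == 0 else (-1) ** ((n - 1) // 2) / math.factorial(n)
cos_c = lambda n: 0 if n % 2 == 1 else (-1) ** (n // 2) / math.factorial(n)
geo_c = lambda n: 1.0                                        # 1/(1-x), |x| < 1
log_c = lambda n: 0.0 if n == 0 else (-1) ** (n + 1) / n     # ln(1+x)

x = 0.3
print(partial_sum(exp_c, x, 12), math.exp(x))
print(partial_sum(sin_c, x, 12), math.sin(x))
print(partial_sum(cos_c, x, 12), math.cos(x))
print(partial_sum(geo_c, x, 60), 1 / (1 - x))
print(partial_sum(log_c, x, 60), math.log1p(x))
```

Note how the two series with finite radius of convergence (the geometric series and ln(1+x)) need far more terms at x = 0.3 than eˣ, sin, and cos, whose factorial denominators shrink the terms very quickly.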

Euler's Formula — The Most Beautiful Equation

Substitute ix into the series for eˣ: e^(ix) = 1 + ix + (ix)²/2! + (ix)³/3! + ··· = 1 + ix − x²/2! − ix³/3! + x⁴/4! + ···. Separating real and imaginary parts: real = 1 − x²/2! + x⁴/4! − ··· = cos x. Imaginary = x − x³/3! + x⁵/5! − ··· = sin x. Therefore e^(ix) = cos x + i sin x. Setting x = π gives Euler's identity: e^(iπ) + 1 = 0, which connects the five most fundamental constants in mathematics.
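The identity is easy to confirm numerically with Python's complex-math module, a quick sanity check on the series manipulation above:

```python
import cmath
import math

x = 1.2
lhs = cmath.exp(1j * x)                    # complex exponential e^(ix)
rhs = complex(math.cos(x), math.sin(x))    # cos x + i sin x
print(abs(lhs - rhs))                      # essentially zero

# Euler's identity: e^(iπ) + 1 vanishes, up to floating-point rounding.
print(abs(cmath.exp(1j * math.pi) + 1))
```

Both differences come out at the level of machine epsilon (~10⁻¹⁶), which is as close to zero as floating-point arithmetic allows.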

Taylor Series in Practice — Approximation and Error

The nth-degree Taylor polynomial Tₙ(x) approximates f(x) near a. The error is bounded by the Lagrange remainder: |f(x) − Tₙ(x)| ≤ M·|x−a|^(n+1)/(n+1)!, where M bounds |f⁽ⁿ⁺¹⁾| on the interval between a and x. This lets you determine how many terms you need for a given accuracy. For example: to approximate e = e¹ with the Maclaurin series to within 0.001, take M = e < 3 and find the smallest n with 3/(n+1)! < 0.001. This is satisfied by n = 6 (3/7! ≈ 0.0006; the actual error is ≈ 0.00023).
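The same calculation can be automated. A sketch (the function name `degree_needed` is mine) that searches for the smallest degree whose Lagrange bound falls below a tolerance, using M = 3 > e as the derivative bound on [0, 1]:

```python
import math

def degree_needed(tol, M=3.0, x=1.0, a=0.0):
    """Smallest degree n whose Lagrange bound M*|x-a|^(n+1)/(n+1)! is below tol."""
    n = 0
    while M * abs(x - a) ** (n + 1) / math.factorial(n + 1) >= tol:
        n += 1
    return n

n = degree_needed(0.001)                         # bound M = 3 > e on [0, 1]
approx = sum(1 / math.factorial(k) for k in range(n + 1))
print(n, approx, abs(math.e - approx))           # actual error sits inside the bound
```

For a tolerance of 0.001 this returns degree 6, and the actual error of the degree-6 partial sum is indeed smaller than the guaranteed bound.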

Computing with Taylor Series — Important Technique

For complicated expressions, Taylor series often settle indeterminate limits faster than repeated applications of L'Hôpital's Rule. Example: lim(x→0) (eˣ − 1 − x)/x². Substitute eˣ = 1 + x + x²/2 + x³/6 + ···, so the numerator is x²/2 + x³/6 + ···. Dividing by x² gives 1/2 + x/6 + ···, so the limit is 1/2. Two lines, versus two applications of L'Hôpital's Rule.
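The series prediction 1/2 + x/6 + ··· can be seen numerically by evaluating the ratio at shrinking x (the function name `ratio` is mine):

```python
import math

def ratio(x):
    """(e^x - 1 - x) / x^2, the indeterminate form from the text."""
    return (math.exp(x) - 1 - x) / (x * x)

# The series says ratio(x) = 1/2 + x/6 + ..., so values approach 0.5.
# (For extremely small x, floating-point cancellation eventually dominates.)
for x in (0.1, 0.01, 0.001):
    print(x, ratio(x))
```

Each printed value is close to 0.5 + x/6, matching the first two terms of the series.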

Frequently Asked Questions
What is the radius of convergence?
A power series Σaₙxⁿ converges for |x| < R and diverges for |x| > R, where R is the radius of convergence (behaviour at |x| = R must be checked case by case). The root test gives R = 1/limsup|aₙ|^(1/n). For eˣ, sin x, cos x: R = ∞ (they converge everywhere). For 1/(1−x): R = 1 (converges only for |x| < 1).
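The root-test formula can be approximated by evaluating |aₙ|^(−1/n) at a single large n, a rough heuristic rather than a true limsup (the helper name `radius_estimate` is mine):

```python
import math

def radius_estimate(coef, n=100):
    """Root-test estimate of R = 1 / limsup |a_n|^(1/n), using one large n."""
    a_n = abs(coef(n))
    return float("inf") if a_n == 0.0 else a_n ** (-1.0 / n)

# Geometric series: a_n = 1, so the estimate is exactly R = 1.
print(radius_estimate(lambda k: 1.0))
# e^x: a_n = 1/n!, so the estimate keeps growing as n increases, reflecting R = ∞.
print(radius_estimate(lambda k: 1 / math.factorial(k)))
```

For the geometric series the estimate is exact; for eˣ it grows without bound as n increases (roughly n/e by Stirling's approximation), signalling an infinite radius.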
Why are Taylor series useful in computing?
Computers cannot evaluate sin(x) algebraically; they compute it with polynomial approximations rooted in Taylor series. Likewise ln, exp, and the other trig functions in maths libraries and hardware are implemented as polynomials, with the coefficients refined (e.g. by minimax fitting) and enough terms for the required precision.
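A minimal sketch of the idea: a degree-7 Maclaurin polynomial for sin, evaluated in Horner form on a reduced range like [−π/4, π/4]. Production libraries use range reduction plus minimax-tuned coefficients rather than raw Taylor ones, so this is an illustration of the principle, not a real implementation:

```python
import math

def sin_approx(x):
    """Degree-7 Maclaurin polynomial for sin, evaluated in Horner form."""
    x2 = x * x
    return x * (1 + x2 * (-1/6 + x2 * (1/120 + x2 * (-1/5040))))

# Worst-case error over the reduced range [-pi/4, pi/4]:
worst = max(abs(sin_approx(-0.785 + k * 0.005) - math.sin(-0.785 + k * 0.005))
            for k in range(315))
print(worst)
```

Even with only four terms the worst-case error on the reduced range is below 10⁻⁶, which is why range reduction plus a short polynomial is the standard strategy.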
Dr. Aisha Malik, PhD Mathematics
Senior Lecturer in Applied Mathematics · 12 years teaching calculus at university level

Dr. Malik holds a PhD in Applied Mathematics from the University of Edinburgh and has taught calculus to over 4,000 students at both undergraduate and postgraduate level. Her research focuses on numerical methods for differential equations. She has reviewed this article for mathematical accuracy and pedagogical clarity.

Technically reviewed by: Prof. James Chen, Stanford Mathematics Department