
Second-Order, Linear Differential Equations

classification: second-order linear

\(y'' + p(t)y' + q(t)y = g(t)\)

The discussion of second order linear equations is broken into two main areas based on whether the equation is homogeneous (\(g(t)=0\)) or inhomogeneous (\(g(t)\neq 0\), also called nonhomogeneous). We discuss solutions to homogeneous equations on this page. Solutions to inhomogeneous equations are discussed on separate pages covering two techniques, undetermined coefficients and variation of parameters.

Homogeneous Equations

Since the techniques for second order equations build on one another, we will first consider homogeneous equations. Our second order equations look like \(y'' + p(t)y' + q(t)y = g(t)\); when they are homogeneous, \(g(t)=0\), giving us \(y'' + p(t)y' + q(t)y = 0 \).

Constant Coefficients

Homogeneous equations with constant coefficients look like \(\displaystyle{ ay'' + by' + cy = 0 }\) where a, b and c are constants. We also require that \( a \neq 0 \) since, if \( a = 0 \) we would no longer have a second order differential equation.

When introducing this topic, textbooks often just pull out of the air that possible solutions are exponential functions. Unfortunately, at this point, you just have to take their word for it. You can check that exponential functions do satisfy the equation, but where they come from will remain something of a mystery for now.

The idea is to find the roots of the polynomial equation \(ar^2+br+c=0\), where a, b and c are the constants from the above differential equation. This equation is called the characteristic equation of the differential equation. If we call the roots of this polynomial \(r_1\) and \(r_2\), then two solutions to the differential equation are
\( y_1 = e^{r_1t} \) and \( y_2 = e^{r_2t} \).
Usually we combine these into the single general solution \( y = c_1 e^{r_1t} + c_2 e^{r_2t} \), where \(c_1\) and \(c_2\) are arbitrary constants. This table summarizes these comments.

differential equation: \(\displaystyle{ ay'' + by' + cy = 0 }\)

characteristic equation: \( ar^2+br+c=0 \)

roots: \(r_1\) and \(r_2\), real, distinct

general solution: \(\displaystyle{ y = c_1 e^{r_1t} + c_2 e^{r_2t} }\)

That's the basic idea. Now, if the roots of the polynomial are complex or repeated, there are slight variations of this idea. But in general, that's how this type of differential equation is solved.
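To make the technique concrete, here is a minimal sketch in Python (the helper name `char_roots` is made up for illustration) that finds the roots of the characteristic equation with the quadratic formula:

```python
import cmath

def char_roots(a, b, c):
    """Roots of the characteristic equation a*r^2 + b*r + c = 0.

    cmath.sqrt handles a negative discriminant, so complex roots
    come out correctly too.
    """
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

# Hypothetical example: y'' - y' - 6y = 0 has characteristic equation
# r^2 - r - 6 = (r - 3)(r + 2) = 0, so the roots are 3 and -2.
r1, r2 = char_roots(1, -1, -6)
print(r1, r2)  # (3+0j) (-2+0j)  ->  y = c1*e^(3t) + c2*e^(-2t)
```

Once the roots are known, the general solution is read off directly from the table above.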

Okay, here are some practice problems for the case where the roots are real and distinct. Working these should get you familiar with the technique before moving on to more involved problems. Solve these differential equations by determining the general solution.

Practice 1




Practice 2




Practice 3




Practice 4




Practice 5



Solve these differential equations using the initial conditions to determine the particular solutions.

Practice 6

\(y''-4y'-5y=0\); \(y(0)=1, y'(0)=0\)



Practice 7

\(y''-18y'+77y=0\); \(y(0)=4, y'(0)=8\)



Practice 8

\(2y''-11y'+12y=0\); \(y(0)=5, y'(0)=15\)


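To see how initial conditions pin down the constants, here is a hypothetical worked example (not one of the practice problems): \(y''-y'-6y=0\) with \(y(0)=3\), \(y'(0)=1\). The characteristic equation \(r^2-r-6=(r-3)(r+2)=0\) gives roots 3 and -2, and the constants follow from a 2x2 linear system:

```python
import math

# Hypothetical IVP: y'' - y' - 6y = 0 with y(0) = 3, y'(0) = 1.
# General solution: y = c1*e^(3t) + c2*e^(-2t), so the initial conditions give
#   c1 + c2 = y(0)   and   3*c1 - 2*c2 = y'(0).
r1, r2 = 3.0, -2.0
a0, a1 = 3.0, 1.0   # y(0), y'(0)

# Solve c1 + c2 = a0, r1*c1 + r2*c2 = a1 by elimination.
c1 = (a1 - r2 * a0) / (r1 - r2)
c2 = a0 - c1
print(c1, c2)  # 1.4 1.6

def y(t):
    """Particular solution satisfying the initial conditions."""
    return c1 * math.exp(r1 * t) + c2 * math.exp(r2 * t)
```

So the particular solution is \( y = 1.4e^{3t} + 1.6e^{-2t} \). The practice problems above work the same way.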

Real Repeated Roots

So, if the roots are real and repeated, we call this root \(r_1 = r_2 = r\). From the above discussion, we have one solution \( y_1 = c_1 e^{rt} \). The second solution is obtained by multiplying the first solution by t to get \( y_2 = c_2 te^{rt} \). (The reduction of order page contains an explanation of where this comes from.) So the combined solution is \( y = c_1 e^{rt} + c_2 te^{rt} \).
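You can spot-check this claim numerically. The sketch below verifies, at a few sample points, that \(y = te^{rt}\) satisfies \(y'' + 6y' + 9y = 0\), whose characteristic equation \((r+3)^2 = 0\) has the repeated root \(r = -3\):

```python
import math

r = -3.0  # repeated root of r^2 + 6r + 9 = (r + 3)^2 = 0

def y(t):   return t * math.exp(r * t)                      # proposed second solution
def yp(t):  return math.exp(r * t) * (1 + r * t)            # y' via the product rule
def ypp(t): return math.exp(r * t) * (2 * r + r * r * t)    # y''

# Plug y into y'' + 6y' + 9y at several points; the residual should vanish.
residual = max(abs(ypp(t) + 6 * yp(t) + 9 * y(t)) for t in (0.0, 0.5, 1.0, 2.0))
print(residual)  # essentially zero (floating-point noise only)
```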

Okay, before we go on to discuss complex roots, let's watch this video discussing these two cases so far. This is a very in-depth discussion with great examples using a spring-mass-damped system. It will be well worth your time to watch it carefully.

MIT OCW - complex roots

Now let's work some practice problems with real, repeated roots. Solve these differential equations by determining the general solution. If initial conditions are given, use them to determine the particular solution.

Practice 9




Practice 10

\(y''+22y'+121y=0\); \(y(0)=2, y'(0)=-25\)



Practice 11




Practice 12




Practice 13




Complex Roots

Euler's Formula

\( e^{\pm i\mu} = \cos(\mu) \pm i \sin(\mu) \)

When the roots in the above discussion are complex, they are in the form \(r_1 = \alpha + i\beta \) and \(r_2 = \alpha - i\beta \) and our solution looks like \(\displaystyle{ y = c_1 e^{(\alpha + i\beta) t} + c_2 e^{(\alpha - i\beta)t} }\). This form is fine but it will look better if we use Euler's Formula to put it into another form.

From calculus we know that
\(\displaystyle{ e^x = \sum_{n=0}^{\infty}{ \left[ \frac{x^n}{n!} \right] }, ~~~ -\infty < x < \infty }\).
This is the Taylor series for \(e^x\) about \(x=0\).
If we let \( x = it \), then this sum becomes
\(\displaystyle{ e^{it} = \sum_{n=0}^{\infty}{\frac{(it)^n}{n!}} = \sum_{n=0}^{\infty}{\frac{(-1)^nt^{2n}}{(2n)!}} + i\sum_{n=1}^{\infty}{\frac{(-1)^{n-1}t^{2n-1}}{(2n-1)!}} }\)
Notice that we have separated the real and imaginary parts of the series (see note below). The first series is the Taylor series for \(\cos(t)\) about \(t=0\) and the second series is the Taylor series for \(\sin(t)\) about \(t=0\). So this gives us Euler's Formula
\( e^{it} = \cos(t) + i \sin(t).\)

Now that we have Euler's Formula, we can solve homogeneous equations with constant coefficients when the characteristic equation has complex roots, just as we did when the roots were real and not equal.

Note: When substituting \( x=it \) we have moved from the real domain to the complex plane. For the sake of argument we will assume this jump is valid without proof since this discussion is only meant to give you a feel for Euler's Formula.
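As a quick sanity check (not a proof), we can compare both sides of Euler's Formula numerically at a few values of t:

```python
import cmath
import math

# Compare e^(i t) against cos(t) + i sin(t) at several sample points.
for t in (0.0, 1.0, math.pi / 3, 2.5):
    lhs = cmath.exp(1j * t)
    rhs = complex(math.cos(t), math.sin(t))
    assert abs(lhs - rhs) < 1e-12, t
print("Euler's Formula holds at all sample points")
```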

Here is a video explaining this in more detail.

MIT OCW - more with Euler's Formula

After some manipulation, we can write this solution as \( y = e^{\alpha t}( A\cos(\beta t) + B\sin(\beta t) ) \). This video shows the details. It is a continuation of the spring-mass-damped system video.

PatrickJMT - spring-mass-damped system

The following table summarizes these three cases.

real, distinct roots \(r_1, r_2\): \(\displaystyle{ y = c_1 e^{r_1t} + c_2 e^{r_2t} }\)

real, repeated roots \(r_1 = r_2 = r\): \( y = c_1 e^{rt} + c_2 te^{rt} \)

complex roots \(r = \alpha \pm i\beta \): \( y = e^{\alpha t}( A\cos(\beta t) + B\sin(\beta t) ) \)
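To see the complex-roots form in action, the sketch below checks numerically (using central finite differences, with the hypothetical constants \(A=2\), \(B=5\)) that \(y = e^{3t}(2\cos t + 5\sin t)\) satisfies \(y'' - 6y' + 10y = 0\); its characteristic equation \(r^2 - 6r + 10 = 0\) has roots \(r = 3 \pm i\), so \(\alpha = 3\) and \(\beta = 1\):

```python
import math

alpha, beta = 3.0, 1.0   # from the roots r = 3 +/- i
A, B = 2.0, 5.0          # hypothetical constants; any values should work

def y(t):
    return math.exp(alpha * t) * (A * math.cos(beta * t) + B * math.sin(beta * t))

h = 1e-4  # step size for the finite differences

def residual(t):
    """Evaluate y'' - 6y' + 10y at t using central differences."""
    yp  = (y(t + h) - y(t - h)) / (2 * h)
    ypp = (y(t + h) - 2 * y(t) + y(t - h)) / (h * h)
    return ypp - 6 * yp + 10 * y(t)

print(max(abs(residual(t)) for t in (0.0, 0.5, 1.0)))  # small: finite-difference error only
```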

Okay, let's work some practice problems with complex roots. Solve these differential equations by determining the general solution. If initial conditions are given, use them to determine the particular solution.

Practice 14

\( y''-6y'+10y=0 \)



Practice 15

\( y''+4y'+13y=0 \)


Practice 16




Practice 17




Practice 18

\(y''+4y'+20y=0\); \(y(0)=9, y'(0)=10\)



Practice 19



Non-Constant Coefficients

When the coefficients \(p(t)\) and \(q(t)\) are not constant, certain special forms can still be solved directly. One important classification is the Cauchy-Euler equation, which we cover on a separate page, along with Chebyshev equations.


Theorem: Existence and the Wronskian

For the second order linear homogeneous differential equation \( y''+p(t)y' + q(t)y = 0\) with initial conditions \( y(t_0) = a_0, y'(t_0) = a_1\), suppose that we have found two solutions \(y_1(t)\) and \(y_2(t)\). Then it is possible to determine the constants \( c_1, c_2 \) so that \( y(t) = c_1 y_1(t) + c_2 y_2(t)\) satisfies the differential equation and initial conditions if and only if the Wronskian \[ W = \begin{vmatrix} y_1(t_0) & y_2(t_0) \\ y_1'(t_0) & y_2'(t_0) \end{vmatrix} \] is not zero.

To use this theorem, we don't always need initial conditions. If we calculate the Wronskian \[ W = \begin{vmatrix} y_1(t) & y_2(t) \\ y_1'(t) & y_2'(t) \end{vmatrix} \] and determine that it is non-zero everywhere, then the theorem still applies and we can satisfy initial conditions specified at any value of t.

Determining Constants - We can use the Wronskian \(W\) to actually calculate the constants \( c_1, c_2 \). By substituting the initial conditions, we get the two equations with two unknowns \[ \begin{array}{rcrcrcr} c_1 y_1(t_0) & + & c_2 y_2(t_0) & = & a_0 \\ c_1 y_1'(t_0) & + & c_2 y_2'(t_0) & = & a_1 \end{array} \]



We can use Cramer's Rule to solve this system of linear equations. For a reminder on how to do this, check out the linear algebra page.
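As a sketch of that calculation (the helper name `cramer_constants` is made up for illustration), Cramer's Rule gives \(c_1\) and \(c_2\) directly in terms of the Wronskian. The example values assume the hypothetical solutions \(y_1 = e^t\), \(y_2 = e^{-t}\) with \(t_0 = 0\), \(y(0) = 3\), \(y'(0) = 1\):

```python
def cramer_constants(y1, y2, y1p, y2p, a0, a1):
    """Solve c1*y1 + c2*y2 = a0, c1*y1p + c2*y2p = a1 by Cramer's Rule.

    Arguments are the values y1(t0), y2(t0), y1'(t0), y2'(t0) and the
    initial conditions a0 = y(t0), a1 = y'(t0).
    """
    W = y1 * y2p - y2 * y1p  # Wronskian at t0; the theorem requires W != 0
    if W == 0:
        raise ValueError("Wronskian is zero: y1 and y2 are not independent at t0")
    c1 = (a0 * y2p - y2 * a1) / W  # numerator: replace column 1 with (a0, a1)
    c2 = (y1 * a1 - a0 * y1p) / W  # numerator: replace column 2 with (a0, a1)
    return c1, c2

# y1 = e^t, y2 = e^(-t) at t0 = 0: y1(0)=1, y2(0)=1, y1'(0)=1, y2'(0)=-1.
print(cramer_constants(1.0, 1.0, 1.0, -1.0, 3.0, 1.0))  # (2.0, 1.0)
```

So \( y = 2e^t + e^{-t} \), which indeed satisfies \( y(0)=3 \) and \( y'(0)=2-1=1 \).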

Here is a video discussing the Wronskian, superposition and uniqueness.

MIT OCW - Wronskian, superposition and uniqueness

After working the practice problems on this page, you are ready to start on the inhomogeneous case. To get started, check out the top of the undetermined coefficients page.

next: undetermined coefficients →
