
General Solution for a First-Order Homogeneous Linear System of Equations

$\text{General Solution} = \begin{cases} c_1 e^{\lambda_1 t}\begin{pmatrix} x_1 \\ y_1 \end{pmatrix} + c_2 e^{\lambda_2 t}\begin{pmatrix} x_2 \\ y_2 \end{pmatrix} & \text{real eigenvalues} \\ c_1 e^{\gamma t}\begin{pmatrix} x_1 \\ y_1 \end{pmatrix} + c_2 e^{\gamma t}\begin{pmatrix} x_2 \\ y_2 \end{pmatrix} & \text{complex eigenvalues} \end{cases}$

Interestingly, the computation of eigenvalues and eigenvectors can serve many purposes; however, when it comes to differential equations, eigenvalues and eigenvectors are most often used to find straight-line solutions to a system of equations.

For this reason, we are going to review how to compute eigenvalues and eigenvectors and then put the computations together to build the general solution. The form of the general solution varies slightly depending on whether the eigenvalues are real and distinct, real and repeated, or complex.

Computation of Eigenvalues

First let's review how to find eigenvalues!

To find eigenvalues, we use the formula:
     $A\mathbf{v} = \lambda \mathbf{v}$

where $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ and $\mathbf{v} = \begin{pmatrix} x \\ y \end{pmatrix}$. This becomes
     $\begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \lambda \begin{pmatrix} x \\ y \end{pmatrix}$, which can be written in components as
     $ax + by = \lambda x$
     $cx + dy = \lambda y$

We want to solve for a non-zero solution, so we rewrite the system as
     $(a-\lambda)x + by = 0$
     $cx + (d-\lambda)y = 0$

If the determinant of the coefficient matrix above were non-zero, the only solution of this system would be the trivial one, $(x, y) = (0, 0)$ (just as the origin is the only equilibrium point of a linear system whose matrix has non-zero determinant). To obtain a non-zero solution we therefore take the determinant and set it equal to zero:
     $\det\begin{pmatrix} a-\lambda & b \\ c & d-\lambda \end{pmatrix} = 0$

Every time we compute eigenvalues and eigenvectors we use this format, which can also be written as $\det(A - \lambda I) = 0$, where $I$ is the identity matrix $I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$. Computation of $\det(A - \lambda I) = 0$ leads to the characteristic polynomial, and the roots of this polynomial are the eigenvalues of the matrix $A$.

We have $\det(A - \lambda I) = \det\begin{pmatrix} a-\lambda & b \\ c & d-\lambda \end{pmatrix} = (a-\lambda)(d-\lambda) - bc = 0$, which expands to the quadratic polynomial
     $\lambda^2 - (a+d)\lambda + (ad - bc) = 0.$
It is also useful to look at how the characteristic polynomial relates to the trace and determinant of the matrix.
Given a matrix $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ with characteristic polynomial $\lambda^2 - (a+d)\lambda + (ad - bc)$, we see that the characteristic polynomial is equivalent to $\lambda^2 - T\lambda + D = 0$, where $T = \text{trace} = a + d$ and $D = \text{determinant} = ad - bc$.
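To make this relationship concrete, here is a short SymPy sketch (my own illustration, not part of the original page) that builds the characteristic polynomial of a symbolic 2x2 matrix and checks that it matches $\lambda^2 - T\lambda + D$:

```python
# Illustrative SymPy sketch (not part of the original page): the characteristic
# polynomial of a generic 2x2 matrix equals lambda^2 - T*lambda + D,
# where T is the trace and D is the determinant.
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
lam = sp.Symbol('lambda')

A = sp.Matrix([[a, b], [c, d]])
char_poly = A.charpoly(lam).as_expr()             # det(lambda*I - A), expanded
trace_det_form = lam**2 - A.trace() * lam + A.det()

print(sp.expand(char_poly - trace_det_form))      # prints 0
```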

The characteristic polynomial always has two roots. These roots can be real or complex, and they do not have to be distinct. If the roots are complex we say that the matrix has complex eigenvalues. Otherwise, we say that the matrix has real eigenvalues. 

You may come across a system where the two roots of the characteristic polynomial are the same real number. Do not fret; we will review examples of this case, along with others, as we go along.

Another important thing to review is the quadratic formula, a formula that is so useful, but so often forgotten!
Quadratic formula: $x = \dfrac{-b \pm \sqrt{b^2 - 4ac}}{2a}$
Given a quadratic polynomial $x^2 + 7x + 3$:

  • the coefficient of x2 is a
  • the coefficient of x is b
  • the remaining scalar is c

Written abstractly, such a quadratic polynomial takes the form $ax^2 + bx + c$.

This is important to review because it allows us to efficiently find the roots of the characteristic polynomial, which are the eigenvalues, λ. Just as trace and determinant relate to the characteristic polynomial, they also relate to the quadratic formula, which can be written as:

     $\lambda = \dfrac{T \pm \sqrt{T^2 - 4D}}{2}$

This makes sense because the characteristic polynomial $\lambda^2 - T\lambda + D$ is a quadratic in $\lambda$ with coefficients $a = 1$, $b = -T$, and $c = D$.
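If it helps to see this in code, here is a minimal Python sketch (the function name is my own) that computes the eigenvalues of a 2x2 matrix directly from its trace and determinant:

```python
import cmath  # complex square root handles the case T^2 - 4D < 0

def eigenvalues_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via lambda = (T +/- sqrt(T^2 - 4D)) / 2."""
    T = a + d            # trace
    D = a * d - b * c    # determinant
    root = cmath.sqrt(T**2 - 4 * D)
    return (T + root) / 2, (T - root) / 2

# Example: the complex-eigenvalue matrix used later on this page
print(eigenvalues_2x2(-2, -3, 3, -2))   # ((-2+3j), (-2-3j))
```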

Examples

Here are examples of how to solve for each kind of eigenvalue:

Let's begin with an example where we compute real eigenvalues:
Suppose we have the matrix:
     $A = \begin{pmatrix} 5 & 4 \\ 3 & 2 \end{pmatrix}$

     $\det(A - \lambda I) = \det\begin{pmatrix} 5-\lambda & 4 \\ 3 & 2-\lambda \end{pmatrix} = (5-\lambda)(2-\lambda) - 4\cdot 3 = 0$
     $(5-\lambda)(2-\lambda) - 12 = \lambda^2 - 7\lambda + (-2) = 0$

The roots are:
     $\lambda = \dfrac{7 \pm \sqrt{49 + 8}}{2} = \dfrac{7 \pm \sqrt{57}}{2}$
     $\lambda_1 \approx 7.27,\quad \lambda_2 \approx -0.27$

Now let's take an example of where we compute repeated eigenvalues:
     $A = \begin{pmatrix} 7 & 1 \\ -4 & 3 \end{pmatrix}$

     $\det(A - \lambda I) = \det\begin{pmatrix} 7-\lambda & 1 \\ -4 & 3-\lambda \end{pmatrix} = (7-\lambda)(3-\lambda) - (-4\cdot 1) = 0$
     $(7-\lambda)(3-\lambda) + 4 = \lambda^2 - 10\lambda + 25 = 0$
     $(\lambda - 5)^2 = 0$

The roots are:
     $\lambda_{1,2} = 5$

Now we will compute complex eigenvalues:
Before we start we should review what it means to have a complex number. "Complex numbers are numbers of the form $x + iy$, where $x$ and $y$ are real numbers and $i$ is the 'imaginary number' $\sqrt{-1}$" (Blanchard, Devaney, Hall, 291).

Consider the system where $A = \begin{pmatrix} -2 & -3 \\ 3 & -2 \end{pmatrix}$
     $\det(A - \lambda I) = \det\begin{pmatrix} -2-\lambda & -3 \\ 3 & -2-\lambda \end{pmatrix} = (-2-\lambda)(-2-\lambda) - (-3\cdot 3) = \lambda^2 + 4\lambda + 13 = 0.$
The roots are:
     $\lambda = \dfrac{-4 \pm \sqrt{-36}}{2}$

We see that $\sqrt{-36}$ is equal to $6i$, so that the eigenvalues become:
     $\lambda = \dfrac{-4 \pm 6i}{2} = -2 \pm 3i$
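As a sanity check (purely illustrative, not part of the original article), a few lines of NumPy reproduce the eigenvalues of the three example matrices above:

```python
import numpy as np

examples = {
    "real":     np.array([[5, 4], [3, 2]]),
    "repeated": np.array([[7, 1], [-4, 3]]),
    "complex":  np.array([[-2, -3], [3, -2]]),
}

for name, A in examples.items():
    print(name, np.linalg.eigvals(A))
# real     -> roughly 7.27 and -0.27, i.e. (7 +/- sqrt(57)) / 2
# repeated -> [5., 5.]
# complex  -> [-2.+3.j, -2.-3.j]
```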

Computation of Eigenvectors

Given a matrix $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ for which we know that $\lambda$ is an eigenvalue, we use the same equation from above, $A\mathbf{v} = \lambda\mathbf{v}$, to solve for $\mathbf{v}$ of the form $\mathbf{v} = \begin{pmatrix} x \\ y \end{pmatrix}$. We notice that $A\mathbf{v} = \lambda\mathbf{v}$ turns into a system of linear equations:

     $ax + by = \lambda x$
     $cx + dy = \lambda y$

Because we have already solved for λ, "we know that there is at least an entire line of eigenvectors (x, y) that satisfy this system of equations. This infinite number of eigenvectors means that the equations are redundant. That is, either the two equations are equivalent, or one of the equations is always satisfied" (Blanchard, Devaney, Hall, 266). 

Examples

We will give an example to demonstrate what is meant by the statement above: 

Consider the matrix $A = \begin{pmatrix} 2 & 2 \\ 1 & 3 \end{pmatrix}$

     $\det(A - \lambda I) = (2-\lambda)(3-\lambda) - (2\cdot 1) = 0$
     $\lambda^2 - 5\lambda + 4 = 0$
     $\lambda = 1, 4$, or $\lambda_1 = 4,\ \lambda_2 = 1$

Let's use $\lambda_2 = 1$ in the equation:
     $A\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 2 & 2 \\ 1 & 3 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = 1\begin{pmatrix} x \\ y \end{pmatrix}$

Rewritten in terms of components, the equation becomes
     $2x + 2y = x$
     $1x + 3y = y$
or
     $x + 2y = 0$
     $x + 2y = 0$

We can see that $y = -\tfrac{1}{2}x$ satisfies both equations, so that the eigenvector for $\lambda_2 = 1$ is $\begin{pmatrix} 1 \\ -\tfrac{1}{2} \end{pmatrix}$.
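A quick numerical cross-check of this example (illustrative only): NumPy returns unit-length eigenvectors, so its column for $\lambda = 1$ should be a scalar multiple of $\begin{pmatrix} 1 \\ -\tfrac{1}{2} \end{pmatrix}$:

```python
import numpy as np

A = np.array([[2, 2], [1, 3]])
vals, vecs = np.linalg.eig(A)      # columns of `vecs` are (normalized) eigenvectors

i = np.argmin(np.abs(vals - 1))    # index of the eigenvalue closest to 1
v = vecs[:, i]
print(vals[i], v / v[0])           # roughly 1.0 and [ 1. , -0.5]
```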

Now let's view an example where there are complex eigenvalues and a complex eigenvector:

Let's begin where we left off in the example from before, where $A = \begin{pmatrix} -2 & -3 \\ 3 & -2 \end{pmatrix}$.
We found that the eigenvalues were $\lambda_1 = -2 + 3i,\ \lambda_2 = -2 - 3i$.

Let's take λ1 and plug it into the equation,

     $A\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} -2 & -3 \\ 3 & -2 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = (-2+3i)\begin{pmatrix} x \\ y \end{pmatrix}$

As a system of equations this is:
     $-2x - 3y = (-2+3i)x$
     $3x - 2y = (-2+3i)y$
Which can be rewritten as:
     $(-3i)x - 3y = 0$
     $3x + (-3i)y = 0$

Just as in the example above, the equations are redundant (multiplying the first by $i$ gives the second). We see that $y = -ix$, so $\mathbf{v} = \begin{pmatrix} 1 \\ -i \end{pmatrix}$; any non-zero multiple, such as $i\begin{pmatrix} 1 \\ -i \end{pmatrix} = \begin{pmatrix} i \\ 1 \end{pmatrix}$, is also an eigenvector.
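Again as an illustrative cross-check, NumPy confirms that the multiple $\begin{pmatrix} i \\ 1 \end{pmatrix}$ used later on this page satisfies $A\mathbf{v} = \lambda\mathbf{v}$ for $\lambda = -2 + 3i$:

```python
import numpy as np

A = np.array([[-2, -3], [3, -2]])
lam = -2 + 3j
v = np.array([1j, 1])                # a multiple of (1, -i)

print(np.allclose(A @ v, lam * v))   # True
```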

There are two notably special cases for eigenvalues and eigenvectors: repeated eigenvalues and zero as an eigenvalue. Repeated eigenvalues are notable because (in the typical case) there is only one line of eigenvectors associated with the repeated eigenvalue, which changes how you set up the general solution. Additionally, if zero is an eigenvalue, Blanchard, Devaney, and Hall state, "This case is important because it divides the linear systems with strictly positive eigenvalues (sources) and strictly negative eigenvalues (sinks) from those with one positive and one negative eigenvalue (saddles)" (319). If you do not understand what is meant by this, you may wish to review how to classify equilibria.

The steps for finding the corresponding eigenvector of a repeated or zero eigenvalue are basically the same as above; these cases stand out because each step (finding eigenvalues, finding eigenvectors, and setting up the form of the general solution) builds on the one before it.

As it Relates to the General Solution

As we previously mentioned, we are going to use eigenvalues and eigenvectors to come up with the form of our general solution. It is important to understand why eigenvalues and eigenvectors are related to a solution. 

For first-order systems of equations, we say that eigenvalues and eigenvectors lead us to straight-line solutions; straight-line solutions are "the simplest solutions (next to equilibrium points) for systems of differential equations" (260). Based on direction fields of systems of differential equations, it is easy to see "special lines" where vectors in the direction field all point in the same direction, either towards or away from the origin. Blanchard, Devaney, and Hall state, "That is, if V=(x, y) is on a straight-line solution, then the vector field at (x,y) must point either in the same direction or in exactly the opposite direction as the vector from (0,0) to (x,y)" (260). By noting that $\begin{pmatrix} a & b \\ c & d \end{pmatrix}\mathbf{v} = \lambda\mathbf{v}$, we algebraically relate how straight-line solutions appear in a direction field to how they can be computed. This is important because it allows us to find non-trivial straight-line solutions in addition to the trivial equilibrium solution.

When it comes to relating eigenvalues and eigenvectors to the general solution of a first-order system of equations, we will leave it to you to verify that the forms below are correct, since a full derivation would take more space and time than we have allotted; you may want to review topics such as the guessing method for differential equations, the linearity principle, and integrating factors to help with this deeper understanding.

Given real-valued eigenvalues:
When we have $\lambda_1$ and $\lambda_2$ with eigenvectors $\begin{pmatrix} x_1 \\ y_1 \end{pmatrix}$ and $\begin{pmatrix} x_2 \\ y_2 \end{pmatrix}$ respectively, the general form is:
     $Y(t) = c_1 e^{\lambda_1 t}\begin{pmatrix} x_1 \\ y_1 \end{pmatrix} + c_2 e^{\lambda_2 t}\begin{pmatrix} x_2 \\ y_2 \end{pmatrix}$
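Here is a hedged numerical sketch of how these pieces fit together, using the matrix from the eigenvector example above and arbitrarily chosen constants $c_1, c_2$; it builds $Y(t)$ from the eigenvalues and eigenvectors and checks that $dY/dt \approx AY$ with a finite difference:

```python
import numpy as np

A = np.array([[2, 2], [1, 3]])             # matrix from the eigenvector example above
vals, vecs = np.linalg.eig(A)              # eigenvalues 4 and 1 with their eigenvectors
c = np.array([2.0, -1.0])                  # arbitrary constants c1, c2

def Y(t):
    """General solution c1*exp(l1*t)*v1 + c2*exp(l2*t)*v2."""
    return sum(c[k] * np.exp(vals[k] * t) * vecs[:, k] for k in range(2))

t, h = 0.7, 1e-6
dY_dt = (Y(t + h) - Y(t - h)) / (2 * h)    # centered finite difference
print(np.allclose(dY_dt, A @ Y(t)))        # True (up to finite-difference error)
```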

Given repeated eigenvalues:

The equation at the top of this page does not compute eigenvectors for repeated eigenvalues; instead it gives the general scalar form of the solution. However, we did not want to leave out how to find the general vector form of the solution for repeated eigenvalues, especially since all of the other cases above are answered using the general vector form of the solution.

The following theorem from Blanchard, Devaney, and Hall gives the gist of the form of the general solution for repeated eigenvalues (313):
"Suppose $\dfrac{dY}{dt} = AY$ is a linear system in which the $2\times 2$ matrix $A$ has a repeated real eigenvalue $\lambda$ but only one line of eigenvectors. Then the general solution has the form
     $Y(t) = e^{\lambda t}V_0 + t\,e^{\lambda t}V_1$
where $V_0 = (x_0, y_0)$ is an arbitrary initial condition and $V_1$ is determined from $V_0$ by
     $V_1 = (A - \lambda I)V_0.$
If $V_1$ is zero, then $V_0$ is an eigenvector and $Y(t)$ is a straight-line solution. Otherwise, $V_1$ is an eigenvector."

You may be wondering what this means; simply put, you pick an arbitrary initial condition $V_0$, multiply it by $(A - \lambda I)$ to get $V_1$, and then (as long as $V_1$ is not zero) $V_1$ is an eigenvector of $A$, and the two vectors together give the solution $Y(t)$ above.
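Below is a small Python sketch of that recipe (the choice of $V_0$ is arbitrary), applied to the repeated-eigenvalue matrix from the earlier example; a hand computation follows.

```python
import numpy as np

A = np.array([[7, 1], [-4, 3]])       # repeated eigenvalue lambda = 5 (example above)
lam = 5.0
V0 = np.array([1.0, 0.0])             # arbitrary initial condition
V1 = (A - lam * np.eye(2)) @ V0       # V1 = (A - lambda*I) V0

print(V1)                             # [ 2. -4.]
print(np.allclose(A @ V1, lam * V1))  # True: V1 is an eigenvector
# The corresponding solution is Y(t) = exp(lam*t)*V0 + t*exp(lam*t)*V1.
```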

Since this can be a little confusing upon first introduction, here is the recipe worked by hand using the repeated-eigenvalue matrix from earlier: $A = \begin{pmatrix} 7 & 1 \\ -4 & 3 \end{pmatrix}$ has the repeated eigenvalue $\lambda = 5$. Choosing the arbitrary initial condition $V_0 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ gives
     $V_1 = (A - 5I)V_0 = \begin{pmatrix} 2 & 1 \\ -4 & -2 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 2 \\ -4 \end{pmatrix},$
which is an eigenvector since $A\begin{pmatrix} 2 \\ -4 \end{pmatrix} = \begin{pmatrix} 10 \\ -20 \end{pmatrix} = 5\begin{pmatrix} 2 \\ -4 \end{pmatrix}$. The corresponding solution is
     $Y(t) = e^{5t}\begin{pmatrix} 1 \\ 0 \end{pmatrix} + t\,e^{5t}\begin{pmatrix} 2 \\ -4 \end{pmatrix}.$

Given zero as an eigenvalue:
The general form for zero as λ1 is as follows:
     $Y(t) = c_1 V_1 + c_2 e^{\lambda_2 t} V_2$
This is because $e^{0\cdot t}$ is always equal to one, so there is no need to include an exponential factor in the $c_1 V_1$ term.
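For a concrete illustration, the matrix below is a hypothetical example of my own choosing (not from the article), picked so that its determinant is zero and therefore one eigenvalue is zero:

```python
import numpy as np

# Hypothetical example (not from the article): det(A) = 0, so one eigenvalue is zero.
A = np.array([[1, 2], [2, 4]])
print(np.linalg.eigvals(A))           # 0 and 5 (order may vary)

V1 = np.array([2.0, -1.0])            # eigenvector for lambda1 = 0 (A @ V1 = 0)
V2 = np.array([1.0, 2.0])             # eigenvector for lambda2 = 5
print(A @ V1, A @ V2)                 # [0. 0.] [ 5. 10.]
# General solution: Y(t) = c1*V1 + c2*exp(5*t)*V2 -- the first term is constant
# because exp(0*t) = 1.
```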

Given complex eigenvalues:
Complex eigenvalues are slightly more tricky to deal with than real eigenvalues when finding the general solution. This is because you rely on only one eigenvalue/eigenvector pair, and you can obtain a real-valued solution from the complex solution that you come up with.
We've already reviewed how to find the eigenvalues and eigenvectors; suppose we have $\lambda_1 = -2 + 3i$ and we know that an eigenvector is $\begin{pmatrix} i \\ 1 \end{pmatrix}$. Then we can combine the two so that they resemble the general form for real-valued eigenvalues:
     $Y(t) = e^{(-2+3i)t}\begin{pmatrix} i \\ 1 \end{pmatrix} = \begin{pmatrix} i\,e^{(-2+3i)t} \\ e^{(-2+3i)t} \end{pmatrix}$

We know that this is a solution to the system, but it is not particularly satisfying; it still relies heavily on complex numbers. The way we get rid of the complex numbers is through Euler's formula, which states that $e^{\gamma + i\beta}$ is equivalent to:

     $e^{\gamma + i\beta} = e^{\gamma} e^{i\beta} = e^{\gamma}\cos(\beta) + i\,e^{\gamma}\sin(\beta)$

Given our complex solution, we can expand $e^{(-2+3i)t}$ (the exponential of $\lambda_1 t$) to be equivalent to $e^{-2t}\cos(3t) + i\,e^{-2t}\sin(3t)$.
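A two-line check with Python's cmath module (just to make the identity concrete at an arbitrarily chosen time) confirms this expansion:

```python
import cmath, math

t = 0.4                                        # arbitrary sample time
lhs = cmath.exp((-2 + 3j) * t)                 # e^((-2+3i)t)
rhs = math.exp(-2 * t) * (math.cos(3 * t) + 1j * math.sin(3 * t))
print(abs(lhs - rhs) < 1e-12)                  # True
```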

This proves significant because when we multiply $e^{(-2+3i)t}$ by the eigenvector $\begin{pmatrix} i \\ 1 \end{pmatrix}$ we get:

     $\begin{pmatrix} i\left(e^{-2t}\cos(3t) + i\,e^{-2t}\sin(3t)\right) \\ e^{-2t}\cos(3t) + i\,e^{-2t}\sin(3t) \end{pmatrix} = \begin{pmatrix} -e^{-2t}\sin(3t) + i\,e^{-2t}\cos(3t) \\ e^{-2t}\cos(3t) + i\,e^{-2t}\sin(3t) \end{pmatrix} = \begin{pmatrix} -e^{-2t}\sin(3t) \\ e^{-2t}\cos(3t) \end{pmatrix} + i\begin{pmatrix} e^{-2t}\cos(3t) \\ e^{-2t}\sin(3t) \end{pmatrix}$

By separating the real and imaginary parts: 

     $Y_{re}(t) = \begin{pmatrix} -e^{-2t}\sin(3t) \\ e^{-2t}\cos(3t) \end{pmatrix}$
     $Y_{im}(t) = \begin{pmatrix} e^{-2t}\cos(3t) \\ e^{-2t}\sin(3t) \end{pmatrix}$

we know that both are solutions of the original system. They are independent solutions because their initial values are independent (Blanchard, Devaney, Hall, 296). Therefore by absorbing i into c2 we get the following as the general form for this example:

     $Y(t) = c_1\begin{pmatrix} -e^{-2t}\sin(3t) \\ e^{-2t}\cos(3t) \end{pmatrix} + c_2\begin{pmatrix} e^{-2t}\cos(3t) \\ e^{-2t}\sin(3t) \end{pmatrix}$
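Finally, here is a short numerical sketch (again, illustrative only) checking that both the real and imaginary parts above really do solve $dY/dt = AY$:

```python
import numpy as np

A = np.array([[-2, -3], [3, -2]])

def Y_re(t):
    return np.array([-np.exp(-2 * t) * np.sin(3 * t), np.exp(-2 * t) * np.cos(3 * t)])

def Y_im(t):
    return np.array([np.exp(-2 * t) * np.cos(3 * t), np.exp(-2 * t) * np.sin(3 * t)])

t, h = 0.3, 1e-6
for Y in (Y_re, Y_im):
    dY_dt = (Y(t + h) - Y(t - h)) / (2 * h)    # centered finite difference
    print(np.allclose(dY_dt, A @ Y(t)))        # True, True
```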

Writing down an end-all, be-all format for the general solution with complex eigenvalues is hard; it varies heavily with the eigenvector, so you might want to think of finding the general solution as following a recipe with varying ingredients rather than a fixed set of rules.

As long as you follow Euler's formula (assuming your eigenvalue/eigenvector are correct), you should be able to find the right answer!

See Also

https://youtu.be/bOreOaAjDno
http://tutorial.math.lamar.edu/Classes/DE/LA_Eigen.aspx
https://www.khanacademy.org/math/linear-algebra/alternate-bases/eigen-everything/v/linear-algebra-introduction-to-eigenvalues-and-eigenvectors

Sources

Blanchard, Paul, Robert L. Devaney, and Glen R. Hall. Differential Equations. 3rd ed. Belmont, CA: Thomson Brooks/Cole, 2006. Print. 

