Graeffe's method is a root-finding method for polynomials with real coefficients. It approximates all of the roots at every iteration, and it is a direct method: it does not require any initial guesses for the roots. It was invented independently by Dandelin, Lobachevsky, and Graeffe, and it was among the most popular methods for finding polynomial roots in the 19th and 20th centuries.
Attributes of an nth-order polynomial
- There will be n roots. All of them may be real, all may be complex, or they may be a combination of real and complex values.
All the roots are real - (x^2 - 5x + 6 = 0)
All the roots are complex - (5x^2 + x + 3 = 0)
Combination of real and complex roots - (5x^3 - 4x^2 - x - 3 = 0). If n is odd, there must be at least one real root, because complex roots occur in conjugate pairs.
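As a quick sanity check on these three examples, NumPy's root finder (assuming NumPy is available; this is illustration only, not part of Graeffe's method) confirms each case:

```python
import numpy as np

print(np.roots([1, -5, 6]))       # [3. 2.]: all roots real
print(np.roots([5, 1, 3]))        # complex conjugate pair: all roots complex
print(np.roots([5, -4, -1, -3]))  # one real root plus a complex pair (n = 3 is odd)
```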
- Descartes' rule of signs holds for any nth-order polynomial.
Descartes' rule of signs
For an nth-order polynomial equation f(x) = 0, Descartes' rule of signs says that the maximum number of positive roots of f(x) = 0 equals the number of sign changes in the coefficients of f(x), and the maximum number of negative roots equals the number of sign changes in the coefficients of f(-x).
EX : Let f(x) = x^2 - 5x + 6 = 0. Then f(-x) = x^2 + 5x + 6 = 0.
The number of sign changes of f(x) = 0 is 2 (+ to - and then - to +), so f(x) = 0 has at most 2 positive roots. The number of sign changes of f(-x) = 0 is 0, so f(x) = 0 has no negative roots.
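Counting sign changes is mechanical, so a small Python sketch may help; the function name sign_changes is my own, not a standard API:

```python
def sign_changes(coeffs):
    """Count sign changes in a polynomial's coefficient list
    (highest degree first), skipping zero coefficients."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# f(x) = x^2 - 5x + 6 -> 2 sign changes: at most 2 positive roots
print(sign_changes([1, -5, 6]))  # 2
# f(-x) = x^2 + 5x + 6 -> 0 sign changes: no negative roots
print(sign_changes([1, 5, 6]))   # 0
```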
Take an nth-order polynomial equation f(x) = 0 and form g(x) = f(√x) * f(-√x) = 0. Graeffe's method says that the square roots of the ratios of successive coefficients of g(x) give the first-iteration approximations to the roots of f(x) = 0. The second-iteration approximations are obtained by forming h(x) = g(√x) * g(-√x) = 0 and taking the fourth roots of its successive coefficient ratios. Repeating the process, the approximations converge to the exact roots of f(x) = 0.
EX : If f(x) = x^2 - 5x + 6 = 0, then f(√x) = x - 5√x + 6 = 0 and f(-√x) = x + 5√x + 6 = 0, so
g(x) = x^2 - 13x + 36 = 0.
The first-iteration roots are
R1 = (36/13)^(1/2) = 1.664100 and R2 = (13/1)^(1/2) = 3.605551.
h(x) = g(√x) * g(-√x) = x^2 - 97x + 1296 = 0.
The second-iteration roots are
R1 = (1296/97)^(1/4) = 1.911869 and R2 = (97/1)^(1/4) = 3.138288.
Likewise, writing the third-iteration polynomial's coefficients as c1, c2, c3 (highest degree first), the third-iteration roots are R1 = (c3/c2)^(1/8) and R2 = (c2/c1)^(1/8). We can run any number of iterations, and as the iteration count increases the approximations converge to the exact roots, here 2 and 3.
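The squaring step and the ratio formula translate directly into code. Below is a minimal Python sketch (the names graeffe_step and root_magnitudes are my own, not a standard library API) that repeats the squaring on f(x) = x^2 - 5x + 6 and prints the estimates:

```python
def graeffe_step(c):
    """One root-squaring step, p(x) -> p(sqrt(x)) * p(-sqrt(x)).
    Coefficients are listed lowest degree first; the result is a
    polynomial whose roots are the squares of the input's roots."""
    prod = [0.0] * (2 * len(c) - 1)
    for i, ci in enumerate(c):
        for j, cj in enumerate(c):
            prod[i + j] += ci * ((-1) ** j) * cj  # multiply p(x) by p(-x)
    return prod[::2]                              # keep even powers: y = x^2

def root_magnitudes(c, iterations):
    """Estimate root magnitudes from ratios of successive coefficients."""
    p = list(c)
    for _ in range(iterations):
        p = graeffe_step(p)
    power = 2 ** iterations
    return [abs(p[i] / p[i + 1]) ** (1.0 / power) for i in range(len(p) - 1)]

# f(x) = x^2 - 5x + 6 = 0, coefficients lowest degree first: 6, -5, 1
f = [6.0, -5.0, 1.0]
for k in (1, 2, 5):
    print(k, root_magnitudes(f, k))  # approaches the exact roots 2 and 3
```

With one iteration this prints the values 1.664100 and 3.605551 computed above, and by five iterations the estimates are very close to 2 and 3.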
Derivation
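A short sketch of the standard argument behind the squaring step. Suppose r is a root of f(x) = 0, so f(x) = (x - r) * q(x). Then
f(√x) * f(-√x) = (√x - r) * (-√x - r) * q(√x) * q(-√x) = (r^2 - x) * q(√x) * q(-√x),
so x = r^2 is a root of g(x) = f(√x) * f(-√x) = 0. Applying the same argument to every root shows that each squaring step replaces the roots by their squares; after k steps the roots are Ri^(2^k). If the root magnitudes differ at all, the squared roots become widely separated, and for a monic polynomial x^n + c1*x^(n-1) + ... + cn with |R1| ≥ |R2| ≥ ... ≥ |Rn|, each coefficient ci is then dominated by the single product (R1 * R2 * ... * Ri)^(2^k). Hence |ci/c(i-1)| ≈ |Ri|^(2^k), which is why the ratios of successive coefficients, raised to the power 1/2^k, approximate the root magnitudes.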
Special cases in Graeffe's method
- If the highest power of the polynomial is odd and, after squaring, any coefficient of the polynomial other than the leading and constant terms is zero, the method does not give the roots, because the coefficient ratios involve division by zero.
Ex: g(x) = x^3 - 1 = 0.
After the first iteration, g1(x) = x^3 + 0*x^2 + 0*x - 1 = 0, and the ratio formula gives r1 = |-1/0|^(1/2), r2 = |0/0|^(1/2), r3 = |0/0|^(1/2), all of which are undefined. But g(x) = 0 does have a real root (x = 1).
- For polynomials that have complex roots, the method returns real values as roots.
Ex: f(x) = x^2 + x + 1 = 0. Then g(x) = f(√x) * f(-√x) = x^2 + x + 1 = 0, and the ratios give r1 = |1/1|^(1/2) = 1 and r2 = |1/1|^(1/2) = 1. But f(x) = 0 has no real roots; its roots are a complex conjugate pair of magnitude 1.
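A quick NumPy check (illustration only; NumPy is assumed to be available) makes the second failure concrete:

```python
import numpy as np

# f(x) = x^2 + x + 1: Graeffe squaring leaves it unchanged, so the ratio
# formula reports r1 = r2 = 1 forever, yet the true roots are complex.
print(np.roots([1, 1, 1]))          # [-0.5+0.8660254j -0.5-0.8660254j]
print(np.abs(np.roots([1, 1, 1])))  # [1. 1.] -- only the magnitudes agree
```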
Comparison of Graeffe's method with other popular methods
- It is complex and very hard to carry out manually. By contrast, the bisection method is very simple and robust, the Newton-Raphson method is a very powerful method for finding a real root of an equation, and the secant method is also a useful root-finding method.
- It needs no initial guesses; it is a direct method. By contrast, the bisection method needs two initial guesses, the Newton-Raphson method needs one, and the secant method needs two.
- It can compute all the roots in one execution. By contrast, if a polynomial has n roots, the bisection method must be executed n times with an incremental search, and the Newton-Raphson and secant methods likewise rely on incremental search to find each root.
- It converges to the absolute values of the roots of a polynomial, whereas the bisection, Newton-Raphson, and secant methods converge to real roots.
- On convergence: for the bisection method, even an initial guess close to the root does not speed up convergence; the Newton-Raphson method can diverge if the initial guess is not close to the root; and for the secant method, if the initial values are not close to the root, there is no guarantee of convergence. Graeffe's method is faster than the bisection method and sometimes faster than the Newton-Raphson method.
Drawbacks in Graeffe's Method
- The classical Graeffe's method is a many-to-one map, so it can map well-conditioned polynomials into ill-conditioned ones. Ex: f(x) = x^2 - 2x + 1, g(x) = x^2 - 1, h(x) = x^2 + 1. After two Graeffe iterations, all three polynomials are mapped into f(x), so the iteration reports the same roots for all of them, even though their actual roots are different.
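A small self-contained Python sketch (the same squaring step as in the earlier examples; the names are mine) shows the collapse:

```python
def graeffe_step(c):
    """One Graeffe squaring step; coefficients lowest degree first."""
    prod = [0.0] * (2 * len(c) - 1)
    for i, ci in enumerate(c):
        for j, cj in enumerate(c):
            prod[i + j] += ci * ((-1) ** j) * cj
    return prod[::2]

# f = x^2 - 2x + 1, g = x^2 - 1, h = x^2 + 1 (coefficients lowest degree first)
for name, p in [("f", [1.0, -2.0, 1.0]),
                ("g", [-1.0, 0.0, 1.0]),
                ("h", [1.0, 0.0, 1.0])]:
    q = graeffe_step(graeffe_step(p))
    print(name, q)  # all three print [1.0, -2.0, 1.0], i.e. x^2 - 2x + 1
```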
- Its usual formulation leads to exponents exceeding the maximum allowed by floating-point arithmetic. Ex: f(x) = (x - 1)(x - 1.01)(x - 2)(x - 3)(x - 4) = -24.24 + 74.5x - 85.35x^2 + 45.1x^3 - 11.01x^4 + x^5. Eight Graeffe iterations are not enough to separate the first two roots (1 and 1.01), but eight iterations are already enough to overflow the IEEE double-precision number system.
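The growth is easy to observe in code. In this hedged sketch, the constant term grows as 24.24^(2^k), which passes the IEEE double-precision maximum (about 1.8e308) at k = 8:

```python
def graeffe_step(c):
    """Same squaring step as in the earlier sketches (lowest degree first)."""
    prod = [0.0] * (2 * len(c) - 1)
    for i, ci in enumerate(c):
        for j, cj in enumerate(c):
            prod[i + j] += ci * ((-1) ** j) * cj
    return prod[::2]

# f(x) = (x - 1)(x - 1.01)(x - 2)(x - 3)(x - 4), lowest degree first
p = [-24.24, 74.5, -85.35, 45.1, -11.01, 1.0]
for k in range(1, 9):
    p = graeffe_step(p)
    print(k, max(abs(c) for c in p))  # the largest coefficient hits inf at k = 8
```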
- It returns the absolute values of the roots, but not the actual roots.
Research on Graeffe's Method
- The floating-point limitations of the computation are avoided in an efficient implementation called "Tangent Graeffe Iteration" by Malajovich and Zubelli (2001). They found a new, renormalized variation of Graeffe iteration that is suited to the IEEE floating-point arithmetic of modern digital computers.
- Renormalization avoids coefficient growth and replaces a diverging algorithm with a convergent one.
- To recover the actual value of each root, and to recover pairs of conjugate roots or multiple roots, many algorithms have been proposed, such as reverse Graeffe iteration and splitting algorithms.