Monday, September 17, 2012

Graeffe's Root Squaring Method

            Graeffe's method is a root-finding method for polynomials with real coefficients. It approximates all the roots simultaneously at each iteration, and it is a direct method: it requires no initial guesses for the roots. It was invented independently by Dandelin, Graeffe, and Lobachevsky, and it was one of the most popular methods for finding polynomial roots in the 19th and 20th centuries.

Attributes of an nth-order polynomial

  1. There will be n roots. All of them may be real, all may be complex, or they may be a combination of real and complex values.
              All the roots real - (x² - 5x + 6 = 0)
              All the roots complex - (5x² + x + 3 = 0)
              Combination of real and complex roots - (5x³ - 4x² - x - 3 = 0)
  2. If n is odd, there must be at least one real root, because complex roots occur in conjugate pairs.
  3. Descartes' rule of signs holds for any nth-order polynomial.

Descartes' rule of signs

      If we have an nth-order polynomial equation f(x) = 0, Descartes' rule of signs says that the maximum number of positive roots of f(x) = 0 equals the number of sign changes in the coefficients of f(x), and the maximum number of negative roots equals the number of sign changes in the coefficients of f(-x).
        EX : let f(x) = x² - 5x + 6 = 0. Then f(-x) = x² + 5x + 6 = 0.
                The number of sign changes of f(x) is 2 (+ to - and - to +), so f(x) = 0 has at most 2 positive roots. The number of sign changes of f(-x) is 0, so f(x) = 0 has no negative roots.
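The sign-change counts in this example are easy to verify programmatically. The sketch below (helper names `sign_changes` and `negated_poly` are my own, not from the text) applies Descartes' rule to f(x) = x² - 5x + 6:

```python
# Descartes' rule of signs for f(x) = x^2 - 5x + 6.
# Coefficients are listed from the highest power down.

def sign_changes(coeffs):
    """Count sign changes in a coefficient sequence (zeros are skipped)."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def negated_poly(coeffs):
    """Coefficients of f(-x): flip the sign of every odd-power term."""
    n = len(coeffs) - 1
    return [c * (-1) ** (n - i) for i, c in enumerate(coeffs)]

f = [1, -5, 6]                         # f(x) = x^2 - 5x + 6
print(sign_changes(f))                 # 2 -> at most two positive roots
print(sign_changes(negated_poly(f)))   # 0 -> no negative roots
```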

 Let f(x) = 0 be an nth-order polynomial equation, and form g(x) = 0 from {f(√x) * f(-√x)} = g(x) = 0. Graeffe's method says that the square roots of the ratios of successive coefficients of g(x) give the first-iteration approximations to the roots of f(x). The second-iteration approximations are obtained by forming {g(√x) * g(-√x)} = h(x) = 0 and taking fourth roots of the ratios of successive coefficients of h(x). Repeating this process, we approach the exact solutions of f(x) = 0.
EX :  If f(x) = x² - 5x + 6 = 0 then f(√x) = x - 5√x + 6 = 0 and f(-√x) = x + 5√x + 6 = 0.
         g(x) = x² - 13x + 36 = 0.
         First-iteration roots:
                          R1 = (36/13)^(1/2) = 1.664100   and   R2 = (13/1)^(1/2) = 3.605551
         h(x) = g(√x)*g(-√x) = x² - 97x + 1296 = 0.
         Second-iteration roots:
                          R1 = (1296/97)^(1/4) = 1.911869      R2 = (97/1)^(1/4) = 3.138288
         Likewise, for the third iteration, R1 = (c3/c2)^(1/8) and R2 = (c2/c1)^(1/8). We can run any number of iterations, and as the iterations increase the approximations converge to the exact roots (here 2 and 3).
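The squaring-and-ratio procedure worked through above can be sketched in a few lines of Python. This is a minimal illustration for the same example (the function names are my own), not a robust root finder:

```python
def graeffe_step(coeffs):
    """One root-squaring step; coefficients run from the highest power down."""
    n = len(coeffs) - 1
    neg = [c * (-1) ** (n - i) for i, c in enumerate(coeffs)]  # f(-x)
    prod = [0.0] * (2 * n + 1)
    for i, a in enumerate(coeffs):          # multiply f(x) * f(-x)
        for j, b in enumerate(neg):
            prod[i + j] += a * b
    q = prod[::2]                           # only even powers survive
    return q if q[0] > 0 else [-c for c in q]

def root_moduli(coeffs, iterations):
    """Approximate root magnitudes after the given number of squarings."""
    q = list(coeffs)
    for _ in range(iterations):
        q = graeffe_step(q)
    power = 2 ** iterations
    # ratios of successive coefficients, largest root first
    return [abs(q[k + 1] / q[k]) ** (1.0 / power) for k in range(len(q) - 1)]

print(root_moduli([1, -5, 6], 2))   # approximately [3.138288, 1.911869]
```

With six or more iterations on this example, the estimates agree with the exact roots 3 and 2 to many decimal places.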

  Special cases in Graeffe's method

  1. If the degree of the polynomial is odd and, after squaring, any coefficient other than the constant term is zero, the method does not give the roots.
       Ex :- g(x) = x³ - 1 = 0
       After the first iteration, g1(x) = x³ + 0·x² + 0·x - 1.
       Roots: r1 = (|-1/0|)^(1/2)      r2 = (|0/0|)^(1/2)      r3 = (|0/0|)^(1/2), all undefined.
       But g(x) has a real root (x = 1).
  2. For polynomials that have complex roots, the method gives only real values (the moduli) as roots.
      Ex :- f(x) = x² + x + 1
             g(x) = f(x)·f(-x) = x⁴ + x² + 1
             Roots: r1 = (|1/1|)^(1/2) = 1    r2 = (|1/1|)^(1/2) = 1
             But f(x) has complex roots (of modulus 1).
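Special case 2 can be checked directly: the quadratic formula gives the true roots of f(x) = x² + x + 1, and both have modulus 1, which is exactly what the coefficient ratios of g(x) report. A small check in Python:

```python
import cmath

# True roots of f(x) = x^2 + x + 1 via the quadratic formula (a = b = c = 1).
disc = cmath.sqrt(1 - 4)      # b^2 - 4ac = -3, so the roots are complex
r1 = (-1 + disc) / 2
r2 = (-1 - disc) / 2

# Graeffe's ratios for g(x) = x^4 + x^2 + 1 give |r| = (1/1)^(1/2) = 1 for
# both roots: the correct moduli, but the arguments are lost.
print(abs(r1), abs(r2))       # both moduli are 1 (up to rounding)
```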

Comparison of Graeffe's method with other popular methods

  1. It is very complex and hard to carry out manually,
      but:
    The bisection method is a very simple and robust method.
    The Newton-Raphson method is a very powerful method for finding a real root of an equation.
    The secant method is a useful root-finding method.
  2. There are no initial guesses; it is a direct method,
    but:
    Bisection method - requires two initial guesses.
    Newton-Raphson method - requires one initial guess.
    Secant method - requires two initial guesses.
  3. It can compute all the roots in one execution,
    but:
    Bisection method - if the polynomial has n roots, the method must be executed n times, using incremental search.
    Newton-Raphson method - also relies on incremental search to isolate each root.
    Secant method - also relies on incremental search.
  4. This method converges to the absolute values of the roots of the polynomial,
      but:
    The bisection, Newton-Raphson, and secant methods converge to real roots of the polynomial.
  5. Convergence of the root-finding methods:
    Bisection method - convergence is relatively slow, even when one of the initial guesses is close to the root.
    Newton-Raphson method - can diverge if the initial guess is not close to the root.
    Secant method - if the initial values are not close to the root, there is no guarantee that the method converges.
     Graeffe's method is faster than the bisection method and sometimes faster than the Newton-Raphson method.

Drawbacks in Graeffe's Method

  1. There are some drawbacks in the classical Graeffe's method:
    It is a many-to-one map, so it can map well-conditioned polynomials into ill-conditioned ones.
    Ex: f(x) = x² − 2x + 1, g(x) = x² − 1, h(x) = x² + 1
       After two Graeffe iterations, all three polynomials are mapped to f(x), so they appear to have the same roots, but their actual roots are different.
  2. Its usual formulation leads to exponents exceeding the maximum allowed by floating-point arithmetic.
    Ex :- f(x) = (x − 1)(x − 1.01)(x − 2)(x − 3)(x − 4)
                = −24.24 + 74.5x − 85.35x² + 45.1x³ − 11.01x⁴ + x⁵
       Eight Graeffe iterations are not enough to resolve the first root (the close pair 1 and 1.01), but they are already enough to overflow the IEEE double-precision number system.
  3. It returns the absolute values of the roots, not the actual roots.
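The many-to-one behaviour in drawback 1 is easy to reproduce. The sketch below (my own helper, assuming the usual squaring step with the leading coefficient kept positive) applies two Graeffe iterations to the three example polynomials and shows that they all collapse to x² − 2x + 1:

```python
def graeffe_step(coeffs):
    """One root-squaring step; coefficients run from the highest power down."""
    n = len(coeffs) - 1
    neg = [c * (-1) ** (n - i) for i, c in enumerate(coeffs)]  # p(-x)
    prod = [0] * (2 * n + 1)
    for i, a in enumerate(coeffs):          # multiply p(x) * p(-x)
        for j, b in enumerate(neg):
            prod[i + j] += a * b
    q = prod[::2]                           # only even powers survive
    return q if q[0] > 0 else [-c for c in q]

# f, g, h from the example: x^2 - 2x + 1, x^2 - 1, x^2 + 1
for p in ([1, -2, 1], [1, 0, -1], [1, 0, 1]):
    print(graeffe_step(graeffe_step(p)))    # all print [1, -2, 1]
```

The three distinct root sets {1, 1}, {1, −1}, and {i, −i} all have squares that eventually land on {1, 1}, which is why the iteration cannot tell them apart.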

Research on Graeffe's Method

  • The floating-point limitations of the computation are avoided in an efficient implementation called "Tangent Graeffe Iteration" by Malajovich and Zubelli (2001). They found a new variation of the Graeffe iteration (renormalization) that is suited to the IEEE floating-point arithmetic of modern digital computers.
  • Renormalization avoids coefficient growth and replaces a diverging algorithm with a convergent one.
  • To recover the actual value of each root, and to recover pairs of conjugate roots or multiple roots, many algorithms have been proposed, such as reverse Graeffe iteration and splitting algorithms.

