Wednesday, September 19, 2012

Simple Hello World Game In C

 ////////////////////////////A Hello World for game/////////////////////////////////
 ///////////////////////////////By croosetech///////////////////////////////////////
 /////////////////Visit croosetech.blogspot.com for more////////////////////////////
 #include <stdio.h>
 #include <stdlib.h>
 #include <time.h>

 int main()
 {
     int a = 0, i = 0, answer = 0, last = 10, x = 0, again = 0;
     unsigned int iseed = (unsigned int)time(NULL);
     srand(iseed);
     while (x < 5) {
         i = 3;                      /* three attempts per level */
         a = rand() % last;          /* secret number in 0 .. last-1 */
         printf("\n****************************************\n");
         printf("\tThis is a Hello World Game\n");
         printf("****************************************\n");
         printf("I have a number between 0 and %d.\n\n", last - 1);
         while (i > 0) {
             printf("Try to guess my number (%d attempts remaining): ", i);
             scanf("%d", &answer);
             if (answer < a) {
                 printf("\nYour %d is too small...\n", answer);
                 i--;
             } else if (answer > a) {
                 printf("\nYour %d is too big...\n", answer);
                 i--;
             } else {
                 printf("\nYou're right! My number is %d!\n\n\t\tCongrats...", a);
                 break;
             }
         }
         if (i == 0) {
             printf("\n\t\tYou lose!\n\t\t GAME OVER\n");
             printf("****************************************\n");
             printf("To try again press 1 (any other number to exit): ");
             scanf("%d", &again);
             if (again == 1) {       /* restart from level one */
                 x = 0;
                 last = 10;
             } else {
                 printf("Good bye!!!");
                 return 0;
             }
         } else {
             printf("Now you move on to the next level\n\n");
             x++;
             last = last + 10;       /* widen the range each level */
         }
     }
     if (x == 5) {
         system("clear");            /* Linux; use "cls" on Windows */
         printf("\t\tAWESOME PERFORMANCE\n\n\n");
     }
     return 0;
 }

Monday, September 17, 2012

Graeffe's Root Squaring Method

            Graeffe's method is a root finding method for polynomials with real coefficients. It approximates all of the roots at every iteration, and it is a direct method: it does not require any initial guesses for the roots. It was invented independently by Graeffe, Dandelin, and Lobachevsky, and it was among the most popular methods for finding polynomial roots in the 19th and 20th centuries.

Attributes of an nth order polynomial

  1. There will be n roots. All of them may be real, all may be complex, or they may be a mixture of real and complex values.
              All the roots are real - (x² - 5x + 6 = 0)
              All the roots are complex - (5x² + x + 3 = 0)
              A mixture of real and complex roots - (5x³ - 4x² - x - 3 = 0)
  2. If n is odd, there must be at least one real root, because complex roots occur in conjugate pairs.
  3. Descartes' rule of signs holds for any nth order polynomial. 

Descartes' rule of signs

      Given an nth order polynomial f(x) = 0, Descartes' rule of signs says that the maximum number of positive roots of f(x) = 0 equals the number of sign changes in the coefficients of f(x), and the maximum number of negative roots equals the number of sign changes in the coefficients of f(-x).
        EX : let f(x) = x² - 5x + 6 = 0. Then f(-x) = x² + 5x + 6 = 0.
                The number of sign changes of f(x) = 0 is 2, because the signs change (+ to -) and then (- to +). So f(x) has at most 2 positive roots. The number of sign changes of f(-x) = 0 is 0, because the sign never changes. So f(x) = 0 has no negative roots.

 Let f(x) = 0 be an nth order polynomial and form g(x) = f(√x) · f(-√x) = 0. Graeffe's method says that the square roots of the ratios of successive coefficients of g(x) are the first-iteration approximations of the roots of f(x). The second-iteration roots are obtained by forming h(x) = g(√x) · g(-√x) = 0 and taking the fourth roots of the ratios of its successive coefficients. Continuing in this way, we approach the exact roots of f(x).
EX :  If f(x) = x² - 5x + 6 = 0 then f(√x) = x - 5√x + 6 = 0 and f(-√x) = x + 5√x + 6 = 0.
         g(x) = x² - 13x + 36 = 0.
         First iteration roots:
                          R1 = (36/13)^(1/2) = 1.664100   and   R2 = (13/1)^(1/2) = 3.605551
         h(x) = g(√x) · g(-√x) = x² - 97x + 1296 = 0.
         Second iteration roots:
                          R1 = (1296/97)^(1/4) = 1.911869      R2 = (97/1)^(1/4) = 3.138288
         Likewise, for the third iteration, R1 = (c3/c2)^(1/8) and R2 = (c2/c1)^(1/8), where c1, c2, c3 are the coefficients after the third squaring. We can run any number of iterations, and as the iteration count increases the roots converge to the exact roots (here, 2 and 3).

Derivation


  Special cases in Graeffe's method

  1. If the maximum power of the polynomial is odd and, after squaring, any coefficient other than the constant term is zero, the method does not give the exact roots.
       Ex :- g(x) = x³ - 1 = 0
      after the first iteration g1(x) = x³ + 0·x² + 0·x - 1
      roots: r1 = (|-1/0|)^(1/2)      r2 = (|0/0|)^(1/2)
                           r3 = (|0/0|)^(1/2)
      But g(x) has a real root (x = 1).
  2. For polynomials that have complex roots, the method gives some real values as roots.
      Ex :- f(x) = x² + x + 1
             g(x) = f(x)·f(-x) = x⁴ + x² + 1
             roots: r1 = (|1/1|)^(1/2)    r2 = (|1/1|)^(1/2)
             But f(x) has complex roots. 

Comparison of Graeffe's method with other popular methods

  1. It is very complex and very hard to carry out manually,
      but,
    the bisection method is a very simple and robust method,
    the Newton-Raphson method is a very powerful method for finding the real root of an equation, and
    the secant method is a useful root finding method.
  2. There are no initial guesses; it is a direct method,
    but,
    Bisection method - two initial guesses.
    Newton-Raphson method - one initial guess.
    Secant method - two initial guesses.
  3. It can compute all the roots in one execution,
    but,
    Bisection method - if the polynomial has n roots, the method must be executed n times using incremental search.
    Newton-Raphson method - this method also uses incremental search.
    Secant method - this method also uses incremental search.
  4. This method converges to the absolute values of the roots of the polynomial,
      but,
    the bisection, Newton-Raphson and secant methods converge to the real roots themselves.
  5. The convergence of root finding methods:
    Bisection method - even if one of the initial guesses is close to the root, the convergence is not faster.
    Newton-Raphson method - it can diverge if the initial guess is not close to the root.
    Secant method - if the initial values are not close to the root, there is no guarantee that the secant method converges.
     Graeffe's method is faster than the bisection method and, sometimes, the Newton-Raphson method.

Drawbacks in Graeffe's Method

  1. It is a many-to-one map: it can map well-conditioned polynomials into ill-conditioned ones.
    Ex: f(x) = x² − 2x + 1, g(x) = x² − 1, h(x) = x² + 1
       After two Graeffe iterations, all three polynomials are mapped into f(x). This suggests the same roots for all three, but they have different roots.
  2. Its usual formulation leads to exponents exceeding the maximum allowed by floating-point arithmetic.
    Ex :- f(x) = (x − 1)(x − 1.01)(x − 2)(x − 3)(x − 4)
                   = −24.24 + 74.5x − 85.35x² + 45.1x³ − 11.01x⁴ + x⁵
       Eight Graeffe iterations are not enough to compute the first root, but eight iterations are enough to overflow the IEEE double precision number system.
  3. It returns the absolute values of the roots, not the actual roots.

Research on Graeffe's Method

  • The floating-point arithmetic limitations are avoided in an efficient implementation called "Tangent Graeffe Iteration" by Malajovich and Zubelli (2001). They found a new variation of the Graeffe iteration (renormalization) that is suited to the IEEE floating-point arithmetic of modern digital computers.
  • Renormalization avoids coefficient growth and replaces a diverging algorithm with a convergent one.
  • To recover the actual value of each root, and to recover pairs of conjugate roots or multiple roots, many algorithms have been proposed, such as reverse Graeffe iteration and splitting algorithms.

References

         Graeffe’s Method


Saturday, September 15, 2012

How to find Z values of a string

             Finding the Z values of a string is required by the Z algorithm, one of the linear-time string matching algorithms; computing the Z values of the given string is its pre-processing step. Let S be an arbitrary string and i > 1 one of its positions. Starting at i, consider all the substrings S[i..j], where i ≤ j ≤ |S|, such that S[i..j] matches a prefix of S itself. Among all the possible values j may take, select the greatest one, max(j), and denote the number [max(j) − i + 1] by Zi(S). If S[i] differs from S[1], no such j exists and Zi(S) is set to 0.
              This definition can be hard to follow, so let us compute Z values with an example. Z values are computed from position 2 up to the length of the string: if our string is "ABCDEF", we find only Z2 to Z6, because Z1 is not defined.
Lets get our string as S = AABCDAABCXYAABCDAABCDX
Zi is equal to the length of the longest common prefix of the string S and the substring S[i..|S|]. 

Finding Z2
        Divide the string into two sub-strings (the first letter as one sub-string and the rest as the other):
             S1 =  AABCDAABCXYAABCDAABCDX
             S2 =  ABCDAABCXYAABCDAABCDX
        Then find the length of the longest common prefix of these two sub-strings. See How to find prefixes of a string if you are not comfortable with prefixes.

Prefix of S1 sub-string    -  A, AA , AAB , AABC , AABCD , AABCDA , AABCDAA , AABCDAAB ,        AABCDAABC , AABCDAABCX , AABCDAABCXY , AABCDAABCXYA , AABCDAABCXYAA , AABCDAABCXYAAB , AABCDAABCXYAABC , AABCDAABCXYAABCD , AABCDAABCXYAABCDA , AABCDAABCXYAABCDAA , AABCDAABCXYAABCDAAB , AABCDAABCXYAABCDAABC , AABCDAABCXYAABCDAABCD , AABCDAABCXYAABCDAABCDX

Prefix of S2 substring - A , AB , ABC , ABCD , ABCDA , ABCDAA , ABCDAAB , ABCDAABC , ABCDAABCX , ABCDAABCXY , ABCDAABCXYA , ABCDAABCXYAA , ABCDAABCXYAAB , ABCDAABCXYAABC , ABCDAABCXYAABCD , ABCDAABCXYAABCDA , ABCDAABCXYAABCDAA , ABCDAABCXYAABCDAAB , ABCDAABCXYAABCDAABC , ABCDAABCXYAABCDAABCD , ABCDAABCXYAABCDAABCDX

The only prefix common to both lists is A. The length of the longest common prefix becomes the Z value of that position. Therefore, Z2 = 1
Finding Z3
        Divide the string into two sub-strings (the first two letters as one sub-string and the rest as the other):
             S1 =  AABCDAABCXYAABCDAABCDX
             S2 =  BCDAABCXYAABCDAABCDX


        Then find length of the longest common prefixes of these two substrings.

Prefix of S1 sub-string    -  A, AA , AAB , AABC , AABCD , AABCDA , AABCDAA , AABCDAAB ,        AABCDAABC , AABCDAABCX , AABCDAABCXY , AABCDAABCXYA , AABCDAABCXYAA , AABCDAABCXYAAB , AABCDAABCXYAABC , AABCDAABCXYAABCD , AABCDAABCXYAABCDA , AABCDAABCXYAABCDAA , AABCDAABCXYAABCDAAB , AABCDAABCXYAABCDAABC , AABCDAABCXYAABCDAABCD , AABCDAABCXYAABCDAABCDX


Prefix of second substring - B , BC ,BCD , BCDA , BCDAA , BCDAAB , BCDAABC , BCDAABCX , BCDAABCXY , BCDAABCXYA , BCDAABCXYAA , BCDAABCXYAAB , BCDAABCXYAABC , BCDAABCXYAABCD , BCDAABCXYAABCDA , BCDAABCXYAABCDAA , BCDAABCXYAABCDAAB , BCDAABCXYAABCDAABC , BCDAABCXYAABCDAABCD , BCDAABCXYAABCDAABCDX

There are no common prefixes in these two sub-strings. Therefore, Z3 = 0
Likewise we can find the other Z values.
Finding Z4
            S1 =  AABCDAABCXYAABCDAABCDX
            S2 =  CDAABCXYAABCDAABCDX                            Z4=0

Finding Z5
          S1 =  AABCDAABCXYAABCDAABCDX
          S2 =  DAABCXYAABCDAABCDX                                 Z5=0

Finding Z6
            S1 =  AABCDAABCXYAABCDAABCDX
            S2 = AABCXYAABCDAABCDX                                    Z6=4

Finding Z7
            S1 =  AABCDAABCXYAABCDAABCDX
            S2 =  ABCXYAABCDAABCDX                                      Z7=1

Finding Z8
           S1 =  AABCDAABCXYAABCDAABCDX
           S2 =  BCXYAABCDAABCDX                                          Z8=0

Finding Z9
            S1 =  AABCDAABCXYAABCDAABCDX
            S2 =  CXYAABCDAABCDX                                            Z9=0

Finding Z10
            S1 =  AABCDAABCXYAABCDAABCDX
            S2 =  XYAABCDAABCDX                                               Z10=0

Finding Z11
            S1 =  AABCDAABCXYAABCDAABCDX
            S2 =  YAABCDAABCDX                                                 Z11=0

Finding Z12
           S1 =  AABCDAABCXYAABCDAABCDX
           S2 =  AABCDAABCDX                                                     Z12=9

Finding Z13
            S1 =  AABCDAABCXYAABCDAABCDX
            S2 =  ABCDAABCDX                                                       Z13=1

Finding Z14
            S1 =  AABCDAABCXYAABCDAABCDX
            S2 =  BCDAABCDX                                                         Z14=0

Finding Z15
            S1 =  AABCDAABCXYAABCDAABCDX
            S2 =  CDAABCDX                                                            Z15=0

Finding Z16
            S1 =  AABCDAABCXYAABCDAABCDX
            S2 =  DAABCDX                                                               Z16=0

Finding Z17
            S1 =  AABCDAABCXYAABCDAABCDX
            S2 =  AABCDX                                                                  Z17=5

Finding Z18
            S1 =  AABCDAABCXYAABCDAABCDX
            S2 =  ABCDX                                                                     Z18=1

Finding Z19
            S1 =  AABCDAABCXYAABCDAABCDX
            S2 =  BCDX                                                                        Z19=0

Finding Z20
            S1 =  AABCDAABCXYAABCDAABCDX
            S2 = CDX                                                                            Z20=0

Finding Z21
           S1 =  AABCDAABCXYAABCDAABCDX
           S2 =  DX                                                                               Z21=0

Finding Z22
            S1 =  AABCDAABCXYAABCDAABCDX
            S2 =  X                                                                                Z22=0

Wednesday, September 12, 2012

Boyer Moore string Matching algorithm

             The Boyer-Moore string matching algorithm is an improved version of the naive (brute force) string matching algorithm. Unlike the brute force method, it tries to shift the pattern by more than one position at a time to minimize the running time of the matching process. It uses two heuristics:
  1. Looking-Glass Heuristic
  2. Character-Jump Heuristic
Now let us get an idea of these two heuristics.

Looking-Glass Heuristic

          We compare the pattern with the text starting from the end of the pattern and moving backward to its front. This comparison order is not always better than comparing from the first letter of the pattern to the last; sometimes it wins and sometimes it does not.
          As an example, take the pattern FOOD aligned against the text letters GOOD. Front-to-back comparison needs only one comparison before moving to the next position, because F and G mismatch immediately, but back-to-front comparison needs 4 comparisons. Therefore the Looking-Glass Heuristic is not always time-effective: it may be, or it may not be.

 Character-Jump Heuristic

          In this technique, when a mismatch occurs we try to shift the pattern by more than one position instead of moving it one position at a time. Let the text array be T[] and the pattern array be P[]. If T[i] mismatches P[j], we check whether the letter T[i] occurs anywhere in the part of P that has not yet taken part in the comparison at this position. If it does, we shift P so that that occurrence aligns with T[i]; if it does not, we shift the pattern completely past T[i].

According to the above two heuristics, we can say that,
  1. Looking-Glass heuristic enables to get to the destination faster by going backward if there is a mismatch during the consideration of P at a certain location in T.
  2. Character-Jump heuristic enables to avoid lots of needless comparisons by significantly shifting P relative to T.
There are three shifting rules used in the Boyer-Moore string matching algorithm:
  1. Bad Character Rule
  2. Extended Bad Character Rule
  3. Good Suffix Rule

Bad Character Rule

      The mismatching character of the text aligned with the pattern is called the bad character. For this rule we must find the right-most occurrence, in the pattern, of each letter of the alphabet. As an example, take
     Alphabet S={A,B,C,D,R,T}
     Pattern P= ARDCARA
     Text T=ABATARADABARDAARADABADATATABAT

Now let us find the right-most occurrence positions R of the alphabet letters in the pattern:
       R(A) = 7               R(B) = 0             R(C) = 4           R(D) = 3           R(R) = 6            R(T) = 0

ALGORITHM:
        if T[k] mismatches P[i]
               shift P along by max(1, i − R(T[k]))

In the figure from the original post, red boxes mark mismatches and green boxes mark matches.
Explanation of each row:
  1. The first row is the text and the second row is the pattern. 
  2. The last three letters matched and C mismatched. The bad character is T and i is 4. Find the right-most occurrence of T in the pattern: R(T) = 0. Then (i − R(T)) = 4 − 0 = 4 and max(1, 4) = 4, so move by 4 positions.   
  3. The last letter matched and R mismatched. The bad character is B and i is 6. R(B) = 0, so (i − R(B)) = 6 − 0 = 6 and max(1, 6) = 6; move by 6 positions. 
  4. The last three letters matched and C mismatched. The bad character is A and i is 4. R(A) = 7, so (i − R(A)) = 4 − 7 = −3 and max(1, −3) = 1; move by 1 position.  
  5. The last letter A mismatched. The bad character is D and i is 7. R(D) = 3, so (i − R(D)) = 7 − 3 = 4 and max(1, 4) = 4; move by 4 positions. 
  6. The last letter A mismatched. The bad character is D and i is 7 again. R(D) = 3, so the shift is again max(1, 4) = 4; move by 4 positions.
  7. The last letter A mismatched. The bad character is T and i is 7. R(T) = 0, so (i − R(T)) = 7 − 0 = 7 and max(1, 7) = 7; move by 7 positions.
From the above example we can see that       
      we get a shift of more than one position only when R(T[k]) + 1 < i; otherwise we shift the pattern by only one position. This rule is therefore not useful when R(T[k]) ≥ i − 1, which is why we move on to the extended bad character rule.

Extended Bad Character Rule 

         In this rule we do not stop at the right-most occurrences: we must find all the occurrences of each letter of the alphabet in the pattern. As an example, take
     Alphabet S={A,B,C,D,R,T}
     Pattern P= ARDCARA
     Text T=ABATARADABARDAARADABADATATABAT

Now let us find the occurrence positions of the alphabet letters in the pattern (listed from right to left):
     A_list = {7,5,1}            B_list = {Ø}           C_list = {4}        D_list = {3}
     R_list = {2,6}             T_list = {Ø}

In the figure from the original post, red boxes mark mismatches and green boxes mark matches.

Explanation of each row:
  1. The first row is the text and the second row is the pattern. 
  2. The last three letters matched and C mismatched. The bad character is T and i is 4. Its occurrence list is T_list = {Ø}, so move by 4 positions.   
  3. The last letter matched and R mismatched. The bad character is B and i is 6. B_list = {Ø}, so move by 6 positions. 
  4. The last three letters matched and C mismatched. The bad character is A and i is 4. A_list = {7,5,1}; the nearest value less than i is 1, so move by (4 − 1) = 3 positions. 
  5. The last letter A mismatched. The bad character is B and i is 7. B_list = {Ø}, so move by 7 positions.
  6. The last letter matched and R mismatched. The bad character is T and i is 6. T_list = {Ø}, so move by 6 positions.
From the above example we can see that       
      the extended bad character rule gives a shift of more than one position (three positions) in the fifth row, whereas the plain bad character rule gives only a one-position shift there. Therefore the Extended Bad Character Rule is more efficient than the Bad Character Rule.
       
You can find C Source Code for Boyer Moore String Matching Algorithm using extended bad character rule  from following link.
                   Boyer Moore String Matching Algorithm In C 

Pre-processing

             In the string matching algorithm above using the bad character rule, a step is performed before the matching itself begins: finding the right-most occurrence (or all the occurrences) of each alphabet letter in the pattern. This step is called pre-processing, and it also takes running time. Pre-processing is what lets the algorithm avoid the (n − m + 1) one-position shifts of the naive algorithm.
             The pre-processing time depends on the alphabet, because we find the occurrences of the alphabet letters in the pattern. The algorithm therefore works fastest when the alphabet is moderately sized and the pattern is relatively long.

 Running Time Analysis

             The Boyer-Moore string matching algorithm is not like the naive algorithm: as mentioned above, it takes time for pre-processing. Its running time is therefore the sum of the pre-processing time and the string matching time. 
Let : n - length of the text                  m - length of the pattern                    |Σ| - size of the alphabet
  1. Pre-processing time: in the worst case the bad character rule needs (|Σ| + m) operations, because we initialize a table entry for every letter of the alphabet (|Σ| operations) and then scan the pattern once to record the right-most occurrences (m operations). The running time is therefore O(|Σ| + m).
  2. String matching time: in the worst case the bad character rule needs about mn comparisons, because each of the (n − m + 1) alignments may require up to m comparisons before the pattern shifts by one position, giving m·(n − m + 1), or simply O(nm).
Therefore the worst-case running time of the Boyer-Moore algorithm is O(nm + |Σ|)   

Advantages

  1. The Boyer-Moore algorithm is extremely fast on large alphabets (relative to the length of the pattern).