Let us run the example for which we have created an input file. The input.dat file has the characters A and X defined. A run of the program with these inputs is shown below:
Please enter initial values for:
alpha (0.01-1.0), and the neighborhood size (integer between 0 and 50)
separated by spaces, e.g., 0.3 5
0.3 5
Now enter the period, which is the number of cycles after which the values
for alpha and the neighborhood size are decremented
choose an integer between 1 and 500, e.g., 50
50
Please enter the maximum cycles for the simulation
A cycle is one pass through the data set.
Try a value of 500 to start with
500
Enter in the layer sizes separated by spaces.
A Kohonen network has an input layer followed by a Kohonen (output) layer
35 100
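The prompts above ask for four things: a learning rate alpha, a neighborhood size, a period after which both are decremented, and a maximum number of cycles. The short listing below is not the book's kohonen.cpp; it is only a sketch, using the values entered in this run, of how such a decrement schedule behaves. Halving alpha is an assumed rule chosen for illustration.

// Hedged sketch, not the book's kohonen.cpp: it only shows how the
// parameters entered above would be decremented every `period` cycles
// over `max_cycles` passes. Halving alpha is an assumed rule; the real
// program has its own decrement and also stops early once the average
// distance per pattern falls below its tolerance of 0.001.
#include <cstdio>

int main() {
    float alpha      = 0.3f;  // initial learning rate, as entered above
    int   neigh_size = 5;     // initial neighborhood size, as entered above
    int   period     = 50;    // cycles between decrements, as entered above
    int   max_cycles = 500;   // one cycle = one pass through input.dat

    for (int cycle = 1; cycle <= max_cycles; ++cycle) {
        // ... one pass through the patterns in input.dat would go here ...
        if (cycle % period == 0) {
            alpha *= 0.5f;                    // assumed decrement rule
            if (neigh_size > 0) --neigh_size; // shrink the neighborhood
            std::printf("cycle %d: alpha=%.4f  neigh_size=%d\n",
                        cycle, alpha, neigh_size);
        }
    }
    return 0;
}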
The output of the program is contained in the file kohonen.dat as usual, and shows the following result:
cycle   pattern   win index   neigh_size   avg_dist_per_pattern
  0        0         42           5            100.000000
  0        1         47           5            100.000000
  1        2         42           5              0.508321
  1        3         47           5              0.508321
  2        4         40           5              0.742254
  2        5         47           5              0.742254
  3        6         40           5              0.560121
  3        7         47           5              0.560121
  4        8         40           5              0.392084
  4        9         47           5              0.392084
  5       10         40           5              0.274459
  5       11         47           5              0.274459
  6       12         40           5              0.192121
  6       13         47           5              0.192121
  7       14         40           5              0.134485
  7       15         47           5              0.134485
  8       16         40           5              0.094139
  8       17         47           5              0.094139
  9       18         40           5              0.065898
  9       19         47           5              0.065898
 10       20         40           5              0.046128
 10       21         47           5              0.046128
 11       22         40           5              0.032290
 11       23         47           5              0.032290
 12       24         40           5              0.022603
 12       25         47           5              0.022603
 13       26         40           5              0.015822
 13       27         47           5              0.015822
 14       28         40           5              0.011075
 14       29         47           5              0.011075
 15       30         40           5              0.007753
 15       31         47           5              0.007753
 16       32         40           5              0.005427
 16       33         47           5              0.005427
 17       34         40           5              0.003799
 17       35         47           5              0.003799
 18       36         40           5              0.002659
 18       37         47           5              0.002659
 19       38         40           5              0.001861
 19       39         47           5              0.001861
 20       40         40           5              0.001303
 20       41         47           5              0.001303
The tolerance for the distance was set to 0.001 for this program, and the program was able to converge to this value. Both of the inputs were successfully classified into two different winning output neurons. Figures 12.2 and 12.3 show two snapshots of the input and weight vectors produced by this program. As you can see, the weight vector resembles the input, but it is not an exact replication.
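For readers who want to see the mechanics behind the win index and avg_dist_per_pattern columns, the sketch below is illustrative only; it is not the book's kohonen.cpp, and the function names are invented. It shows the usual Kohonen steps for one 35-element pattern: pick the output neuron whose weight vector is closest to the input, then move that weight vector a fraction alpha of the way toward the input. Because the weights only move partway toward the input each time, the winner's weight vector comes to resemble the character without ever copying it exactly, which is what Figures 12.2 and 12.3 show. The real program also updates the neurons within the current neighborhood of the winner; that part is omitted here for brevity.

// Illustrative sketch, not the book's code: winner search and weight
// update for one pattern. INPUTS and OUTPUTS match the layer sizes
// entered in the run above (35 and 100).
#include <cmath>

const int INPUTS  = 35;   // 35 input values per character pattern
const int OUTPUTS = 100;  // size of the Kohonen (output) layer

// Return the index of the output neuron whose weight vector is nearest
// (squared Euclidean distance) to the input pattern.
int find_winner(const float weights[OUTPUTS][INPUTS], const float input[INPUTS]) {
    int winner = 0;
    float best = 1e30f;
    for (int j = 0; j < OUTPUTS; ++j) {
        float dist = 0.0f;
        for (int i = 0; i < INPUTS; ++i) {
            float d = input[i] - weights[j][i];
            dist += d * d;
        }
        if (dist < best) { best = dist; winner = j; }
    }
    return winner;
}

// Pull the winner's weight vector a fraction alpha of the way toward the
// input and return the size of the move, which feeds into the average
// distance per pattern used as the convergence measure.
float update_winner(float weights[OUTPUTS][INPUTS], const float input[INPUTS],
                    int winner, float alpha) {
    float moved = 0.0f;
    for (int i = 0; i < INPUTS; ++i) {
        float delta = alpha * (input[i] - weights[winner][i]);
        weights[winner][i] += delta;
        moved += std::fabs(delta);
    }
    return moved;
}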
Figure 12.2 Sample screen output of the letter A from the input and weight vectors.
Figure 12.3 Sample screen output of the letter X from the input and weight vectors.