As mentioned in Chapter 11, you do not actually want the weight vector to replicate the input pattern exactly. That would amount to memorizing the input patterns, with no capacity for generalization.
For example, a typical use of this alphabet classifier system would be to process noisy data, such as handwritten characters. In such a case, you need a great deal of latitude in scoping a class for the letter A.
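To make that latitude concrete, here is a minimal sketch, not taken from the book's source, of how you might corrupt a stored 5x7 pattern to produce noisy test inputs; the function name and the use of rand() are illustrative assumptions. A network that has generalized, rather than memorized, should still assign the corrupted vector to the same winning neuron as the clean letter.

// Illustrative only: flip a few randomly chosen pixels of a 35-element
// (5x7) bitmap to simulate a noisy, handwritten-style character.
#include <cstdlib>
#include <vector>

std::vector<int> add_noise(std::vector<int> pattern, int flips) {
    for (int k = 0; k < flips; ++k) {
        int i = std::rand() % static_cast<int>(pattern.size());
        pattern[i] = 1 - pattern[i];        // flip one pixel: 0 <-> 1
    }
    return pattern;
}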
The next step is to add characters to the program's input and see what categories they end up in. Many alphabetic characters look alike, H and B for example, and you can expect the Kohonen classifier to group such look-alike characters into the same class.
We now modify the input.dat file to add the characters H, B, and I. The new file is shown as follows.
0 0 1 0 0 0 1 0 1 0 1 0 0 0 1 1 0 0 0 1 1 1 1 1 1 1 0 0 0 1 1 0 0 0 1
1 0 0 0 1 0 1 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 1 0 1 0 0 0 1
1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 1 1 1 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1
1 1 1 1 1 1 0 0 0 1 1 0 0 0 1 1 1 1 1 1 1 0 0 0 1 1 0 0 0 1 1 1 1 1 1
0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0
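Each line is one 35-element pattern: the seven rows of a 5x7 bitmap laid end to end, so the five lines encode A, X, H, B, and I in that order. As a hedged illustration (the helper below is hypothetical and not part of the book's kohonen program), this is one way such a row could be built and appended to input.dat:

// Sketch: turn a 5x7 character bitmap into the single row of 35
// space-separated values that input.dat expects.
#include <fstream>
#include <string>
#include <vector>

// Hypothetical helper: each string is one row of the 5x7 grid, '1' = pixel on.
void append_pattern(std::ofstream& out, const std::vector<std::string>& rows) {
    for (const std::string& row : rows)
        for (char c : row)
            out << (c == '1' ? 1 : 0) << ' ';
    out << '\n';                       // one pattern per line
}

int main() {
    std::ofstream out("input.dat", std::ios::app);   // append to the existing file
    append_pattern(out, {"10001",      // the letter H, row by row
                         "10001",
                         "10001",
                         "11111",
                         "10001",
                         "10001",
                         "10001"});
    return 0;
}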
The output using this input file is shown as follows.
done
>average dist per cycle = 0.732607 <-
>dist last cycle = 0.00360096 <-
->dist last cycle per pattern= 0.000720192 <-
>total cycles = 37 <-
>total patterns = 185 <-
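These figures are consistent with the five patterns now in the file: the distance in the last cycle per pattern is 0.00360096 / 5 = 0.000720192, and presenting 5 patterns on each of the 37 cycles accounts for the 185 total patterns.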
The file kohonen.dat with the output values is now shown as follows.
cycle   pattern   win index   neigh_size   avg_dist_per_pattern
0       0         69          5            100.000000
0       1         93          5            100.000000
0       2         18          5            100.000000
0       3         18          5            100.000000
0       4         78          5            100.000000
1       5         69          5            0.806743
1       6         93          5            0.806743
1       7         18          5            0.806743
1       8         18          5            0.806743
1       9         78          5            0.806743
2       10        69          5            0.669678
2       11        93          5            0.669678
2       12        18          5            0.669678
2       13        18          5            0.669678
2       14        78          5            0.669678
3       15        69          5            0.469631
3       16        93          5            0.469631
3       17        18          5            0.469631
3       18        18          5            0.469631
3       19        78          5            0.469631
4       20        69          5            0.354791
4       21        93          5            0.354791
4       22        18          5            0.354791
4       23        18          5            0.354791
4       24        78          5            0.354791
5       25        69          5            0.282990
5       26        93          5            0.282990
5       27        18          5            0.282990
...
35      179       78          5            0.001470
36      180       69          5            0.001029
36      181       93          5            0.001029
36      182       13          5            0.001029
36      183       19          5            0.001029
36      184       78          5            0.001029
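The win index column records the output neuron whose weight vector lies closest to the current input. As a rough sketch of that selection step (assuming Euclidean distance; this is not the book's kohonen.cpp listing, and the names are illustrative):

// Pick the winning output neuron for one input pattern.
// weights[j] holds the weight vector of output neuron j.
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

std::size_t find_winner(const std::vector<std::vector<double>>& weights,
                        const std::vector<double>& input,
                        double& winner_dist) {
    std::size_t winner = 0;
    winner_dist = std::numeric_limits<double>::max();
    for (std::size_t j = 0; j < weights.size(); ++j) {
        double d = 0.0;
        for (std::size_t i = 0; i < input.size(); ++i) {
            double diff = input[i] - weights[j][i];
            d += diff * diff;
        }
        d = std::sqrt(d);
        if (d < winner_dist) {          // keep the closest neuron so far
            winner_dist = d;
            winner = j;
        }
    }
    return winner;
}

The avg_dist_per_pattern column, constant within each cycle, is consistent with averaging this winning distance over the five patterns of a cycle.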
Again, the network has no problem classifying these vectors.
Up until cycle 21, both the H and the B were assigned to the same winner, output neuron 18. The network's ability to eventually distinguish these two vectors is largely due to the small tolerance we assigned as the termination criterion.
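As a sketch of the kind of convergence test implied here (the names and the tolerance value below are assumptions for illustration, not the book's actual settings):

// Continue training until the average distance per pattern in a cycle
// falls below a small tolerance, or a cycle budget is exhausted.
const double kTolerance = 0.001;    // assumed value; a looser tolerance could leave H and B in one class
const int    kMaxCycles = 500;      // assumed safety cap

bool keep_training(double avg_dist_per_pattern, int cycle) {
    return avg_dist_per_pattern > kTolerance && cycle < kMaxCycles;
}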