
Wednesday, September 26, 2012

Classification by Back Propagation



Classification is a data mining (machine learning) technique used to predict group membership for data instances. It amounts to evaluating a function that assigns a class label to a data item. Classification is a supervised learning process: it uses a training set that contains the correct answers (the class label attribute). Classification proceeds in these steps: first, build a model by running the algorithm on the training data; then test the model, and if accuracy is low, regenerate it after changing features or reconsidering samples; finally, use the model to assign a class label to incoming new data. The problem here is to develop a classification model from the available training set, which first needs to be normalized. The normalized data is given to the back propagation algorithm for classification, and a genetic algorithm is then applied for weight adjustment. The resulting model can be applied to classify unknown tuples from the given database, and this information may be used by a decision maker to make useful decisions. If one can write down a flow chart or a formula that accurately describes the problem, a traditional programming method is the better choice; however, many data mining tasks cannot be solved efficiently with simple mathematical formulas. Large-scale data mining applications involving complex decision making can access billions of bytes of data, so the efficiency of such applications is paramount. Classification is a key data mining technique.
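To make the train-test-classify workflow concrete, here is a minimal sketch using scikit-learn's MLPClassifier, a standard back propagation network. The dataset and all parameter values are illustrative assumptions, and the sketch uses plain BP training only, not the hybrid BP-GA model described later.

```python
# Minimal sketch of the classification workflow: normalize the training
# set, fit a back propagation network, test it, then label new tuples.
# Dataset and settings are illustrative; plain BP only (no GA step).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scaler = MinMaxScaler()                  # normalize the training set
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
model.fit(X_train, y_train)              # build the model on training data

print("test accuracy:", model.score(X_test, y_test))   # test the model
print("predicted labels:", model.predict(X_test[:5]))  # classify new data
```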

 

An Artificial Neural Network (ANN) is a computational model based on the biological neural network, and is often simply called a Neural Network (NN). To build an artificial neural network, artificial neurons, also called nodes, are interconnected. The architecture of the NN is very important for performing a particular computation. Some neurons are arranged to take inputs from the outside environment. These neurons are not connected with each other, so they are arranged in a layer, called the input layer. All the neurons of the input layer produce output that becomes the input to the next layer. The architecture of an NN can be single layer or multilayer. A single layer neural network has only one input layer and one output layer, while a multilayer neural network can have one or more hidden layers.

 

An artificial neuron is an abstraction of biological neurons and the basic unit in an ANN. The artificial neuron receives one or more inputs and sums them to produce an output. Usually each input is weighted, and the sum is passed through a function known as an activation or transfer function. The objective here is to develop a data classification algorithm that can be used as a general-purpose classifier. To classify any database, the model must first be trained; the training algorithm proposed here is a hybrid BP-GA. After successful training, the user can give unlabeled data to the model to classify. An artificial neuron has the following components.

The synapses, or connecting links: these provide weights $w_j$ to the input values $x_j$, for $j = 1, \dots, m$.

An adder: this sums the weighted input values to compute the input to the activation function,

$$v = w_0 + \sum_{j=1}^{m} w_j x_j$$
where $w_0$, called the bias, is a numerical value associated with the neuron. It is convenient to think of the bias as the weight for an input $x_0$ whose value is always equal to one, so that

$$v = \sum_{j=0}^{m} w_j x_j, \qquad x_0 = 1.$$
An activation function $g$: this maps $v$ to $g(v)$, the output value of the neuron, and is a monotone function. A common choice is the logistic function, $g(v) = 1/(1 + e^{-v})$. Its practical value arises from the fact that it is almost linear in the range where $g$ is between 0.1 and 0.9, but has a squashing effect on very small or very large values.
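The following is a minimal sketch of this neuron model in Python with NumPy; the weight and input values are made up for illustration.

```python
# A single artificial neuron: weighted sum (adder) plus logistic activation.
# Weights and inputs here are arbitrary illustrative values.
import numpy as np

def logistic(v):
    """Logistic (sigmoid) activation: squashes v into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-v))

def neuron_output(x, w, w0):
    """Compute g(v) with v = w0 + sum_j w_j * x_j (bias w0 as weight of x0 = 1)."""
    v = w0 + np.dot(w, x)   # the adder
    return logistic(v)      # the activation function

x = np.array([0.5, 0.2, 0.8])    # inputs x_1..x_m
w = np.array([0.4, -0.6, 0.9])   # synapse weights w_1..w_m
print(neuron_output(x, w, w0=0.1))
```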

ANN Learning: Back Propagation Algorithm

 

The back propagation algorithm cycles through two distinct passes, a forward pass followed by a backward pass through the layers of the network. The algorithm alternates between these passes several times as it scans the training data.

 

Forward Pass: Computation of outputs of all the neurons in the network

 

• The algorithm starts with the first hidden layer, using as input values the independent variables of a case from the training data set.

• The neuron outputs are computed for all neurons in the first hidden layer by performing the relevant sum and activation function evaluations.

• These outputs are the inputs for the neurons in the second hidden layer. Again, the relevant sum and activation function calculations are performed to compute the outputs of the second layer neurons, as in the sketch after this list.
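Here is a minimal sketch of the forward pass through two hidden layers, assuming NumPy and the logistic activation from above; the layer sizes and weight values are illustrative.

```python
# Forward pass sketch: propagate one training case through two hidden
# layers. Layer sizes, weight matrices, and inputs are illustrative.
import numpy as np

def logistic(v):
    return 1.0 / (1.0 + np.exp(-v))

def layer_forward(inputs, W, b):
    """Outputs of one layer: activation of (weights @ inputs + biases)."""
    return logistic(W @ inputs + b)

rng = np.random.default_rng(0)
x = rng.random(4)                           # independent variables of one case

W1, b1 = rng.random((5, 4)), rng.random(5)  # first hidden layer (5 nodes)
W2, b2 = rng.random((3, 5)), rng.random(3)  # second hidden layer (3 nodes)

h1 = layer_forward(x, W1, b1)    # outputs of first hidden layer
h2 = layer_forward(h1, W2, b2)   # these outputs feed the second layer
print(h2)
```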

 

Backward pass: Propagation of error and adjustment of weights

 

• This phase begins with the computation of the error at each neuron in the output layer. A popular error function is the squared difference $(y_k - o_k)^2$ between $o_k$, the output of node $k$, and $y_k$, the target value for that node.

• The target value is just 1 for the output node corresponding to the class of the exemplar and zero for other output nodes.

• The new value of the weight $w_{jk}$ of the connection from node $j$ to node $k$ is given by $w_{jk}^{\text{new}} = w_{jk}^{\text{old}} + \eta\, o_j \delta_k$, where $o_j$ is the output of node $j$ and $\delta_k$ is the error at node $k$. Here $\eta$ is an important tuning parameter that is chosen by trial and error through repeated runs on the training data. Typical values for $\eta$ are in the range 0.1 to 0.9.

• The backward propagation of weight adjustments along these lines continues until we reach the input layer.

• At this point we have a new set of weights on which we can make a new forward pass when presented with a training data observation; a sketch of the output layer update follows this list.
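Below is a minimal sketch of the output layer weight update just described, assuming NumPy and logistic output nodes (whose derivative is $o(1-o)$); all values are illustrative.

```python
# Backward pass sketch for the output layer: compute the error delta at
# each output node k and apply w_jk_new = w_jk_old + eta * o_j * delta_k.
# All values below are illustrative.
import numpy as np

eta = 0.5                            # learning rate, typically 0.1 to 0.9
o_j = np.array([0.2, 0.7, 0.9])      # outputs of nodes j in previous layer
o_k = np.array([0.6, 0.3])           # outputs of output nodes k
y_k = np.array([1.0, 0.0])           # targets: 1 for the true class, else 0

# Delta for logistic output nodes: derivative o(1-o) times (target - output).
delta_k = o_k * (1.0 - o_k) * (y_k - o_k)

W = np.zeros((3, 2))                       # weights w_jk from node j to node k
W_new = W + eta * np.outer(o_j, delta_k)   # w_jk += eta * o_j * delta_k
print(W_new)
```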

Parameters to consider when building the BP algorithm

 

Initial weight range (r): The weights are initialized within the range [-r, r].

Number of hidden layers: Up to four hidden layers can be specified; see the overview section for more detail on layers in a neural network (input, hidden and output). Let us specify the number to be 1.

Number of nodes in hidden layer: Specify the number of nodes in each hidden layer. Selecting the number of hidden layers and the number of nodes is largely a matter of trial and error.

 

Number of Epochs: An epoch is one sweep through all the records in the training set. Increasing this number will likely improve the accuracy of the model, but at the cost of time, and decreasing this number will likely decrease the accuracy, but take less time.

 

Step size (Learning rate) for gradient descent: This is the multiplying factor for the error correction during back propagation; it is roughly equivalent to the learning rate for the neural network. A low value produces slow but steady learning; a high value produces rapid but erratic learning. Values for the step size typically range from 0.1 to 0.9.

 

Error tolerance: The error in a particular iteration is back propagated only if it is greater than the error tolerance. Typically error tolerance is a small value in the range 0 to 1.

 

Hidden layer sigmoid: The output of every hidden node passes through a sigmoid function. The standard sigmoid function is the logistic function, whose range is between 0 and 1.
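As a summary, here is a sketch that gathers these parameters in one place; the class name and the default values are illustrative assumptions, not prescribed settings.

```python
# Illustrative container for the BP training parameters listed above.
# The class name and defaults are assumptions made for this sketch.
from dataclasses import dataclass, field

@dataclass
class BPParams:
    initial_weight_range: float = 0.5   # weights drawn from [-r, r]
    num_hidden_layers: int = 1          # up to four can be specified
    nodes_per_hidden_layer: list = field(default_factory=lambda: [8])
    num_epochs: int = 500               # sweeps through the training set
    learning_rate: float = 0.5          # step size, typically 0.1 to 0.9
    error_tolerance: float = 0.01       # back propagate only larger errors
    hidden_activation: str = "logistic" # sigmoid with range (0, 1)

params = BPParams()
print(params)
```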

 

Why choose a Back Propagation Neural Network?

• A study comparing feed forward networks, recurrent neural networks and time-delay neural networks shows that the highest correct classification rate is achieved by the fully connected feed forward neural network.

• From table 3.1, it can be seen that the results obtained from BPNN are better than those obtained from MLC.

• From the results in table 3.3, it can be seen that MLP has the highest classification rate with reasonable error, and the time taken to classify is also reasonably low.

• From table 3.3, it can be seen that, considering some of the performance parameters, BPNN is better than the other methods: GA, KNN and MLC.
