Perceptron

MaheswaraReddy
4 min read · Feb 28, 2021


A perceptron is a simple model: a single neuron with multiple inputs and one output.

The number of inputs depends on the data and its dimensionality. Weights are initialized for all the inputs at the start, including one for the bias. In the summation step, each input is multiplied by its corresponding weight and the products are summed. The sum is then passed to an activation function. The activation function is used only for classification problems; it is not used for regression. Common activation functions include sigmoid, softmax, tanh, ReLU, Leaky ReLU, etc.
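As a minimal sketch of this forward pass (assuming a sigmoid activation and a bias of 0, which are my choices for this example rather than fixed parts of the model), it can be written in a few lines of Python:

```python
import math

def perceptron_forward(inputs, weights, bias=0.0, classify=True):
    """One forward pass of a single perceptron."""
    # Summation step: weighted sum of the inputs plus the bias
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    if not classify:
        # Regression: return the raw sum, no activation
        return z
    # Classification: squash the sum with a sigmoid activation
    return 1.0 / (1.0 + math.exp(-z))
```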

Let me explain the perceptron with an example, using the data frame below. For now I treat it as a classification problem.

Here I1, I2 and I3 are the three inputs and the 'O' column is the output:

I1  I2  I3  O
 5   6   7  1
 3   4   5  0
 1   5   6  1

The weights are pre-initialised as 0.1, 0.2 and 0.3 for I1, I2 and I3 respectively (bias is taken as 0).

When the 1st row is exposed to the perceptron, the inputs 5, 6 and 7 are multiplied with the weights 0.1, 0.2 and 0.3 respectively, so the summation step gives 3.8. The result is passed to the activation function; here I use sigmoid. The activation gives 0.978, which is close to 1. Even taking 0.5 as the threshold (values above 0.5 are treated as 1 and values below as 0), 0.978 becomes 1. In this case the actual and predicted values are the same.
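The arithmetic for this row can be checked with a quick sketch that reuses the perceptron_forward helper defined above:

```python
row = [5, 6, 7]
weights = [0.1, 0.2, 0.3]

z = sum(x * w for x, w in zip(row, weights))
print(z)                                 # 0.5 + 1.2 + 2.1 ≈ 3.8
print(perceptron_forward(row, weights))  # sigmoid(3.8) ≈ 0.978 -> class 1
```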

When the 2nd row is exposed to the perceptron, the inputs 3, 4 and 5 are multiplied with the same weights 0.1, 0.2 and 0.3, and the summation step gives 2.6. The sigmoid activation gives 0.9308, which is close to 1, so with the 0.5 threshold it becomes 1. In this case the actual and predicted values are different.

When the 3rd row is exposed to the perceptron, the inputs 1, 5 and 6 are multiplied with the weights 0.1, 0.2 and 0.3, and the summation step gives 2.9. The sigmoid activation gives 0.9478, which again becomes 1 with the 0.5 threshold. In this case the actual and predicted values are the same.

The actual and predicted outputs are tabulated below for comparison; 1 out of 3 rows is wrongly classified.

Row  Actual  Predicted
 1      1        1
 2      0        1
 3      1        1
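The same comparison can be reproduced with a small loop (again a sketch that assumes the perceptron_forward helper from above):

```python
rows = [([5, 6, 7], 1), ([3, 4, 5], 0), ([1, 5, 6], 1)]
weights = [0.1, 0.2, 0.3]

for i, (inputs, actual) in enumerate(rows, start=1):
    prob = perceptron_forward(inputs, weights)
    predicted = 1 if prob > 0.5 else 0
    print(f"Row {i}: actual={actual}, predicted={predicted} (sigmoid={prob:.4f})")
```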

I use the same data frame to explain regression. In the regression case there is no activation function, so the raw summation outputs (3.8, 2.6 and 2.9) are taken as the predictions.
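In code this just means skipping the activation, for example via the classify=False flag from the sketch above:

```python
for inputs, _ in rows:
    print(perceptron_forward(inputs, weights, classify=False))
# ≈ 3.8, 2.6 and 2.9 - the raw weighted sums are the regression outputs
```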

In any machine learning or deep learning problem, three main things exist:

· Predict

· Compare

· Learn

The perceptron is the predict part; it does not contain compare or learn. In the examples above only prediction is done; no comparison and no learning takes place. It is a single feed-forward pass in which no loss is computed and no learning happens. Without learning, machine learning and deep learning are nothing, so a perceptron on its own is of little use. As part of a full neural network (feed forward plus backward propagation), though, it has a lot of power, and that power mostly comes from compare, learn and epochs.
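To show where compare and learn come in, here is a minimal training-loop sketch over the same three rows. It assumes plain gradient descent on a squared-error loss for a single sigmoid neuron; this is my illustration, not a method described above:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

rows = [([5, 6, 7], 1), ([3, 4, 5], 0), ([1, 5, 6], 1)]
weights = [0.1, 0.2, 0.3]
bias = 0.0
lr = 0.1

for epoch in range(100):
    for inputs, actual in rows:
        # Predict: forward pass (summation + activation)
        z = sum(x * w for x, w in zip(inputs, weights)) + bias
        predicted = sigmoid(z)
        # Compare: error between predicted and actual output
        error = predicted - actual
        # Learn: gradient of the squared error, pushed back into the weights
        grad = error * predicted * (1 - predicted)
        weights = [w - lr * grad * x for w, x in zip(weights, inputs)]
        bias -= lr * grad

print(weights, bias)
```

Each pass over the data is one epoch, and repeating the predict-compare-learn cycle over many epochs is what actually moves the weights away from their initial values.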

Thanks for your claps and likes.
