Scaling/Min-Max scaling

MaheswaraReddy
3 min read · Feb 20, 2021


What is scaling?

Feature scaling is one of the most important transformations we need to apply to our data. Machine learning algorithms (mostly regression algorithms) don't perform well when the numerical inputs are on different scales.

When different features are on different scales, scaling converts them all to the same scale. Suppose we have two features, where the first is measured on a scale from 1 to 10 and the second on a scale from 1 to 100,000. If we calculate the mean squared error, the algorithm will mostly be busy optimizing the weights corresponding to the second feature instead of both features. The same applies when the algorithm uses distance calculations such as Euclidean or Manhattan distance: the second feature will dominate the result. So, if we scale the features, the algorithm will give equal priority to both.
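To make that concrete, here is a minimal sketch (with made-up numbers on those two scales) showing how the raw Euclidean distance is driven almost entirely by the large-scale feature, while the min-max scaled distance is not:

import numpy as np

# Two hypothetical samples: feature 1 lives on a 1-10 scale,
# feature 2 on a 1-100,000 scale (illustrative values only)
a = np.array([2.0, 30_000.0])
b = np.array([9.0, 31_000.0])

# Raw Euclidean distance is dominated by feature 2
print(np.linalg.norm(a - b))  # ~1000.02; the gap of 7 in feature 1 barely registers

# Min-max scale each feature with its known range: (x - min) / (max - min)
lo = np.array([1.0, 1.0])
hi = np.array([10.0, 100_000.0])
a_scaled = (a - lo) / (hi - lo)
b_scaled = (b - lo) / (hi - lo)

# Now both features contribute on comparable terms
print(np.linalg.norm(a_scaled - b_scaled))  # ~0.78, mostly from feature 1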

There are two common ways to get all attributes to have the same scale: min-max scaling and standardization.

I have already explained standardization in an earlier post; please have a look at the link below.

For min-max scaling, we subtract the min value from the actual value and divide by max minus min. Scikit-Learn provides a transformer called MinMaxScaler for this. It has a feature_range hyperparameter that lets you change the range if you don't want 0 to 1 for any reason.

class sklearn.preprocessing.MinMaxScaler(feature_range=(0, 1), *, copy=True, clip=False)

Mathematical Explanation

Min-Max scaling formula:

X_scaled = (X - X_min) / (X_max - X_min)

Consider two columns, A and B, with different scales.

Min Max Scale Math (min and max are taken per column: A has min 1 and max 4; B has min 1 and max 4000):

A     B       A scaled                 B scaled
1     1       (1-1)/(4-1) = 0.0000     (1-1)/(4000-1) = 0.0000
2     20      (2-1)/(4-1) = 0.3333     (20-1)/(4000-1) = 0.0048
3     300     (3-1)/(4-1) = 0.6667     (300-1)/(4000-1) = 0.0748
4     4000    (4-1)/(4-1) = 1.0000     (4000-1)/(4000-1) = 1.0000

After scaling, we can see that both the A and B columns are on the same scale, i.e. between 0 and 1.
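The table can be reproduced in a couple of lines of NumPy by applying the formula column-wise (a quick check using the values above):

import numpy as np

# Columns A and B from the table above
X = np.array([[1.0,    1.0],
              [2.0,   20.0],
              [3.0,  300.0],
              [4.0, 4000.0]])

# (x - min) / (max - min), computed per column
X_scaled = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
print(X_scaled)  # matches the scaled columns in the table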

We can also change the min and max of the output range. In the case below I selected 1 as the min and 2 as the max, and you can observe that all the values now fit between 1 and 2. Internally the scaler first computes the 0-to-1 value X_std and then maps it to the new range as X_scaled = X_std * (max - min) + min.

[Figure: the same values rescaled to the range 1 to 2]

Linear regression, logistic regression, and anything else that involves a matrix of weights are affected by the scale of the input, so scaling will improve model performance. Tree-based models, on the other hand, don't care about scaling.

When we use neural networks, remember that it is important to first normalize the input feature vectors, or else training may be much slower. Image processing and NLP also rely on neural networks, so scaling has to be performed on those features as well.
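A convenient way to guarantee this for any scale-sensitive model is to put the scaler and the estimator in a single Pipeline; here is a hedged sketch on synthetic data (the dataset and hyperparameters are placeholders of my own, not from the examples above):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

# Synthetic data purely for illustration
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The pipeline fits the scaler on the training data and re-applies it at predict time
for model in (LogisticRegression(), MLPClassifier(max_iter=1000, random_state=0)):
    pipe = make_pipeline(MinMaxScaler(), model)
    pipe.fit(X_train, y_train)
    print(type(model).__name__, pipe.score(X_test, y_test))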

The same values I used above in the mathematical explanation are used below to explain the code.

from sklearn.preprocessing import MinMaxScaler

# Columns A and B from the mathematical explanation above
data = [[1, 1], [2, 20], [3, 300], [4, 4000]]

scaler = MinMaxScaler()        # default feature_range is (0, 1)
scaler.fit(data)               # learns the per-column min and max
print(scaler.transform(data))  # applies (x - min) / (max - min) per column

Output:

[[0.         0.        ]
 [0.33333333 0.00475119]
 [0.66666667 0.07476869]
 [1.         1.        ]]
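As a side note, after fit the scaler exposes the per-column minima and maxima it learned, and inverse_transform maps scaled values back to the original units (standard MinMaxScaler behavior, continuing the snippet above):

print(scaler.data_min_)  # [1. 1.]       per-column minimum seen during fit
print(scaler.data_max_)  # [   4. 4000.] per-column maximum seen during fit

# Round-trip: scaling followed by inverse_transform recovers the original values
print(scaler.inverse_transform(scaler.transform(data)))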

When I change the min to 1 and the max to 2, the values are transformed to lie between 1 and 2.

scaler = MinMaxScaler(feature_range=(1, 2))  # scale into [1, 2] instead of [0, 1]
scaler.fit(data)
print(scaler.transform(data))

Output:

[[1.         1.        ]
 [1.33333333 1.00475119]
 [1.66666667 1.07476869]
 [2.         2.        ]]
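One practical caveat worth keeping in mind: fit the scaler on the training data only and reuse the fitted scaler on the test data, so no information about the test set leaks into the learned min and max. A minimal sketch (the tiny arrays here are placeholders of my own):

import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Placeholder train/test splits, just to show the pattern
X_train = np.array([[1.0], [2.0], [3.0], [4.0]])
X_test = np.array([[2.5], [5.0]])  # 5.0 lies outside the training range

scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train)  # learn min/max from training data only
X_test_scaled = scaler.transform(X_test)        # reuse the same min/max on the test data

print(X_test_scaled)  # [[0.5], [1.333...]] -- values can land outside [0, 1];
                      # MinMaxScaler(clip=True) clips them back if needed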
