Activation Functions

 Activation functions are used in a model to introduce nonlinearity. You can think of nonlinearity as a curved line: without it, a model can only use straight lines to fit its parameters to the data.




  • Relu

The ReLU activation function can be used anywhere in the model; it is used after the final layer only if your output should be a non-negative number. ReLU returns zero if the input is negative, and if the input is positive it passes through unchanged.

The main disadvantage of ReLU is that its output (and therefore its gradient) is zero whenever the input is negative.

ReLU Range: 0 to +inf
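A minimal NumPy sketch of ReLU (the function name and sample values here are illustrative):

```python
import numpy as np

def relu(x):
    # ReLU keeps positive inputs unchanged and maps negatives to zero
    return np.maximum(0, x)

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(x))  # negatives become 0, positives pass through
```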


  • LeakyReLU

The Leaky ReLU activation function is used in the intermediate layers of deep learning models. Unlike ReLU, it keeps a small, non-zero output for negative inputs, so gradients can still flow through them.

Leaky ReLU Range: -inf to +inf
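A sketch of Leaky ReLU in NumPy, assuming a typical small slope of 0.01 for negative inputs:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # Negative inputs are scaled by a small slope (alpha) instead of being zeroed
    return np.where(x > 0, x, alpha * x)

x = np.array([-100.0, -1.0, 0.0, 2.0])
print(leaky_relu(x))  # negatives shrink but stay non-zero
```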


There are some other commonly used activation functions that contribute to the vanishing gradient problem: they squash large positive numbers toward 1 and negative numbers toward 0. If you use such an activation function in multiple layers of a model, the values keep getting smaller, and so do their gradients.


  • Sigmoid

Sigmoid is commonly used after the last layer of a deep learning model, where its output can be interpreted as a probability. It is avoided in intermediate layers because it is one of the activation functions that contribute to the vanishing gradient problem.

Sigmoid Range: 0 to 1
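A small NumPy sketch of sigmoid, showing how it squashes any input into the (0, 1) range:

```python
import numpy as np

def sigmoid(x):
    # Squash inputs into (0, 1); large positives approach 1, large negatives approach 0
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(np.array([-10.0, 0.0, 10.0])))
```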


  • Tanh

The tanh function is mostly used in the intermediate layers of a model.

Tanh Range: -1 to +1


  • Softmax

The softmax function is used when a model has multiple mutually exclusive classes. For example, if a picture can contain a human, a dog, or a car, there are three classes, but a single image belongs to only one of them at a time.

Softmax Range: 0 to 1

In the example above the model outputs 3 numbers, such as [0.2, 0.3, 0.5], whose sum equals 1. You can interpret this as a 20% chance it is a Human, a 30% chance it is a Dog, and a 50% chance it is a Car.
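A sketch of softmax in NumPy; the raw scores below are illustrative, not the exact values that would produce [0.2, 0.3, 0.5]:

```python
import numpy as np

def softmax(logits):
    # Subtracting the max improves numerical stability without changing the result
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

scores = np.array([1.0, 1.4, 1.9])  # raw scores for Human, Dog, Car (illustrative)
probs = softmax(scores)
print(probs, probs.sum())  # probabilities that sum to 1
```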

Deep Learning Model parameter selection

 When training deep learning models you have to set several parameters, and there is no fixed recipe to follow. You can start by selecting some parameters at random, look at the results, and then adjust the parameters based on what you observe.

You can also use a genetic algorithm: train several randomly configured deep learning models, select the best among them, and combine the parameters of those best models to create new models.

Overfitting in Deep Learning, Neural Networks

 Overfitting is a problem in which the model memorizes the original data it was trained on. An overfit model does not work well on new data, and in the real world you encounter new data every day.


Solution

To avoid overfitting you can collect more data. With much more data the model cannot memorize every example, so it is forced to find a simpler pattern that still gives the correct answer.

Dropout

You can also use dropout, which drops a neuron by setting its output to zero; you define the probability with which each neuron is dropped. Since a neural network learns by adjusting the weights of its neurons, randomly dropping neurons makes it harder for the network to memorize the data, so the model learns the general pattern instead.
You can also drop a whole block of layers, such as a residual block, or drop a connection into a layer.
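A minimal sketch of "inverted" dropout in NumPy (frameworks like Keras implement this for you; the rate and array values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, rate=0.5):
    # Zero each neuron with probability `rate`, then scale the survivors
    # by 1/(1-rate) so the expected sum of activations stays the same
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

out = dropout(np.ones(1000), rate=0.5)
print(out[:10])  # roughly half the values are zeroed, the rest scaled to 2.0
```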

Regularization

Regularization is a technique that modifies the loss of a neural network based on the sum of its weights. The outcome is that differences between weights are smoothed out: the weights end up close to each other, with less variance among them.
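As a sketch, L2 regularization adds the sum of squared weights (scaled by a coefficient, here called `lam`) to the base loss; the numbers below are illustrative:

```python
import numpy as np

def l2_regularized_loss(base_loss, weights, lam=0.01):
    # Penalize large weights: the added term grows with the squared weights,
    # which pushes the network toward small, similar weight values
    return base_loss + lam * np.sum(weights ** 2)

w = np.array([0.5, -1.2, 3.0])
print(l2_regularized_loss(0.8, w))
```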

Normalization

Normalization is a method for rescaling values into a certain range, such as between -1 and 1. Similarly, you can normalize the output of a neural network layer using batch normalization or instance normalization.
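A sketch of min-max normalization into [-1, 1] in NumPy (the helper name and values are illustrative):

```python
import numpy as np

def scale_to_range(x, low=-1.0, high=1.0):
    # Min-max scaling: map values linearly so the smallest becomes `low`
    # and the largest becomes `high`
    x01 = (x - x.min()) / (x.max() - x.min())
    return x01 * (high - low) + low

x = np.array([10.0, 20.0, 30.0, 40.0])
print(scale_to_range(x))  # smallest value maps to -1, largest to 1
```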

Keras For Deep Learning

 Keras is an API used for deep learning; it uses TensorFlow for computation. Keras automatically calculates gradients for you, along with other complex calculations.

Using Keras is very easy; first you have to install it.

There are two popular version combinations when working with Keras:

keras 2.2.4 with tensorflow 1.13.1

keras 2.4.x with tensorflow 2.x

where x means any version.


Normalize data using sklearn in python machine learning and data science

 


import numpy as np
from sklearn import preprocessing

#We imported a couple of packages. Let's create some sample data and add the line to this file:

input_data = np.array([[3, -1.5, 3, -6.4], [0, 3, -1.3, 4.1], [1, 2.3, -2.9, -4.3]])

data_normalized = preprocessing.normalize(input_data, norm='l1')
print("\nL1 normalized data =", data_normalized)

Mean Removal from data in python using sklearn machine learning and data science

 


import numpy as np
from sklearn import preprocessing

#We imported a couple of packages. Let's create some sample data and add the line to this file:

input_data = np.array([[3, -1.5, 3, -6.4], [0, 3, -1.3, 4.1], [1, 2.3, -2.9, -4.3]])

data_standardized = preprocessing.scale(input_data)
print("\nMean =", data_standardized.mean(axis=0))
print("Std deviation =", data_standardized.std(axis=0))

Label Encoding for machine learning and data science in python using sklearn


from sklearn import preprocessing

label_encoder = preprocessing.LabelEncoder()
input_classes = ['suzuki', 'ford', 'suzuki', 'toyota', 'ford', 'bmw']
label_encoder.fit(input_classes)
print("\nClass mapping:")
for i, item in enumerate(label_encoder.classes_):
    print(item, '-->', i)

labels = ['toyota', 'ford', 'suzuki']
encoded_labels = label_encoder.transform(labels)
print("\nLabels =", labels)
print("Encoded labels =", list(encoded_labels))

# decoding

encoded_labels = [3, 2, 0, 2, 1]
decoded_labels = label_encoder.inverse_transform(encoded_labels)
print("\nEncoded labels =", encoded_labels)
print("Decoded labels =", list(decoded_labels))

Prime Number Calculation

 # -*- coding: utf-8 -*-

"""
Created on Fri Nov 9 19:33:46 2018

@author: Faheem Khaskheli
"""

import numpy as np

# Resume from the previously saved limit, or start fresh from 2
# (starting at 1 would wrongly add 1 to the prime list)
try:
    start = int(np.load("limit.npy"))
except FileNotFoundError:
    start = 2
limit = start + 10**5
print(limit)

# Load previously found primes, or seed the list on the first run
try:
    prime = np.load("prime_number.npy")
except FileNotFoundError:
    prime = np.array([2, 3, 5, 7])

def primenumbers(prime):
    for i in range(start, limit):
        p = True
        for j in range(0, prime.size):
            mod = i % prime[j]
            if mod == 0:
                p = False
                break
        if p:
            prime = np.append(prime, [i])
            print(i)
    return prime

prime = primenumbers(prime)
np.save("prime_number.npy",prime)
np.save("limit.npy",limit)
print(prime)

Image histogram in python


import cv2
import numpy as np
from matplotlib import pyplot as plt

img = cv2.imread('Dragon-ball-Super-100-Jiren.jpg',0)
plt.hist(img.ravel(),256,[0,256])

plt.show()

Saving Calculation in python

 This code is written in Python; you can use it to calculate your return on savings. You have to specify your initial credit and monthly saving, along with the number of years to save.



initial_credit = 200_000  # initial saving amount
years = 10  # years to save
interest = 0.03  # interest rate per year
credits = initial_credit
monthly_investment = 10_000  # monthly saving


for y in range(years):
    for m in range(12):
        monthly_interest = credits * (interest / 12)  # calculate monthly profit
        # add the monthly profit and the monthly saving to credits
        credits += monthly_interest + monthly_investment
        # show the new credit value and the monthly profit from interest alone
        print(credits, ", ", monthly_interest)

Residual Block in Deep Learning model

 

Residual Block

Residual blocks are very useful in deep learning models because the model can effectively skip them when they are not needed: a skip connection passes the original input straight through to the output. They are also used to mitigate the vanishing gradient problem.


A residual block's architecture includes an input layer that passes its output both to the next layer and directly to the layer after it, as shown in the figure above. That way the block is used if it is needed, and effectively skipped otherwise. The gradient through the block has two paths to follow: the first path goes through the middle layer, while the second is the curved skip path with no layer in between. The gradient on the first path is modified based on the block's error, while the gradient on the skip path remains the same. Where these gradients merge, they are added together.

This is the simplest architecture of a residual block; you can use it anywhere in the model. When used in the middle of a model, the input layer is simply the previous layer. The figure above shows a residual block based on a single CNN layer, but you can build a residual block from multiple layers of different types as well. You just have to be careful about the output dimensions of those layers: since the two paths merge together, they must have matching dimensions.
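The add-the-two-paths idea can be sketched in NumPy, using a single linear layer plus ReLU as a stand-in for the block's CNN layer (all names and shapes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_block(x, weight):
    # Main path: one linear layer followed by ReLU (stand-in for the CNN layer)
    main = np.maximum(0, x @ weight)
    # Skip path: the input passes through unchanged; since the two paths are
    # added, they must have the same shape
    return main + x

x = rng.standard_normal((1, 4))
w = rng.standard_normal((4, 4))  # square weight keeps input/output dimensions equal
out = residual_block(x, w)
print(out.shape)  # (1, 4)
```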


Data Augmentation

 

Data Augmentation

Data augmentation is a technique for creating new training images from the images you already have, for example by rotating an image, zooming in, zooming out, or cropping.

Spatial Data Augmentation

In spatial data augmentation you rotate, crop, zoom in, or zoom out of the images.

Photometric data Augmentation

In photometric data augmentation you blur, sharpen, or change the colors of the images.
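Both kinds of augmentation can be sketched with NumPy alone, using a tiny array as a stand-in for an image:

```python
import numpy as np

img = np.arange(12, dtype=np.uint8).reshape(3, 4)  # tiny stand-in for an image

flipped = np.fliplr(img)   # horizontal flip (spatial)
rotated = np.rot90(img)    # 90-degree rotation (spatial)
# brightness shift (photometric); clip so values stay valid pixel intensities
brighter = np.clip(img.astype(int) + 50, 0, 255).astype(np.uint8)

print(flipped.shape, rotated.shape, brighter.max())
```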

Normalization of image in python

 from sklearn import preprocessing
import cv2

# You can normalize each RGB channel of an image separately and then recombine them.

path = "image.jpg"  # placeholder: path to your image file
img_data = cv2.imread(path)

b, g, r = cv2.split(img_data)

data_scaler = preprocessing.MinMaxScaler(feature_range=(0, 1))
b = data_scaler.fit_transform(b)
g = data_scaler.fit_transform(g)
r = data_scaler.fit_transform(r)

img_data = cv2.merge((b, g, r))

Comparing while loop and for loop in python

Using the following code I found that the for loop is faster in Python.

import time

i = 0

st = time.time()
while i < 100_000_000:
    a = 0
    i += 1
print(time.time() - st)

i = 0
st = time.time()
while True:
    a = 0
    if i < 100_000_000:
        i += 1
    else:
        break
print(time.time() - st)

st = time.time()
for i2 in range(100_000_000):
    a = 0
print(time.time() - st)

Results
E:\Anaconda\python.exe E:/Projects/Test/t.py
14.323773622512817
14.415262937545776
9.459656000137329

Process finished with exit code 0
