Neural Network Hello World – Create your first network with Keras

Till now we have seen the basics of NNs, and at this stage we can create our first neural network.

Problem Statement

First, let's define a problem statement. It will be an imaginary and very simple problem, but we will still use a NN to solve it. The problem is as follows:

Let's assume the numbers 1 to 2000 belong to a class we'll call "MEN", and the numbers 2001 to 4000 belong to a class we'll call "WOMEN". The job of our NN is to learn this rule from the data and then predict the class of numbers it hasn't seen.

P.S. This is very trivial if you think about it from a traditional programming point of view, but it makes a good hello-world 101 for NNs 🙂
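For comparison, here is the traditional-programming solution, just to show how trivial the rule is (the function name is purely illustrative):

def classify(n):
    # the rule our network will have to learn from examples instead
    return "MEN" if n <= 2000 else "WOMEN"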

Data Setup (Training Set)

Now we need to set up the data. We will build it with simple for loops and also add some noise (noise = deliberately mislabeled data).

from __future__ import print_function
import random 
import numpy as np
import keras
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.optimizers import Adam

training_data = []
training_labels = []


# actual data
# numbers 1 to 1999 get the label 1, which means MEN
# (note: Python's range() excludes the end value)
for i in range(1,2000):
  training_data.append(i)
  training_labels.append(1)

# numbers 2001 to 3999 get the label 0, which means WOMEN
for i in range(2001,4000):
  training_data.append(i)
  training_labels.append(0)

# let's add some noise, i.e. deliberately wrong labels:
# 100 numbers from the MEN range labeled as WOMEN...
for i in range(0,100):
  data = random.randint(1,2000)
  training_data.append(data)
  training_labels.append(0)

# ...and 100 numbers from the WOMEN range labeled as MEN
for i in range(0,100):
  data = random.randint(2001,4000)
  training_data.append(data)
  training_labels.append(1)
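As a quick sanity check, the counts follow directly from the loops above: the two range() calls each produce 1999 numbers, plus 200 noisy points.

# 1999 MEN + 1999 WOMEN + 200 noisy samples = 4198 data points in total
print(len(training_data), len(training_labels))  # 4198 4198

This total will show up again later: with a 10% validation split, Keras holds out the last 420 samples and trains on the remaining 3778.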

The Neural Network

Next, our neural network looks like this:

model = Sequential()
model.add(Dense(8, input_shape=(1,)))
model.add(Activation('relu'))
model.add(Dense(2))
model.add(Activation("softmax"))

model.summary()

model.compile(Adam(lr=.001), loss="sparse_categorical_crossentropy", metrics=['accuracy'])

This is a very simple NN, similar to what we have seen before. We covered the explanation for this architecture in our previous blogs; go through them if you have any doubts.
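As a quick way to read the model.summary() output, here is where the parameter counts come from (each Dense layer has one weight per input-unit pair plus one bias per unit):

# first Dense layer: 1 input feature x 8 units + 8 biases = 16 parameters
# output Dense layer: 8 units x 2 classes + 2 biases = 18 parameters
print(1 * 8 + 8 + 8 * 2 + 2)  # 34 trainable parameters in total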

Data Pre-Processing

One of the hardest parts of working with NNs is the data. In our case it is easy to pre-process. Keras usually expects input values scaled between 0 and 1; if we feed in large raw values, the results can be unexpected, so we use a scaling function to bring our data into this range. Also, Keras works with numpy arrays only. https://scikit-learn.org is a great Python package with a lot of supporting functions for our neural networks.

from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
scaled_train_samples = scaler.fit_transform(np.array(training_data).reshape(-1, 1))

With this code we scale our data to the range 0–1.
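Under the hood, MinMaxScaler applies the formula x_scaled = (x - x_min) / (x_max - x_min). A minimal check against a hand-rolled version of the same formula (the sample values are arbitrary):

import numpy as np
from sklearn.preprocessing import MinMaxScaler

x = np.array([1, 1000, 3999], dtype=float).reshape(-1, 1)
by_hand = (x - x.min()) / (x.max() - x.min())
by_sklearn = MinMaxScaler().fit_transform(x)
print(np.allclose(by_hand, by_sklearn))  # True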

Training Our Network

model.fit(scaled_train_samples, np.array(training_labels), validation_split=.1, batch_size=20, epochs=20, verbose=2, shuffle=True)

Finally, we feed data into our neural network: the training samples, the labels, and a few other parameters.

validation_split = an optional parameter that tells Keras to hold out 10% of the data as a validation set and report accuracy on it.

batch_size = the network consumes data in batches; this sets how many samples it processes at a time before updating its weights. For example, with 5000 data points and a batch size of 10, one full pass over the data takes 500 steps (see the quick calculation after this list). https://deeplizard.com/learn/video/U4WB9p6ODjM

epochs = the total number of passes we want our network to make over the entire data set.

shuffle = this makes sure our input data gets shuffled before being passed to the network. It is almost always better to do this, so the network sees a well-mixed data set and generalizes better.
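For our numbers concretely, here is a minimal sketch of the batches-per-epoch arithmetic (assuming the 4198-sample data set built above):

import math

num_samples = 4198                            # 3998 real + 200 noisy points
train_samples = int(num_samples * (1 - .1))   # Keras keeps the first 90% for training
steps_per_epoch = math.ceil(train_samples / 20)  # batch_size = 20
print(train_samples, steps_per_epoch)         # 3778 training samples, 189 weight updates per epoch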

Now when you run this code, you will get an output like this:

Train on 3778 samples, validate on 420 samples
Epoch 1/20
 - 0s - loss: 0.6185 - acc: 0.7509 - val_loss: 0.6075 - val_acc: 0.5976
Epoch 2/20
 - 0s - loss: 0.5274 - acc: 0.8711 - val_loss: 0.6115 - val_acc: 0.5714
Epoch 3/20
 - 0s - loss: 0.4471 - acc: 0.9140 - val_loss: 0.6541 - val_acc: 0.5405
Epoch 4/20
 - 0s - loss: 0.3698 - acc: 0.9502 - val_loss: 0.7340 - val_acc: 0.5357
Epoch 5/20
 - 0s - loss: 0.3017 - acc: 0.9714 - val_loss: 0.8450 - val_acc: 0.5262
Epoch 6/20
 - 0s - loss: 0.2474 - acc: 0.9839 - val_loss: 0.9888 - val_acc: 0.5310
Epoch 7/20
 - 0s - loss: 0.2074 - acc: 0.9876 - val_loss: 1.1293 - val_acc: 0.5262
Epoch 8/20
 - 0s - loss: 0.1784 - acc: 0.9899 - val_loss: 1.2655 - val_acc: 0.5262
Epoch 9/20
 - 0s - loss: 0.1566 - acc: 0.9936 - val_loss: 1.4088 - val_acc: 0.5262
Epoch 10/20
 - 0s - loss: 0.1403 - acc: 0.9936 - val_loss: 1.5367 - val_acc: 0.5238
Epoch 11/20
 - 0s - loss: 0.1274 - acc: 0.9958 - val_loss: 1.6778 - val_acc: 0.5262
Epoch 12/20
 - 0s - loss: 0.1172 - acc: 0.9905 - val_loss: 1.7902 - val_acc: 0.5262
Epoch 13/20
 - 0s - loss: 0.1085 - acc: 0.9955 - val_loss: 1.9054 - val_acc: 0.5238
Epoch 14/20
 - 0s - loss: 0.1016 - acc: 0.9929 - val_loss: 2.0173 - val_acc: 0.5238
Epoch 15/20
 - 0s - loss: 0.0956 - acc: 0.9942 - val_loss: 2.1369 - val_acc: 0.5262
Epoch 16/20
 - 0s - loss: 0.0902 - acc: 0.9950 - val_loss: 2.2169 - val_acc: 0.5262
Epoch 17/20
 - 0s - loss: 0.0856 - acc: 0.9960 - val_loss: 2.3506 - val_acc: 0.5262
Epoch 18/20
 - 0s - loss: 0.0818 - acc: 0.9958 - val_loss: 2.4432 - val_acc: 0.5238
Epoch 19/20
 - 0s - loss: 0.0781 - acc: 0.9966 - val_loss: 2.5346 - val_acc: 0.5238
Epoch 20/20
 - 0s - loss: 0.0749 - acc: 0.9995 - val_loss: 2.6402 - val_acc: 0.5262
<keras.callbacks.History at 0x7fe9ddec6c50>

Here you can see the loss and accuracy: by the end, the training accuracy is very high (99%) with very low loss.

This is good!

One thing worth noting: the validation accuracy (val_acc) stays near 52% even as the training accuracy climbs. That is because validation_split takes the last 10% of the data before shuffling, and in our case those last 420 samples include all 200 deliberately mislabeled noise points, so no model could score much above ~52% on them.

Predict Data

Now, once our neural network is trained, we can give it data to make predictions.

# generate 50 random numbers between 1 and 4000 as our control set
control_data = np.random.randint(low = 1, high = 4000, size = 50)

print(control_data)

# reuse the scaling fitted on the training data (transform, not fit_transform,
# so the control data is scaled with the same min/max as the training set)
final_data = scaler.transform(control_data.reshape(-1,1))


# predicted class per number (1 = MEN, 0 = WOMEN)
predictions = model.predict_classes(final_data, batch_size=10)
print(predictions)


# predicted probability per class
predictions = model.predict(final_data, batch_size=10)
print(predictions)

So in this code, we generate a random list of 50 numbers between 1 and 4000 and predict their classes. Note that we call transform rather than fit_transform here, so the control data is scaled using the min/max learned from the training data instead of being rescaled to its own range.

This is the outcome in my case

[ 698  929 1288 1758 1048 3762  524 2761 3836 3350  591  247 1465 2206
  370  510 1105 3977 2468  967 1390    7 1472  358  517 2685  107 2430
  365 1349  983 3633 3708 1532 1297 2358 1189 1756 1812 1516 3878 3571
  998 1782 2463 3580 1176 1703 1664 3102]
[1 1 1 1 1 0 1 0 0 0 1 1 1 0 1 1 1 0 0 1 1 1 1 1 1 0 1 0 1 1 1 0 0 1 1 0 1
 1 1 1 0 0 1 1 0 0 1 1 1 0]

As you can see, the outcome is 100% correct!
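If you would rather verify this programmatically than by eye, here is a minimal sketch (assuming control_data and final_data from above are still in scope):

# ground truth: numbers up to 2000 are class 1 (MEN), above 2000 are class 0 (WOMEN)
class_predictions = model.predict_classes(final_data, batch_size=10)
true_labels = (control_data <= 2000).astype(int)
print((class_predictions == true_labels).mean())  # 1.0 means every prediction matched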

Check out the entire NN here https://colab.research.google.com/drive/1UTBT4_vtMuTVaRCPJum_SziVNbDI0x0r

Next Steps

At this stage, it's best to play around with a few things. For example, if you change the learning rate to .0001 instead of .001, the accuracy drops a lot:

Epoch 17/20
 - 0s - loss: 0.4530 - acc: 0.8655 - val_loss: 0.6123 - val_acc: 0.5952
Epoch 18/20
 - 0s - loss: 0.4443 - acc: 0.8714 - val_loss: 0.6177 - val_acc: 0.5905
Epoch 19/20
 - 0s - loss: 0.4358 - acc: 0.8785 - val_loss: 0.6236 - val_acc: 0.5881
Epoch 20/20
 - 0s - loss: 0.4274 - acc: 0.8835 - val_loss: 0.6296 - val_acc: 0.5810
<keras.callbacks.History at 0x7f6be923cac8>

From being 99% accurate, it's down to 88% accurate!

Play around with different things: batch size, epochs, adding another dense layer, changing the optimizer or the loss function, and see how the results change.
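As one example of a variation to try (just a sketch; the extra layer size and learning rate here are arbitrary choices, not tuned values):

# same setup as above, but with a second hidden layer and a smaller learning rate
model = Sequential()
model.add(Dense(8, input_shape=(1,)))
model.add(Activation('relu'))
model.add(Dense(8))
model.add(Activation('relu'))
model.add(Dense(2))
model.add(Activation('softmax'))
model.compile(Adam(lr=.0001), loss="sparse_categorical_crossentropy", metrics=['accuracy'])
model.fit(scaled_train_samples, np.array(training_labels), validation_split=.1, batch_size=20, epochs=20, verbose=2, shuffle=True)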
