# Simple Neural Network using TensorFlow, Part 1

## WHAT

A neural network is loosely modeled on the neurons of the human brain: nodes connect with each other. It consists of three types of layers: input, hidden, and output. If you feed data into a neural network, it will process it and produce a good (or bad) prediction.

Normally, it performs two phases: forward propagation and back-propagation. Today we will only talk about the feedforward pass; TensorFlow will figure out how to handle backpropagation for us.

## HOW

How do we make a neural network model? We need some ingredients to make this thing work. The main ingredients of a neural network are weights and biases. These two parameters are adjusted so that the model's output gets as close as possible to the result we want. The feedforward calculation is simple: each node performs two steps. First, multiply your inputs by the weights, then add a bias. The result is sent through an activation function and passed on to the next node for further processing. The process repeats until we reach the output layer.
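The two steps above can be sketched for a single node in plain Python. The weights, inputs, and bias below are hypothetical numbers chosen only to illustrate the calculation; the activation used here is the sigmoid, which this article adopts later.

```python
import math

def node_output(weights, inputs, bias):
    """One node: weighted sum of inputs plus bias, passed through a sigmoid."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes z into (0, 1)

# Hypothetical values, just to show the two steps:
# z = 0.5*1.0 + (-0.2)*2.0 + 0.1 = 0.2, then sigmoid(0.2) ≈ 0.55
a = node_output([0.5, -0.2], [1.0, 2.0], 0.1)
```

The output `a` of this node would then serve as one of the inputs to each node in the next layer.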

Once the process produces the final output, we can compare it with the actual result using a loss function. There are many loss functions out there, but today we will use the quadratic loss function.

$\frac{1}{2m}\sum_{i=1}^{m} (y_i-a_i)^2$

We compare each predicted result $a_i$ with the actual result $y_i$, and square the difference to make negative values positive. Then we take the mean over all examples, which is why this is also called the mean squared error. Besides that, we also divide by two. Why divide by two? It makes the derivative cleaner: the factor of 2 from differentiating the square cancels out. The next phase is back-propagation, which updates the weights and biases to minimize the error. But... we will not get into the details.
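The quadratic loss formula above can be written directly as a small Python function. The predicted and actual values below are made-up numbers purely for illustration.

```python
def quadratic_loss(predicted, actual):
    """Quadratic loss: mean of squared differences, divided by two."""
    m = len(actual)
    return sum((y - a) ** 2 for y, a in zip(actual, predicted)) / (2 * m)

# Two made-up examples: errors of 0.2 each,
# so loss = (0.04 + 0.04) / (2 * 2) = 0.02
loss = quadratic_loss([0.8, 0.2], [1.0, 0.0])
```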

## Experiment: MNIST handwritten digit classification

Let's get into the coding stuff. Set up your project. I will use TensorFlow to create the feedforward graph. Please make sure TensorFlow is installed in your environment.

To keep the neural network easy and simple, we will feed the MNIST handwritten digit dataset into the input layer.

How many input neurons do we actually need? If we look at the data, each image is 28×28, so we have 784 pixels (inputs) for one training example. So we will make a vector of 784 inputs to feed into the model.

$X=\begin{bmatrix}x_1 \\ . \\ .\\ . \\ x_{784}\end{bmatrix}$
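Building this vector just means flattening the 28×28 pixel grid row by row. A minimal sketch, using an all-zero placeholder image in place of a real MNIST sample:

```python
# Placeholder 28x28 pixel grid standing in for a real MNIST image.
image = [[0.0] * 28 for _ in range(28)]

# Flatten row by row into the 784-element input vector X.
x = [pixel for row in image for pixel in row]
```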

Then how many neurons do we need in the hidden layers? You can set and adjust this on your own. For this example, we will use 20 nodes for the first hidden layer and 15 nodes for the second hidden layer. For the output layer, we will use 10 nodes instead of a single node, one per digit. We will use the sigmoid function, which produces output between 0 and 1.
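As a rough sanity check of these layer sizes (784 → 20 → 15 → 10) before the TensorFlow version, here is a plain-Python sketch. The random weights and the dummy input pixels are hypothetical; this only demonstrates how the data flows through the planned layers.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, n_out):
    """Fully connected layer with small random weights and biases (illustration only)."""
    return [
        sigmoid(sum(random.uniform(-0.1, 0.1) * x for x in inputs)
                + random.uniform(-0.1, 0.1))
        for _ in range(n_out)
    ]

x = [0.5] * 784        # one flattened MNIST image (dummy pixel values)
h1 = layer(x, 20)      # first hidden layer: 20 nodes
h2 = layer(h1, 15)     # second hidden layer: 15 nodes
out = layer(h2, 10)    # output layer: one node per digit 0-9
```

Each output node lands in (0, 1) because of the sigmoid, which is what lets us read the 10 outputs as per-digit scores.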

Alright, we have planned our artificial neural network. I will write the code next time.