
TensorFlow: Getting Started with TensorFlow


TensorFlow Basics

The TensorFlow library can do most of the things that we can do with other numerical libraries. However, it takes a very different approach to doing so, and it can do a whole lot more cool stuff, which we'll eventually get into.
The major difference to take away from the remainder of this session is that instead of computing things immediately, we first define the things we want to compute later using what's called a graph. Everything in TensorFlow takes place in a computational graph, and running or evaluating anything in the graph requires a session. Let's take a look at how these both work, and then we'll get into why this is useful:

Variables
    We're first going to import the tensorflow library:       
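A minimal import cell might look like the following (a sketch, assuming TensorFlow 1.x, the graph-based API whose outputs appear throughout this tutorial):

```python
# TensorFlow 1.x (graph-based API) alongside NumPy for comparison.
import numpy as np
import tensorflow as tf
```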
   

    Let's take a look at how we might create a range of numbers. Using numpy, we could for instance use the linear space function:
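For example, a sketch of the NumPy version, which produces the output shown below:

```python
import numpy as np

# 100 evenly spaced values from -3 to 3, inclusive.
x = np.linspace(-3.0, 3.0, 100)
print(x)
print(x.shape)
print(x.dtype)
```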




[-3.         -2.93939394 -2.87878788 -2.81818182 -2.75757576 -2.6969697
 -2.63636364 -2.57575758 -2.51515152 -2.45454545 -2.39393939 -2.33333333
 -2.27272727 -2.21212121 -2.15151515 -2.09090909 -2.03030303 -1.96969697
 -1.90909091 -1.84848485 -1.78787879 -1.72727273 -1.66666667 -1.60606061
 -1.54545455 -1.48484848 -1.42424242 -1.36363636 -1.3030303  -1.24242424
 -1.18181818 -1.12121212 -1.06060606 -1.         -0.93939394 -0.87878788
 -0.81818182 -0.75757576 -0.6969697  -0.63636364 -0.57575758 -0.51515152
 -0.45454545 -0.39393939 -0.33333333 -0.27272727 -0.21212121 -0.15151515
 -0.09090909 -0.03030303  0.03030303  0.09090909  0.15151515  0.21212121
  0.27272727  0.33333333  0.39393939  0.45454545  0.51515152  0.57575758
  0.63636364  0.6969697   0.75757576  0.81818182  0.87878788  0.93939394
  1.          1.06060606  1.12121212  1.18181818  1.24242424  1.3030303
  1.36363636  1.42424242  1.48484848  1.54545455  1.60606061  1.66666667
  1.72727273  1.78787879  1.84848485  1.90909091  1.96969697  2.03030303
  2.09090909  2.15151515  2.21212121  2.27272727  2.33333333  2.39393939
  2.45454545  2.51515152  2.57575758  2.63636364  2.6969697   2.75757576
  2.81818182  2.87878788  2.93939394  3.        ]
(100,)
float64

Tensors

In TensorFlow, we could try to do the same thing using its linear space function:
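A sketch of the TensorFlow version (assuming the TensorFlow 1.x API), which produces the Tensor printout shown below:

```python
import tensorflow as tf

# tf.linspace mirrors np.linspace, but returns a symbolic Tensor
# rather than computed values.
x = tf.linspace(-3.0, 3.0, 100)
print(x)
```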

Tensor("LinSpace:0", shape=(100,), dtype=float32)

Instead of an np.array, we are returned a tf.Tensor. The name of it is "LinSpace:0". Wherever we see this colon 0, that just means "the output of". So the name of this Tensor is saying: the output of LinSpace.

Think of tf.Tensors the same way as you would the np.array. It is described by its shape, in this case only 1 dimension of 100 values, and it has a dtype, in this case float32. But unlike the np.array, there are no values printed here! That's because it actually hasn't computed its values yet. Instead, it just refers to the output of an operation which has already been added to TensorFlow's default computational graph. The result of that operation is the tensor that we are returned.


Graphs

Let's try and inspect the underlying graph. We can request the "default" graph where all of our operations have been added:


Operations

And from this graph, we can get a list of all the operations that have been added, and print out their names:

['LinSpace/start', 'LinSpace/stop', 'LinSpace/num', 'LinSpace']

So TensorFlow has named each of our operations to generally reflect what they are doing. There are a few parameters that are all prefixed by LinSpace, and then the last one is the operation which takes all of those parameters and creates the output of the linspace.

Tensors

We can request the output of any operation, which is a tensor, by asking the graph for the tensor's name:

<tf.Tensor 'LinSpace:0' shape=(100,) dtype=float32>

What I've done is asked for the tf.Tensor that comes from the operation "LinSpace".


Session

In order to actually compute anything in TensorFlow, we need to create a session. The session is responsible for evaluating the graph. Let's see how this works:

[-3.         -2.939394   -2.87878799 -2.81818175 -2.75757575 -2.69696975
 -2.63636351 -2.5757575  -2.5151515  -2.4545455  -2.3939395  -2.33333325
 -2.27272725 -2.21212125 -2.15151501 -2.090909   -2.030303   -1.969697
 -1.90909088 -1.84848475 -1.78787875 -1.72727275 -1.66666663 -1.6060605
 -1.5454545  -1.4848485  -1.42424238 -1.36363626 -1.30303025 -1.24242425
 -1.18181813 -1.12121201 -1.060606   -1.         -0.939394   -0.87878776
 -0.81818175 -0.75757575 -0.69696951 -0.63636351 -0.5757575  -0.5151515
 -0.4545455  -0.39393926 -0.33333325 -0.27272725 -0.21212101 -0.15151501
 -0.090909   -0.030303    0.030303    0.09090924  0.15151525  0.21212125
  0.27272749  0.33333349  0.3939395   0.4545455   0.5151515   0.57575774
  0.63636374  0.69696975  0.75757599  0.81818199  0.87878799  0.939394    1.
  1.060606    1.12121201  1.18181849  1.24242449  1.30303049  1.36363649
  1.4242425   1.4848485   1.5454545   1.60606098  1.66666698  1.72727299
  1.78787899  1.84848499  1.909091    1.969697    2.030303    2.090909
  2.15151548  2.21212149  2.27272749  2.33333349  2.3939395   2.4545455
  2.5151515   2.57575798  2.63636398  2.69696999  2.75757599  2.81818199
  2.87878799  2.939394    3.        ]
[-3.         -2.939394   -2.87878799 -2.81818175 -2.75757575 -2.69696975
 -2.63636351 -2.5757575  -2.5151515  -2.4545455  -2.3939395  -2.33333325
 -2.27272725 -2.21212125 -2.15151501 -2.090909   -2.030303   -1.969697
 -1.90909088 -1.84848475 -1.78787875 -1.72727275 -1.66666663 -1.6060605
 -1.5454545  -1.4848485  -1.42424238 -1.36363626 -1.30303025 -1.24242425
 -1.18181813 -1.12121201 -1.060606   -1.         -0.939394   -0.87878776
 -0.81818175 -0.75757575 -0.69696951 -0.63636351 -0.5757575  -0.5151515
 -0.4545455  -0.39393926 -0.33333325 -0.27272725 -0.21212101 -0.15151501
 -0.090909   -0.030303    0.030303    0.09090924  0.15151525  0.21212125
  0.27272749  0.33333349  0.3939395   0.4545455   0.5151515   0.57575774
  0.63636374  0.69696975  0.75757599  0.81818199  0.87878799  0.939394    1.
  1.060606    1.12121201  1.18181849  1.24242449  1.30303049  1.36363649
  1.4242425   1.4848485   1.5454545   1.60606098  1.66666698  1.72727299
  1.78787899  1.84848499  1.909091    1.969697    2.030303    2.090909
  2.15151548  2.21212149  2.27272749  2.33333349  2.3939395   2.4545455
  2.5151515   2.57575798  2.63636398  2.69696999  2.75757599  2.81818199
  2.87878799  2.939394    3.        ]
We could also explicitly tell the session which graph we want to manage. By default, it grabs the default graph, but we could have created a new graph like so:

And then used this graph only in our session.

array([-3.        , -2.939394  , -2.87878799, -2.81818175, -2.75757575,
       -2.69696975, -2.63636351, -2.5757575 , -2.5151515 , -2.4545455 ,
       -2.3939395 , -2.33333325, -2.27272725, -2.21212125, -2.15151501,
       -2.090909  , -2.030303  , -1.969697  , -1.90909088, -1.84848475,
       -1.78787875, -1.72727275, -1.66666663, -1.6060605 , -1.5454545 ,
       -1.4848485 , -1.42424238, -1.36363626, -1.30303025, -1.24242425,
       -1.18181813, -1.12121201, -1.060606  , -1.        , -0.939394  ,
       -0.87878776, -0.81818175, -0.75757575, -0.69696951, -0.63636351,
       -0.5757575 , -0.5151515 , -0.4545455 , -0.39393926, -0.33333325,
       -0.27272725, -0.21212101, -0.15151501, -0.090909  , -0.030303  ,
        0.030303  ,  0.09090924,  0.15151525,  0.21212125,  0.27272749,
        0.33333349,  0.3939395 ,  0.4545455 ,  0.5151515 ,  0.57575774,
        0.63636374,  0.69696975,  0.75757599,  0.81818199,  0.87878799,
        0.939394  ,  1.        ,  1.060606  ,  1.12121201,  1.18181849,
        1.24242449,  1.30303049,  1.36363649,  1.4242425 ,  1.4848485 ,
        1.5454545 ,  1.60606098,  1.66666698,  1.72727299,  1.78787899,
        1.84848499,  1.909091  ,  1.969697  ,  2.030303  ,  2.090909  ,
        2.15151548,  2.21212149,  2.27272749,  2.33333349,  2.3939395 ,
        2.4545455 ,  2.5151515 ,  2.57575798,  2.63636398,  2.69696999,
        2.75757599,  2.81818199,  2.87878799,  2.939394  ,  3.        ], dtype=float32)


(100,)
[100]
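The two shape printouts above, a TensorShape and a plain Python list, can be reproduced like this, assuming TensorFlow 1.x:

```python
import tensorflow as tf

x = tf.linspace(-3.0, 3.0, 100)

# A TensorShape, which prints as (100,):
print(x.get_shape())
# The same shape as a plain Python list, which prints as [100]:
print(x.get_shape().as_list())
```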
Let's apply what we have learned so far.

1 Part One - Compute the Mean
Instructions
Use Python, NumPy, Matplotlib, and pandas (or the csv module) to load the given dataset of images and create a montage of the dataset as a single H x W image. You'll need to make sure the TensorFlow placeholder receives (in our case) a 4-d array of N x H x W x C dimensions, meaning every image will need to be the same size! You can load an existing dataset of images, find your own images, or perhaps create your own images using a creative process such as painting, photography, or something along those lines.

First use TensorFlow to define a session. Then use TensorFlow to create an operation which takes your 4-d array and calculates the mean color image (H x W x 3) using the function tf.reduce_mean. Have a look at the documentation for this function to see how it works in order to get the mean of every pixel and get an image of (H x W x 3) as a result. You'll then calculate the mean image by running the operation you created with your session (e.g. sess.run(...)).


2 Part Two - Compute the Standard Deviation
Instructions
Now use TensorFlow to calculate the standard deviation and upload the standard deviation image, averaged across color channels, as a "jet" heatmap of the N images. This will be a little more involved, as there is no single operation in TensorFlow to do this for you. However, you can do it by calculating the mean image of your dataset as a 4-d array. To do this, you could write, e.g., mean_img_4d = tf.reduce_mean(imgs, axis=0, keep_dims=True) to give you a 1 x H x W x C dimension array calculated on the N x H x W x C imgs variable. The axis parameter says to calculate the mean over the 0th dimension, meaning for every possible (H, W, C), or for every pixel, you will have a mean composed over the N possible values it could have had, or what that pixel was for every possible image. This way, you can write imgs - mean_img_4d to give you an N x H x W x C dimension variable, with every image in your imgs array having been subtracted by mean_img_4d. If you calculate the square root of the expected squared differences of this resulting operation, you have your standard deviation!

In summary, you'll need to write something like: subtraction = imgs - tf.reduce_mean(imgs, axis=0, keep_dims=True), then reduce this operation using tf.sqrt(tf.reduce_mean(subtraction * subtraction, axis=0)) to get your standard deviation.


3 Part Three - Normalize the Dataset
Instructions
Using TensorFlow, we'll attempt to normalize your dataset using the mean and standard deviation.
            

We apply another type of normalization, to the range 0-1, just for the purposes of plotting the image. If we didn't do this, the range of our values would be somewhere between -1 and 1, and matplotlib would not be able to interpret the entire range of values. By rescaling our -1 to 1 valued images to 0-1, we can visualize them better.



4 Assignment
In the given dataset of handwritten digits:
• find the mean image
• find the standard deviation
• use both to normalize the data



Attachment: test.csv (49920k)
Prashant Bhattacharji,
Jan 19, 2019, 4:34 AM