Data-310-Public-Raposo

Data 310 project summaries


7.9.20 Class exercise

1. What is TF Hub? How did you use it when creating your script for “text classification of movie reviews”?
TF Hub (TensorFlow Hub) is a repository of pretrained models and model components that can be dropped into a TensorFlow program. For the “text classification of movie reviews” script, we used hub.KerasLayer to wrap a pretrained text-embedding model (called hub_layer in the code) and added it as the first layer of the neural network; it converts each review’s raw text into a fixed-length vector of numbers that the rest of the model can interpret.
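A minimal sketch of that setup is below, assuming the small gnews-swivel embedding used in the TensorFlow tutorial (the exact module URL and layer sizes are assumptions, not copied from my script):

```python
import tensorflow as tf
import tensorflow_hub as hub

# Pretrained text embedding from TF Hub (assumed module; the tutorial uses
# this 20-dimensional gnews-swivel embedding).
embedding_url = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1"

# hub.KerasLayer wraps the pretrained embedding so it can sit at the front of
# a Keras model: it maps each raw review string to a 20-dimensional vector.
hub_layer = hub.KerasLayer(embedding_url, input_shape=[],
                           dtype=tf.string, trainable=True)

model = tf.keras.Sequential([
    hub_layer,
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),  # single logit: positive vs. negative review
])
```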

2. What are the optimizer and loss functions? How good was your “text classification of movie reviews” model?
We used the “adam” optimizer and the BinaryCrossentropy loss function. The loss function measures how far the model’s predictions are from the true labels, and the optimizer adjusts the model’s parameters (its weights) to reduce that loss, which in turn improves accuracy; together they act like a guided ‘guess and check’ loop. The model performed pretty well: it reached about 85% accuracy and the loss was relatively small too.
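As a rough sketch of how those two pieces fit together (continuing the model sketched above; train_data and validation_data are placeholder names for the IMDB review splits, and the batch sizes and epoch count are assumptions):

```python
# BinaryCrossentropy scores how wrong each positive/negative prediction is,
# and Adam nudges the weights on every batch to shrink that score.
model.compile(optimizer="adam",
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=["accuracy"])

# train_data / validation_data stand in for the IMDB review splits loaded
# elsewhere (e.g. with tensorflow_datasets).
history = model.fit(train_data.shuffle(10000).batch(512),
                    epochs=10,
                    validation_data=validation_data.batch(512),
                    verbose=1)
```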

3. In “text classification with preprocessed text” you produced a graph of training and validation loss. Add the graph to this response and provide a brief explanation.
This figure shows that as the model is trained (as the number of epochs increases and training loss decreases), it also becomes better at predicting the correct classification for data points it hasn’t seen yet, so validation loss, a measure of error on the held-out reviews, decreases as well. This makes sense because as the machine “sees” more examples, it gets a better idea of what it’s looking for in each classification and makes fewer misclassifications. It also appears that validation loss begins leveling off at around 6 epochs.
LINK TO GRAPH (LOSS)
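The graph itself comes from the history object returned by model.fit; a sketch of the plotting code, assuming the standard Keras metric names, looks like this:

```python
import matplotlib.pyplot as plt

# history.history holds one value per epoch for each tracked metric.
history_dict = history.history
epochs = range(1, len(history_dict["loss"]) + 1)

plt.plot(epochs, history_dict["loss"], "bo", label="Training loss")
plt.plot(epochs, history_dict["val_loss"], "b", label="Validation loss")
plt.title("Training and validation loss")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()
```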

4. Likewise do the same for the training and validation accuracy graph.
This figure shows that as the model is trained (as the number of epochs increases, training accuracy increases), it also becomes more accurate at predicting the correct classification for data points it hasn’t seen yet (validation accuracy increases). This makes sense because as the machine “sees” more examples, it gets a better idea of what it’s looking for in each classification and makes more correct classifications. It also appears that validation accuracy begins leveling off at around 6 epochs.
LINK TO GRAPH (ACCURACY)
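The accuracy graph is produced from the same history object; the only change from the loss plot above is which keys are read (again assuming the default "accuracy" metric name passed to compile):

```python
plt.clf()  # start a fresh figure after the loss plot
plt.plot(epochs, history_dict["accuracy"], "bo", label="Training accuracy")
plt.plot(epochs, history_dict["val_accuracy"], "b", label="Validation accuracy")
plt.title("Training and validation accuracy")
plt.xlabel("Epochs")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
```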