{ "cells": [ { "cell_type": "code", "execution_count": 1, "metadata": { "scrolled": true }, "outputs": [ { "data": { "application/javascript": [ "IPython.OutputArea.prototype._should_scroll = function(lines) {\n", " return false;\n", "}" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "%%javascript\n", "IPython.OutputArea.prototype._should_scroll = function(lines) {\n", " return false;\n", "}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Feature Extraction and Machine Learning Techniques for Musical Genre Determination
by [Rosy Davis](mailto:rosydavis@ieee.org), CSUN MSEE 2017\n", "\n", "## Introduction\n", "\n", "This notebook runs the neural network models for my master's project, \"Feature Extraction\n", "and Machine Learning Techniques for Musical Genre Determination,\" for which I will be\n", "receiving a Master of Science in \n", "[Electrical Engineering](https://www.csun.edu/engineering-computer-science/electrical-computer-engineering/) \n", "from [California State University, Northridge](https://www.csun.edu/) in December 2017. \n", "My advisor at CSUN is \n", "[Dr. Xiyi Hang](http://www.csun.edu/faculty/profiles/xiyi.hang.14). The accompanying paper is not currently public, but when it is released, a link will be added to this page. Only a partial list of the most relevant references appears in this notebook; the full list appears in the accompanying paper.\n", "\n", "In this project, two approaches to musical genre classification were investigated: the use of support vector classification on Mel-frequency cepstral coefficient (MFCC) features (Experiment 1, the \"[SVCModels.ipynb](SVCModels.ipynb)\" notebook), and the use of neural networks on image data generated via the discrete wavelet transform (DWT) (Experiments 2-5, this notebook).\n", "\n", "The models used include a shallow two-layer neural network (abbreviated in the code as '`fcnn`', for **f**ully-**c**onnected **n**eural **n**etwork) and four convolutional models ([Inception V3](https://keras.io/applications/#inceptionv3), [Xception](https://keras.io/applications/#xception), [ResNet50](https://keras.io/applications/#resnet50), and [VGG16](https://keras.io/applications/#vgg16)), all instantiated with Keras. For the four convolutional models, the ImageNet weights were loaded at instantiation to speed training. 
Further training was performed via Google Compute Engine (8 x vCPU, 52 GB memory, 1 x NVIDIA Tesla K80) to bring training time down within reasonable limits.\n", "\n", "The dataset used was the [FMA music dataset](https://github.com/mdeff/fma). For neural network processing, the audio in the FMA dataset was first processed by generating time-frequency images from the MP3 audio via the discrete wavelet transform (see \"[generate_wavelets.py](generate_wavelets.py)\").\n", "\n", "### Contents\n", "* [Setup](#Setup)\n", "* [Experiment 2: Optimizer Cross-Validation](#Experiment-2:-Optimizer-Cross-Validation)\n", "* [Experiment 3: Small Dataset Without Augmentation](#Experiment-3:-Small-Dataset-Without-Augmentation)\n", "* [Experiment 4: Small Dataset With Augmentation](#Experiment-4:-Small-Dataset-With-Augmentation)\n", "* [Experiment 5: Extended Dataset](#Experiment-5:-Extended-Dataset)\n", "* [Test Set Evaluation](#Test-Set-Evaluation)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Setup\n", "\n", "### Imports" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "The autoreload extension is already loaded. 
To reload it, use:\n", " %reload_ext autoreload\n" ] } ], "source": [ "# For nice tables/display helpers:\n", "from IPython.display import display, Audio, HTML\n", "import time # for sleep()\n", "from datetime import datetime\n", "\n", "# Tensorflow (for checking GPU):\n", "import tensorflow as tf\n", "from tensorflow.python.client import device_lib\n", "\n", "# Keras:\n", "import keras\n", "import keras.applications \n", "\n", "# Math (mostly ceil):\n", "import math\n", "#import numpy as np\n", "import pandas as pd\n", " \n", "# File/path manipulation:\n", "import os, os.path\n", "\n", "# For looking up MAC address (used to uniquify data files):\n", "import uuid\n", "\n", "# My utilities:\n", "import custom_keras_utils as cku\n", "import utilities as ut \n", "import code_timing as timer\n", "\n", "# Code to autoreload external modules: http://bit.ly/2wyj7sD\n", "%load_ext autoreload\n", "%autoreload 2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Check Processor\n", "\n", "First, check whether a GPU is available. Because the training runs are so computationally expensive, if no GPU is found this notebook performs only very small training runs, allowing development and preliminary testing on a low-powered machine."
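, "\n", "The CPU/GPU branching that follows from this check can be sketched as a small helper (a simplified, hypothetical `pick_run_scale`; the actual values are set directly in the Configuration cell below):\n", "\n", "```python\n", "import math\n", "\n", "def pick_run_scale(gpu_available, dataset_size = 6400):\n", "    # Full-size passes on a GPU; tiny smoke-test passes on a CPU-only machine.\n", "    if gpu_available:\n", "        batch_size = 128\n", "        steps_per_epoch = math.ceil(dataset_size/batch_size)\n", "    else:\n", "        batch_size = 4\n", "        steps_per_epoch = 1\n", "    return batch_size, steps_per_epoch\n", "\n", "pick_run_scale(True)   # -> (128, 50)\n", "```"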
] }, { "cell_type": "code", "execution_count": 14, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU found...\n", "No GPU found; adjusting control variables for quicker training runs.\n" ] } ], "source": [ "devlist = device_lib.list_local_devices()\n", "using_gpu = False\n", "for item in devlist:\n", " if item.device_type == \"CPU\":\n", " print(\"CPU found...\", end = \"\")\n", " if item.device_type == \"GPU\":\n", " using_gpu = True\n", " print(\"GPU found!\")\n", " break\n", "if not using_gpu:\n", " print(\"\\nNo GPU found; adjusting control variables for quicker training runs.\")\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Configuration\n", "\n", "The `param_dict` dictionary controls the current run type and its parameters, such as the batch size, the epoch size, and the size of the hidden layer in the two-layer network (both on its own and when used as the classifier for the deep networks), as well as several usability options, such as whether or not to display popups when a run completes. Because training runs are time-consuming even on a GPU, popups have the effect of refocusing the notebook if it has been left running in the background while the user performs other tasks. The notebook can also play audio alerts; to enable them, two .WAV files, \"complete.wav\" and \"error.wav\", must be added to the folder in which this notebook is run.\n", "\n", "`param_dict` also controls the metadata recorded in the dataframe that tracks runs: the dataframe records the time of the run, the MAC address of the machine that ran it, and so forth, to allow multiple runs to be easily distinguished in the data analysis phase."
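, "\n", "The popup time window described above amounts to a small predicate; here is a minimal sketch (a hypothetical `should_popup` helper mirroring the `display_popups` lambda defined in the next cell):\n", "\n", "```python\n", "from datetime import datetime\n", "\n", "def should_popup(now, popup_override = False):\n", "    # Popups only between 7 AM and 9 PM, and only when not manually suppressed.\n", "    return (7 <= now.hour < 21) and not popup_override\n", "\n", "should_popup(datetime(2017, 10, 12, 15, 39))  # mid-afternoon -> True\n", "should_popup(datetime(2017, 10, 12, 23, 0))   # overnight -> False\n", "```"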
] }, { "cell_type": "code", "execution_count": 18, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Using directory data/fma_images/byclass/small/dwt as source for data.\n", "\n", "Creating generators with batch size 4...\n", "Loading mean and standard deviation for the training set from file 'saved_objects/fma_small_dwt_stats.npz'.\n", "\n", "Found 6400 images belonging to 8 classes.\n", "Found 800 images belonging to 8 classes.\n", "Found 800 images belonging to 8 classes.\n", "\n", "Could not locate 'complete.wav'; disabling audio alerts.\n" ] } ], "source": [ "# Default master control variables--these can be modified to change the default run type:\n", "param_dict = {\"datasetname\": \"FMA\",\n", " \"resume\": False,\n", " \"run_verbosity\": 1, # 0 for silent, 2 for summary, 1 for chatty\n", " \"popup_override\": True, # Manually prevent popups (if this is False,\n", " # they default to the time check in \n", " # display_popups, below).\n", " \"alert\": True, # for audio alerts\n", " \"seed\": 5,\n", " \"augmentation\": 0, # value to use for data augmentation, [0,1]\n", " \"which_wavelet\": \"dwt\", # use DWT images instead of CWT images\n", " \"which_size\": \"small\",\n", " \"dataset_size\": 6400, # 6400 training images for the small FMA set\n", " \"hidden_size\": 128,\n", " \"source\": uuid.getnode(), # MAC address, to uniquify runs from multiple \n", " # machines\n", " \"run_crossval\": True}\n", "\n", "# Suppress popups overnight since no one will be around to be alerted:\n", "param_dict[\"display_popups\"] = lambda: ((datetime.now().hour >= 7 and \n", " datetime.now().hour < 21) and not\n", " param_dict[\"popup_override\"])\n", "\n", "\n", "\n", "# All models are compiled with the same compile arguments, other than the optimizer:\n", "run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)\n", "param_dict[\"run_metadata\"] = tf.RunMetadata()\n", "param_dict[\"compile_args\"] = {\"loss\": 
'categorical_crossentropy',\n", " \"metrics\": ['categorical_accuracy'],\n", " \"options\": run_options, # note naming difference!\n", " \"run_metadata\": param_dict[\"run_metadata\"]}\n", "\n", "# Set up the GPU-available/CPU-only specific options:\n", "if using_gpu:\n", " # Large training runs are OK to process on the GPU:\n", " param_dict[\"spu\"] = \"gpu\"\n", " param_dict[\"pass_epochs\"] = 5 #20\n", " # Adjust so each epoch sees every image about once:\n", " param_dict[\"batch_size\"] = 128 # 192 is too high for even one epoch of VGG19 on GCE.\n", " # Note: going higher than 64 will sometimes lead to \n", " # memory issues on VGG16, b/c garbage collection isn't \n", " # instantaneous and VGG16 has a huge number of \n", " # parameters; still, we want this as large as possible. \n", " # The problem is ameliorated somewhat by specifying\n", " # a small epoch_batch_size in the call to \n", " # run_pretrained_model(), which will checkpoint the \n", " # training every epoch_batch_size epochs to clean up\n", " # memory fragmentation (see also http://bit.ly/2hDHJay )\n", " param_dict[\"steps_per_epoch\"] = math.ceil(param_dict[\"dataset_size\"]/\n", " param_dict[\"batch_size\"]) \n", " param_dict[\"validation_steps\"] = math.ceil(param_dict[\"dataset_size\"]/\n", " (8*param_dict[\"batch_size\"])) \n", "else:\n", " # Skip the large runs, though, if no GPU is available, while still being able to test\n", " # the code:\n", " param_dict[\"spu\"] = \"cpu\"\n", " param_dict[\"pass_epochs\"] = 1\n", " # Adjust for quick development runs, where an \"epoch\" won't really see the whole \n", " # training set:\n", " param_dict[\"batch_size\"] = 4 # <4 confuses Keras\n", " param_dict[\"steps_per_epoch\"] = 1\n", " param_dict[\"validation_steps\"] = math.ceil(16/param_dict[\"batch_size\"]) \n", " # So we'll check at least 16 examples \n", " # in validation--much smaller than \n", " # this confuses Keras.\n", "\n", "# Register the names we'll save the data files as, since they get 
used a bunch of \n", "# different places:\n", "param_dict[\"fma_results_name\"] = \"fma_results_{}\".format(param_dict[\"spu\"])\n", "param_dict[\"crossval_results_name\"] = \"crossval_results_{}\".format(param_dict[\"spu\"])\n", "param_dict[\"run_timelines_file\"] = \"run_timelines/timeline_{}_{}.ctf.json\".format(\n", " param_dict[\"spu\"],\n", " timer.datetimepath())\n", "param_dict[\"timing_results_name\"] = \"timing_results\"\n", "\n", "\n", "# Map from short names to display-friendly names:\n", "param_dict[\"model_names\"] = {}\n", "param_dict[\"model_names\"][\"fcnn\"] = \"Two-Layer Network\"\n", "param_dict[\"model_names\"][\"xception\"] = \"Xception\"\n", "param_dict[\"model_names\"][\"inception_v3\"] = \"Inception V3\"\n", "param_dict[\"model_names\"][\"resnet50\"] = \"ResNet50\"\n", "param_dict[\"model_names\"][\"vgg16\"] = \"VGG16\"\n", "\n", "# Set up the path to the image data (os.path.join is variadic, so no nesting is needed):\n", "param_dict[\"img_dir\"] = os.path.join(\"data\", \"fma_images\", \"byclass\",\n", " param_dict[\"which_size\"],\n", " param_dict[\"which_wavelet\"])\n", "print(\"Using directory {} as source for data.\\n\".format(param_dict[\"img_dir\"]))\n", "\n", "# Configure the generators based on the specified parameters:\n", "generators = {}\n", "(generators[\"train\"], \n", " generators[\"val\"], \n", " generators[\"test\"]) = cku.set_up_generators(param_dict)\n", "\n", "param_dict[\"classes\"] = list(generators[\"val\"].class_indices.keys())\n", "param_dict[\"num_classes\"] = len(param_dict[\"classes\"])\n", "\n", "# Set up to track the models created for all runs:\n", "models = {}\n", "\n", "# If audio alerts are disabled, use empty file names so the notebook avoids playing \n", "# audio without throwing errors:\n", "if not param_dict[\"alert\"]:\n", " error_file = \"\" \n", " complete_file = \"\"\n", "else: # Set up for audio alerts--or at least try to...\n", " complete_file = \"complete.wav\"\n", " error_file = 
\"error.wav\"\n", " if not os.path.isfile(complete_file):\n", " print(\"\\nCould not locate '{}'; disabling audio alerts.\".format(complete_file))\n", " complete_file = \"\"\n", " error_file = \"\"\n", " elif not os.path.isfile(error_file):\n", " print(\"\\nCould not locate '{}'; disabling audio alerts.\".format(error_file))\n", " complete_file = \"\"\n", " error_file = \"\"" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "No previous runs found; skipping ETA calculations.\n" ] } ], "source": [ "def calc_etas(param_dict):\n", " param_dict[\"etas\"] = {}\n", " \n", " try:\n", " prev_runs = ut.load_obj(param_dict[\"fma_results_name\"])\n", " except:\n", " print(\"No previous runs found; skipping ETA calculations.\")\n", " \n", " for model_name in [\"fcnn\", \"xception\", \"inception_v3\", \"resnet50\", \"vgg16\"]:\n", " param_dict[\"etas\"][model_name] = \"unknown (no similar runs found)\"\n", " \n", " return\n", "\n", " # At least some previous runs found; calculate all the ETAS we can:\n", " grouped_for_etas = prev_runs.groupby([\"Source Processor\", \n", " \"Pass Epochs\", \n", " \"Data Set Size\", \n", " \"Model\"])[\"Run Duration\"]\n", " for model_name in [\"fcnn\", \"xception\", \"inception_v3\", \"resnet50\", \"vgg16\"]:\n", " try:\n", " epochs = param_dict[\"pass_epochs\"]\n", " eta_run_key = (param_dict[\"spu\"],\n", " param_dict[\"pass_epochs\"],\n", " param_dict[\"which_size\"],\n", " model_name)\n", " mean_time = (prev_runs.loc[grouped_for_etas.groups[eta_run_key]]\n", " [\"Run Duration\"].mean())\n", " std_time = (prev_runs.loc[grouped_for_etas.groups[eta_run_key]]\n", " [\"Run Duration\"].std())\n", " max_time = (prev_runs.loc[grouped_for_etas.groups[eta_run_key]]\n", " [\"Run Duration\"].max())\n", " min_time = (prev_runs.loc[grouped_for_etas.groups[eta_run_key]]\n", " [\"Run Duration\"].min())\n", " eta = \"{} ± {} 
([{},{}])\".format(timer.time_from_sec(mean_time),\n", " timer.time_from_sec(std_time),\n", " timer.time_from_sec(min_time),\n", " timer.time_from_sec(max_time))\n", " except:\n", " eta = \"unknown (no similar runs found)\"\n", "\n", " param_dict[\"etas\"][model_name] = eta\n", " \n", "calc_etas(param_dict)" ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "# Set up some helper functions to consolidate later calls:\n", "js_alert_wrapper = (\"\")\n", "def delayed_popup():\n", " time.sleep(2)\n", " if param_dict[\"display_popups\"]():\n", " display(HTML(js_alert_wrapper.format(js)))\n", "\n", "persistent_error = None\n", "def wrap_for_alerts(function_hook, function_name, *args, **kwargs):\n", " global audio_file\n", " global js\n", " global persistent_error\n", " persistent_error = None\n", " \n", " # Only capitalize the first letter without altering other capitalization:\n", " function_name_init_cap = function_name[0].upper() + function_name[1:]\n", " try:\n", " function_hook(*args, **kwargs)\n", "\n", " js = \"{} complete!\".format(function_name_init_cap)\n", " audio_file = complete_file\n", " except Exception as e:\n", " persistent_error = e\n", " print((\"\\nException caught in {}; temporarily handling silently to \"\n", " \"allow alerts to go through:\").format(function_name))\n", " display(e)\n", " js = \"{} errored out...\".format(function_name_init_cap)\n", " audio_file = error_file\n", " \n", " \n", "# Crossval:\n", "def run_crossval_hook():\n", " global best_opt_key\n", " if (param_dict[\"run_crossval\"]):\n", " best_opt_key = cku.run_crossval(param_dict, opts, models)\n", " ut.save_obj(best_opt_key, \"best_opt_key\")\n", " else:\n", " best_opt_key = ut.load_obj(\"best_opt_key\")\n", " print(\"Best optimizer {} loaded from file.\".format(best_opt_key))\n", " \n", "def run_crossval():\n", " wrap_for_alerts(run_crossval_hook,\"cross-validation\")\n", "\n", "def 
model_run_init():\n", " # Update the ETAs to catch new runs:\n", " calc_etas(param_dict)\n", " \n", " # Load best optimizer to allow running after restart or backend reset:\n", " param_dict[\"run_crossval\"] = False\n", " run_crossval() \n", " build_opts(best_opt_key)\n", " \n", " \n", "# Fully connected network:\n", "def run_fcnn_hook():\n", " model_run_init()\n", " \n", " _ = cku.run_fcnn_model(param_dict, generators, opts, best_opt_key, models, \n", " param_dict[\"etas\"][\"fcnn\"],\n", " epoch_batch_size = 10)\n", " \n", "def run_fcnn():\n", " wrap_for_alerts(run_fcnn_hook,\"fully connected network run\")\n", " \n", " \n", "# Xception:\n", "def run_xception_hook():\n", " # Load best optimizer to allow running after restart or backend reset:\n", " model_run_init()\n", " \n", " cku.run_pretrained_model(param_dict, generators, models, opts, \n", " keras.applications.xception.Xception, \"xception\", \n", " best_opt_key, False,\n", " [122, 105, 95, 85, 75], param_dict[\"etas\"][\"xception\"],\n", " epoch_batch_size = 7)\n", " \n", "def run_xception():\n", " wrap_for_alerts(run_xception_hook,\"Xception run\")\n", " \n", " \n", "# InceptionV3:\n", "def run_inception_v3_hook():\n", " model_run_init()\n", " \n", " cku.run_pretrained_model(param_dict, generators, models, opts,\n", " keras.applications.inception_v3.InceptionV3, \"inception_v3\",\n", " best_opt_key, False,\n", " [249, 232, 229, 200, 187], param_dict[\"etas\"][\"inception_v3\"],\n", " epoch_batch_size = 10)\n", " \n", "def run_inception_v3():\n", " wrap_for_alerts(run_inception_v3_hook,\"Inception V3 run\")\n", " \n", " \n", "# ResNet50:\n", "def run_resnet50_hook():\n", " model_run_init()\n", " \n", " cku.run_pretrained_model(param_dict, generators, models, opts, \n", " keras.applications.resnet50.ResNet50, \"resnet50\",\n", " best_opt_key, False, \n", " [161, 151, 139, 129, 119], param_dict[\"etas\"][\"resnet50\"],\n", " epoch_batch_size = 7)\n", " \n", "def run_resnet50():\n", " 
wrap_for_alerts(run_resnet50_hook,\"ResNet50 run\")\n", " \n", " \n", "# VGG16:\n", "def run_vgg16_hook():\n", " model_run_init()\n", " \n", " cku.run_pretrained_model(param_dict, generators, models, opts, \n", " keras.applications.vgg16.VGG16, \"vgg16\", \n", " best_opt_key, False,\n", " [17, 15, 13, 11, 8], param_dict[\"etas\"][\"vgg16\"],\n", " epoch_batch_size = 3)\n", " \n", "def run_vgg16():\n", " wrap_for_alerts(run_vgg16_hook,\"VGG16 run\")\n", " " ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "def build_opts(opt_key = None):\n", " global opts\n", " \n", " # Set up the optimizers for the cross-validation:\n", " opts = {}\n", " if opt_key is None:\n", " learning_rates = [0.001, 0.005, 0.01]\n", " rhos = [0.75, 0.8, 0.85, 0.9, 0.95]\n", " decays = [1e-6, 0.001, 0.01, 0.02, 0.04]\n", " epsilons = [1e-10, 1e-9, 1e-8, 1e-7]\n", " for lr in learning_rates:\n", " for rho in rhos:\n", " for epsilon in epsilons:\n", " for decay in decays:\n", " opts[lr, decay, rho, epsilon] = keras.optimizers.RMSprop(lr = lr,\n", " decay = decay,\n", " rho = rho,\n", " epsilon = epsilon)\n", " else:\n", " # We really only need the one we've requested:\n", " opts[opt_key] = keras.optimizers.RMSprop(lr = opt_key[0],\n", " decay = opt_key[1],\n", " rho = opt_key[2],\n", " epsilon = opt_key[3])\n", " \n", "build_opts()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Experiment 2: Optimizer Cross-Validation\n", "\n", "Before training, quickly evaluate every combination of the specified optimizer hyperparameter values to find the best one. Note that this search is not exhaustive; rather, the best optimizer is simply selected from among a set of plausible values (3 learning rates x 5 rho values x 5 decay values x 4 epsilon values = 300 combinations) based on a very short (1 epoch) training period."
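, "\n", "The selection principle can be sketched as follows (an illustrative, hypothetical `pick_best_optimizer`; the real search is performed by `cku.run_crossval` over the `(lr, decay, rho, epsilon)` keys built in `build_opts`):\n", "\n", "```python\n", "def pick_best_optimizer(val_accuracy_by_key):\n", "    # Return the (lr, decay, rho, epsilon) key with the highest\n", "    # short-run validation accuracy.\n", "    return max(val_accuracy_by_key, key = val_accuracy_by_key.get)\n", "\n", "pick_best_optimizer({(0.001, 1e-06, 0.9, 1e-08): 0.212,\n", "                     (0.01, 0.02, 0.85, 1e-10): 0.304})  # -> (0.01, 0.02, 0.85, 1e-10)\n", "```"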
] }, { "cell_type": "code", "execution_count": 7, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Cross-validation of 300 optimizers with manual parameters takes about 05:00:00.\n", "\n", "Creating generators with batch size 2048...\n", "Loading mean and standard deviation for the training set from file 'saved_objects/fma_small_dwt_stats.npz'.\n", "\n", "Found 6400 images belonging to 8 classes.\n", "Found 800 images belonging to 8 classes.\n", "Found 800 images belonging to 8 classes.\n", "\n", "Starting run at Thursday, 2017 October 12, 3:39 PM...1/300 (0.0%)...2/300 (0.3%)...3/300 (0.7%)...4/300 (1.0%)...5/300 (1.3%)...6/300 (1.7%)...7/300 (2.0%)...8/300 (2.3%)...9/300 (2.7%)...10/300 (3.0%)...11/300 (3.3%)...12/300 (3.7%)...13/300 (4.0%)...14/300 (4.3%)...15/300 (4.7%)...16/300 (5.0%)...17/300 (5.3%)...18/300 (5.7%)...19/300 (6.0%)...20/300 (6.3%)...21/300 (6.7%)...22/300 (7.0%)...23/300 (7.3%)...24/300 (7.7%)...25/300 (8.0%)...26/300 (8.3%)...27/300 (8.7%)...28/300 (9.0%)...29/300 (9.3%)...30/300 (9.7%)...31/300 (10.0%)...32/300 (10.3%)...33/300 (10.7%)...34/300 (11.0%)...35/300 (11.3%)...36/300 (11.7%)...37/300 (12.0%)...38/300 (12.3%)...39/300 (12.7%)...40/300 (13.0%)...41/300 (13.3%)...42/300 (13.7%)...43/300 (14.0%)...44/300 (14.3%)...45/300 (14.7%)...46/300 (15.0%)...47/300 (15.3%)...48/300 (15.7%)...49/300 (16.0%)...50/300 (16.3%)...51/300 (16.7%)...52/300 (17.0%)...53/300 (17.3%)...54/300 (17.7%)...55/300 (18.0%)...56/300 (18.3%)...57/300 (18.7%)...58/300 (19.0%)...59/300 (19.3%)...60/300 (19.7%)...61/300 (20.0%)...62/300 (20.3%)...63/300 (20.7%)...64/300 (21.0%)...65/300 (21.3%)...66/300 (21.7%)...67/300 (22.0%)...68/300 (22.3%)...69/300 (22.7%)...70/300 (23.0%)...71/300 (23.3%)...72/300 (23.7%)...73/300 (24.0%)...74/300 (24.3%)...75/300 (24.7%)...76/300 (25.0%)...77/300 (25.3%)...78/300 (25.7%)...79/300 (26.0%)...80/300 (26.3%)...81/300 (26.7%)...82/300 (27.0%)...83/300 (27.3%)...84/300 
(27.7%)...85/300 (28.0%)...86/300 (28.3%)...87/300 (28.7%)...88/300 (29.0%)...89/300 (29.3%)...90/300 (29.7%)...91/300 (30.0%)...92/300 (30.3%)...93/300 (30.7%)...94/300 (31.0%)...95/300 (31.3%)...96/300 (31.7%)...97/300 (32.0%)...98/300 (32.3%)...99/300 (32.7%)...100/300 (33.0%)...101/300 (33.3%)...102/300 (33.7%)...103/300 (34.0%)...104/300 (34.3%)...105/300 (34.7%)...106/300 (35.0%)...107/300 (35.3%)...108/300 (35.7%)...109/300 (36.0%)...110/300 (36.3%)...111/300 (36.7%)...112/300 (37.0%)...113/300 (37.3%)...114/300 (37.7%)...115/300 (38.0%)...116/300 (38.3%)...117/300 (38.7%)...118/300 (39.0%)...119/300 (39.3%)...120/300 (39.7%)...121/300 (40.0%)...122/300 (40.3%)...123/300 (40.7%)...124/300 (41.0%)...125/300 (41.3%)...126/300 (41.7%)...127/300 (42.0%)...128/300 (42.3%)...129/300 (42.7%)...130/300 (43.0%)...131/300 (43.3%)...132/300 (43.7%)...133/300 (44.0%)...134/300 (44.3%)...135/300 (44.7%)...136/300 (45.0%)...137/300 (45.3%)...138/300 (45.7%)...139/300 (46.0%)...140/300 (46.3%)...141/300 (46.7%)...142/300 (47.0%)...143/300 (47.3%)...144/300 (47.7%)...145/300 (48.0%)...146/300 (48.3%)...147/300 (48.7%)...148/300 (49.0%)...149/300 (49.3%)...150/300 (49.7%)...151/300 (50.0%)...152/300 (50.3%)...153/300 (50.7%)...154/300 (51.0%)...155/300 (51.3%)...156/300 (51.7%)...157/300 (52.0%)...158/300 (52.3%)...159/300 (52.7%)...160/300 (53.0%)...161/300 (53.3%)...162/300 (53.7%)...163/300 (54.0%)...164/300 (54.3%)...165/300 (54.7%)...166/300 (55.0%)...167/300 (55.3%)...168/300 (55.7%)...169/300 (56.0%)...170/300 (56.3%)...171/300 (56.7%)...172/300 (57.0%)...173/300 (57.3%)...174/300 (57.7%)...175/300 (58.0%)...176/300 (58.3%)...177/300 (58.7%)...178/300 (59.0%)...179/300 (59.3%)...180/300 (59.7%)...181/300 (60.0%)...182/300 (60.3%)...183/300 (60.7%)...184/300 (61.0%)...185/300 (61.3%)...186/300 (61.7%)...187/300 (62.0%)...188/300 (62.3%)...189/300 (62.7%)...190/300 (63.0%)...191/300 (63.3%)...192/300 (63.7%)...193/300 (64.0%)...194/300 (64.3%)...195/300 
(64.7%)...196/300 (65.0%)...197/300 (65.3%)...198/300 (65.7%)...199/300 (66.0%)...200/300 (66.3%)...201/300 (66.7%)...202/300 (67.0%)...203/300 (67.3%)...204/300 (67.7%)...205/300 (68.0%)...206/300 (68.3%)...207/300 (68.7%)...208/300 (69.0%)...209/300 (69.3%)...210/300 (69.7%)...211/300 (70.0%)...212/300 (70.3%)...213/300 (70.7%)...214/300 (71.0%)...215/300 (71.3%)...216/300 (71.7%)...217/300 (72.0%)...218/300 (72.3%)...219/300 (72.7%)...220/300 (73.0%)...221/300 (73.3%)...222/300 (73.7%)...223/300 (74.0%)...224/300 (74.3%)...225/300 (74.7%)...226/300 (75.0%)...227/300 (75.3%)...228/300 (75.7%)...229/300 (76.0%)...230/300 (76.3%)...231/300 (76.7%)...232/300 (77.0%)...233/300 (77.3%)...234/300 (77.7%)...235/300 (78.0%)...236/300 (78.3%)...237/300 (78.7%)...238/300 (79.0%)...239/300 (79.3%)...240/300 (79.7%)...241/300 (80.0%)...242/300 (80.3%)...243/300 (80.7%)...244/300 (81.0%)...245/300 (81.3%)...246/300 (81.7%)...247/300 (82.0%)...248/300 (82.3%)...249/300 (82.7%)...250/300 (83.0%)...251/300 (83.3%)...252/300 (83.7%)...253/300 (84.0%)...254/300 (84.3%)...255/300 (84.7%)...256/300 (85.0%)...257/300 (85.3%)...258/300 (85.7%)...259/300 (86.0%)...260/300 (86.3%)...261/300 (86.7%)...262/300 (87.0%)...263/300 (87.3%)...264/300 (87.7%)...265/300 (88.0%)...266/300 (88.3%)...267/300 (88.7%)...268/300 (89.0%)...269/300 (89.3%)...270/300 (89.7%)...271/300 (90.0%)...272/300 (90.3%)...273/300 (90.7%)...274/300 (91.0%)...275/300 (91.3%)...276/300 (91.7%)...277/300 (92.0%)...278/300 (92.3%)...279/300 (92.7%)...280/300 (93.0%)...281/300 (93.3%)...282/300 (93.7%)...283/300 (94.0%)...284/300 (94.3%)...285/300 (94.7%)...286/300 (95.0%)...287/300 (95.3%)...288/300 (95.7%)...289/300 (96.0%)...290/300 (96.3%)...291/300 (96.7%)...292/300 (97.0%)...293/300 (97.3%)...294/300 (97.7%)...295/300 (98.0%)...296/300 (98.3%)...297/300 (98.7%)...298/300 (99.0%)...299/300 (99.3%)...300/300 (99.7%)\n", "Best optimizer: (0.01, 0.02, 0.85, 1e-10) - 30.4% validation accuracy!\n" ] }, { "data": { 
"text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "run_crossval() \n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()\n", "\n", "if persistent_error is not None:\n", " raise(persistent_error) # crossval is special--other runs depend on it, so halt \n", " # execution if there's an error in crossval" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Results" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "crossval_results = ut.load_obj(param_dict[\"crossval_results_name\"])\n", "display(crossval_results)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Experiment 3: Small Dataset Without Augmentation\n", "\n", "### Simple Two-Layer Neural Network\n", "\n" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using hidden size 128 and optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "Fully connected network run begun at Monday, 2017 October 16, 10:30 PM.\n", "\t[30 epochs on small FMA on GPU takes\n", "\t00:09:28 ± 00:04:30 ([00:06:16,00:12:39]).]\n", "\n", "\n", "Training for epochs 1 to 10...\n", "Epoch 1/10\n", "50/50 [==============================] - 28s - loss: 1.8238 - categorical_accuracy: 0.3172 - val_loss: 1.8220 - val_categorical_accuracy: 0.3638\n", "Epoch 2/10\n", "50/50 [==============================] - 23s - loss: 1.6009 - categorical_accuracy: 0.4188 - val_loss: 1.6651 - 
val_categorical_accuracy: 0.3688\n", "Epoch 3/10\n", "50/50 [==============================] - 24s - loss: 1.4074 - categorical_accuracy: 0.5067 - val_loss: 1.6592 - val_categorical_accuracy: 0.3638\n", "Epoch 4/10\n", "50/50 [==============================] - 23s - loss: 1.1634 - categorical_accuracy: 0.6016 - val_loss: 1.6317 - val_categorical_accuracy: 0.3950\n", "Epoch 5/10\n", "50/50 [==============================] - 23s - loss: 0.9404 - categorical_accuracy: 0.6922 - val_loss: 1.6611 - val_categorical_accuracy: 0.3600\n", "Epoch 6/10\n", "50/50 [==============================] - 23s - loss: 0.7459 - categorical_accuracy: 0.7616 - val_loss: 1.6978 - val_categorical_accuracy: 0.3750\n", "Epoch 7/10\n", "50/50 [==============================] - 24s - loss: 0.5764 - categorical_accuracy: 0.8269 - val_loss: 1.8252 - val_categorical_accuracy: 0.3350\n", "Epoch 8/10\n", "50/50 [==============================] - 24s - loss: 0.4630 - categorical_accuracy: 0.8622 - val_loss: 1.8771 - val_categorical_accuracy: 0.3600\n", "Epoch 9/10\n", "50/50 [==============================] - 23s - loss: 0.3531 - categorical_accuracy: 0.9055 - val_loss: 2.0158 - val_categorical_accuracy: 0.3325\n", "Epoch 10/10\n", "50/50 [==============================] - 24s - loss: 0.3078 - categorical_accuracy: 0.9156 - val_loss: 2.1083 - val_categorical_accuracy: 0.3475\n", "\n", "Training for epochs 11 to 20...\n", "Epoch 11/20\n", "50/50 [==============================] - 27s - loss: 0.2561 - categorical_accuracy: 0.9339 - val_loss: 2.2407 - val_categorical_accuracy: 0.3400\n", "Epoch 12/20\n", "50/50 [==============================] - 26s - loss: 0.2204 - categorical_accuracy: 0.9487 - val_loss: 2.3652 - val_categorical_accuracy: 0.3412\n", "Epoch 13/20\n", "50/50 [==============================] - 26s - loss: 0.1833 - categorical_accuracy: 0.9553 - val_loss: 2.5033 - val_categorical_accuracy: 0.3150\n", "Epoch 14/20\n", "50/50 [==============================] - 26s - loss: 0.1522 - 
categorical_accuracy: 0.9659 - val_loss: 2.5987 - val_categorical_accuracy: 0.3150\n", "Epoch 15/20\n", "50/50 [==============================] - 26s - loss: 0.1394 - categorical_accuracy: 0.9705 - val_loss: 2.7271 - val_categorical_accuracy: 0.3287\n", "Epoch 16/20\n", "50/50 [==============================] - 25s - loss: 0.1162 - categorical_accuracy: 0.9758 - val_loss: 2.8336 - val_categorical_accuracy: 0.3250\n", "Epoch 17/20\n", "50/50 [==============================] - 26s - loss: 0.1093 - categorical_accuracy: 0.9764 - val_loss: 2.8789 - val_categorical_accuracy: 0.3287\n", "Epoch 18/20\n", "50/50 [==============================] - 26s - loss: 0.0991 - categorical_accuracy: 0.9809 - val_loss: 2.9605 - val_categorical_accuracy: 0.3362\n", "Epoch 19/20\n", "50/50 [==============================] - 26s - loss: 0.0872 - categorical_accuracy: 0.9862 - val_loss: 3.0633 - val_categorical_accuracy: 0.3237\n", "Epoch 20/20\n", "50/50 [==============================] - 25s - loss: 0.0812 - categorical_accuracy: 0.9831 - val_loss: 3.0591 - val_categorical_accuracy: 0.3275\n", "\n", "Training for epochs 21 to 30...\n", "Epoch 21/30\n", "50/50 [==============================] - 27s - loss: 0.0808 - categorical_accuracy: 0.9825 - val_loss: 3.1616 - val_categorical_accuracy: 0.3262\n", "Epoch 22/30\n", "50/50 [==============================] - 25s - loss: 0.0741 - categorical_accuracy: 0.9873 - val_loss: 3.2545 - val_categorical_accuracy: 0.3175\n", "Epoch 23/30\n", "50/50 [==============================] - 25s - loss: 0.0670 - categorical_accuracy: 0.9856 - val_loss: 3.2763 - val_categorical_accuracy: 0.3237\n", "Epoch 24/30\n", "50/50 [==============================] - 26s - loss: 0.0701 - categorical_accuracy: 0.9878 - val_loss: 3.3420 - val_categorical_accuracy: 0.3200\n", "Epoch 25/30\n", "50/50 [==============================] - 26s - loss: 0.0561 - categorical_accuracy: 0.9914 - val_loss: 3.3833 - val_categorical_accuracy: 0.3250\n", "Epoch 26/30\n", "50/50 
[==============================] - 26s - loss: 0.0578 - categorical_accuracy: 0.9888 - val_loss: 3.4146 - val_categorical_accuracy: 0.3362\n", "Epoch 27/30\n", "50/50 [==============================] - 26s - loss: 0.0512 - categorical_accuracy: 0.9920 - val_loss: 3.4581 - val_categorical_accuracy: 0.3250\n", "Epoch 28/30\n", "50/50 [==============================] - 25s - loss: 0.0528 - categorical_accuracy: 0.9914 - val_loss: 3.5485 - val_categorical_accuracy: 0.3237\n", "Epoch 29/30\n", "50/50 [==============================] - 26s - loss: 0.0511 - categorical_accuracy: 0.9914 - val_loss: 3.5479 - val_categorical_accuracy: 0.3237\n", "Epoch 30/30\n", "50/50 [==============================] - 26s - loss: 0.0445 - categorical_accuracy: 0.9919 - val_loss: 3.6415 - val_categorical_accuracy: 0.3200\n", "\n", "00:12:50 for Two-Layer Network to yield 99.2% training accuracy and 32.0% validation accuracy in 5 \n", "epochs (x3 training phases).\n", "\n", "Fully connected run complete at Monday, 2017 October 16, 10:42 PM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Run a full training pass:\n", "run_fcnn()\n", " \n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that the chance accuracy for an 8-genre classification task is 12.5%, so this result exceeds that value significantly. 
However, the enormous gap between the training accuracy and the validation accuracy indicates that the two-layer neural network has badly overfit the training data." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Xception\n", "\n", "[Xception](https://keras.io/applications/#xception) is a model derived from Inception [1]. It uses *depthwise separable* convolutions, which factorize a standard convolution into a spatial convolution applied independently to each channel (the depthwise step) followed by a pointwise (1x1) convolution that combines information across channels. The Keras instantiation has 22,910,480 parameters across 126 layers and achieves a top-1 accuracy of 79% on ImageNet.\n", "\n", "\n", "#### Relevant Literature\n", "\n", "[1] C. Szegedy et al., “Going deeper with convolutions,” Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 1–9.\n", "\n", "[2] C. Szegedy et al., “Rethinking the Inception architecture for computer vision,” CoRR, vol. abs/1512.00567, 2015 [Online]. Available: http://arxiv.org/abs/1512.00567. Last checked: 2017 September 18\n", "\n", "[3] F. Chollet, “Xception: Deep learning with depthwise separable convolutions,” CoRR, vol. abs/1610.02357, 2016 [Online]. Available: http://arxiv.org/abs/1610.02357. 
Last checked: 2017 September 21" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "Xception run begun at Monday, 2017 October 16, 10:42 PM.\n", "\t[5 epochs (x6 passes) on small FMA on GPU takes\n", "\tunknown (no similar runs found).]\n", "\n", "First-round training (training the classifier)...\n", "\n", "Training for epochs 1 to 5...\n", "Epoch 1/5\n", "50/50 [==============================] - 193s - loss: 1.9081 - categorical_accuracy: 0.2856 - val_loss: 2.1487 - val_categorical_accuracy: 0.1625\n", "Epoch 2/5\n", "50/50 [==============================] - 190s - loss: 1.6867 - categorical_accuracy: 0.3852 - val_loss: 2.0583 - val_categorical_accuracy: 0.2512\n", "Epoch 3/5\n", "50/50 [==============================] - 190s - loss: 1.5682 - categorical_accuracy: 0.4344 - val_loss: 1.9656 - val_categorical_accuracy: 0.2888\n", "Epoch 4/5\n", "50/50 [==============================] - 191s - loss: 1.5023 - categorical_accuracy: 0.4545 - val_loss: 1.9731 - val_categorical_accuracy: 0.2687\n", "Epoch 5/5\n", "50/50 [==============================] - 191s - loss: 1.4063 - categorical_accuracy: 0.4880 - val_loss: 1.9160 - val_categorical_accuracy: 0.3162\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 122)...\n", "\n", "Training for epochs 6 to 10...\n", "Epoch 6/10\n", "50/50 [==============================] - 202s - loss: 1.3042 - categorical_accuracy: 0.5422 - val_loss: 1.8533 - val_categorical_accuracy: 0.3337\n", "Epoch 7/10\n", "50/50 [==============================] - 201s - loss: 1.3080 - categorical_accuracy: 0.5327 - val_loss: 1.8165 - val_categorical_accuracy: 0.3325\n", "Epoch 8/10\n", "50/50 [==============================] - 201s - loss: 1.2889 - categorical_accuracy: 0.5506 - val_loss: 1.8056 - 
val_categorical_accuracy: 0.3450\n", "Epoch 9/10\n", "50/50 [==============================] - 200s - loss: 1.2959 - categorical_accuracy: 0.5391 - val_loss: 1.7943 - val_categorical_accuracy: 0.3525\n", "Epoch 10/10\n", "50/50 [==============================] - 201s - loss: 1.2982 - categorical_accuracy: 0.5423 - val_loss: 1.7935 - val_categorical_accuracy: 0.3588\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 105)...\n", "\n", "Training for epochs 11 to 15...\n", "Epoch 11/15\n", "50/50 [==============================] - 230s - loss: 1.2795 - categorical_accuracy: 0.5458 - val_loss: 1.7895 - val_categorical_accuracy: 0.3600\n", "Epoch 12/15\n", "50/50 [==============================] - 229s - loss: 1.2823 - categorical_accuracy: 0.5459 - val_loss: 1.7825 - val_categorical_accuracy: 0.3550\n", "Epoch 13/15\n", "50/50 [==============================] - 228s - loss: 1.2825 - categorical_accuracy: 0.5448 - val_loss: 1.7823 - val_categorical_accuracy: 0.3563\n", "Epoch 14/15\n", "50/50 [==============================] - 228s - loss: 1.2949 - categorical_accuracy: 0.5445 - val_loss: 1.7775 - val_categorical_accuracy: 0.3525\n", "Epoch 15/15\n", "50/50 [==============================] - 228s - loss: 1.2756 - categorical_accuracy: 0.5552 - val_loss: 1.7783 - val_categorical_accuracy: 0.3563\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 95)...\n", "\n", "Training for epochs 16 to 20...\n", "Epoch 16/20\n", "50/50 [==============================] - 249s - loss: 1.2591 - categorical_accuracy: 0.5608 - val_loss: 1.7775 - val_categorical_accuracy: 0.3575\n", "Epoch 17/20\n", "50/50 [==============================] - 247s - loss: 1.2602 - categorical_accuracy: 0.5552 - val_loss: 1.7728 - val_categorical_accuracy: 0.3563\n", "Epoch 18/20\n", "50/50 [==============================] - 247s - loss: 1.2609 - categorical_accuracy: 0.5513 - val_loss: 1.7731 - val_categorical_accuracy: 
0.3600\n", "Epoch 19/20\n", "50/50 [==============================] - 247s - loss: 1.2537 - categorical_accuracy: 0.5633 - val_loss: 1.7691 - val_categorical_accuracy: 0.3625\n", "Epoch 20/20\n", "50/50 [==============================] - 247s - loss: 1.2472 - categorical_accuracy: 0.5633 - val_loss: 1.7710 - val_categorical_accuracy: 0.3588\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 85)...\n", "\n", "Training for epochs 21 to 25...\n", "Epoch 21/25\n", "50/50 [==============================] - 264s - loss: 1.2437 - categorical_accuracy: 0.5655 - val_loss: 1.7707 - val_categorical_accuracy: 0.3613\n", "Epoch 22/25\n", "50/50 [==============================] - 262s - loss: 1.2339 - categorical_accuracy: 0.5698 - val_loss: 1.7666 - val_categorical_accuracy: 0.3600\n", "Epoch 23/25\n", "50/50 [==============================] - 262s - loss: 1.2335 - categorical_accuracy: 0.5692 - val_loss: 1.7678 - val_categorical_accuracy: 0.3600\n", "Epoch 24/25\n", "50/50 [==============================] - 261s - loss: 1.2312 - categorical_accuracy: 0.5687 - val_loss: 1.7640 - val_categorical_accuracy: 0.3600\n", "Epoch 25/25\n", "50/50 [==============================] - 262s - loss: 1.2194 - categorical_accuracy: 0.5808 - val_loss: 1.7664 - val_categorical_accuracy: 0.3625\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 75)...\n", "\n", "Training for epochs 26 to 30...\n", "Epoch 26/30\n", "50/50 [==============================] - 279s - loss: 1.1962 - categorical_accuracy: 0.5875 - val_loss: 1.7668 - val_categorical_accuracy: 0.3625\n", "Epoch 27/30\n", "50/50 [==============================] - 277s - loss: 1.1949 - categorical_accuracy: 0.5867 - val_loss: 1.7628 - val_categorical_accuracy: 0.3600\n", "Epoch 28/30\n", "50/50 [==============================] - 275s - loss: 1.2016 - categorical_accuracy: 0.5928 - val_loss: 1.7641 - val_categorical_accuracy: 0.3588\n", "Epoch 29/30\n", 
"50/50 [==============================] - 275s - loss: 1.1969 - categorical_accuracy: 0.5842 - val_loss: 1.7613 - val_categorical_accuracy: 0.3613\n", "Epoch 30/30\n", "50/50 [==============================] - 276s - loss: 1.1927 - categorical_accuracy: 0.5841 - val_loss: 1.7635 - val_categorical_accuracy: 0.3625\n", "\n", "01:57:45 for Xception to yield 58.4% training accuracy and 36.2% validation accuracy in 5 \n", "epochs (x6 training phases).\n", "\n", "Xception run complete at Tuesday, 2017 October 17, 12:40 AM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "run_xception()\n", " \n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Inception V3\n", "\n", "[Inception V3](https://keras.io/applications/#inceptionv3) is, like Xception, a model derived from Inception [1]. It factorizes large convolutions into smaller ones, a principle similar to the depthwise separable convolutions on which Xception is based, to likewise improve the efficiency of the original Inception architecture. The Keras instantiation has 23,851,784 parameters across 159 layers and achieves a top-1 accuracy of 78.8% on ImageNet.\n", "\n", "\n", "#### Relevant Literature\n", "\n", "[1] C. Szegedy et al., “Going deeper with convolutions,” Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 1–9.\n", "\n", "[2] C. Szegedy et al., “Rethinking the Inception architecture for computer vision,” CoRR, vol. abs/1512.00567, 2015 [Online]. 
Available: http://arxiv.org/abs/1512.00567. Last checked: 2017 September 18\n" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "Inception V3 run begun at Tuesday, 2017 October 17, 12:40 AM.\n", "\t[5 epochs (x6 passes) on small FMA on GPU takes\n", "\tunknown (no similar runs found).]\n", "\n", "First-round training (training the classifier)...\n", "\n", "Training for epochs 1 to 5...\n", "Epoch 1/5\n", "50/50 [==============================] - 126s - loss: 1.9551 - categorical_accuracy: 0.2591 - val_loss: 4.1225 - val_categorical_accuracy: 0.1237\n", "Epoch 2/5\n", "50/50 [==============================] - 121s - loss: 1.7639 - categorical_accuracy: 0.3445 - val_loss: 2.5452 - val_categorical_accuracy: 0.1700\n", "Epoch 3/5\n", "50/50 [==============================] - 120s - loss: 1.6794 - categorical_accuracy: 0.3853 - val_loss: 2.4868 - val_categorical_accuracy: 0.1925\n", "Epoch 4/5\n", "50/50 [==============================] - 120s - loss: 1.6130 - categorical_accuracy: 0.4133 - val_loss: 2.1394 - val_categorical_accuracy: 0.2537\n", "Epoch 5/5\n", "50/50 [==============================] - 121s - loss: 1.5428 - categorical_accuracy: 0.4430 - val_loss: 2.0526 - val_categorical_accuracy: 0.2863\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 249)...\n", "\n", "Training for epochs 6 to 10...\n", "Epoch 6/10\n", "50/50 [==============================] - 138s - loss: 1.4252 - categorical_accuracy: 0.4931 - val_loss: 1.9367 - val_categorical_accuracy: 0.3050\n", "Epoch 7/10\n", "50/50 [==============================] - 134s - loss: 1.4294 - categorical_accuracy: 0.5023 - val_loss: 1.8918 - val_categorical_accuracy: 0.3113\n", "Epoch 8/10\n", "50/50 [==============================] - 134s - loss: 1.4259 - 
categorical_accuracy: 0.4927 - val_loss: 1.8634 - val_categorical_accuracy: 0.3125\n", "Epoch 9/10\n", "50/50 [==============================] - 134s - loss: 1.4144 - categorical_accuracy: 0.5023 - val_loss: 1.8480 - val_categorical_accuracy: 0.3137\n", "Epoch 10/10\n", "50/50 [==============================] - 134s - loss: 1.3816 - categorical_accuracy: 0.5159 - val_loss: 1.8337 - val_categorical_accuracy: 0.3187\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 232)...\n", "\n", "Training for epochs 11 to 15...\n", "Epoch 11/15\n", "50/50 [==============================] - 144s - loss: 1.3646 - categorical_accuracy: 0.5153 - val_loss: 1.8270 - val_categorical_accuracy: 0.3137\n", "Epoch 12/15\n", "50/50 [==============================] - 141s - loss: 1.3590 - categorical_accuracy: 0.5233 - val_loss: 1.8221 - val_categorical_accuracy: 0.3187\n", "Epoch 13/15\n", "50/50 [==============================] - 140s - loss: 1.3431 - categorical_accuracy: 0.5292 - val_loss: 1.8197 - val_categorical_accuracy: 0.3250\n", "Epoch 14/15\n", "50/50 [==============================] - 141s - loss: 1.3420 - categorical_accuracy: 0.5347 - val_loss: 1.8198 - val_categorical_accuracy: 0.3225\n", "Epoch 15/15\n", "50/50 [==============================] - 141s - loss: 1.3126 - categorical_accuracy: 0.5409 - val_loss: 1.8159 - val_categorical_accuracy: 0.3237\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 229)...\n", "\n", "Training for epochs 16 to 20...\n", "Epoch 16/20\n", "50/50 [==============================] - 145s - loss: 1.2866 - categorical_accuracy: 0.5459 - val_loss: 1.8138 - val_categorical_accuracy: 0.3212\n", "Epoch 17/20\n", "50/50 [==============================] - 142s - loss: 1.2749 - categorical_accuracy: 0.5583 - val_loss: 1.8157 - val_categorical_accuracy: 0.3250\n", "Epoch 18/20\n", "50/50 [==============================] - 143s - loss: 1.2626 - categorical_accuracy: 0.5719 - 
val_loss: 1.8160 - val_categorical_accuracy: 0.3225\n", "Epoch 19/20\n", "50/50 [==============================] - 143s - loss: 1.2651 - categorical_accuracy: 0.5675 - val_loss: 1.8183 - val_categorical_accuracy: 0.3200\n", "Epoch 20/20\n", "50/50 [==============================] - 143s - loss: 1.2403 - categorical_accuracy: 0.5777 - val_loss: 1.8147 - val_categorical_accuracy: 0.3187\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 200)...\n", "\n", "Training for epochs 21 to 25...\n", "Epoch 21/25\n", "50/50 [==============================] - 157s - loss: 1.2033 - categorical_accuracy: 0.6008 - val_loss: 1.8141 - val_categorical_accuracy: 0.3175\n", "Epoch 22/25\n", "50/50 [==============================] - 152s - loss: 1.1940 - categorical_accuracy: 0.6031 - val_loss: 1.8129 - val_categorical_accuracy: 0.3187\n", "Epoch 23/25\n", "50/50 [==============================] - 153s - loss: 1.1902 - categorical_accuracy: 0.5988 - val_loss: 1.8138 - val_categorical_accuracy: 0.3212\n", "Epoch 24/25\n", "50/50 [==============================] - 153s - loss: 1.1784 - categorical_accuracy: 0.6141 - val_loss: 1.8186 - val_categorical_accuracy: 0.3237\n", "Epoch 25/25\n", "50/50 [==============================] - 153s - loss: 1.1563 - categorical_accuracy: 0.6214 - val_loss: 1.8143 - val_categorical_accuracy: 0.3225\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 187)...\n", "\n", "Training for epochs 26 to 30...\n", "Epoch 26/30\n", "50/50 [==============================] - 162s - loss: 1.1131 - categorical_accuracy: 0.6405 - val_loss: 1.8133 - val_categorical_accuracy: 0.3200\n", "Epoch 27/30\n", "50/50 [==============================] - 158s - loss: 1.1050 - categorical_accuracy: 0.6542 - val_loss: 1.8146 - val_categorical_accuracy: 0.3225\n", "Epoch 28/30\n", "50/50 [==============================] - 158s - loss: 1.0909 - categorical_accuracy: 0.6552 - val_loss: 1.8152 - 
val_categorical_accuracy: 0.3250\n", "Epoch 29/30\n", "50/50 [==============================] - 157s - loss: 1.0807 - categorical_accuracy: 0.6648 - val_loss: 1.8188 - val_categorical_accuracy: 0.3250\n", "Epoch 30/30\n", "50/50 [==============================] - 158s - loss: 1.0608 - categorical_accuracy: 0.6722 - val_loss: 1.8169 - val_categorical_accuracy: 0.3212\n", "\n", "01:11:55 for Inception V3 to yield 67.2% training accuracy and 32.1% validation accuracy in 5 \n", "epochs (x6 training phases).\n", "\n", "Inception V3 run complete at Tuesday, 2017 October 17, 1:53 AM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "run_inception_v3()\n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### ResNet50\n", "\n", "[ResNet50](https://keras.io/applications/#resnet50) uses identity *shortcut connections* that skip past stacks of deeper layers, so that each stack learns a residual function; these shortcuts ease optimization and improve training accuracy in very deep models. The Keras instantiation has 25,636,712 parameters across 168 layers and achieves a top-1 accuracy of 75.9% on ImageNet.\n", "\n", "#### Relevant Literature\n", "\n", "[4] K. He et al., “Deep residual learning for image recognition,” Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.\n", "\n", "[5] C. Szegedy et al., “Inception-v4, Inception-ResNet and the impact of residual connections on learning,” CoRR, vol. abs/1602.07261, 2016 [Online]. 
Available: http://arxiv.org/abs/1602.07261. Last checked: 2017 October 29" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "ResNet50 run begun at Tuesday, 2017 October 17, 1:53 AM.\n", "\t[5 epochs (x6 passes) on small FMA on GPU takes\n", "\tunknown (no similar runs found).]\n", "\n", "First-round training (training the classifier)...\n", "\n", "Training for epochs 1 to 5...\n", "Epoch 1/5\n", "50/50 [==============================] - 149s - loss: 1.7676 - categorical_accuracy: 0.3503 - val_loss: 3.4586 - val_categorical_accuracy: 0.1713\n", "Epoch 2/5\n", "50/50 [==============================] - 145s - loss: 1.5177 - categorical_accuracy: 0.4537 - val_loss: 2.5427 - val_categorical_accuracy: 0.2525\n", "Epoch 3/5\n", "50/50 [==============================] - 145s - loss: 1.3818 - categorical_accuracy: 0.5127 - val_loss: 2.1963 - val_categorical_accuracy: 0.3187\n", "Epoch 4/5\n", "50/50 [==============================] - 145s - loss: 1.2809 - categorical_accuracy: 0.5400 - val_loss: 1.9553 - val_categorical_accuracy: 0.3738\n", "Epoch 5/5\n", "50/50 [==============================] - 145s - loss: 1.1917 - categorical_accuracy: 0.5795 - val_loss: 1.8579 - val_categorical_accuracy: 0.4100\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 161)...\n", "\n", "Training for epochs 6 to 10...\n", "Epoch 6/10\n", "50/50 [==============================] - 154s - loss: 1.0874 - categorical_accuracy: 0.6223 - val_loss: 1.7664 - val_categorical_accuracy: 0.4113\n", "Epoch 7/10\n", "50/50 [==============================] - 152s - loss: 1.0651 - categorical_accuracy: 0.6239 - val_loss: 1.7252 - val_categorical_accuracy: 0.4200\n", "Epoch 8/10\n", "50/50 [==============================] - 153s - loss: 1.0705 - 
categorical_accuracy: 0.6308 - val_loss: 1.7051 - val_categorical_accuracy: 0.4213\n", "Epoch 9/10\n", "50/50 [==============================] - 151s - loss: 1.0667 - categorical_accuracy: 0.6269 - val_loss: 1.6995 - val_categorical_accuracy: 0.4225\n", "Epoch 10/10\n", "50/50 [==============================] - 151s - loss: 1.0484 - categorical_accuracy: 0.6394 - val_loss: 1.6971 - val_categorical_accuracy: 0.4238\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 151)...\n", "\n", "Training for epochs 11 to 15...\n", "Epoch 11/15\n", "50/50 [==============================] - 159s - loss: 1.0552 - categorical_accuracy: 0.6372 - val_loss: 1.6956 - val_categorical_accuracy: 0.4238\n", "Epoch 12/15\n", "50/50 [==============================] - 158s - loss: 1.0479 - categorical_accuracy: 0.6395 - val_loss: 1.6948 - val_categorical_accuracy: 0.4225\n", "Epoch 13/15\n", "50/50 [==============================] - 157s - loss: 1.0446 - categorical_accuracy: 0.6422 - val_loss: 1.6945 - val_categorical_accuracy: 0.4188\n", "Epoch 14/15\n", "50/50 [==============================] - 158s - loss: 1.0412 - categorical_accuracy: 0.6419 - val_loss: 1.6945 - val_categorical_accuracy: 0.4213\n", "Epoch 15/15\n", "50/50 [==============================] - 157s - loss: 1.0224 - categorical_accuracy: 0.6498 - val_loss: 1.6941 - val_categorical_accuracy: 0.4200\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 139)...\n", "\n", "Training for epochs 16 to 20...\n", "Epoch 16/20\n", "50/50 [==============================] - 170s - loss: 1.0178 - categorical_accuracy: 0.6544 - val_loss: 1.6953 - val_categorical_accuracy: 0.4225\n", "Epoch 17/20\n", "50/50 [==============================] - 168s - loss: 1.0132 - categorical_accuracy: 0.6528 - val_loss: 1.6951 - val_categorical_accuracy: 0.4225\n", "Epoch 18/20\n", "50/50 [==============================] - 168s - loss: 0.9890 - categorical_accuracy: 0.6702 - 
val_loss: 1.6950 - val_categorical_accuracy: 0.4213\n", "Epoch 19/20\n", "50/50 [==============================] - 168s - loss: 0.9809 - categorical_accuracy: 0.6634 - val_loss: 1.6956 - val_categorical_accuracy: 0.4175\n", "Epoch 20/20\n", "50/50 [==============================] - 168s - loss: 0.9590 - categorical_accuracy: 0.6766 - val_loss: 1.6955 - val_categorical_accuracy: 0.4238\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 129)...\n", "\n", "Training for epochs 21 to 25...\n", "Epoch 21/25\n", "50/50 [==============================] - 182s - loss: 0.9384 - categorical_accuracy: 0.6878 - val_loss: 1.6958 - val_categorical_accuracy: 0.4200\n", "Epoch 22/25\n", "50/50 [==============================] - 179s - loss: 0.9278 - categorical_accuracy: 0.6933 - val_loss: 1.6958 - val_categorical_accuracy: 0.4213\n", "Epoch 23/25\n", "50/50 [==============================] - 179s - loss: 0.9214 - categorical_accuracy: 0.6975 - val_loss: 1.6960 - val_categorical_accuracy: 0.4213\n", "Epoch 24/25\n", "50/50 [==============================] - 179s - loss: 0.9050 - categorical_accuracy: 0.7064 - val_loss: 1.6960 - val_categorical_accuracy: 0.4200\n", "Epoch 25/25\n", "50/50 [==============================] - 179s - loss: 0.8878 - categorical_accuracy: 0.7163 - val_loss: 1.6965 - val_categorical_accuracy: 0.4200\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 119)...\n", "\n", "Training for epochs 26 to 30...\n", "Epoch 26/30\n", "50/50 [==============================] - 191s - loss: 0.8610 - categorical_accuracy: 0.7209 - val_loss: 1.6970 - val_categorical_accuracy: 0.4250\n", "Epoch 27/30\n", "50/50 [==============================] - 188s - loss: 0.8563 - categorical_accuracy: 0.7311 - val_loss: 1.6977 - val_categorical_accuracy: 0.4213\n", "Epoch 28/30\n", "50/50 [==============================] - 188s - loss: 0.8383 - categorical_accuracy: 0.7400 - val_loss: 1.6982 - 
val_categorical_accuracy: 0.4225\n", "Epoch 29/30\n", "50/50 [==============================] - 188s - loss: 0.8318 - categorical_accuracy: 0.7428 - val_loss: 1.7000 - val_categorical_accuracy: 0.4225\n", "Epoch 30/30\n", "50/50 [==============================] - 188s - loss: 0.8112 - categorical_accuracy: 0.7508 - val_loss: 1.7021 - val_categorical_accuracy: 0.4225\n", "\n", "01:23:14 for ResNet50 to yield 75.1% training accuracy and 42.2% validation accuracy in 5 \n", "epochs (x6 training phases).\n", "\n", "ResNet50 run complete at Tuesday, 2017 October 17, 3:16 AM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "run_resnet50()\n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### VGG16\n", "\n", "[VGG16](https://keras.io/applications/#vgg16) is the largest, but simplest, convolutional model tested. It is made up of a stack of five blocks, each of which consists of some number of convolutional layers followed by a pooling layer. The Keras instantiation has 138,357,544 parameters across 23 layers and achieves a top-1 accuracy of 71.59% on ImageNet.\n", "\n", "#### Relevant Literature\n", "\n", "[6] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” CoRR, vol. abs/1409.1556, 2014 [Online]. Available: http://arxiv.org/abs/1409.1556. 
Last checked: 2017 October 29" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "VGG16 run begun at Tuesday, 2017 October 17, 3:16 AM.\n", "\t[5 epochs (x6 passes) on small FMA on GPU takes\n", "\tunknown (no similar runs found).]\n", "\n", "First-round training (training the classifier)...\n", "\n", "Training for epochs 1 to 3...\n", "Epoch 1/3\n", "50/50 [==============================] - 267s - loss: 1.7977 - categorical_accuracy: 0.3381 - val_loss: 2.8726 - val_categorical_accuracy: 0.1775\n", "Epoch 2/3\n", "50/50 [==============================] - 249s - loss: 1.6469 - categorical_accuracy: 0.4036 - val_loss: 2.1224 - val_categorical_accuracy: 0.2525\n", "Epoch 3/3\n", "50/50 [==============================] - 248s - loss: 1.5977 - categorical_accuracy: 0.4303 - val_loss: 1.7418 - val_categorical_accuracy: 0.3362\n", "\n", "Training for epochs 4 to 5...\n", "Epoch 4/5\n", "50/50 [==============================] - 249s - loss: 1.5541 - categorical_accuracy: 0.4447 - val_loss: 1.6696 - val_categorical_accuracy: 0.3625\n", "Epoch 5/5\n", "50/50 [==============================] - 249s - loss: 1.5188 - categorical_accuracy: 0.4483 - val_loss: 1.6570 - val_categorical_accuracy: 0.3975\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 17)...\n", "\n", "Training for epochs 6 to 8...\n", "Epoch 6/8\n", "50/50 [==============================] - 258s - loss: 1.4874 - categorical_accuracy: 0.4686 - val_loss: 1.6183 - val_categorical_accuracy: 0.3987\n", "Epoch 7/8\n", "50/50 [==============================] - 257s - loss: 1.4607 - categorical_accuracy: 0.4697 - val_loss: 1.6087 - val_categorical_accuracy: 0.3987\n", "Epoch 8/8\n", "50/50 [==============================] - 256s - loss: 1.4483 - 
categorical_accuracy: 0.4714 - val_loss: 1.5966 - val_categorical_accuracy: 0.4213\n", "\n", "Training for epochs 9 to 10...\n", "Epoch 9/10\n", "50/50 [==============================] - 257s - loss: 1.4369 - categorical_accuracy: 0.4884 - val_loss: 1.6069 - val_categorical_accuracy: 0.4150\n", "Epoch 10/10\n", "50/50 [==============================] - 255s - loss: 1.4242 - categorical_accuracy: 0.4870 - val_loss: 1.5998 - val_categorical_accuracy: 0.4113\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 15)...\n", "\n", "Training for epochs 11 to 13...\n", "Epoch 11/13\n", "50/50 [==============================] - 281s - loss: 1.4279 - categorical_accuracy: 0.4933 - val_loss: 1.5996 - val_categorical_accuracy: 0.4200\n", "Epoch 12/13\n", "50/50 [==============================] - 279s - loss: 1.4074 - categorical_accuracy: 0.4927 - val_loss: 1.6007 - val_categorical_accuracy: 0.4238\n", "Epoch 13/13\n", "50/50 [==============================] - 280s - loss: 1.3908 - categorical_accuracy: 0.4967 - val_loss: 1.5852 - val_categorical_accuracy: 0.4400\n", "\n", "Training for epochs 14 to 15...\n", "Epoch 14/15\n", "50/50 [==============================] - 280s - loss: 1.3636 - categorical_accuracy: 0.5089 - val_loss: 1.5824 - val_categorical_accuracy: 0.4250\n", "Epoch 15/15\n", "50/50 [==============================] - 280s - loss: 1.3474 - categorical_accuracy: 0.5175 - val_loss: 1.5655 - val_categorical_accuracy: 0.4550\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 13)...\n", "\n", "Training for epochs 16 to 18...\n", "Epoch 16/18\n", "50/50 [==============================] - 311s - loss: 1.3355 - categorical_accuracy: 0.5173 - val_loss: 1.6161 - val_categorical_accuracy: 0.4125\n", "Epoch 17/18\n", "50/50 [==============================] - 309s - loss: 1.3221 - categorical_accuracy: 0.5242 - val_loss: 1.6508 - val_categorical_accuracy: 0.4375\n", "Epoch 18/18\n", "50/50 
[==============================] - 309s - loss: 1.3067 - categorical_accuracy: 0.5341 - val_loss: 1.5865 - val_categorical_accuracy: 0.4200\n", "\n", "Training for epochs 19 to 20...\n", "Epoch 19/20\n", "50/50 [==============================] - 309s - loss: 1.2818 - categorical_accuracy: 0.5447 - val_loss: 1.5512 - val_categorical_accuracy: 0.4375\n", "Epoch 20/20\n", "50/50 [==============================] - 308s - loss: 1.2517 - categorical_accuracy: 0.5573 - val_loss: 1.5872 - val_categorical_accuracy: 0.4575\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 11)...\n", "\n", "Training for epochs 21 to 23...\n", "Epoch 21/23\n", "50/50 [==============================] - 393s - loss: 1.2490 - categorical_accuracy: 0.5559 - val_loss: 1.6019 - val_categorical_accuracy: 0.4200\n", "Epoch 22/23\n", "50/50 [==============================] - 389s - loss: 1.2369 - categorical_accuracy: 0.5641 - val_loss: 1.6773 - val_categorical_accuracy: 0.4213\n", "Epoch 23/23\n", "50/50 [==============================] - 389s - loss: 1.2191 - categorical_accuracy: 0.5694 - val_loss: 1.6190 - val_categorical_accuracy: 0.4075\n", "\n", "Training for epochs 24 to 25...\n", "Epoch 24/25\n", "50/50 [==============================] - 389s - loss: 1.1904 - categorical_accuracy: 0.5828 - val_loss: 1.6506 - val_categorical_accuracy: 0.4050\n", "Epoch 25/25\n", "50/50 [==============================] - 389s - loss: 1.1609 - categorical_accuracy: 0.5895 - val_loss: 1.5845 - val_categorical_accuracy: 0.4375\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 8)...\n", "\n", "Training for epochs 26 to 28...\n", "Epoch 26/28\n", "50/50 [==============================] - 487s - loss: 1.1469 - categorical_accuracy: 0.6009 - val_loss: 1.6064 - val_categorical_accuracy: 0.4300\n", "Epoch 27/28\n", "50/50 [==============================] - 482s - loss: 1.1258 - categorical_accuracy: 0.6083 - val_loss: 1.6441 - 
val_categorical_accuracy: 0.4313\n", "Epoch 28/28\n", "50/50 [==============================] - 483s - loss: 1.1286 - categorical_accuracy: 0.6058 - val_loss: 1.5709 - val_categorical_accuracy: 0.4387\n", "\n", "Training for epochs 29 to 30...\n", "Epoch 29/30\n", "50/50 [==============================] - 483s - loss: 1.0884 - categorical_accuracy: 0.6344 - val_loss: 1.6614 - val_categorical_accuracy: 0.4000\n", "Epoch 30/30\n", "50/50 [==============================] - 482s - loss: 1.0571 - categorical_accuracy: 0.6380 - val_loss: 1.6041 - val_categorical_accuracy: 0.4437\n", "\n", "02:44:37 for VGG16 to yield 63.8% training accuracy and 44.4% validation accuracy in 5 \n", "epochs (x6 training phases).\n", "\n", "VGG16 run complete at Tuesday, 2017 October 17, 6:01 AM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "run_vgg16()\n", " \n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Backed up 'saved_objects/fma_results_gpu.pkl' to\n", "\t'saved_object_backups/fma_results_gpu-2017-10-17+0927.pkl.bak'.\n", "\n", "Backed up 'saved_objects/crossval_results_gpu.pkl' to\n", "\t'saved_object_backups/crossval_results_gpu-2017-10-17+0927.pkl.bak'.\n", "\n" ] } ], "source": [ "# Back up the results dataframes\n", "import shutil\n", "\n", "for key in [\"fma_results_name\", \"crossval_results_name\"]:\n", " src = 
os.path.join(\"saved_objects\", \"{}.pkl\".format(param_dict[key])) \n", " dst = os.path.join(\"saved_object_backups\", \n", " \"{}-{}.pkl.bak\".format(param_dict[key],\n", " timer.datetimepath()))\n", " directory = os.path.dirname(dst)\n", " if not os.path.exists(directory):\n", " os.makedirs(directory)\n", " shutil.copyfile(src, dst)\n", "\n", " print(\"Backed up '{}' to\\n\\t'{}'.\\n\".format(src, dst))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Experiment 4: Small Dataset With Augmentation\n", "\n", "Because of the small size of the dataset, data augmentation was used to improve results. Since the DWT images take a long time to generate, the augmentation was performed at the image level, restricted to the forms of image-based augmentation that remain meaningful for audio: only horizontal shift, which performs an operation similar to windowing.\n", "\n", "See Section 4.5 of the accompanying paper, \"Experiment 4: Small Dataset With Augmentation,\" for more details, as well as Section 5.6, \"Approaches to Data Augmentation,\" for the proposal of an alternate approach that may be more appropriate for audio sources.\n", "\n", "### Section-Specific Setup (66% Augmentation)" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Creating generators with batch size 128...\n", "Loading mean and standard deviation for the training set from file 'saved_objects/fma_small_dwt_stats.npz'.\n", "\n", "Using up to 66.0% horizontal shift to augment training data.\n", "Found 6400 images belonging to 8 classes.\n", "Found 800 images belonging to 8 classes.\n", "Found 800 images belonging to 8 classes.\n" ] } ], "source": [ "# Enable data augmentation:\n", "param_dict[\"augmentation\"] = 0.66 # i.e., keep at least 10 sec of 30 sec clips\n", "# Reconfigure the generators based on the specified parameters:\n", "generators = {}\n", 
"(generators[\"train\"], \n", " generators[\"val\"], \n", " generators[\"test\"]) = cku.set_up_generators(param_dict)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Model Reruns" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using hidden size 128 and optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "Fully connected network run begun at Tuesday, 2017 October 17, 9:27 AM.\n", "\t[30 epochs on small FMA on GPU takes\n", "\t00:10:35 ± 00:03:44 ([00:06:16,00:12:50]).]\n", "\n", "\n", "Training for epochs 1 to 10...\n", "Epoch 1/10\n", "50/50 [==============================] - 96s - loss: 1.8237 - categorical_accuracy: 0.3123 - val_loss: 1.8345 - val_categorical_accuracy: 0.3513\n", "Epoch 2/10\n", "50/50 [==============================] - 89s - loss: 1.7128 - categorical_accuracy: 0.3717 - val_loss: 1.7082 - val_categorical_accuracy: 0.3775\n", "Epoch 3/10\n", "50/50 [==============================] - 87s - loss: 1.6821 - categorical_accuracy: 0.3845 - val_loss: 1.6368 - val_categorical_accuracy: 0.3837\n", "Epoch 4/10\n", "50/50 [==============================] - 88s - loss: 1.6529 - categorical_accuracy: 0.4017 - val_loss: 1.6114 - val_categorical_accuracy: 0.4050\n", "Epoch 5/10\n", "50/50 [==============================] - 88s - loss: 1.6404 - categorical_accuracy: 0.4022 - val_loss: 1.6092 - val_categorical_accuracy: 0.4050\n", "Epoch 6/10\n", "50/50 [==============================] - 88s - loss: 1.6236 - categorical_accuracy: 0.4078 - val_loss: 1.6025 - val_categorical_accuracy: 0.3987\n", "Epoch 7/10\n", "50/50 [==============================] - 91s - loss: 1.6169 - categorical_accuracy: 0.4145 - val_loss: 1.6012 - val_categorical_accuracy: 0.4012\n", "Epoch 8/10\n", "50/50 [==============================] - 89s - loss: 1.6067 - categorical_accuracy: 0.4198 - val_loss: 1.5959 - 
val_categorical_accuracy: 0.3962\n", "Epoch 9/10\n", "50/50 [==============================] - 89s - loss: 1.5922 - categorical_accuracy: 0.4306 - val_loss: 1.5949 - val_categorical_accuracy: 0.4037\n", "Epoch 10/10\n", "50/50 [==============================] - 89s - loss: 1.5940 - categorical_accuracy: 0.4259 - val_loss: 1.5869 - val_categorical_accuracy: 0.4075\n", "\n", "Training for epochs 11 to 20...\n", "Epoch 11/20\n", "50/50 [==============================] - 96s - loss: 1.5313 - categorical_accuracy: 0.4519 - val_loss: 1.5888 - val_categorical_accuracy: 0.4000\n", "Epoch 12/20\n", "50/50 [==============================] - 90s - loss: 1.4943 - categorical_accuracy: 0.4655 - val_loss: 1.5766 - val_categorical_accuracy: 0.4138\n", "Epoch 13/20\n", "50/50 [==============================] - 90s - loss: 1.5163 - categorical_accuracy: 0.4562 - val_loss: 1.5769 - val_categorical_accuracy: 0.4075\n", "Epoch 14/20\n", "50/50 [==============================] - 90s - loss: 1.5092 - categorical_accuracy: 0.4516 - val_loss: 1.5763 - val_categorical_accuracy: 0.4100\n", "Epoch 15/20\n", "50/50 [==============================] - 89s - loss: 1.5251 - categorical_accuracy: 0.4558 - val_loss: 1.5767 - val_categorical_accuracy: 0.4113\n", "Epoch 16/20\n", "50/50 [==============================] - 90s - loss: 1.5296 - categorical_accuracy: 0.4519 - val_loss: 1.5700 - val_categorical_accuracy: 0.4100\n", "Epoch 17/20\n", "50/50 [==============================] - 91s - loss: 1.5382 - categorical_accuracy: 0.4556 - val_loss: 1.5728 - val_categorical_accuracy: 0.4188\n", "Epoch 18/20\n", "50/50 [==============================] - 91s - loss: 1.5355 - categorical_accuracy: 0.4487 - val_loss: 1.5730 - val_categorical_accuracy: 0.4037\n", "Epoch 19/20\n", "50/50 [==============================] - 91s - loss: 1.5284 - categorical_accuracy: 0.4500 - val_loss: 1.5743 - val_categorical_accuracy: 0.4100\n", "Epoch 20/20\n", "50/50 [==============================] - 91s - loss: 1.5350 - 
categorical_accuracy: 0.4472 - val_loss: 1.5766 - val_categorical_accuracy: 0.4150\n", "\n", "Training for epochs 21 to 30...\n", "Epoch 21/30\n", "50/50 [==============================] - 94s - loss: 1.4498 - categorical_accuracy: 0.4816 - val_loss: 1.5800 - val_categorical_accuracy: 0.4062\n", "Epoch 22/30\n", "50/50 [==============================] - 91s - loss: 1.4359 - categorical_accuracy: 0.4937 - val_loss: 1.5706 - val_categorical_accuracy: 0.4175\n", "Epoch 23/30\n", "50/50 [==============================] - 91s - loss: 1.4506 - categorical_accuracy: 0.4875 - val_loss: 1.5672 - val_categorical_accuracy: 0.4100\n", "Epoch 24/30\n", "50/50 [==============================] - 91s - loss: 1.4601 - categorical_accuracy: 0.4775 - val_loss: 1.5687 - val_categorical_accuracy: 0.4238\n", "Epoch 25/30\n", "50/50 [==============================] - 91s - loss: 1.4806 - categorical_accuracy: 0.4742 - val_loss: 1.5702 - val_categorical_accuracy: 0.4100\n", "Epoch 26/30\n", "50/50 [==============================] - 89s - loss: 1.4816 - categorical_accuracy: 0.4706 - val_loss: 1.5691 - val_categorical_accuracy: 0.4113\n", "Epoch 27/30\n", "50/50 [==============================] - 91s - loss: 1.4823 - categorical_accuracy: 0.4736 - val_loss: 1.5721 - val_categorical_accuracy: 0.4163\n", "Epoch 28/30\n", "50/50 [==============================] - 92s - loss: 1.4906 - categorical_accuracy: 0.4698 - val_loss: 1.5751 - val_categorical_accuracy: 0.4113\n", "Epoch 29/30\n", "50/50 [==============================] - 90s - loss: 1.4941 - categorical_accuracy: 0.4705 - val_loss: 1.5698 - val_categorical_accuracy: 0.4163\n", "Epoch 30/30\n", "50/50 [==============================] - 90s - loss: 1.4941 - categorical_accuracy: 0.4684 - val_loss: 1.5727 - val_categorical_accuracy: 0.4150\n", "\n", "00:45:27 for Two-Layer Network to yield 46.8% training accuracy and 41.5% validation accuracy in 5 \n", "epochs (x3 training phases).\n", "\n", "Fully connected run complete at Tuesday, 2017 
October 17, 10:13 AM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Now rerun all 5 models, starting with FCNN:\n", "run_fcnn()\n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "Xception run begun at Tuesday, 2017 October 17, 10:13 AM.\n", "\t[5 epochs (x6 passes) on small FMA on GPU takes\n", "\t01:23:31 ± 00:48:24 ([00:49:18,01:57:45]).]\n", "\n", "First-round training (training the classifier)...\n", "\n", "Training for epochs 1 to 5...\n", "Epoch 1/5\n", "50/50 [==============================] - 194s - loss: 1.9076 - categorical_accuracy: 0.2753 - val_loss: 2.1800 - val_categorical_accuracy: 0.1663\n", "Epoch 2/5\n", "50/50 [==============================] - 191s - loss: 1.7660 - categorical_accuracy: 0.3481 - val_loss: 2.0781 - val_categorical_accuracy: 0.2087\n", "Epoch 3/5\n", "50/50 [==============================] - 192s - loss: 1.7340 - categorical_accuracy: 0.3603 - val_loss: 2.0065 - val_categorical_accuracy: 0.2637\n", "Epoch 4/5\n", "50/50 [==============================] - 192s - loss: 1.6890 - categorical_accuracy: 0.3717 - val_loss: 1.9344 - val_categorical_accuracy: 0.2925\n", "Epoch 5/5\n", "50/50 
[==============================] - 191s - loss: 1.6932 - categorical_accuracy: 0.3759 - val_loss: 1.8439 - val_categorical_accuracy: 0.3200\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 122)...\n", "\n", "Training for epochs 6 to 10...\n", "Epoch 6/10\n", "50/50 [==============================] - 204s - loss: 1.6005 - categorical_accuracy: 0.4164 - val_loss: 1.7750 - val_categorical_accuracy: 0.3650\n", "Epoch 7/10\n", "50/50 [==============================] - 201s - loss: 1.5958 - categorical_accuracy: 0.4225 - val_loss: 1.7546 - val_categorical_accuracy: 0.3738\n", "Epoch 8/10\n", "50/50 [==============================] - 200s - loss: 1.5921 - categorical_accuracy: 0.4167 - val_loss: 1.7462 - val_categorical_accuracy: 0.3688\n", "Epoch 9/10\n", "50/50 [==============================] - 200s - loss: 1.5967 - categorical_accuracy: 0.4211 - val_loss: 1.7406 - val_categorical_accuracy: 0.3713\n", "Epoch 10/10\n", "50/50 [==============================] - 200s - loss: 1.5982 - categorical_accuracy: 0.4250 - val_loss: 1.7372 - val_categorical_accuracy: 0.3713\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 105)...\n", "\n", "Training for epochs 11 to 15...\n", "Epoch 11/15\n", "50/50 [==============================] - 232s - loss: 1.5813 - categorical_accuracy: 0.4206 - val_loss: 1.7322 - val_categorical_accuracy: 0.3725\n", "Epoch 12/15\n", "50/50 [==============================] - 229s - loss: 1.5853 - categorical_accuracy: 0.4283 - val_loss: 1.7299 - val_categorical_accuracy: 0.3738\n", "Epoch 13/15\n", "50/50 [==============================] - 229s - loss: 1.5827 - categorical_accuracy: 0.4236 - val_loss: 1.7270 - val_categorical_accuracy: 0.3775\n", "Epoch 14/15\n", "50/50 [==============================] - 229s - loss: 1.5814 - categorical_accuracy: 0.4252 - val_loss: 1.7236 - val_categorical_accuracy: 0.3787\n", "Epoch 15/15\n", "50/50 [==============================] - 
229s - loss: 1.5796 - categorical_accuracy: 0.4233 - val_loss: 1.7207 - val_categorical_accuracy: 0.3837\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 95)...\n", "\n", "Training for epochs 16 to 20...\n", "Epoch 16/20\n", "50/50 [==============================] - 248s - loss: 1.5761 - categorical_accuracy: 0.4241 - val_loss: 1.7176 - val_categorical_accuracy: 0.3862\n", "Epoch 17/20\n", "50/50 [==============================] - 246s - loss: 1.5672 - categorical_accuracy: 0.4269 - val_loss: 1.7149 - val_categorical_accuracy: 0.3887\n", "Epoch 18/20\n", "50/50 [==============================] - 246s - loss: 1.5641 - categorical_accuracy: 0.4256 - val_loss: 1.7126 - val_categorical_accuracy: 0.3837\n", "Epoch 19/20\n", "50/50 [==============================] - 246s - loss: 1.5648 - categorical_accuracy: 0.4361 - val_loss: 1.7097 - val_categorical_accuracy: 0.3875\n", "Epoch 20/20\n", "50/50 [==============================] - 246s - loss: 1.5702 - categorical_accuracy: 0.4314 - val_loss: 1.7073 - val_categorical_accuracy: 0.3900\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 85)...\n", "\n", "Training for epochs 21 to 25...\n", "Epoch 21/25\n", "50/50 [==============================] - 268s - loss: 1.5527 - categorical_accuracy: 0.4417 - val_loss: 1.7049 - val_categorical_accuracy: 0.3912\n", "Epoch 22/25\n", "50/50 [==============================] - 264s - loss: 1.5572 - categorical_accuracy: 0.4325 - val_loss: 1.7030 - val_categorical_accuracy: 0.3912\n", "Epoch 23/25\n", "50/50 [==============================] - 265s - loss: 1.5549 - categorical_accuracy: 0.4311 - val_loss: 1.7012 - val_categorical_accuracy: 0.3937\n", "Epoch 24/25\n", "50/50 [==============================] - 265s - loss: 1.5530 - categorical_accuracy: 0.4375 - val_loss: 1.6989 - val_categorical_accuracy: 0.3937\n", "Epoch 25/25\n", "50/50 [==============================] - 265s - loss: 1.5584 - 
categorical_accuracy: 0.4327 - val_loss: 1.6974 - val_categorical_accuracy: 0.3900\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 75)...\n", "\n", "Training for epochs 26 to 30...\n", "Epoch 26/30\n", "50/50 [==============================] - 284s - loss: 1.5431 - categorical_accuracy: 0.4456 - val_loss: 1.6950 - val_categorical_accuracy: 0.3912\n", "Epoch 27/30\n", "50/50 [==============================] - 279s - loss: 1.5404 - categorical_accuracy: 0.4434 - val_loss: 1.6928 - val_categorical_accuracy: 0.3925\n", "Epoch 28/30\n", "50/50 [==============================] - 279s - loss: 1.5467 - categorical_accuracy: 0.4409 - val_loss: 1.6911 - val_categorical_accuracy: 0.3950\n", "Epoch 29/30\n", "50/50 [==============================] - 280s - loss: 1.5542 - categorical_accuracy: 0.4370 - val_loss: 1.6889 - val_categorical_accuracy: 0.3975\n", "Epoch 30/30\n", "50/50 [==============================] - 279s - loss: 1.5474 - categorical_accuracy: 0.4381 - val_loss: 1.6871 - val_categorical_accuracy: 0.4000\n", "\n", "01:58:29 for Xception to yield 43.8% training accuracy and 40.0% validation accuracy in 5 \n", "epochs (x6 training phases).\n", "\n", "Xception run complete at Tuesday, 2017 October 17, 12:11 PM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Xception:\n", "run_xception()\n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { 
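"cell_type": "markdown", "metadata": {}, "source": [ "The 66% horizontal-shift augmentation used throughout these reruns can be sketched in plain numpy. This is an illustrative stand-in, not the project's actual generator setup (`cku.set_up_generators`); the wrap-around fill is an assumption:\n", "\n", "```python\n", "import numpy as np\n", "\n", "# Shift an (H, W, C) image left/right by up to max_frac of its width,\n", "# wrapping pixels around the edge -- roughly analogous to re-windowing\n", "# the audio clip that produced the DWT image.\n", "def horizontal_shift(img, max_frac=0.66, rng=None):\n", "    if rng is None:\n", "        rng = np.random.default_rng()\n", "    width = img.shape[1]\n", "    shift = int(rng.uniform(-max_frac, max_frac) * width)\n", "    return np.roll(img, shift, axis=1)\n", "\n", "toy = np.arange(20).reshape(2, 10, 1)  # toy 2x10 single-channel image\n", "print(horizontal_shift(toy).shape)     # (2, 10, 1)\n", "```" ] }, { 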
"cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "Inception V3 run begun at Tuesday, 2017 October 17, 12:11 PM.\n", "\t[5 epochs (x6 passes) on small FMA on GPU takes\n", "\tunknown (no similar runs found).]\n", "\n", "First-round training (training the classifier)...\n", "\n", "Training for epochs 1 to 5...\n", "Epoch 1/5\n", "50/50 [==============================] - 126s - loss: 1.9564 - categorical_accuracy: 0.2580 - val_loss: 3.3636 - val_categorical_accuracy: 0.1250\n", "Epoch 2/5\n", "50/50 [==============================] - 122s - loss: 1.8535 - categorical_accuracy: 0.3005 - val_loss: 2.6411 - val_categorical_accuracy: 0.1737\n", "Epoch 3/5\n", "50/50 [==============================] - 122s - loss: 1.8162 - categorical_accuracy: 0.3152 - val_loss: 2.6110 - val_categorical_accuracy: 0.1825\n", "Epoch 4/5\n", "50/50 [==============================] - 122s - loss: 1.7987 - categorical_accuracy: 0.3277 - val_loss: 2.1234 - val_categorical_accuracy: 0.2313\n", "Epoch 5/5\n", "50/50 [==============================] - 122s - loss: 1.7694 - categorical_accuracy: 0.3416 - val_loss: 1.8991 - val_categorical_accuracy: 0.2963\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 249)...\n", "\n", "Training for epochs 6 to 10...\n", "Epoch 6/10\n", "50/50 [==============================] - 139s - loss: 1.6712 - categorical_accuracy: 0.3889 - val_loss: 1.8237 - val_categorical_accuracy: 0.3225\n", "Epoch 7/10\n", "50/50 [==============================] - 137s - loss: 1.6750 - categorical_accuracy: 0.3866 - val_loss: 1.7933 - val_categorical_accuracy: 0.3400\n", "Epoch 8/10\n", "50/50 [==============================] - 136s - loss: 1.6673 - categorical_accuracy: 0.3870 - val_loss: 1.7791 - val_categorical_accuracy: 
0.3412\n", "Epoch 9/10\n", "50/50 [==============================] - 136s - loss: 1.6813 - categorical_accuracy: 0.3805 - val_loss: 1.7704 - val_categorical_accuracy: 0.3387\n", "Epoch 10/10\n", "50/50 [==============================] - 136s - loss: 1.6753 - categorical_accuracy: 0.3859 - val_loss: 1.7658 - val_categorical_accuracy: 0.3412\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 232)...\n", "\n", "Training for epochs 11 to 15...\n", "Epoch 11/15\n", "50/50 [==============================] - 147s - loss: 1.6417 - categorical_accuracy: 0.3962 - val_loss: 1.7600 - val_categorical_accuracy: 0.3463\n", "Epoch 12/15\n", "50/50 [==============================] - 142s - loss: 1.6546 - categorical_accuracy: 0.3969 - val_loss: 1.7573 - val_categorical_accuracy: 0.3488\n", "Epoch 13/15\n", "50/50 [==============================] - 143s - loss: 1.6434 - categorical_accuracy: 0.4053 - val_loss: 1.7545 - val_categorical_accuracy: 0.3500\n", "Epoch 14/15\n", "50/50 [==============================] - 143s - loss: 1.6536 - categorical_accuracy: 0.3917 - val_loss: 1.7521 - val_categorical_accuracy: 0.3500\n", "Epoch 15/15\n", "50/50 [==============================] - 142s - loss: 1.6409 - categorical_accuracy: 0.4047 - val_loss: 1.7506 - val_categorical_accuracy: 0.3538\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 229)...\n", "\n", "Training for epochs 16 to 20...\n", "Epoch 16/20\n", "50/50 [==============================] - 149s - loss: 1.6128 - categorical_accuracy: 0.4147 - val_loss: 1.7480 - val_categorical_accuracy: 0.3525\n", "Epoch 17/20\n", "50/50 [==============================] - 143s - loss: 1.6131 - categorical_accuracy: 0.4102 - val_loss: 1.7464 - val_categorical_accuracy: 0.3588\n", "Epoch 18/20\n", "50/50 [==============================] - 144s - loss: 1.6187 - categorical_accuracy: 0.4127 - val_loss: 1.7442 - val_categorical_accuracy: 0.3613\n", "Epoch 19/20\n", 
"50/50 [==============================] - 144s - loss: 1.6383 - categorical_accuracy: 0.4016 - val_loss: 1.7434 - val_categorical_accuracy: 0.3625\n", "Epoch 20/20\n", "50/50 [==============================] - 144s - loss: 1.6194 - categorical_accuracy: 0.4141 - val_loss: 1.7424 - val_categorical_accuracy: 0.3625\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 200)...\n", "\n", "Training for epochs 21 to 25...\n", "Epoch 21/25\n", "50/50 [==============================] - 161s - loss: 1.5780 - categorical_accuracy: 0.4353 - val_loss: 1.7394 - val_categorical_accuracy: 0.3650\n", "Epoch 22/25\n", "50/50 [==============================] - 157s - loss: 1.5872 - categorical_accuracy: 0.4263 - val_loss: 1.7379 - val_categorical_accuracy: 0.3725\n", "Epoch 23/25\n", "50/50 [==============================] - 157s - loss: 1.5867 - categorical_accuracy: 0.4217 - val_loss: 1.7357 - val_categorical_accuracy: 0.3688\n", "Epoch 24/25\n", "50/50 [==============================] - 157s - loss: 1.5980 - categorical_accuracy: 0.4208 - val_loss: 1.7353 - val_categorical_accuracy: 0.3663\n", "Epoch 25/25\n", "50/50 [==============================] - 156s - loss: 1.5933 - categorical_accuracy: 0.4253 - val_loss: 1.7340 - val_categorical_accuracy: 0.3650\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 187)...\n", "\n", "Training for epochs 26 to 30...\n", "Epoch 26/30\n", "50/50 [==============================] - 166s - loss: 1.5446 - categorical_accuracy: 0.4480 - val_loss: 1.7314 - val_categorical_accuracy: 0.3688\n", "Epoch 27/30\n", "50/50 [==============================] - 160s - loss: 1.5531 - categorical_accuracy: 0.4439 - val_loss: 1.7304 - val_categorical_accuracy: 0.3738\n", "Epoch 28/30\n", "50/50 [==============================] - 161s - loss: 1.5443 - categorical_accuracy: 0.4477 - val_loss: 1.7291 - val_categorical_accuracy: 0.3725\n", "Epoch 29/30\n", "50/50 
[==============================] - 161s - loss: 1.5654 - categorical_accuracy: 0.4305 - val_loss: 1.7291 - val_categorical_accuracy: 0.3738\n", "Epoch 30/30\n", "50/50 [==============================] - 160s - loss: 1.5631 - categorical_accuracy: 0.4347 - val_loss: 1.7277 - val_categorical_accuracy: 0.3688\n", "\n", "01:13:03 for Inception V3 to yield 43.5% training accuracy and 36.9% validation accuracy in 5 \n", "epochs (x6 training phases).\n", "\n", "Inception V3 run complete at Tuesday, 2017 October 17, 1:25 PM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# InceptionV3:\n", "run_inception_v3()\n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "ResNet50 run begun at Tuesday, 2017 October 17, 1:25 PM.\n", "\t[5 epochs (x6 passes) on small FMA on GPU takes\n", "\tunknown (no similar runs found).]\n", "\n", "First-round training (training the classifier)...\n", "\n", "Training for epochs 1 to 5...\n", "Epoch 1/5\n", "50/50 [==============================] - 150s - loss: 1.7666 - categorical_accuracy: 0.3488 - val_loss: 3.1254 - val_categorical_accuracy: 0.1825\n", "Epoch 2/5\n", "50/50 [==============================] - 147s 
- loss: 1.6133 - categorical_accuracy: 0.4153 - val_loss: 2.3868 - val_categorical_accuracy: 0.2575\n", "Epoch 3/5\n", "50/50 [==============================] - 148s - loss: 1.5702 - categorical_accuracy: 0.4256 - val_loss: 2.0260 - val_categorical_accuracy: 0.3075\n", "Epoch 4/5\n", "50/50 [==============================] - 148s - loss: 1.5362 - categorical_accuracy: 0.4394 - val_loss: 1.8820 - val_categorical_accuracy: 0.3513\n", "Epoch 5/5\n", "50/50 [==============================] - 149s - loss: 1.5008 - categorical_accuracy: 0.4578 - val_loss: 1.7036 - val_categorical_accuracy: 0.3912\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 161)...\n", "\n", "Training for epochs 6 to 10...\n", "Epoch 6/10\n", "50/50 [==============================] - 158s - loss: 1.3932 - categorical_accuracy: 0.5038 - val_loss: 1.6633 - val_categorical_accuracy: 0.4037\n", "Epoch 7/10\n", "50/50 [==============================] - 154s - loss: 1.4004 - categorical_accuracy: 0.4986 - val_loss: 1.6490 - val_categorical_accuracy: 0.3925\n", "Epoch 8/10\n", "50/50 [==============================] - 155s - loss: 1.4020 - categorical_accuracy: 0.4963 - val_loss: 1.6421 - val_categorical_accuracy: 0.3937\n", "Epoch 9/10\n", "50/50 [==============================] - 155s - loss: 1.4135 - categorical_accuracy: 0.5003 - val_loss: 1.6386 - val_categorical_accuracy: 0.3962\n", "Epoch 10/10\n", "50/50 [==============================] - 155s - loss: 1.4193 - categorical_accuracy: 0.4867 - val_loss: 1.6373 - val_categorical_accuracy: 0.3987\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 151)...\n", "\n", "Training for epochs 11 to 15...\n", "Epoch 11/15\n", "50/50 [==============================] - 165s - loss: 1.3864 - categorical_accuracy: 0.4986 - val_loss: 1.6363 - val_categorical_accuracy: 0.4037\n", "Epoch 12/15\n", "50/50 [==============================] - 161s - loss: 1.3927 - categorical_accuracy: 
0.4983 - val_loss: 1.6351 - val_categorical_accuracy: 0.4000\n", "Epoch 13/15\n", "50/50 [==============================] - 161s - loss: 1.3973 - categorical_accuracy: 0.5005 - val_loss: 1.6341 - val_categorical_accuracy: 0.4000\n", "Epoch 14/15\n", "50/50 [==============================] - 162s - loss: 1.3964 - categorical_accuracy: 0.5000 - val_loss: 1.6334 - val_categorical_accuracy: 0.4025\n", "Epoch 15/15\n", "50/50 [==============================] - 162s - loss: 1.4030 - categorical_accuracy: 0.4942 - val_loss: 1.6330 - val_categorical_accuracy: 0.4050\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 139)...\n", "\n", "Training for epochs 16 to 20...\n", "Epoch 16/20\n", "50/50 [==============================] - 173s - loss: 1.3668 - categorical_accuracy: 0.5130 - val_loss: 1.6322 - val_categorical_accuracy: 0.4025\n", "Epoch 17/20\n", "50/50 [==============================] - 170s - loss: 1.3836 - categorical_accuracy: 0.5066 - val_loss: 1.6305 - val_categorical_accuracy: 0.4088\n", "Epoch 18/20\n", "50/50 [==============================] - 169s - loss: 1.3812 - categorical_accuracy: 0.5050 - val_loss: 1.6293 - val_categorical_accuracy: 0.4050\n", "Epoch 19/20\n", "50/50 [==============================] - 168s - loss: 1.3891 - categorical_accuracy: 0.5053 - val_loss: 1.6280 - val_categorical_accuracy: 0.4050\n", "Epoch 20/20\n", "50/50 [==============================] - 169s - loss: 1.3920 - categorical_accuracy: 0.4966 - val_loss: 1.6276 - val_categorical_accuracy: 0.4037\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 129)...\n", "\n", "Training for epochs 21 to 25...\n", "Epoch 21/25\n", "50/50 [==============================] - 184s - loss: 1.3524 - categorical_accuracy: 0.5130 - val_loss: 1.6271 - val_categorical_accuracy: 0.4062\n", "Epoch 22/25\n", "50/50 [==============================] - 180s - loss: 1.3620 - categorical_accuracy: 0.5147 - val_loss: 1.6258 - 
val_categorical_accuracy: 0.4088\n", "Epoch 23/25\n", "50/50 [==============================] - 180s - loss: 1.3703 - categorical_accuracy: 0.5052 - val_loss: 1.6250 - val_categorical_accuracy: 0.4088\n", "Epoch 24/25\n", "50/50 [==============================] - 180s - loss: 1.3664 - categorical_accuracy: 0.5086 - val_loss: 1.6241 - val_categorical_accuracy: 0.4113\n", "Epoch 25/25\n", "50/50 [==============================] - 180s - loss: 1.3682 - categorical_accuracy: 0.5067 - val_loss: 1.6232 - val_categorical_accuracy: 0.4075\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 119)...\n", "\n", "Training for epochs 26 to 30...\n", "Epoch 26/30\n", "50/50 [==============================] - 193s - loss: 1.3342 - categorical_accuracy: 0.5238 - val_loss: 1.6222 - val_categorical_accuracy: 0.4088\n", "Epoch 27/30\n", "50/50 [==============================] - 188s - loss: 1.3368 - categorical_accuracy: 0.5317 - val_loss: 1.6205 - val_categorical_accuracy: 0.4075\n", "Epoch 28/30\n", "50/50 [==============================] - 189s - loss: 1.3419 - categorical_accuracy: 0.5134 - val_loss: 1.6207 - val_categorical_accuracy: 0.4075\n", "Epoch 29/30\n", "50/50 [==============================] - 189s - loss: 1.3492 - categorical_accuracy: 0.5205 - val_loss: 1.6194 - val_categorical_accuracy: 0.4100\n", "Epoch 30/30\n", "50/50 [==============================] - 188s - loss: 1.3511 - categorical_accuracy: 0.5158 - val_loss: 1.6192 - val_categorical_accuracy: 0.4100\n", "\n", "01:24:20 for ResNet50 to yield 51.6% training accuracy and 41.0% validation accuracy in 5 \n", "epochs (x6 training phases).\n", "\n", "ResNet50 run complete at Tuesday, 2017 October 17, 2:49 PM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# ResNet50:\n", "run_resnet50()\n", 
"Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "VGG16 run begun at Tuesday, 2017 October 17, 2:49 PM.\n", "\t[5 epochs (x6 passes) on small FMA on GPU takes\n", "\tunknown (no similar runs found).]\n", "\n", "First-round training (training the classifier)...\n", "\n", "Training for epochs 1 to 3...\n", "Epoch 1/3\n", "50/50 [==============================] - 251s - loss: 1.8052 - categorical_accuracy: 0.3428 - val_loss: 3.0211 - val_categorical_accuracy: 0.1825\n", "Epoch 2/3\n", "50/50 [==============================] - 249s - loss: 1.6764 - categorical_accuracy: 0.3912 - val_loss: 1.8996 - val_categorical_accuracy: 0.2975\n", "Epoch 3/3\n", "50/50 [==============================] - 249s - loss: 1.6254 - categorical_accuracy: 0.4081 - val_loss: 1.7037 - val_categorical_accuracy: 0.3700\n", "\n", "Training for epochs 4 to 5...\n", "Epoch 4/5\n", "50/50 [==============================] - 251s - loss: 1.5762 - categorical_accuracy: 0.4281 - val_loss: 1.6391 - val_categorical_accuracy: 0.3850\n", "Epoch 5/5\n", "50/50 [==============================] - 249s - loss: 1.5647 - categorical_accuracy: 0.4302 - val_loss: 1.6285 - val_categorical_accuracy: 0.3862\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 17)...\n", "\n", "Training for epochs 6 to 8...\n", "Epoch 
6/8\n", "50/50 [==============================] - 258s - loss: 1.5193 - categorical_accuracy: 0.4506 - val_loss: 1.6428 - val_categorical_accuracy: 0.3912\n", "Epoch 7/8\n", "50/50 [==============================] - 256s - loss: 1.5129 - categorical_accuracy: 0.4598 - val_loss: 1.6108 - val_categorical_accuracy: 0.4125\n", "Epoch 8/8\n", "50/50 [==============================] - 256s - loss: 1.5039 - categorical_accuracy: 0.4572 - val_loss: 1.6036 - val_categorical_accuracy: 0.4138\n", "\n", "Training for epochs 9 to 10...\n", "Epoch 9/10\n", "50/50 [==============================] - 259s - loss: 1.4776 - categorical_accuracy: 0.4702 - val_loss: 1.6048 - val_categorical_accuracy: 0.4100\n", "Epoch 10/10\n", "50/50 [==============================] - 257s - loss: 1.4751 - categorical_accuracy: 0.4697 - val_loss: 1.5804 - val_categorical_accuracy: 0.4313\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 15)...\n", "\n", "Training for epochs 11 to 13...\n", "Epoch 11/13\n", "50/50 [==============================] - 285s - loss: 1.4724 - categorical_accuracy: 0.4719 - val_loss: 1.5873 - val_categorical_accuracy: 0.4037\n", "Epoch 12/13\n", "50/50 [==============================] - 282s - loss: 1.4591 - categorical_accuracy: 0.4833 - val_loss: 1.5765 - val_categorical_accuracy: 0.4163\n", "Epoch 13/13\n", "50/50 [==============================] - 284s - loss: 1.4603 - categorical_accuracy: 0.4717 - val_loss: 1.6121 - val_categorical_accuracy: 0.4125\n", "\n", "Training for epochs 14 to 15...\n", "Epoch 14/15\n", "50/50 [==============================] - 284s - loss: 1.4206 - categorical_accuracy: 0.4881 - val_loss: 1.6401 - val_categorical_accuracy: 0.4113\n", "Epoch 15/15\n", "50/50 [==============================] - 283s - loss: 1.4069 - categorical_accuracy: 0.4925 - val_loss: 1.5665 - val_categorical_accuracy: 0.4313\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 13)...\n", "\n", 
"Training for epochs 16 to 18...\n", "Epoch 16/18\n", "50/50 [==============================] - 312s - loss: 1.3914 - categorical_accuracy: 0.4998 - val_loss: 1.5789 - val_categorical_accuracy: 0.4188\n", "Epoch 17/18\n", "50/50 [==============================] - 311s - loss: 1.3884 - categorical_accuracy: 0.5059 - val_loss: 1.5743 - val_categorical_accuracy: 0.4163\n", "Epoch 18/18\n", "50/50 [==============================] - 311s - loss: 1.3915 - categorical_accuracy: 0.4983 - val_loss: 1.5942 - val_categorical_accuracy: 0.4225\n", "\n", "Training for epochs 19 to 20...\n", "Epoch 19/20\n", "50/50 [==============================] - 312s - loss: 1.3608 - categorical_accuracy: 0.5097 - val_loss: 1.5825 - val_categorical_accuracy: 0.4250\n", "Epoch 20/20\n", "50/50 [==============================] - 310s - loss: 1.3456 - categorical_accuracy: 0.5212 - val_loss: 1.5407 - val_categorical_accuracy: 0.4475\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 11)...\n", "\n", "Training for epochs 21 to 23...\n", "Epoch 21/23\n", "50/50 [==============================] - 391s - loss: 1.3276 - categorical_accuracy: 0.5228 - val_loss: 1.5517 - val_categorical_accuracy: 0.4375\n", "Epoch 22/23\n", "50/50 [==============================] - 389s - loss: 1.3289 - categorical_accuracy: 0.5258 - val_loss: 1.5961 - val_categorical_accuracy: 0.4288\n", "Epoch 23/23\n", "50/50 [==============================] - 389s - loss: 1.3389 - categorical_accuracy: 0.5219 - val_loss: 1.5799 - val_categorical_accuracy: 0.4263\n", "\n", "Training for epochs 24 to 25...\n", "Epoch 24/25\n", "50/50 [==============================] - 392s - loss: 1.2914 - categorical_accuracy: 0.5406 - val_loss: 1.6107 - val_categorical_accuracy: 0.4062\n", "Epoch 25/25\n", "50/50 [==============================] - 388s - loss: 1.2806 - categorical_accuracy: 0.5477 - val_loss: 1.5012 - val_categorical_accuracy: 0.4875\n", "\n", "\n", "Further training (refining convolutional 
blocks, starting with\n", "\tlayer 8)...\n", "\n", "Training for epochs 26 to 28...\n", "Epoch 26/28\n", "50/50 [==============================] - 486s - loss: 1.2673 - categorical_accuracy: 0.5494 - val_loss: 1.6947 - val_categorical_accuracy: 0.4037\n", "Epoch 27/28\n", "50/50 [==============================] - 483s - loss: 1.2531 - categorical_accuracy: 0.5561 - val_loss: 1.5906 - val_categorical_accuracy: 0.4462\n", "Epoch 28/28\n", "50/50 [==============================] - 483s - loss: 1.2753 - categorical_accuracy: 0.5530 - val_loss: 1.6108 - val_categorical_accuracy: 0.4313\n", "\n", "Training for epochs 29 to 30...\n", "Epoch 29/30\n", "50/50 [==============================] - 485s - loss: 1.2262 - categorical_accuracy: 0.5716 - val_loss: 1.5848 - val_categorical_accuracy: 0.4288\n", "Epoch 30/30\n", "50/50 [==============================] - 484s - loss: 1.2140 - categorical_accuracy: 0.5769 - val_loss: 1.5979 - val_categorical_accuracy: 0.4450\n", "\n", "02:44:59 for VGG16 to yield 57.7% training accuracy and 44.5% validation accuracy in 5 \n", "epochs (x6 training phases).\n", "\n", "VGG16 run complete at Tuesday, 2017 October 17, 5:34 PM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 30, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# VGG16:\n", "run_vgg16()\n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Section-Specific Setup (50% Augmentation)" ] }, { 
"cell_type": "code", "execution_count": 10, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Creating generators with batch size 128...\n", "Loading mean and standard deviation for the training set from file 'saved_objects/fma_small_dwt_stats.npz'.\n", "\n", "Using up to 50.0% horizontal shift to augment training data.\n", "Found 6400 images belonging to 8 classes.\n", "Found 800 images belonging to 8 classes.\n", "Found 800 images belonging to 8 classes.\n" ] } ], "source": [ "# Alter data augmentation value:\n", "param_dict[\"augmentation\"] = 0.5 # i.e., keep at least 15 sec of 30 sec clips\n", "# Reconfigure the generators based on the specified parameters:\n", "generators = {}\n", "(generators[\"train\"], \n", " generators[\"val\"], \n", " generators[\"test\"]) = cku.set_up_generators(param_dict)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Model Reruns" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using hidden size 128 and optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "Fully connected network run begun at Tuesday, 2017 October 17, 11:16 PM.\n", "\t[30 epochs on small FMA on GPU takes\n", "\t00:19:18 ± 00:17:42 ([00:06:16,00:45:27]).]\n", "\n", "\n", "Training for epochs 1 to 10...\n", "Epoch 1/10\n", "50/50 [==============================] - 95s - loss: 1.8272 - categorical_accuracy: 0.3162 - val_loss: 1.8042 - val_categorical_accuracy: 0.3613\n", "Epoch 2/10\n", "50/50 [==============================] - 91s - loss: 1.7175 - categorical_accuracy: 0.3644 - val_loss: 1.6865 - val_categorical_accuracy: 0.3837\n", "Epoch 3/10\n", "50/50 [==============================] - 91s - loss: 1.6722 - categorical_accuracy: 0.3853 - val_loss: 1.6498 - val_categorical_accuracy: 0.3750\n", "Epoch 4/10\n", "50/50 
[==============================] - 88s - loss: 1.6570 - categorical_accuracy: 0.3964 - val_loss: 1.6056 - val_categorical_accuracy: 0.4025\n", "Epoch 5/10\n", "50/50 [==============================] - 86s - loss: 1.6348 - categorical_accuracy: 0.4084 - val_loss: 1.6081 - val_categorical_accuracy: 0.3975\n", "Epoch 6/10\n", "50/50 [==============================] - 88s - loss: 1.6237 - categorical_accuracy: 0.4197 - val_loss: 1.5945 - val_categorical_accuracy: 0.4075\n", "Epoch 7/10\n", "50/50 [==============================] - 87s - loss: 1.6164 - categorical_accuracy: 0.4158 - val_loss: 1.5903 - val_categorical_accuracy: 0.4150\n", "Epoch 8/10\n", "50/50 [==============================] - 89s - loss: 1.6063 - categorical_accuracy: 0.4158 - val_loss: 1.5874 - val_categorical_accuracy: 0.4037\n", "Epoch 9/10\n", "50/50 [==============================] - 91s - loss: 1.6085 - categorical_accuracy: 0.4145 - val_loss: 1.5874 - val_categorical_accuracy: 0.4188\n", "Epoch 10/10\n", "50/50 [==============================] - 89s - loss: 1.5986 - categorical_accuracy: 0.4178 - val_loss: 1.5876 - val_categorical_accuracy: 0.4062\n", "\n", "Training for epochs 11 to 20...\n", "Epoch 11/20\n", "50/50 [==============================] - 92s - loss: 1.5304 - categorical_accuracy: 0.4536 - val_loss: 1.5781 - val_categorical_accuracy: 0.4150\n", "Epoch 12/20\n", "50/50 [==============================] - 90s - loss: 1.4969 - categorical_accuracy: 0.4648 - val_loss: 1.5721 - val_categorical_accuracy: 0.4325\n", "Epoch 13/20\n", "50/50 [==============================] - 89s - loss: 1.5120 - categorical_accuracy: 0.4600 - val_loss: 1.5685 - val_categorical_accuracy: 0.4325\n", "Epoch 14/20\n", "50/50 [==============================] - 90s - loss: 1.5045 - categorical_accuracy: 0.4570 - val_loss: 1.5637 - val_categorical_accuracy: 0.4375\n", "Epoch 15/20\n", "50/50 [==============================] - 89s - loss: 1.5225 - categorical_accuracy: 0.4562 - val_loss: 1.5645 - 
val_categorical_accuracy: 0.4325\n", "Epoch 16/20\n", "50/50 [==============================] - 89s - loss: 1.5318 - categorical_accuracy: 0.4517 - val_loss: 1.5568 - val_categorical_accuracy: 0.4313\n", "Epoch 17/20\n", "50/50 [==============================] - 89s - loss: 1.5313 - categorical_accuracy: 0.4481 - val_loss: 1.5650 - val_categorical_accuracy: 0.4462\n", "Epoch 18/20\n", "50/50 [==============================] - 88s - loss: 1.5438 - categorical_accuracy: 0.4473 - val_loss: 1.5633 - val_categorical_accuracy: 0.4437\n", "Epoch 19/20\n", "50/50 [==============================] - 89s - loss: 1.5408 - categorical_accuracy: 0.4522 - val_loss: 1.5608 - val_categorical_accuracy: 0.4437\n", "Epoch 20/20\n", "50/50 [==============================] - 89s - loss: 1.5304 - categorical_accuracy: 0.4462 - val_loss: 1.5657 - val_categorical_accuracy: 0.4338\n", "\n", "Training for epochs 21 to 30...\n", "Epoch 21/30\n", "50/50 [==============================] - 94s - loss: 1.4598 - categorical_accuracy: 0.4798 - val_loss: 1.5615 - val_categorical_accuracy: 0.4338\n", "Epoch 22/30\n", "50/50 [==============================] - 89s - loss: 1.4115 - categorical_accuracy: 0.4992 - val_loss: 1.5594 - val_categorical_accuracy: 0.4462\n", "Epoch 23/30\n", "50/50 [==============================] - 88s - loss: 1.4491 - categorical_accuracy: 0.4817 - val_loss: 1.5601 - val_categorical_accuracy: 0.4425\n", "Epoch 24/30\n", "50/50 [==============================] - 88s - loss: 1.4659 - categorical_accuracy: 0.4770 - val_loss: 1.5619 - val_categorical_accuracy: 0.4375\n", "Epoch 25/30\n", "50/50 [==============================] - 92s - loss: 1.4767 - categorical_accuracy: 0.4731 - val_loss: 1.5633 - val_categorical_accuracy: 0.4350\n", "Epoch 26/30\n", "50/50 [==============================] - 89s - loss: 1.4783 - categorical_accuracy: 0.4680 - val_loss: 1.5599 - val_categorical_accuracy: 0.4275\n", "Epoch 27/30\n", "50/50 [==============================] - 87s - loss: 1.4906 - 
categorical_accuracy: 0.4670 - val_loss: 1.5602 - val_categorical_accuracy: 0.4387\n", "Epoch 28/30\n", "50/50 [==============================] - 89s - loss: 1.4898 - categorical_accuracy: 0.4616 - val_loss: 1.5611 - val_categorical_accuracy: 0.4363\n", "Epoch 29/30\n", "50/50 [==============================] - 88s - loss: 1.4853 - categorical_accuracy: 0.4694 - val_loss: 1.5573 - val_categorical_accuracy: 0.4400\n", "Epoch 30/30\n", "50/50 [==============================] - 89s - loss: 1.4988 - categorical_accuracy: 0.4619 - val_loss: 1.5575 - val_categorical_accuracy: 0.4400\n", "\n", "00:44:59 for Two-Layer Network to yield 46.2% training accuracy and 44.0% validation accuracy in 5 \n", "epochs (x3 training phases).\n", "\n", "Fully connected run complete at Wednesday, 2017 October 18, 12:01 AM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# FCNN:\n", "run_fcnn()\n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "Xception run begun at Wednesday, 2017 October 18, 12:01 AM.\n", "\t[5 epochs (x6 passes) on small FMA on GPU takes\n", "\t01:35:10 ± 00:39:44 ([00:49:18,01:58:29]).]\n", "\n", "First-round training (training the classifier)...\n", "\n", "Training for epochs 1 to 5...\n", "Epoch 1/5\n", "50/50 
[==============================] - 193s - loss: 1.9261 - categorical_accuracy: 0.2717 - val_loss: 2.3211 - val_categorical_accuracy: 0.2100\n", "Epoch 2/5\n", "50/50 [==============================] - 190s - loss: 1.7864 - categorical_accuracy: 0.3294 - val_loss: 2.7821 - val_categorical_accuracy: 0.1375\n", "Epoch 3/5\n", "50/50 [==============================] - 190s - loss: 1.7452 - categorical_accuracy: 0.3478 - val_loss: 1.9734 - val_categorical_accuracy: 0.2250\n", "Epoch 4/5\n", "50/50 [==============================] - 190s - loss: 1.7187 - categorical_accuracy: 0.3575 - val_loss: 1.9125 - val_categorical_accuracy: 0.2825\n", "Epoch 5/5\n", "50/50 [==============================] - 190s - loss: 1.6948 - categorical_accuracy: 0.3773 - val_loss: 1.8809 - val_categorical_accuracy: 0.3175\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 122)...\n", "\n", "Training for epochs 6 to 10...\n", "Epoch 6/10\n", "50/50 [==============================] - 202s - loss: 1.6164 - categorical_accuracy: 0.4109 - val_loss: 1.7947 - val_categorical_accuracy: 0.3400\n", "Epoch 7/10\n", "50/50 [==============================] - 200s - loss: 1.6067 - categorical_accuracy: 0.4081 - val_loss: 1.7562 - val_categorical_accuracy: 0.3588\n", "Epoch 8/10\n", "50/50 [==============================] - 200s - loss: 1.6115 - categorical_accuracy: 0.4077 - val_loss: 1.7388 - val_categorical_accuracy: 0.3725\n", "Epoch 9/10\n", "50/50 [==============================] - 200s - loss: 1.6102 - categorical_accuracy: 0.4020 - val_loss: 1.7327 - val_categorical_accuracy: 0.3775\n", "Epoch 10/10\n", "50/50 [==============================] - 200s - loss: 1.6115 - categorical_accuracy: 0.4125 - val_loss: 1.7285 - val_categorical_accuracy: 0.3825\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 105)...\n", "\n", "Training for epochs 11 to 15...\n", "Epoch 11/15\n", "50/50 [==============================] - 231s - 
loss: 1.6000 - categorical_accuracy: 0.4134 - val_loss: 1.7255 - val_categorical_accuracy: 0.3762\n", "Epoch 12/15\n", "50/50 [==============================] - 228s - loss: 1.5892 - categorical_accuracy: 0.4202 - val_loss: 1.7234 - val_categorical_accuracy: 0.3812\n", "Epoch 13/15\n", "50/50 [==============================] - 228s - loss: 1.5951 - categorical_accuracy: 0.4194 - val_loss: 1.7201 - val_categorical_accuracy: 0.3812\n", "Epoch 14/15\n", "50/50 [==============================] - 228s - loss: 1.6023 - categorical_accuracy: 0.4125 - val_loss: 1.7176 - val_categorical_accuracy: 0.3825\n", "Epoch 15/15\n", "50/50 [==============================] - 228s - loss: 1.5918 - categorical_accuracy: 0.4152 - val_loss: 1.7146 - val_categorical_accuracy: 0.3800\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 95)...\n", "\n", "Training for epochs 16 to 20...\n", "Epoch 16/20\n", "50/50 [==============================] - 248s - loss: 1.5913 - categorical_accuracy: 0.4183 - val_loss: 1.7119 - val_categorical_accuracy: 0.3787\n", "Epoch 17/20\n", "50/50 [==============================] - 246s - loss: 1.5722 - categorical_accuracy: 0.4295 - val_loss: 1.7097 - val_categorical_accuracy: 0.3800\n", "Epoch 18/20\n", "50/50 [==============================] - 246s - loss: 1.5899 - categorical_accuracy: 0.4228 - val_loss: 1.7069 - val_categorical_accuracy: 0.3800\n", "Epoch 19/20\n", "50/50 [==============================] - 246s - loss: 1.5819 - categorical_accuracy: 0.4241 - val_loss: 1.7047 - val_categorical_accuracy: 0.3812\n", "Epoch 20/20\n", "50/50 [==============================] - 246s - loss: 1.5821 - categorical_accuracy: 0.4277 - val_loss: 1.7023 - val_categorical_accuracy: 0.3812\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 85)...\n", "\n", "Training for epochs 21 to 25...\n", "Epoch 21/25\n", "50/50 [==============================] - 264s - loss: 1.5743 - 
categorical_accuracy: 0.4264 - val_loss: 1.6999 - val_categorical_accuracy: 0.3825\n", "Epoch 22/25\n", "50/50 [==============================] - 261s - loss: 1.5568 - categorical_accuracy: 0.4303 - val_loss: 1.6984 - val_categorical_accuracy: 0.3837\n", "Epoch 23/25\n", "50/50 [==============================] - 261s - loss: 1.5751 - categorical_accuracy: 0.4253 - val_loss: 1.6963 - val_categorical_accuracy: 0.3825\n", "Epoch 24/25\n", "50/50 [==============================] - 261s - loss: 1.5637 - categorical_accuracy: 0.4244 - val_loss: 1.6945 - val_categorical_accuracy: 0.3825\n", "Epoch 25/25\n", "50/50 [==============================] - 262s - loss: 1.5725 - categorical_accuracy: 0.4295 - val_loss: 1.6927 - val_categorical_accuracy: 0.3837\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 75)...\n", "\n", "Training for epochs 26 to 30...\n", "Epoch 26/30\n", "50/50 [==============================] - 280s - loss: 1.5659 - categorical_accuracy: 0.4336 - val_loss: 1.6904 - val_categorical_accuracy: 0.3850\n", "Epoch 27/30\n", "50/50 [==============================] - 276s - loss: 1.5502 - categorical_accuracy: 0.4353 - val_loss: 1.6888 - val_categorical_accuracy: 0.3850\n", "Epoch 28/30\n", "50/50 [==============================] - 276s - loss: 1.5623 - categorical_accuracy: 0.4289 - val_loss: 1.6869 - val_categorical_accuracy: 0.3875\n", "Epoch 29/30\n", "50/50 [==============================] - 277s - loss: 1.5552 - categorical_accuracy: 0.4234 - val_loss: 1.6851 - val_categorical_accuracy: 0.3875\n", "Epoch 30/30\n", "50/50 [==============================] - 277s - loss: 1.5572 - categorical_accuracy: 0.4370 - val_loss: 1.6834 - val_categorical_accuracy: 0.3900\n", "\n", "01:57:37 for Xception to yield 43.7% training accuracy and 39.0% validation accuracy in 5 \n", "epochs (x6 training phases).\n", "\n", "Xception run complete at Wednesday, 2017 October 18, 1:59 AM.\n", "Clearing keras's backend Tensorflow 
session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Xception:\n", "run_xception()\n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "Inception V3 run begun at Wednesday, 2017 October 18, 1:59 AM.\n", "\t[5 epochs (x6 passes) on small FMA on GPU takes\n", "\t01:12:29 ± 47.949 sec ([01:11:55,01:13:03]).]\n", "\n", "First-round training (training the classifier)...\n", "\n", "Training for epochs 1 to 5...\n", "Epoch 1/5\n", "50/50 [==============================] - 127s - loss: 1.9566 - categorical_accuracy: 0.2469 - val_loss: 4.4766 - val_categorical_accuracy: 0.1263\n", "Epoch 2/5\n", "50/50 [==============================] - 121s - loss: 1.8499 - categorical_accuracy: 0.3014 - val_loss: 2.7545 - val_categorical_accuracy: 0.1525\n", "Epoch 3/5\n", "50/50 [==============================] - 121s - loss: 1.8171 - categorical_accuracy: 0.3233 - val_loss: 2.3754 - val_categorical_accuracy: 0.1800\n", "Epoch 4/5\n", "50/50 [==============================] - 121s - loss: 1.7820 - categorical_accuracy: 0.3308 - val_loss: 2.1441 - val_categorical_accuracy: 0.2238\n", "Epoch 5/5\n", "50/50 [==============================] - 120s - loss: 1.7802 - categorical_accuracy: 0.3319 - val_loss: 1.8661 - val_categorical_accuracy: 0.3063\n", "\n", "\n", "Further training (refining 
convolutional blocks, starting with\n", "\tlayer 249)...\n", "\n", "Training for epochs 6 to 10...\n", "Epoch 6/10\n", "50/50 [==============================] - 139s - loss: 1.6768 - categorical_accuracy: 0.3789 - val_loss: 1.8187 - val_categorical_accuracy: 0.3300\n", "Epoch 7/10\n", "50/50 [==============================] - 135s - loss: 1.6885 - categorical_accuracy: 0.3720 - val_loss: 1.7941 - val_categorical_accuracy: 0.3375\n", "Epoch 8/10\n", "50/50 [==============================] - 135s - loss: 1.6778 - categorical_accuracy: 0.3848 - val_loss: 1.7813 - val_categorical_accuracy: 0.3450\n", "Epoch 9/10\n", "50/50 [==============================] - 135s - loss: 1.6788 - categorical_accuracy: 0.3798 - val_loss: 1.7752 - val_categorical_accuracy: 0.3475\n", "Epoch 10/10\n", "50/50 [==============================] - 135s - loss: 1.6696 - categorical_accuracy: 0.3848 - val_loss: 1.7712 - val_categorical_accuracy: 0.3488\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 232)...\n", "\n", "Training for epochs 11 to 15...\n", "Epoch 11/15\n", "50/50 [==============================] - 145s - loss: 1.6431 - categorical_accuracy: 0.3973 - val_loss: 1.7687 - val_categorical_accuracy: 0.3538\n", "Epoch 12/15\n", "50/50 [==============================] - 140s - loss: 1.6538 - categorical_accuracy: 0.3994 - val_loss: 1.7660 - val_categorical_accuracy: 0.3550\n", "Epoch 13/15\n", "50/50 [==============================] - 141s - loss: 1.6530 - categorical_accuracy: 0.3991 - val_loss: 1.7633 - val_categorical_accuracy: 0.3538\n", "Epoch 14/15\n", "50/50 [==============================] - 141s - loss: 1.6558 - categorical_accuracy: 0.3905 - val_loss: 1.7604 - val_categorical_accuracy: 0.3550\n", "Epoch 15/15\n", "50/50 [==============================] - 140s - loss: 1.6442 - categorical_accuracy: 0.4002 - val_loss: 1.7578 - val_categorical_accuracy: 0.3575\n", "\n", "\n", "Further training (refining convolutional blocks, starting 
with\n", "\tlayer 229)...\n", "\n", "Training for epochs 16 to 20...\n", "Epoch 16/20\n", "50/50 [==============================] - 146s - loss: 1.6184 - categorical_accuracy: 0.4095 - val_loss: 1.7569 - val_categorical_accuracy: 0.3538\n", "Epoch 17/20\n", "50/50 [==============================] - 142s - loss: 1.6285 - categorical_accuracy: 0.4000 - val_loss: 1.7553 - val_categorical_accuracy: 0.3550\n", "Epoch 18/20\n", "50/50 [==============================] - 142s - loss: 1.6262 - categorical_accuracy: 0.4062 - val_loss: 1.7532 - val_categorical_accuracy: 0.3563\n", "Epoch 19/20\n", "50/50 [==============================] - 142s - loss: 1.6213 - categorical_accuracy: 0.4147 - val_loss: 1.7512 - val_categorical_accuracy: 0.3575\n", "Epoch 20/20\n", "50/50 [==============================] - 142s - loss: 1.6287 - categorical_accuracy: 0.4088 - val_loss: 1.7495 - val_categorical_accuracy: 0.3550\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 200)...\n", "\n", "Training for epochs 21 to 25...\n", "Epoch 21/25\n", "50/50 [==============================] - 158s - loss: 1.5848 - categorical_accuracy: 0.4147 - val_loss: 1.7483 - val_categorical_accuracy: 0.3563\n", "Epoch 22/25\n", "50/50 [==============================] - 151s - loss: 1.6024 - categorical_accuracy: 0.4122 - val_loss: 1.7466 - val_categorical_accuracy: 0.3550\n", "Epoch 23/25\n", "50/50 [==============================] - 151s - loss: 1.5962 - categorical_accuracy: 0.4217 - val_loss: 1.7445 - val_categorical_accuracy: 0.3613\n", "Epoch 24/25\n", "50/50 [==============================] - 151s - loss: 1.5912 - categorical_accuracy: 0.4213 - val_loss: 1.7426 - val_categorical_accuracy: 0.3625\n", "Epoch 25/25\n", "50/50 [==============================] - 151s - loss: 1.6034 - categorical_accuracy: 0.4150 - val_loss: 1.7405 - val_categorical_accuracy: 0.3650\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 187)...\n", 
"\n", "Training for epochs 26 to 30...\n", "Epoch 26/30\n", "50/50 [==============================] - 164s - loss: 1.5517 - categorical_accuracy: 0.4381 - val_loss: 1.7402 - val_categorical_accuracy: 0.3613\n", "Epoch 27/30\n", "50/50 [==============================] - 157s - loss: 1.5567 - categorical_accuracy: 0.4372 - val_loss: 1.7388 - val_categorical_accuracy: 0.3600\n", "Epoch 28/30\n", "50/50 [==============================] - 157s - loss: 1.5589 - categorical_accuracy: 0.4311 - val_loss: 1.7376 - val_categorical_accuracy: 0.3600\n", "Epoch 29/30\n", "50/50 [==============================] - 157s - loss: 1.5628 - categorical_accuracy: 0.4381 - val_loss: 1.7361 - val_categorical_accuracy: 0.3638\n", "Epoch 30/30\n", "50/50 [==============================] - 157s - loss: 1.5656 - categorical_accuracy: 0.4316 - val_loss: 1.7344 - val_categorical_accuracy: 0.3650\n", "\n", "01:11:53 for Inception V3 to yield 43.2% training accuracy and 36.5% validation accuracy in 5 \n", "epochs (x6 training phases).\n", "\n", "Inception V3 run complete at Wednesday, 2017 October 18, 3:11 AM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Inception V3:\n", "run_inception_v3()\n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using optimizer (0.01, 0.02, 0.85, 1e-10)...\n", 
"ResNet50 run begun at Wednesday, 2017 October 18, 3:11 AM.\n", "\t[5 epochs (x6 passes) on small FMA on GPU takes\n", "\t01:23:47 ± 46.411 sec ([01:23:14,01:24:20]).]\n", "\n", "First-round training (training the classifier)...\n", "\n", "Training for epochs 1 to 5...\n", "Epoch 1/5\n", "50/50 [==============================] - 151s - loss: 1.7646 - categorical_accuracy: 0.3555 - val_loss: 3.9853 - val_categorical_accuracy: 0.1363\n", "Epoch 2/5\n", "50/50 [==============================] - 145s - loss: 1.6147 - categorical_accuracy: 0.4095 - val_loss: 3.0745 - val_categorical_accuracy: 0.2625\n", "Epoch 3/5\n", "50/50 [==============================] - 145s - loss: 1.5612 - categorical_accuracy: 0.4270 - val_loss: 2.2987 - val_categorical_accuracy: 0.3025\n", "Epoch 4/5\n", "50/50 [==============================] - 145s - loss: 1.5421 - categorical_accuracy: 0.4416 - val_loss: 1.8570 - val_categorical_accuracy: 0.3525\n", "Epoch 5/5\n", "50/50 [==============================] - 145s - loss: 1.5159 - categorical_accuracy: 0.4539 - val_loss: 1.7400 - val_categorical_accuracy: 0.3775\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 161)...\n", "\n", "Training for epochs 6 to 10...\n", "Epoch 6/10\n", "50/50 [==============================] - 155s - loss: 1.3934 - categorical_accuracy: 0.5033 - val_loss: 1.6788 - val_categorical_accuracy: 0.3812\n", "Epoch 7/10\n", "50/50 [==============================] - 152s - loss: 1.4012 - categorical_accuracy: 0.4998 - val_loss: 1.6568 - val_categorical_accuracy: 0.3937\n", "Epoch 8/10\n", "50/50 [==============================] - 152s - loss: 1.4053 - categorical_accuracy: 0.5022 - val_loss: 1.6472 - val_categorical_accuracy: 0.4075\n", "Epoch 9/10\n", "50/50 [==============================] - 151s - loss: 1.4229 - categorical_accuracy: 0.4878 - val_loss: 1.6437 - val_categorical_accuracy: 0.4125\n", "Epoch 10/10\n", "50/50 [==============================] - 152s - loss: 1.4163 - 
categorical_accuracy: 0.4978 - val_loss: 1.6424 - val_categorical_accuracy: 0.4088\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 151)...\n", "\n", "Training for epochs 11 to 15...\n", "Epoch 11/15\n", "50/50 [==============================] - 160s - loss: 1.3853 - categorical_accuracy: 0.5014 - val_loss: 1.6412 - val_categorical_accuracy: 0.4075\n", "Epoch 12/15\n", "50/50 [==============================] - 157s - loss: 1.3871 - categorical_accuracy: 0.5098 - val_loss: 1.6403 - val_categorical_accuracy: 0.4088\n", "Epoch 13/15\n", "50/50 [==============================] - 156s - loss: 1.3931 - categorical_accuracy: 0.4955 - val_loss: 1.6389 - val_categorical_accuracy: 0.4088\n", "Epoch 14/15\n", "50/50 [==============================] - 158s - loss: 1.4081 - categorical_accuracy: 0.4908 - val_loss: 1.6383 - val_categorical_accuracy: 0.4088\n", "Epoch 15/15\n", "50/50 [==============================] - 157s - loss: 1.4093 - categorical_accuracy: 0.4906 - val_loss: 1.6381 - val_categorical_accuracy: 0.4075\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 139)...\n", "\n", "Training for epochs 16 to 20...\n", "Epoch 16/20\n", "50/50 [==============================] - 171s - loss: 1.3755 - categorical_accuracy: 0.5130 - val_loss: 1.6375 - val_categorical_accuracy: 0.4113\n", "Epoch 17/20\n", "50/50 [==============================] - 167s - loss: 1.3799 - categorical_accuracy: 0.5070 - val_loss: 1.6365 - val_categorical_accuracy: 0.4100\n", "Epoch 18/20\n", "50/50 [==============================] - 167s - loss: 1.3939 - categorical_accuracy: 0.4997 - val_loss: 1.6352 - val_categorical_accuracy: 0.4113\n", "Epoch 19/20\n", "50/50 [==============================] - 168s - loss: 1.4075 - categorical_accuracy: 0.4897 - val_loss: 1.6334 - val_categorical_accuracy: 0.4113\n", "Epoch 20/20\n", "50/50 [==============================] - 167s - loss: 1.3946 - categorical_accuracy: 0.4977 - 
val_loss: 1.6325 - val_categorical_accuracy: 0.4088\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 129)...\n", "\n", "Training for epochs 21 to 25...\n", "Epoch 21/25\n", "50/50 [==============================] - 183s - loss: 1.3566 - categorical_accuracy: 0.5172 - val_loss: 1.6323 - val_categorical_accuracy: 0.4113\n", "Epoch 22/25\n", "50/50 [==============================] - 178s - loss: 1.3530 - categorical_accuracy: 0.5148 - val_loss: 1.6310 - val_categorical_accuracy: 0.4125\n", "Epoch 23/25\n", "50/50 [==============================] - 178s - loss: 1.3662 - categorical_accuracy: 0.5069 - val_loss: 1.6290 - val_categorical_accuracy: 0.4075\n", "Epoch 24/25\n", "50/50 [==============================] - 178s - loss: 1.3741 - categorical_accuracy: 0.5062 - val_loss: 1.6276 - val_categorical_accuracy: 0.4075\n", "Epoch 25/25\n", "50/50 [==============================] - 178s - loss: 1.3703 - categorical_accuracy: 0.5106 - val_loss: 1.6266 - val_categorical_accuracy: 0.4100\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 119)...\n", "\n", "Training for epochs 26 to 30...\n", "Epoch 26/30\n", "50/50 [==============================] - 192s - loss: 1.3311 - categorical_accuracy: 0.5262 - val_loss: 1.6261 - val_categorical_accuracy: 0.4075\n", "Epoch 27/30\n", "50/50 [==============================] - 188s - loss: 1.3319 - categorical_accuracy: 0.5247 - val_loss: 1.6259 - val_categorical_accuracy: 0.4075\n", "Epoch 28/30\n", "50/50 [==============================] - 186s - loss: 1.3458 - categorical_accuracy: 0.5145 - val_loss: 1.6236 - val_categorical_accuracy: 0.4088\n", "Epoch 29/30\n", "50/50 [==============================] - 187s - loss: 1.3502 - categorical_accuracy: 0.5092 - val_loss: 1.6220 - val_categorical_accuracy: 0.4100\n", "Epoch 30/30\n", "50/50 [==============================] - 187s - loss: 1.3467 - categorical_accuracy: 0.5270 - val_loss: 1.6220 - 
val_categorical_accuracy: 0.4113\n", "\n", "01:23:09 for ResNet50 to yield 52.7% training accuracy and 41.1% validation accuracy in 5 \n", "epochs (x6 training phases).\n", "\n", "ResNet50 run complete at Wednesday, 2017 October 18, 4:35 AM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# ResNet50:\n", "run_resnet50()\n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "VGG16 run begun at Friday, 2017 October 20, 8:20 PM.\n", "\t[5 epochs (x6 passes) on small FMA on GPU takes\n", "\t02:44:48 ± 15.410 sec ([02:44:37,02:44:59]).]\n", "\n", "First-round training (training the classifier)...\n", "\n", "Training for epochs 1 to 3...\n", "Epoch 1/3\n", "50/50 [==============================] - 259s - loss: 1.8070 - categorical_accuracy: 0.3317 - val_loss: 2.9281 - val_categorical_accuracy: 0.2175\n", "Epoch 2/3\n", "50/50 [==============================] - 239s - loss: 1.6816 - categorical_accuracy: 0.3877 - val_loss: 1.8851 - val_categorical_accuracy: 0.2888\n", "Epoch 3/3\n", "50/50 [==============================] - 240s - loss: 1.6257 - categorical_accuracy: 0.4053 - val_loss: 1.8181 - val_categorical_accuracy: 0.2875\n", "\n", "Training for epochs 4 to 5...\n", "Epoch 4/5\n", "50/50 [==============================] - 
244s - loss: 1.5842 - categorical_accuracy: 0.4228 - val_loss: 1.6720 - val_categorical_accuracy: 0.3775\n", "Epoch 5/5\n", "50/50 [==============================] - 247s - loss: 1.5677 - categorical_accuracy: 0.4292 - val_loss: 1.6242 - val_categorical_accuracy: 0.3800\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 17)...\n", "\n", "Training for epochs 6 to 8...\n", "Epoch 6/8\n", "50/50 [==============================] - 257s - loss: 1.5261 - categorical_accuracy: 0.4505 - val_loss: 1.6081 - val_categorical_accuracy: 0.3925\n", "Epoch 7/8\n", "50/50 [==============================] - 253s - loss: 1.5090 - categorical_accuracy: 0.4505 - val_loss: 1.5956 - val_categorical_accuracy: 0.4188\n", "Epoch 8/8\n", "50/50 [==============================] - 254s - loss: 1.5156 - categorical_accuracy: 0.4555 - val_loss: 1.5798 - val_categorical_accuracy: 0.4113\n", "\n", "Training for epochs 9 to 10...\n", "Epoch 9/10\n", "50/50 [==============================] - 255s - loss: 1.4913 - categorical_accuracy: 0.4637 - val_loss: 1.5820 - val_categorical_accuracy: 0.4088\n", "Epoch 10/10\n", "50/50 [==============================] - 254s - loss: 1.4887 - categorical_accuracy: 0.4614 - val_loss: 1.5766 - val_categorical_accuracy: 0.4288\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 15)...\n", "\n", "Training for epochs 11 to 13...\n", "Epoch 11/13\n", "50/50 [==============================] - 277s - loss: 1.4800 - categorical_accuracy: 0.4648 - val_loss: 1.6647 - val_categorical_accuracy: 0.4150\n", "Epoch 12/13\n", "50/50 [==============================] - 275s - loss: 1.4561 - categorical_accuracy: 0.4763 - val_loss: 1.5598 - val_categorical_accuracy: 0.4387\n", "Epoch 13/13\n", "50/50 [==============================] - 275s - loss: 1.4640 - categorical_accuracy: 0.4708 - val_loss: 1.7159 - val_categorical_accuracy: 0.3900\n", "\n", "Training for epochs 14 to 15...\n", "Epoch 14/15\n", 
"50/50 [==============================] - 277s - loss: 1.4243 - categorical_accuracy: 0.4866 - val_loss: 1.5645 - val_categorical_accuracy: 0.4350\n", "Epoch 15/15\n", "50/50 [==============================] - 275s - loss: 1.4093 - categorical_accuracy: 0.4944 - val_loss: 1.5910 - val_categorical_accuracy: 0.4175\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 13)...\n", "\n", "Training for epochs 16 to 18...\n", "Epoch 16/18\n", "50/50 [==============================] - 309s - loss: 1.4075 - categorical_accuracy: 0.4925 - val_loss: 1.5826 - val_categorical_accuracy: 0.4412\n", "Epoch 17/18\n", "50/50 [==============================] - 304s - loss: 1.3880 - categorical_accuracy: 0.5056 - val_loss: 1.5759 - val_categorical_accuracy: 0.4288\n", "Epoch 18/18\n", "50/50 [==============================] - 305s - loss: 1.4005 - categorical_accuracy: 0.4981 - val_loss: 1.6441 - val_categorical_accuracy: 0.3975\n", "\n", "Training for epochs 19 to 20...\n", "Epoch 19/20\n", "50/50 [==============================] - 307s - loss: 1.3664 - categorical_accuracy: 0.5080 - val_loss: 1.5706 - val_categorical_accuracy: 0.4462\n", "Epoch 20/20\n", "50/50 [==============================] - 305s - loss: 1.3603 - categorical_accuracy: 0.5134 - val_loss: 1.6023 - val_categorical_accuracy: 0.4263\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 11)...\n", "\n", "Training for epochs 21 to 23...\n", "Epoch 21/23\n", "50/50 [==============================] - 390s - loss: 1.3489 - categorical_accuracy: 0.5192 - val_loss: 1.5927 - val_categorical_accuracy: 0.4238\n", "Epoch 22/23\n", "50/50 [==============================] - 385s - loss: 1.3433 - categorical_accuracy: 0.5172 - val_loss: 1.5325 - val_categorical_accuracy: 0.4425\n", "Epoch 23/23\n", "50/50 [==============================] - 385s - loss: 1.3503 - categorical_accuracy: 0.5177 - val_loss: 1.5948 - val_categorical_accuracy: 0.4313\n", "\n", 
"Training for epochs 24 to 25...\n", "Epoch 24/25\n", "50/50 [==============================] - 386s - loss: 1.3129 - categorical_accuracy: 0.5259 - val_loss: 1.5278 - val_categorical_accuracy: 0.4525\n", "Epoch 25/25\n", "50/50 [==============================] - 384s - loss: 1.2852 - categorical_accuracy: 0.5394 - val_loss: 1.5319 - val_categorical_accuracy: 0.4575\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 8)...\n", "\n", "Training for epochs 26 to 28...\n", "Epoch 26/28\n", "50/50 [==============================] - 482s - loss: 1.2771 - categorical_accuracy: 0.5491 - val_loss: 1.5673 - val_categorical_accuracy: 0.4400\n", "Epoch 27/28\n", "50/50 [==============================] - 475s - loss: 1.2754 - categorical_accuracy: 0.5475 - val_loss: 1.5291 - val_categorical_accuracy: 0.4512\n", "Epoch 28/28\n", "50/50 [==============================] - 474s - loss: 1.2911 - categorical_accuracy: 0.5403 - val_loss: 1.5364 - val_categorical_accuracy: 0.4525\n", "\n", "Training for epochs 29 to 30...\n", "Epoch 29/30\n", "50/50 [==============================] - 477s - loss: 1.2459 - categorical_accuracy: 0.5623 - val_loss: 1.5411 - val_categorical_accuracy: 0.4462\n", "Epoch 30/30\n", "50/50 [==============================] - 475s - loss: 1.2335 - categorical_accuracy: 0.5666 - val_loss: 1.5343 - val_categorical_accuracy: 0.4700\n", "\n", "02:42:22 for VGG16 to yield 56.7% training accuracy and 47.0% validation accuracy in 5 \n", "epochs (x6 training phases).\n", "\n", "VGG16 run complete at Friday, 2017 October 20, 11:02 PM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# VGG16:\n", "run_vgg16()\n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "scrolled": true }, 
"outputs": [], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Section-Specific Setup (33% Augmentation)" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Creating generators with batch size 128...\n", "Loading mean and standard deviation for the training set from file 'saved_objects/fma_small_dwt_stats.npz'.\n", "\n", "Using up to 33.0% horizontal shift to augment training data.\n", "Found 6400 images belonging to 8 classes.\n", "Found 800 images belonging to 8 classes.\n", "Found 800 images belonging to 8 classes.\n" ] } ], "source": [ "# Alter data augmentation value:\n", "param_dict[\"augmentation\"] = 0.33 # i.e., keep at least 20 sec of 30 sec clips\n", "# Reconfigure the generators based on the specified parameters:\n", "generators = {}\n", "(generators[\"train\"], \n", " generators[\"val\"], \n", " generators[\"test\"]) = cku.set_up_generators(param_dict)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Model Reruns" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using hidden size 128 and optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "Fully connected network run begun at Wednesday, 2017 October 18, 4:35 AM.\n", "\t[30 epochs on small FMA on GPU takes\n", "\t00:24:26 ± 00:19:09 ([00:06:16,00:45:27]).]\n", "\n", "\n", "Training for epochs 1 to 10...\n", "Epoch 1/10\n", "50/50 [==============================] - 97s - loss: 1.8202 - categorical_accuracy: 0.3175 - val_loss: 1.8147 - val_categorical_accuracy: 0.3500\n", "Epoch 2/10\n", "50/50 
[==============================] - 88s - loss: 1.7107 - categorical_accuracy: 0.3711 - val_loss: 1.6795 - val_categorical_accuracy: 0.3750\n", "Epoch 3/10\n", "50/50 [==============================] - 89s - loss: 1.6714 - categorical_accuracy: 0.3883 - val_loss: 1.6312 - val_categorical_accuracy: 0.3900\n", "Epoch 4/10\n", "50/50 [==============================] - 87s - loss: 1.6512 - categorical_accuracy: 0.4058 - val_loss: 1.6011 - val_categorical_accuracy: 0.4050\n", "Epoch 5/10\n", "50/50 [==============================] - 87s - loss: 1.6360 - categorical_accuracy: 0.4006 - val_loss: 1.6020 - val_categorical_accuracy: 0.3987\n", "Epoch 6/10\n", "50/50 [==============================] - 88s - loss: 1.6192 - categorical_accuracy: 0.4144 - val_loss: 1.5874 - val_categorical_accuracy: 0.4113\n", "Epoch 7/10\n", "50/50 [==============================] - 88s - loss: 1.6048 - categorical_accuracy: 0.4181 - val_loss: 1.5933 - val_categorical_accuracy: 0.4100\n", "Epoch 8/10\n", "50/50 [==============================] - 89s - loss: 1.6054 - categorical_accuracy: 0.4198 - val_loss: 1.5879 - val_categorical_accuracy: 0.4100\n", "Epoch 9/10\n", "50/50 [==============================] - 88s - loss: 1.6026 - categorical_accuracy: 0.4248 - val_loss: 1.5844 - val_categorical_accuracy: 0.4088\n", "Epoch 10/10\n", "50/50 [==============================] - 88s - loss: 1.5913 - categorical_accuracy: 0.4214 - val_loss: 1.5829 - val_categorical_accuracy: 0.4000\n", "\n", "Training for epochs 11 to 20...\n", "Epoch 11/20\n", "50/50 [==============================] - 92s - loss: 1.5209 - categorical_accuracy: 0.4531 - val_loss: 1.5846 - val_categorical_accuracy: 0.4113\n", "Epoch 12/20\n", "50/50 [==============================] - 87s - loss: 1.4919 - categorical_accuracy: 0.4612 - val_loss: 1.5773 - val_categorical_accuracy: 0.4188\n", "Epoch 13/20\n", "50/50 [==============================] - 89s - loss: 1.5075 - categorical_accuracy: 0.4666 - val_loss: 1.5737 - 
val_categorical_accuracy: 0.4012\n", "Epoch 14/20\n", "50/50 [==============================] - 88s - loss: 1.5247 - categorical_accuracy: 0.4564 - val_loss: 1.5743 - val_categorical_accuracy: 0.4150\n", "Epoch 15/20\n", "50/50 [==============================] - 88s - loss: 1.5203 - categorical_accuracy: 0.4523 - val_loss: 1.5663 - val_categorical_accuracy: 0.4175\n", "Epoch 16/20\n", "50/50 [==============================] - 88s - loss: 1.5296 - categorical_accuracy: 0.4483 - val_loss: 1.5651 - val_categorical_accuracy: 0.4200\n", "Epoch 17/20\n", "50/50 [==============================] - 88s - loss: 1.5281 - categorical_accuracy: 0.4497 - val_loss: 1.5614 - val_categorical_accuracy: 0.4288\n", "Epoch 18/20\n", "50/50 [==============================] - 91s - loss: 1.5342 - categorical_accuracy: 0.4481 - val_loss: 1.5592 - val_categorical_accuracy: 0.4163\n", "Epoch 19/20\n", "50/50 [==============================] - 91s - loss: 1.5233 - categorical_accuracy: 0.4547 - val_loss: 1.5611 - val_categorical_accuracy: 0.4288\n", "Epoch 20/20\n", "50/50 [==============================] - 89s - loss: 1.5298 - categorical_accuracy: 0.4572 - val_loss: 1.5591 - val_categorical_accuracy: 0.4213\n", "\n", "Training for epochs 21 to 30...\n", "Epoch 21/30\n", "50/50 [==============================] - 94s - loss: 1.4376 - categorical_accuracy: 0.4909 - val_loss: 1.5626 - val_categorical_accuracy: 0.4188\n", "Epoch 22/30\n", "50/50 [==============================] - 89s - loss: 1.4211 - categorical_accuracy: 0.4913 - val_loss: 1.5593 - val_categorical_accuracy: 0.4225\n", "Epoch 23/30\n", "50/50 [==============================] - 90s - loss: 1.4400 - categorical_accuracy: 0.4861 - val_loss: 1.5617 - val_categorical_accuracy: 0.4250\n", "Epoch 24/30\n", "50/50 [==============================] - 91s - loss: 1.4592 - categorical_accuracy: 0.4780 - val_loss: 1.5626 - val_categorical_accuracy: 0.4263\n", "Epoch 25/30\n", "50/50 [==============================] - 90s - loss: 1.4597 - 
categorical_accuracy: 0.4778 - val_loss: 1.5575 - val_categorical_accuracy: 0.4263\n", "Epoch 26/30\n", "50/50 [==============================] - 90s - loss: 1.4730 - categorical_accuracy: 0.4789 - val_loss: 1.5579 - val_categorical_accuracy: 0.4225\n", "Epoch 27/30\n", "50/50 [==============================] - 90s - loss: 1.4676 - categorical_accuracy: 0.4730 - val_loss: 1.5582 - val_categorical_accuracy: 0.4300\n", "Epoch 28/30\n", "50/50 [==============================] - 90s - loss: 1.4679 - categorical_accuracy: 0.4798 - val_loss: 1.5557 - val_categorical_accuracy: 0.4300\n", "Epoch 29/30\n", "50/50 [==============================] - 89s - loss: 1.4867 - categorical_accuracy: 0.4713 - val_loss: 1.5555 - val_categorical_accuracy: 0.4375\n", "Epoch 30/30\n", "50/50 [==============================] - 90s - loss: 1.4915 - categorical_accuracy: 0.4608 - val_loss: 1.5583 - val_categorical_accuracy: 0.4338\n", "\n", "00:45:00 for Two-Layer Network to yield 46.1% training accuracy and 43.4% validation accuracy in 10 \n", "epochs (x3 training phases).\n", "\n", "Fully connected run complete at Wednesday, 2017 October 18, 5:20 AM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# FCNN:\n", "run_fcnn()\n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": 22, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using optimizer (0.01, 
0.02, 0.85, 1e-10)...\n", "Xception run begun at Wednesday, 2017 October 18, 5:20 AM.\n", "\t[5 epochs (x6 passes) on small FMA on GPU takes\n", "\t01:40:47 ± 00:34:20 ([00:49:18,01:58:29]).]\n", "\n", "First-round training (training the classifier)...\n", "\n", "Training for epochs 1 to 5...\n", "Epoch 1/5\n", "50/50 [==============================] - 194s - loss: 1.9206 - categorical_accuracy: 0.2734 - val_loss: 2.2645 - val_categorical_accuracy: 0.1525\n", "Epoch 2/5\n", "50/50 [==============================] - 191s - loss: 1.7789 - categorical_accuracy: 0.3384 - val_loss: 2.1427 - val_categorical_accuracy: 0.2450\n", "Epoch 3/5\n", "50/50 [==============================] - 191s - loss: 1.7419 - categorical_accuracy: 0.3519 - val_loss: 2.0722 - val_categorical_accuracy: 0.2137\n", "Epoch 4/5\n", "50/50 [==============================] - 191s - loss: 1.7101 - categorical_accuracy: 0.3684 - val_loss: 1.8965 - val_categorical_accuracy: 0.3088\n", "Epoch 5/5\n", "50/50 [==============================] - 191s - loss: 1.7003 - categorical_accuracy: 0.3717 - val_loss: 1.8520 - val_categorical_accuracy: 0.3137\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 122)...\n", "\n", "Training for epochs 6 to 10...\n", "Epoch 6/10\n", "50/50 [==============================] - 203s - loss: 1.6100 - categorical_accuracy: 0.4009 - val_loss: 1.7795 - val_categorical_accuracy: 0.3588\n", "Epoch 7/10\n", "50/50 [==============================] - 200s - loss: 1.6091 - categorical_accuracy: 0.4203 - val_loss: 1.7490 - val_categorical_accuracy: 0.3700\n", "Epoch 8/10\n", "50/50 [==============================] - 200s - loss: 1.6151 - categorical_accuracy: 0.4091 - val_loss: 1.7368 - val_categorical_accuracy: 0.3738\n", "Epoch 9/10\n", "50/50 [==============================] - 200s - loss: 1.6148 - categorical_accuracy: 0.4156 - val_loss: 1.7310 - val_categorical_accuracy: 0.3787\n", "Epoch 10/10\n", "50/50 [==============================] - 
200s - loss: 1.6085 - categorical_accuracy: 0.4150 - val_loss: 1.7275 - val_categorical_accuracy: 0.3775\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 105)...\n", "\n", "Training for epochs 11 to 15...\n", "Epoch 11/15\n", "50/50 [==============================] - 230s - loss: 1.6076 - categorical_accuracy: 0.4097 - val_loss: 1.7250 - val_categorical_accuracy: 0.3862\n", "Epoch 12/15\n", "50/50 [==============================] - 229s - loss: 1.5913 - categorical_accuracy: 0.4197 - val_loss: 1.7227 - val_categorical_accuracy: 0.3850\n", "Epoch 13/15\n", "50/50 [==============================] - 229s - loss: 1.5935 - categorical_accuracy: 0.4131 - val_loss: 1.7189 - val_categorical_accuracy: 0.3937\n", "Epoch 14/15\n", "50/50 [==============================] - 229s - loss: 1.6051 - categorical_accuracy: 0.4247 - val_loss: 1.7163 - val_categorical_accuracy: 0.3912\n", "Epoch 15/15\n", "50/50 [==============================] - 228s - loss: 1.5946 - categorical_accuracy: 0.4158 - val_loss: 1.7138 - val_categorical_accuracy: 0.3912\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 95)...\n", "\n", "Training for epochs 16 to 20...\n", "Epoch 16/20\n", "50/50 [==============================] - 248s - loss: 1.5764 - categorical_accuracy: 0.4241 - val_loss: 1.7117 - val_categorical_accuracy: 0.3925\n", "Epoch 17/20\n", "50/50 [==============================] - 245s - loss: 1.5762 - categorical_accuracy: 0.4253 - val_loss: 1.7093 - val_categorical_accuracy: 0.3937\n", "Epoch 18/20\n", "50/50 [==============================] - 245s - loss: 1.5838 - categorical_accuracy: 0.4197 - val_loss: 1.7060 - val_categorical_accuracy: 0.3950\n", "Epoch 19/20\n", "50/50 [==============================] - 245s - loss: 1.5880 - categorical_accuracy: 0.4233 - val_loss: 1.7038 - val_categorical_accuracy: 0.3937\n", "Epoch 20/20\n", "50/50 [==============================] - 245s - loss: 1.5811 - 
categorical_accuracy: 0.4188 - val_loss: 1.7017 - val_categorical_accuracy: 0.3900\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 85)...\n", "\n", "Training for epochs 21 to 25...\n", "Epoch 21/25\n", "50/50 [==============================] - 267s - loss: 1.5636 - categorical_accuracy: 0.4281 - val_loss: 1.7003 - val_categorical_accuracy: 0.3937\n", "Epoch 22/25\n", "50/50 [==============================] - 264s - loss: 1.5628 - categorical_accuracy: 0.4348 - val_loss: 1.6986 - val_categorical_accuracy: 0.3925\n", "Epoch 23/25\n", "50/50 [==============================] - 264s - loss: 1.5679 - categorical_accuracy: 0.4237 - val_loss: 1.6957 - val_categorical_accuracy: 0.3912\n", "Epoch 24/25\n", "50/50 [==============================] - 263s - loss: 1.5754 - categorical_accuracy: 0.4303 - val_loss: 1.6936 - val_categorical_accuracy: 0.3912\n", "Epoch 25/25\n", "50/50 [==============================] - 263s - loss: 1.5770 - categorical_accuracy: 0.4180 - val_loss: 1.6918 - val_categorical_accuracy: 0.3912\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 75)...\n", "\n", "Training for epochs 26 to 30...\n", "Epoch 26/30\n", "50/50 [==============================] - 283s - loss: 1.5539 - categorical_accuracy: 0.4358 - val_loss: 1.6905 - val_categorical_accuracy: 0.3937\n", "Epoch 27/30\n", "50/50 [==============================] - 280s - loss: 1.5489 - categorical_accuracy: 0.4458 - val_loss: 1.6890 - val_categorical_accuracy: 0.3925\n", "Epoch 28/30\n", "50/50 [==============================] - 280s - loss: 1.5470 - categorical_accuracy: 0.4323 - val_loss: 1.6864 - val_categorical_accuracy: 0.3950\n", "Epoch 29/30\n", "50/50 [==============================] - 280s - loss: 1.5518 - categorical_accuracy: 0.4425 - val_loss: 1.6846 - val_categorical_accuracy: 0.3925\n", "Epoch 30/30\n", "50/50 [==============================] - 279s - loss: 1.5532 - categorical_accuracy: 0.4364 - 
val_loss: 1.6827 - val_categorical_accuracy: 0.3962\n", "\n", "01:58:14 for Xception to yield 43.6% training accuracy and 39.6% validation accuracy in 5 \n", "epochs (x6 training phases).\n", "\n", "Xception run complete at Wednesday, 2017 October 18, 7:18 AM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Xception:\n", "run_xception()\n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "Inception V3 run begun at Wednesday, 2017 October 18, 7:18 AM.\n", "\t[5 epochs (x6 passes) on small FMA on GPU takes\n", "\t01:12:17 ± 39.542 sec ([01:11:53,01:13:03]).]\n", "\n", "First-round training (training the classifier)...\n", "\n", "Training for epochs 1 to 5...\n", "Epoch 1/5\n", "50/50 [==============================] - 126s - loss: 1.9689 - categorical_accuracy: 0.2400 - val_loss: 2.5845 - val_categorical_accuracy: 0.1437\n", "Epoch 2/5\n", "50/50 [==============================] - 123s - loss: 1.8507 - categorical_accuracy: 0.3053 - val_loss: 2.3247 - val_categorical_accuracy: 0.1737\n", "Epoch 3/5\n", "50/50 [==============================] - 123s - loss: 1.8323 - categorical_accuracy: 0.3086 - val_loss: 2.2459 - 
val_categorical_accuracy: 0.2225\n", "Epoch 4/5\n", "50/50 [==============================] - 122s - loss: 1.7882 - categorical_accuracy: 0.3358 - val_loss: 2.1756 - val_categorical_accuracy: 0.2263\n", "Epoch 5/5\n", "50/50 [==============================] - 122s - loss: 1.7758 - categorical_accuracy: 0.3353 - val_loss: 1.9289 - val_categorical_accuracy: 0.2875\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 249)...\n", "\n", "Training for epochs 6 to 10...\n", "Epoch 6/10\n", "50/50 [==============================] - 141s - loss: 1.6855 - categorical_accuracy: 0.3830 - val_loss: 1.8520 - val_categorical_accuracy: 0.3150\n", "Epoch 7/10\n", "50/50 [==============================] - 136s - loss: 1.6685 - categorical_accuracy: 0.3875 - val_loss: 1.8149 - val_categorical_accuracy: 0.3250\n", "Epoch 8/10\n", "50/50 [==============================] - 136s - loss: 1.6850 - categorical_accuracy: 0.3802 - val_loss: 1.7965 - val_categorical_accuracy: 0.3337\n", "Epoch 9/10\n", "50/50 [==============================] - 136s - loss: 1.6768 - categorical_accuracy: 0.3761 - val_loss: 1.7861 - val_categorical_accuracy: 0.3450\n", "Epoch 10/10\n", "50/50 [==============================] - 136s - loss: 1.6672 - categorical_accuracy: 0.3875 - val_loss: 1.7808 - val_categorical_accuracy: 0.3550\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 232)...\n", "\n", "Training for epochs 11 to 15...\n", "Epoch 11/15\n", "50/50 [==============================] - 146s - loss: 1.6582 - categorical_accuracy: 0.3900 - val_loss: 1.7770 - val_categorical_accuracy: 0.3488\n", "Epoch 12/15\n", "50/50 [==============================] - 141s - loss: 1.6404 - categorical_accuracy: 0.4006 - val_loss: 1.7728 - val_categorical_accuracy: 0.3575\n", "Epoch 13/15\n", "50/50 [==============================] - 140s - loss: 1.6577 - categorical_accuracy: 0.3955 - val_loss: 1.7706 - val_categorical_accuracy: 0.3575\n", 
"Epoch 14/15\n", "50/50 [==============================] - 142s - loss: 1.6549 - categorical_accuracy: 0.3905 - val_loss: 1.7686 - val_categorical_accuracy: 0.3550\n", "Epoch 15/15\n", "50/50 [==============================] - 141s - loss: 1.6481 - categorical_accuracy: 0.4003 - val_loss: 1.7664 - val_categorical_accuracy: 0.3575\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 229)...\n", "\n", "Training for epochs 16 to 20...\n", "Epoch 16/20\n", "50/50 [==============================] - 148s - loss: 1.6233 - categorical_accuracy: 0.4106 - val_loss: 1.7654 - val_categorical_accuracy: 0.3600\n", "Epoch 17/20\n", "50/50 [==============================] - 142s - loss: 1.6178 - categorical_accuracy: 0.4130 - val_loss: 1.7630 - val_categorical_accuracy: 0.3588\n", "Epoch 18/20\n", "50/50 [==============================] - 143s - loss: 1.6342 - categorical_accuracy: 0.4027 - val_loss: 1.7621 - val_categorical_accuracy: 0.3575\n", "Epoch 19/20\n", "50/50 [==============================] - 142s - loss: 1.6200 - categorical_accuracy: 0.4117 - val_loss: 1.7600 - val_categorical_accuracy: 0.3550\n", "Epoch 20/20\n", "50/50 [==============================] - 143s - loss: 1.6234 - categorical_accuracy: 0.4053 - val_loss: 1.7585 - val_categorical_accuracy: 0.3613\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 200)...\n", "\n", "Training for epochs 21 to 25...\n", "Epoch 21/25\n", "50/50 [==============================] - 160s - loss: 1.5944 - categorical_accuracy: 0.4220 - val_loss: 1.7574 - val_categorical_accuracy: 0.3613\n", "Epoch 22/25\n", "50/50 [==============================] - 156s - loss: 1.5829 - categorical_accuracy: 0.4283 - val_loss: 1.7547 - val_categorical_accuracy: 0.3663\n", "Epoch 23/25\n", "50/50 [==============================] - 156s - loss: 1.6029 - categorical_accuracy: 0.4188 - val_loss: 1.7535 - val_categorical_accuracy: 0.3663\n", "Epoch 24/25\n", "50/50 
[==============================] - 155s - loss: 1.5932 - categorical_accuracy: 0.4159 - val_loss: 1.7521 - val_categorical_accuracy: 0.3713\n", "Epoch 25/25\n", "50/50 [==============================] - 155s - loss: 1.5940 - categorical_accuracy: 0.4186 - val_loss: 1.7504 - val_categorical_accuracy: 0.3713\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 187)...\n", "\n", "Training for epochs 26 to 30...\n", "Epoch 26/30\n", "50/50 [==============================] - 166s - loss: 1.5613 - categorical_accuracy: 0.4366 - val_loss: 1.7493 - val_categorical_accuracy: 0.3700\n", "Epoch 27/30\n", "50/50 [==============================] - 161s - loss: 1.5481 - categorical_accuracy: 0.4412 - val_loss: 1.7469 - val_categorical_accuracy: 0.3738\n", "Epoch 28/30\n", "50/50 [==============================] - 161s - loss: 1.5687 - categorical_accuracy: 0.4275 - val_loss: 1.7458 - val_categorical_accuracy: 0.3713\n", "Epoch 29/30\n", "50/50 [==============================] - 160s - loss: 1.5504 - categorical_accuracy: 0.4397 - val_loss: 1.7451 - val_categorical_accuracy: 0.3738\n", "Epoch 30/30\n", "50/50 [==============================] - 161s - loss: 1.5552 - categorical_accuracy: 0.4356 - val_loss: 1.7427 - val_categorical_accuracy: 0.3750\n", "\n", "01:12:53 for Inception V3 to yield 43.6% training accuracy and 37.5% validation accuracy in 5 \n", "epochs (x6 training phases).\n", "\n", "Inception V3 run complete at Wednesday, 2017 October 18, 8:32 AM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Inception V3:\n", "run_inception_v3()\n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, 
"output_type": "display_data" } ], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "code", "execution_count": 27, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "ResNet50 run begun at Wednesday, 2017 October 18, 8:32 AM.\n", "\t[5 epochs (x6 passes) on small FMA on GPU takes\n", "\t01:23:34 ± 39.646 sec ([01:23:09,01:24:20]).]\n", "\n", "First-round training (training the classifier)...\n", "\n", "Training for epochs 1 to 5...\n", "Epoch 1/5\n", "50/50 [==============================] - 150s - loss: 1.7686 - categorical_accuracy: 0.3448 - val_loss: 3.1638 - val_categorical_accuracy: 0.1425\n", "Epoch 2/5\n", "50/50 [==============================] - 147s - loss: 1.6120 - categorical_accuracy: 0.4164 - val_loss: 2.7485 - val_categorical_accuracy: 0.2462\n", "Epoch 3/5\n", "50/50 [==============================] - 146s - loss: 1.5702 - categorical_accuracy: 0.4306 - val_loss: 2.2845 - val_categorical_accuracy: 0.3075\n", "Epoch 4/5\n", "50/50 [==============================] - 147s - loss: 1.5422 - categorical_accuracy: 0.4380 - val_loss: 1.9024 - val_categorical_accuracy: 0.3425\n", "Epoch 5/5\n", "50/50 [==============================] - 148s - loss: 1.5061 - categorical_accuracy: 0.4556 - val_loss: 1.7105 - val_categorical_accuracy: 0.3825\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 161)...\n", "\n", "Training for epochs 6 to 10...\n", "Epoch 6/10\n", "50/50 [==============================] - 157s - loss: 1.3933 - categorical_accuracy: 0.5009 - val_loss: 1.6555 - val_categorical_accuracy: 0.4012\n", "Epoch 7/10\n", "50/50 [==============================] - 154s - loss: 1.4049 - 
categorical_accuracy: 0.5022 - val_loss: 1.6325 - val_categorical_accuracy: 0.4037\n", "Epoch 8/10\n", "50/50 [==============================] - 154s - loss: 1.4141 - categorical_accuracy: 0.4989 - val_loss: 1.6221 - val_categorical_accuracy: 0.4150\n", "Epoch 9/10\n", "50/50 [==============================] - 154s - loss: 1.4154 - categorical_accuracy: 0.4959 - val_loss: 1.6171 - val_categorical_accuracy: 0.4175\n", "Epoch 10/10\n", "50/50 [==============================] - 154s - loss: 1.4127 - categorical_accuracy: 0.4941 - val_loss: 1.6148 - val_categorical_accuracy: 0.4188\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 151)...\n", "\n", "Training for epochs 11 to 15...\n", "Epoch 11/15\n", "50/50 [==============================] - 164s - loss: 1.3955 - categorical_accuracy: 0.5000 - val_loss: 1.6136 - val_categorical_accuracy: 0.4188\n", "Epoch 12/15\n", "50/50 [==============================] - 161s - loss: 1.3901 - categorical_accuracy: 0.5061 - val_loss: 1.6121 - val_categorical_accuracy: 0.4188\n", "Epoch 13/15\n", "50/50 [==============================] - 161s - loss: 1.4065 - categorical_accuracy: 0.5025 - val_loss: 1.6111 - val_categorical_accuracy: 0.4200\n", "Epoch 14/15\n", "50/50 [==============================] - 160s - loss: 1.4115 - categorical_accuracy: 0.4950 - val_loss: 1.6103 - val_categorical_accuracy: 0.4188\n", "Epoch 15/15\n", "50/50 [==============================] - 161s - loss: 1.4030 - categorical_accuracy: 0.4977 - val_loss: 1.6102 - val_categorical_accuracy: 0.4163\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 139)...\n", "\n", "Training for epochs 16 to 20...\n", "Epoch 16/20\n", "50/50 [==============================] - 171s - loss: 1.3846 - categorical_accuracy: 0.5042 - val_loss: 1.6095 - val_categorical_accuracy: 0.4188\n", "Epoch 17/20\n", "50/50 [==============================] - 167s - loss: 1.3850 - categorical_accuracy: 0.5088 - 
val_loss: 1.6082 - val_categorical_accuracy: 0.4213\n", "Epoch 18/20\n", "50/50 [==============================] - 168s - loss: 1.3889 - categorical_accuracy: 0.5044 - val_loss: 1.6072 - val_categorical_accuracy: 0.4188\n", "Epoch 19/20\n", "50/50 [==============================] - 168s - loss: 1.3936 - categorical_accuracy: 0.5072 - val_loss: 1.6064 - val_categorical_accuracy: 0.4188\n", "Epoch 20/20\n", "50/50 [==============================] - 168s - loss: 1.3902 - categorical_accuracy: 0.5066 - val_loss: 1.6059 - val_categorical_accuracy: 0.4225\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 129)...\n", "\n", "Training for epochs 21 to 25...\n", "Epoch 21/25\n", "50/50 [==============================] - 183s - loss: 1.3638 - categorical_accuracy: 0.5133 - val_loss: 1.6049 - val_categorical_accuracy: 0.4200\n", "Epoch 22/25\n", "50/50 [==============================] - 179s - loss: 1.3522 - categorical_accuracy: 0.5188 - val_loss: 1.6034 - val_categorical_accuracy: 0.4238\n", "Epoch 23/25\n", "50/50 [==============================] - 178s - loss: 1.3631 - categorical_accuracy: 0.5156 - val_loss: 1.6022 - val_categorical_accuracy: 0.4225\n", "Epoch 24/25\n", "50/50 [==============================] - 178s - loss: 1.3788 - categorical_accuracy: 0.5091 - val_loss: 1.6017 - val_categorical_accuracy: 0.4238\n", "Epoch 25/25\n", "50/50 [==============================] - 179s - loss: 1.3766 - categorical_accuracy: 0.5014 - val_loss: 1.6017 - val_categorical_accuracy: 0.4225\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 119)...\n", "\n", "Training for epochs 26 to 30...\n", "Epoch 26/30\n", "50/50 [==============================] - 193s - loss: 1.3460 - categorical_accuracy: 0.5208 - val_loss: 1.6007 - val_categorical_accuracy: 0.4263\n", "Epoch 27/30\n", "50/50 [==============================] - 188s - loss: 1.3308 - categorical_accuracy: 0.5253 - val_loss: 1.5991 - 
val_categorical_accuracy: 0.4250\n", "Epoch 28/30\n", "50/50 [==============================] - 188s - loss: 1.3519 - categorical_accuracy: 0.5233 - val_loss: 1.5982 - val_categorical_accuracy: 0.4300\n", "Epoch 29/30\n", "50/50 [==============================] - 188s - loss: 1.3525 - categorical_accuracy: 0.5122 - val_loss: 1.5973 - val_categorical_accuracy: 0.4288\n", "Epoch 30/30\n", "50/50 [==============================] - 188s - loss: 1.3520 - categorical_accuracy: 0.5111 - val_loss: 1.5966 - val_categorical_accuracy: 0.4300\n", "\n", "01:24:00 for ResNet50 to yield 51.1% training accuracy and 43.0% validation accuracy in 5 \n", "epochs (x6 training phases).\n", "\n", "ResNet50 run complete at Wednesday, 2017 October 18, 9:56 AM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# ResNet50:\n", "run_resnet50()\n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": 28, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "VGG16 run begun at Friday, 2017 October 20, 11:02 PM.\n", "\t[5 epochs (x6 passes) on small FMA on GPU takes\n", "\t02:44:00 ± 00:01:25 ([02:42:22,02:44:59]).]\n", "\n", "First-round training (training the classifier)...\n", "\n", "Training for epochs 1 to 3...\n", "Epoch 
1/3\n", "50/50 [==============================] - 248s - loss: 1.8049 - categorical_accuracy: 0.3383 - val_loss: 2.6464 - val_categorical_accuracy: 0.1925\n", "Epoch 2/3\n", "50/50 [==============================] - 247s - loss: 1.6688 - categorical_accuracy: 0.3934 - val_loss: 1.8562 - val_categorical_accuracy: 0.3088\n", "Epoch 3/3\n", "50/50 [==============================] - 247s - loss: 1.6295 - categorical_accuracy: 0.4138 - val_loss: 1.7421 - val_categorical_accuracy: 0.3350\n", "\n", "Training for epochs 4 to 5...\n", "Epoch 4/5\n", "50/50 [==============================] - 249s - loss: 1.5873 - categorical_accuracy: 0.4312 - val_loss: 1.6919 - val_categorical_accuracy: 0.3850\n", "Epoch 5/5\n", "50/50 [==============================] - 248s - loss: 1.5691 - categorical_accuracy: 0.4356 - val_loss: 1.6296 - val_categorical_accuracy: 0.4025\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 17)...\n", "\n", "Training for epochs 6 to 8...\n", "Epoch 6/8\n", "50/50 [==============================] - 255s - loss: 1.5293 - categorical_accuracy: 0.4513 - val_loss: 1.6246 - val_categorical_accuracy: 0.4012\n", "Epoch 7/8\n", "50/50 [==============================] - 253s - loss: 1.5125 - categorical_accuracy: 0.4513 - val_loss: 1.6016 - val_categorical_accuracy: 0.4200\n", "Epoch 8/8\n", "50/50 [==============================] - 253s - loss: 1.5046 - categorical_accuracy: 0.4517 - val_loss: 1.5973 - val_categorical_accuracy: 0.4088\n", "\n", "Training for epochs 9 to 10...\n", "Epoch 9/10\n", "50/50 [==============================] - 255s - loss: 1.4838 - categorical_accuracy: 0.4755 - val_loss: 1.6014 - val_categorical_accuracy: 0.4163\n", "Epoch 10/10\n", "50/50 [==============================] - 253s - loss: 1.4837 - categorical_accuracy: 0.4653 - val_loss: 1.5935 - val_categorical_accuracy: 0.4175\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 15)...\n", "\n", "Training for 
epochs 11 to 13...\n", "Epoch 11/13\n", "50/50 [==============================] - 276s - loss: 1.4692 - categorical_accuracy: 0.4770 - val_loss: 1.6635 - val_categorical_accuracy: 0.4138\n", "Epoch 12/13\n", "50/50 [==============================] - 275s - loss: 1.4562 - categorical_accuracy: 0.4677 - val_loss: 1.5692 - val_categorical_accuracy: 0.4425\n", "Epoch 13/13\n", "50/50 [==============================] - 275s - loss: 1.4648 - categorical_accuracy: 0.4731 - val_loss: 1.5792 - val_categorical_accuracy: 0.4338\n", "\n", "Training for epochs 14 to 15...\n", "Epoch 14/15\n", "50/50 [==============================] - 276s - loss: 1.4320 - categorical_accuracy: 0.4919 - val_loss: 1.6109 - val_categorical_accuracy: 0.4350\n", "Epoch 15/15\n", "50/50 [==============================] - 275s - loss: 1.4171 - categorical_accuracy: 0.4961 - val_loss: 1.5873 - val_categorical_accuracy: 0.4338\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 13)...\n", "\n", "Training for epochs 16 to 18...\n", "Epoch 16/18\n", "50/50 [==============================] - 307s - loss: 1.4060 - categorical_accuracy: 0.4959 - val_loss: 1.5736 - val_categorical_accuracy: 0.4512\n", "Epoch 17/18\n", "50/50 [==============================] - 305s - loss: 1.3942 - categorical_accuracy: 0.5048 - val_loss: 1.5544 - val_categorical_accuracy: 0.4412\n", "Epoch 18/18\n", "50/50 [==============================] - 304s - loss: 1.4024 - categorical_accuracy: 0.4922 - val_loss: 1.5448 - val_categorical_accuracy: 0.4562\n", "\n", "Training for epochs 19 to 20...\n", "Epoch 19/20\n", "50/50 [==============================] - 307s - loss: 1.3649 - categorical_accuracy: 0.5094 - val_loss: 1.5574 - val_categorical_accuracy: 0.4462\n", "Epoch 20/20\n", "50/50 [==============================] - 304s - loss: 1.3466 - categorical_accuracy: 0.5188 - val_loss: 1.5442 - val_categorical_accuracy: 0.4462\n", "\n", "\n", "Further training (refining convolutional blocks, 
starting with\n", "\tlayer 11)...\n", "\n", "Training for epochs 21 to 23...\n", "Epoch 21/23\n", "50/50 [==============================] - 386s - loss: 1.3407 - categorical_accuracy: 0.5142 - val_loss: 1.5040 - val_categorical_accuracy: 0.4750\n", "Epoch 22/23\n", "50/50 [==============================] - 384s - loss: 1.3284 - categorical_accuracy: 0.5202 - val_loss: 1.5070 - val_categorical_accuracy: 0.4700\n", "Epoch 23/23\n", "50/50 [==============================] - 385s - loss: 1.3446 - categorical_accuracy: 0.5148 - val_loss: 1.5431 - val_categorical_accuracy: 0.4612\n", "\n", "Training for epochs 24 to 25...\n", "Epoch 24/25\n", "50/50 [==============================] - 387s - loss: 1.2979 - categorical_accuracy: 0.5342 - val_loss: 1.4753 - val_categorical_accuracy: 0.4800\n", "Epoch 25/25\n", "50/50 [==============================] - 384s - loss: 1.2984 - categorical_accuracy: 0.5333 - val_loss: 1.5051 - val_categorical_accuracy: 0.4813\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 8)...\n", "\n", "Training for epochs 26 to 28...\n", "Epoch 26/28\n", "50/50 [==============================] - 477s - loss: 1.2701 - categorical_accuracy: 0.5463 - val_loss: 1.5331 - val_categorical_accuracy: 0.4637\n", "Epoch 27/28\n", "50/50 [==============================] - 474s - loss: 1.2633 - categorical_accuracy: 0.5500 - val_loss: 1.5729 - val_categorical_accuracy: 0.4750\n", "Epoch 28/28\n", "50/50 [==============================] - 475s - loss: 1.2822 - categorical_accuracy: 0.5333 - val_loss: 1.5983 - val_categorical_accuracy: 0.4462\n", "\n", "Training for epochs 29 to 30...\n", "Epoch 29/30\n", "50/50 [==============================] - 477s - loss: 1.2365 - categorical_accuracy: 0.5539 - val_loss: 1.5142 - val_categorical_accuracy: 0.4763\n", "Epoch 30/30\n", "50/50 [==============================] - 473s - loss: 1.2256 - categorical_accuracy: 0.5683 - val_loss: 1.5256 - val_categorical_accuracy: 0.4625\n", "\n", 
"02:42:13 for VGG16 to yield 56.8% training accuracy and 46.2% validation accuracy in 5 \n", "epochs (x6 training phases).\n", "\n", "VGG16 run complete at Saturday, 2017 October 21, 1:44 AM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# VGG16:\n", "run_vgg16()\n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Backed up 'saved_objects/fma_results_gpu.pkl' to\n", "\t'saved_object_backups/fma_results_gpu-2017-10-21+0144.pkl.bak'.\n", "\n", "Backed up 'saved_objects/crossval_results_gpu.pkl' to\n", "\t'saved_object_backups/crossval_results_gpu-2017-10-21+0144.pkl.bak'.\n", "\n" ] } ], "source": [ "# Back up the results dataframes\n", "import shutil\n", "\n", "for key in [\"fma_results_name\", \"crossval_results_name\"]:\n", " src = os.path.join(\"saved_objects\", \"{}.pkl\".format(param_dict[key])) \n", " dst = os.path.join(\"saved_object_backups\", \n", " \"{}-{}.pkl.bak\".format(param_dict[key],\n", " timer.datetimepath()))\n", " directory = os.path.dirname(dst)\n", " if not os.path.exists(directory):\n", " os.makedirs(directory)\n", " #shutil.copyfile(src, dst)\n", "\n", " print (\"Backed up '{}' to\\n\\t'{}'.\\n\".format(src, dst))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Experiment 5: Extended Dataset\n", "\n", "The **FMA extended** dataset expands the 8-genre, \"small\" subset included in the FMA 
project by gathering all audio tracks in those 8 genres from the \"large\" subset. This yields a total dataset size of 37,316 tracks, versus the 8,000 tracks in the **FMA small** dataset. However, unlike the FMA small dataset used earlier, the FMA extended dataset is not balanced by genre, i.e., there are many more examples of some genres, like rock, than there are of other genres, like folk. Unbalanced datasets can bias a classifier toward the majority classes during training.\n", "\n", "For more information, see Section 4.6 of the accompanying paper, \"Experiment 5: Extended Dataset.\"\n", "\n", "### Section-Specific Setup" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Creating generators with batch size 128...\n", "Loading mean and standard deviation for the training set from file 'saved_objects/fma_extended_dwt_stats.npz'.\n", "\n", "Found 37316 images belonging to 8 classes.\n", "Found 4350 images belonging to 8 classes.\n", "Found 4651 images belonging to 8 classes.\n" ] } ], "source": [ "param_dict[\"augmentation\"] = 0 # Start with no data augmentation\n", "param_dict[\"which_size\"] = \"extended\" \n", "\n", "real_dataset_size = 37316 # There are 37316 training images in the dataset\n", "epoch_ratio = 1/4 # This is to keep the number of steps per epoch under control, especially\n", " # for large networks like VGG16/VGG19. 
It necessitates multiplying the \n", " # number of epochs by the inverse to get an accurate number of \"real\"\n", " # epochs.\n", "param_dict[\"dataset_size\"] = math.ceil(epoch_ratio*real_dataset_size) # Round up\n", "\n", "# Reconfigure the GPU-available/CPU-only specific options:\n", "if using_gpu:\n", " # Large training runs are OK to process on the GPU:\n", " param_dict[\"spu\"] = \"gpu\" \n", " param_dict[\"pass_epochs\"] = math.ceil(5/epoch_ratio) # Run for more \"epochs\" to make up\n", " # for dividing epochs into parts\n", " # Adjust so each epoch sees every image about once:\n", " param_dict[\"batch_size\"] = 128 # 192 is too high for even one epoch of VGG19 on GCE.\n", " # Note: going higher than 64 will sometimes lead to \n", " # memory issues on VGG16, b/c garbage collection isn't \n", " # instantaneous and VGG16 has a huge number of \n", " # parameters, but we want this as large as possible. \n", " # This problem is ameliorated somewhat by specifying\n", " # a small epoch_batch_size in the call to \n", " # run_pretrained_model(), which will checkpoint the \n", " # training every epoch_batch_size epochs to clean up\n", " # memory fragmentation (see also http://bit.ly/2hDHJay )\n", " param_dict[\"steps_per_epoch\"] = math.ceil(param_dict[\"dataset_size\"]/\n", " param_dict[\"batch_size\"]) \n", " param_dict[\"validation_steps\"] = math.ceil(param_dict[\"dataset_size\"]/\n", " (8*param_dict[\"batch_size\"])) \n", "\n", "# Update the path for images (os.path.join accepts multiple components):\n", "param_dict[\"img_dir\"] = os.path.join(\"data\", \"fma_images\", \"byclass\",\n", " param_dict[\"which_size\"],\n", " param_dict[\"which_wavelet\"])\n", " \n", "# Update ETAs:\n", "calc_etas(param_dict)\n", "\n", "# Reconfigure the generators based on the specified parameters:\n", "generators = {}\n", "(generators[\"train\"], \n", " generators[\"val\"], \n", " generators[\"test\"]) = cku.set_up_generators(param_dict)" ] }, { 
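"cell_type": "markdown", "metadata": {}, "source": [
"The reruns below use the unbalanced extended dataset as-is. As an illustrative aside (not part of the original pipeline), one common mitigation is to weight each class's loss contribution inversely to its frequency and pass the result to Keras via the `class_weight` argument. A minimal sketch, assuming integer labels such as those in a `flow_from_directory` generator's `classes` attribute:\n",
"\n",
"```python\n",
"from collections import Counter\n",
"\n",
"def balanced_class_weights(labels):\n",
"    # Weight each class inversely to its frequency, normalized so that a\n",
"    # perfectly balanced dataset would give every class a weight of 1.0:\n",
"    counts = Counter(labels)\n",
"    total, n_classes = len(labels), len(counts)\n",
"    return {c: total / (n_classes * n) for c, n in counts.items()}\n",
"\n",
"# Hypothetical toy labels: class 0 is four times as common as class 1.\n",
"balanced_class_weights([0, 0, 0, 0, 1])  # {0: 0.625, 1: 2.5}\n",
"```"
] }, {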
"cell_type": "markdown", "metadata": {}, "source": [ "### Model Reruns" ] }, { "cell_type": "code", "execution_count": 36, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using hidden size 128 and optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "Fully connected network run begun at Wednesday, 2017 October 18, 11:51 AM.\n", "\t[120 epochs on extended FMA on GPU takes\n", "\tunknown (no similar runs found).]\n", "\n", "\n", "Training for epochs 1 to 10...\n", "Epoch 1/10\n", "73/73 [==============================] - 40s - loss: 1.5859 - categorical_accuracy: 0.4341 - val_loss: 1.5664 - val_categorical_accuracy: 0.4914\n", "Epoch 2/10\n", "73/73 [==============================] - 39s - loss: 1.4681 - categorical_accuracy: 0.4942 - val_loss: 1.4673 - val_categorical_accuracy: 0.5000\n", "Epoch 3/10\n", "73/73 [==============================] - 40s - loss: 1.4401 - categorical_accuracy: 0.4973 - val_loss: 1.4393 - val_categorical_accuracy: 0.5062\n", "Epoch 4/10\n", "73/73 [==============================] - 39s - loss: 1.4341 - categorical_accuracy: 0.4953 - val_loss: 1.4281 - val_categorical_accuracy: 0.4922\n", "Epoch 5/10\n", "73/73 [==============================] - 39s - loss: 1.3734 - categorical_accuracy: 0.5208 - val_loss: 1.4226 - val_categorical_accuracy: 0.5117\n", "Epoch 6/10\n", "73/73 [==============================] - 39s - loss: 1.3555 - categorical_accuracy: 0.5260 - val_loss: 1.4126 - val_categorical_accuracy: 0.5164\n", "Epoch 7/10\n", "73/73 [==============================] - 39s - loss: 1.3713 - categorical_accuracy: 0.5283 - val_loss: 1.3960 - val_categorical_accuracy: 0.5211\n", "Epoch 8/10\n", "73/73 [==============================] - 39s - loss: 1.3425 - categorical_accuracy: 0.5395 - val_loss: 1.3953 - val_categorical_accuracy: 0.5305\n", "Epoch 9/10\n", "73/73 [==============================] - 40s - loss: 1.3172 - 
categorical_accuracy: 0.5409 - val_loss: 1.3942 - val_categorical_accuracy: 0.5281\n", "Epoch 10/10\n", "73/73 [==============================] - 40s - loss: 1.3186 - categorical_accuracy: 0.5441 - val_loss: 1.3937 - val_categorical_accuracy: 0.5320\n", "\n", "Training for epochs 11 to 20...\n", "Epoch 11/20\n", "73/73 [==============================] - 41s - loss: 1.2938 - categorical_accuracy: 0.5573 - val_loss: 1.3912 - val_categorical_accuracy: 0.5305\n", "Epoch 12/20\n", "73/73 [==============================] - 39s - loss: 1.2527 - categorical_accuracy: 0.5757 - val_loss: 1.4034 - val_categorical_accuracy: 0.5266\n", "Epoch 13/20\n", "73/73 [==============================] - 39s - loss: 1.2770 - categorical_accuracy: 0.5593 - val_loss: 1.3998 - val_categorical_accuracy: 0.5258\n", "Epoch 14/20\n", "73/73 [==============================] - 40s - loss: 1.2923 - categorical_accuracy: 0.5504 - val_loss: 1.3985 - val_categorical_accuracy: 0.5328\n", "Epoch 15/20\n", "73/73 [==============================] - 40s - loss: 1.2466 - categorical_accuracy: 0.5689 - val_loss: 1.4004 - val_categorical_accuracy: 0.5320\n", "Epoch 16/20\n", "73/73 [==============================] - 40s - loss: 1.2366 - categorical_accuracy: 0.5714 - val_loss: 1.3946 - val_categorical_accuracy: 0.5336\n", "Epoch 17/20\n", "73/73 [==============================] - 39s - loss: 1.2602 - categorical_accuracy: 0.5670 - val_loss: 1.3901 - val_categorical_accuracy: 0.5305\n", "Epoch 18/20\n", "73/73 [==============================] - 40s - loss: 1.2281 - categorical_accuracy: 0.5790 - val_loss: 1.3974 - val_categorical_accuracy: 0.5320\n", "Epoch 19/20\n", "73/73 [==============================] - 39s - loss: 1.2001 - categorical_accuracy: 0.5847 - val_loss: 1.3988 - val_categorical_accuracy: 0.5336\n", "Epoch 20/20\n", "73/73 [==============================] - 40s - loss: 1.2029 - categorical_accuracy: 0.5896 - val_loss: 1.4006 - val_categorical_accuracy: 0.5336\n", "\n", "Training for epochs 21 to 
30...\n", "Epoch 21/30\n", "73/73 [==============================] - 41s - loss: 1.1919 - categorical_accuracy: 0.5913 - val_loss: 1.3992 - val_categorical_accuracy: 0.5320\n", "Epoch 22/30\n", "73/73 [==============================] - 39s - loss: 1.1664 - categorical_accuracy: 0.6011 - val_loss: 1.4129 - val_categorical_accuracy: 0.5273\n", "Epoch 23/30\n", "73/73 [==============================] - 39s - loss: 1.2009 - categorical_accuracy: 0.5843 - val_loss: 1.4135 - val_categorical_accuracy: 0.5273\n", "Epoch 24/30\n", "73/73 [==============================] - 39s - loss: 1.2131 - categorical_accuracy: 0.5832 - val_loss: 1.4105 - val_categorical_accuracy: 0.5328\n", "Epoch 25/30\n", "73/73 [==============================] - 40s - loss: 1.1628 - categorical_accuracy: 0.5997 - val_loss: 1.4241 - val_categorical_accuracy: 0.5297\n", "Epoch 26/30\n", "73/73 [==============================] - 40s - loss: 1.1566 - categorical_accuracy: 0.6009 - val_loss: 1.4164 - val_categorical_accuracy: 0.5305\n", "Epoch 27/30\n", "73/73 [==============================] - 39s - loss: 1.1701 - categorical_accuracy: 0.5936 - val_loss: 1.4161 - val_categorical_accuracy: 0.5281\n", "Epoch 28/30\n", "73/73 [==============================] - 40s - loss: 1.1591 - categorical_accuracy: 0.6047 - val_loss: 1.4169 - val_categorical_accuracy: 0.5297\n", "Epoch 29/30\n", "73/73 [==============================] - 40s - loss: 1.1189 - categorical_accuracy: 0.6119 - val_loss: 1.4213 - val_categorical_accuracy: 0.5234\n", "Epoch 30/30\n", "73/73 [==============================] - 40s - loss: 1.1273 - categorical_accuracy: 0.6102 - val_loss: 1.4276 - val_categorical_accuracy: 0.5297\n", "\n", "Training for epochs 31 to 40...\n", "Epoch 31/40\n", "73/73 [==============================] - 41s - loss: 1.1122 - categorical_accuracy: 0.6196 - val_loss: 1.4278 - val_categorical_accuracy: 0.5281\n", "Epoch 32/40\n", "73/73 [==============================] - 40s - loss: 1.0950 - categorical_accuracy: 0.6249 
- val_loss: 1.4389 - val_categorical_accuracy: 0.5172\n", "Epoch 33/40\n", "73/73 [==============================] - 39s - loss: 1.1287 - categorical_accuracy: 0.6116 - val_loss: 1.4476 - val_categorical_accuracy: 0.5195\n", "Epoch 34/40\n", "73/73 [==============================] - 39s - loss: 1.1490 - categorical_accuracy: 0.6064 - val_loss: 1.4483 - val_categorical_accuracy: 0.5195\n", "Epoch 35/40\n", "73/73 [==============================] - 39s - loss: 1.0914 - categorical_accuracy: 0.6192 - val_loss: 1.4531 - val_categorical_accuracy: 0.5234\n", "Epoch 36/40\n", "73/73 [==============================] - 39s - loss: 1.0974 - categorical_accuracy: 0.6259 - val_loss: 1.4486 - val_categorical_accuracy: 0.5266\n", "Epoch 37/40\n", "73/73 [==============================] - 40s - loss: 1.1169 - categorical_accuracy: 0.6160 - val_loss: 1.4461 - val_categorical_accuracy: 0.5203\n", "Epoch 38/40\n", "73/73 [==============================] - 40s - loss: 1.1048 - categorical_accuracy: 0.6238 - val_loss: 1.4493 - val_categorical_accuracy: 0.5273\n", "Epoch 39/40\n", "73/73 [==============================] - 40s - loss: 1.0429 - categorical_accuracy: 0.6422 - val_loss: 1.4571 - val_categorical_accuracy: 0.5172\n", "Epoch 40/40\n", "73/73 [==============================] - 40s - loss: 1.0491 - categorical_accuracy: 0.6363 - val_loss: 1.4622 - val_categorical_accuracy: 0.5211\n", "\n", "Training for epochs 41 to 50...\n", "Epoch 41/50\n", "73/73 [==============================] - 41s - loss: 1.0718 - categorical_accuracy: 0.6344 - val_loss: 1.4639 - val_categorical_accuracy: 0.5188\n", "Epoch 42/50\n", "73/73 [==============================] - 39s - loss: 1.0337 - categorical_accuracy: 0.6440 - val_loss: 1.4757 - val_categorical_accuracy: 0.5188\n", "Epoch 43/50\n", "73/73 [==============================] - 39s - loss: 1.0809 - categorical_accuracy: 0.6271 - val_loss: 1.4820 - val_categorical_accuracy: 0.5133\n", "Epoch 44/50\n", "73/73 [==============================] - 
39s - loss: 1.1019 - categorical_accuracy: 0.6191 - val_loss: 1.4824 - val_categorical_accuracy: 0.5133\n", "Epoch 45/50\n", "73/73 [==============================] - 39s - loss: 1.0379 - categorical_accuracy: 0.6381 - val_loss: 1.4878 - val_categorical_accuracy: 0.5164\n", "Epoch 46/50\n", "73/73 [==============================] - 40s - loss: 1.0476 - categorical_accuracy: 0.6377 - val_loss: 1.4801 - val_categorical_accuracy: 0.5156\n", "Epoch 47/50\n", "73/73 [==============================] - 39s - loss: 1.0636 - categorical_accuracy: 0.6333 - val_loss: 1.4819 - val_categorical_accuracy: 0.5117\n", "Epoch 48/50\n", "73/73 [==============================] - 40s - loss: 1.0545 - categorical_accuracy: 0.6363 - val_loss: 1.4858 - val_categorical_accuracy: 0.5180\n", "Epoch 49/50\n", "73/73 [==============================] - 40s - loss: 0.9890 - categorical_accuracy: 0.6582 - val_loss: 1.4897 - val_categorical_accuracy: 0.5141\n", "Epoch 50/50\n", "73/73 [==============================] - 40s - loss: 1.0078 - categorical_accuracy: 0.6570 - val_loss: 1.4950 - val_categorical_accuracy: 0.5156\n", "\n", "Training for epochs 51 to 60...\n", "Epoch 51/60\n", "73/73 [==============================] - 41s - loss: 1.0213 - categorical_accuracy: 0.6530 - val_loss: 1.4948 - val_categorical_accuracy: 0.5109\n", "Epoch 52/60\n", "73/73 [==============================] - 39s - loss: 0.9923 - categorical_accuracy: 0.6597 - val_loss: 1.5088 - val_categorical_accuracy: 0.5148\n", "Epoch 53/60\n", "73/73 [==============================] - 40s - loss: 1.0332 - categorical_accuracy: 0.6465 - val_loss: 1.5092 - val_categorical_accuracy: 0.5125\n", "Epoch 54/60\n", "73/73 [==============================] - 39s - loss: 1.0620 - categorical_accuracy: 0.6328 - val_loss: 1.5137 - val_categorical_accuracy: 0.5156\n", "Epoch 55/60\n", "73/73 [==============================] - 39s - loss: 1.0032 - categorical_accuracy: 0.6562 - val_loss: 1.5139 - val_categorical_accuracy: 0.5172\n", "Epoch 
56/60\n", "73/73 [==============================] - 38s - loss: 1.0093 - categorical_accuracy: 0.6501 - val_loss: 1.5163 - val_categorical_accuracy: 0.5141\n", "Epoch 57/60\n", "73/73 [==============================] - 40s - loss: 1.0253 - categorical_accuracy: 0.6463 - val_loss: 1.5129 - val_categorical_accuracy: 0.5141\n", "Epoch 58/60\n", "73/73 [==============================] - 39s - loss: 1.0163 - categorical_accuracy: 0.6506 - val_loss: 1.5198 - val_categorical_accuracy: 0.5156\n", "Epoch 59/60\n", "73/73 [==============================] - 39s - loss: 0.9442 - categorical_accuracy: 0.6752 - val_loss: 1.5251 - val_categorical_accuracy: 0.5117\n", "Epoch 60/60\n", "73/73 [==============================] - 40s - loss: 0.9591 - categorical_accuracy: 0.6698 - val_loss: 1.5229 - val_categorical_accuracy: 0.5133\n", "\n", "Training for epochs 61 to 70...\n", "Epoch 61/70\n", "73/73 [==============================] - 40s - loss: 0.9853 - categorical_accuracy: 0.6595 - val_loss: 1.5297 - val_categorical_accuracy: 0.5055\n", "Epoch 62/70\n", "73/73 [==============================] - 40s - loss: 0.9496 - categorical_accuracy: 0.6743 - val_loss: 1.5411 - val_categorical_accuracy: 0.5070\n", "Epoch 63/70\n", "73/73 [==============================] - 39s - loss: 0.9979 - categorical_accuracy: 0.6615 - val_loss: 1.5422 - val_categorical_accuracy: 0.5086\n", "Epoch 64/70\n", "73/73 [==============================] - 39s - loss: 1.0240 - categorical_accuracy: 0.6443 - val_loss: 1.5444 - val_categorical_accuracy: 0.5102\n", "Epoch 65/70\n", "73/73 [==============================] - 39s - loss: 0.9720 - categorical_accuracy: 0.6618 - val_loss: 1.5467 - val_categorical_accuracy: 0.5102\n", "Epoch 66/70\n", "73/73 [==============================] - 39s - loss: 0.9755 - categorical_accuracy: 0.6574 - val_loss: 1.5417 - val_categorical_accuracy: 0.5109\n", "Epoch 67/70\n", "73/73 [==============================] - 39s - loss: 0.9931 - categorical_accuracy: 0.6526 - val_loss: 
1.5423 - val_categorical_accuracy: 0.5102\n", "Epoch 68/70\n", "73/73 [==============================] - 40s - loss: 0.9828 - categorical_accuracy: 0.6659 - val_loss: 1.5487 - val_categorical_accuracy: 0.5109\n", "Epoch 69/70\n", "73/73 [==============================] - 40s - loss: 0.9028 - categorical_accuracy: 0.6887 - val_loss: 1.5530 - val_categorical_accuracy: 0.5094\n", "Epoch 70/70\n", "73/73 [==============================] - 40s - loss: 0.9091 - categorical_accuracy: 0.6904 - val_loss: 1.5566 - val_categorical_accuracy: 0.5125\n", "\n", "Training for epochs 71 to 80...\n", "Epoch 71/80\n", "73/73 [==============================] - 41s - loss: 0.9387 - categorical_accuracy: 0.6779 - val_loss: 1.5559 - val_categorical_accuracy: 0.5047\n", "Epoch 72/80\n", "73/73 [==============================] - 40s - loss: 0.9065 - categorical_accuracy: 0.6877 - val_loss: 1.5695 - val_categorical_accuracy: 0.5047\n", "Epoch 73/80\n", "73/73 [==============================] - 39s - loss: 0.9647 - categorical_accuracy: 0.6694 - val_loss: 1.5676 - val_categorical_accuracy: 0.5062\n", "Epoch 74/80\n", "73/73 [==============================] - 39s - loss: 1.0006 - categorical_accuracy: 0.6591 - val_loss: 1.5740 - val_categorical_accuracy: 0.5117\n", "Epoch 75/80\n", "73/73 [==============================] - 39s - loss: 0.9330 - categorical_accuracy: 0.6795 - val_loss: 1.5725 - val_categorical_accuracy: 0.5094\n", "Epoch 76/80\n", "73/73 [==============================] - 40s - loss: 0.9468 - categorical_accuracy: 0.6709 - val_loss: 1.5690 - val_categorical_accuracy: 0.5070\n", "Epoch 77/80\n", "73/73 [==============================] - 39s - loss: 0.9521 - categorical_accuracy: 0.6707 - val_loss: 1.5668 - val_categorical_accuracy: 0.5055\n", "Epoch 78/80\n", "73/73 [==============================] - 39s - loss: 0.9504 - categorical_accuracy: 0.6766 - val_loss: 1.5749 - val_categorical_accuracy: 0.5109\n", "Epoch 79/80\n", "73/73 [==============================] - 39s - loss: 
0.8716 - categorical_accuracy: 0.7012 - val_loss: 1.5807 - val_categorical_accuracy: 0.5039\n", "Epoch 80/80\n", "73/73 [==============================] - 40s - loss: 0.8775 - categorical_accuracy: 0.7026 - val_loss: 1.5868 - val_categorical_accuracy: 0.5070\n", "\n", "Training for epochs 81 to 90...\n", "Epoch 81/90\n", "73/73 [==============================] - 41s - loss: 0.9175 - categorical_accuracy: 0.6811 - val_loss: 1.5855 - val_categorical_accuracy: 0.5039\n", "Epoch 82/90\n", "73/73 [==============================] - 40s - loss: 0.8880 - categorical_accuracy: 0.6921 - val_loss: 1.5924 - val_categorical_accuracy: 0.5047\n", "Epoch 83/90\n", "73/73 [==============================] - 40s - loss: 0.9371 - categorical_accuracy: 0.6796 - val_loss: 1.5932 - val_categorical_accuracy: 0.5047\n", "Epoch 84/90\n", "73/73 [==============================] - 39s - loss: 0.9692 - categorical_accuracy: 0.6647 - val_loss: 1.5981 - val_categorical_accuracy: 0.5117\n", "Epoch 85/90\n", "73/73 [==============================] - 40s - loss: 0.9137 - categorical_accuracy: 0.6851 - val_loss: 1.6051 - val_categorical_accuracy: 0.5086\n", "Epoch 86/90\n", "73/73 [==============================] - 40s - loss: 0.9211 - categorical_accuracy: 0.6896 - val_loss: 1.5953 - val_categorical_accuracy: 0.5109\n", "Epoch 87/90\n", "73/73 [==============================] - 39s - loss: 0.9342 - categorical_accuracy: 0.6781 - val_loss: 1.5940 - val_categorical_accuracy: 0.5078\n", "Epoch 88/90\n", "73/73 [==============================] - 39s - loss: 0.9322 - categorical_accuracy: 0.6794 - val_loss: 1.5967 - val_categorical_accuracy: 0.5070\n", "Epoch 89/90\n", "73/73 [==============================] - 40s - loss: 0.8384 - categorical_accuracy: 0.7121 - val_loss: 1.6067 - val_categorical_accuracy: 0.5023\n", "Epoch 90/90\n", "73/73 [==============================] - 40s - loss: 0.8442 - categorical_accuracy: 0.7098 - val_loss: 1.6103 - val_categorical_accuracy: 0.5016\n", "\n", "Training for 
epochs 91 to 100...\n", "Epoch 91/100\n", "73/73 [==============================] - 40s - loss: 0.8983 - categorical_accuracy: 0.6921 - val_loss: 1.6035 - val_categorical_accuracy: 0.5031\n", "Epoch 92/100\n", "73/73 [==============================] - 39s - loss: 0.8676 - categorical_accuracy: 0.7053 - val_loss: 1.6175 - val_categorical_accuracy: 0.5078\n", "Epoch 93/100\n", "73/73 [==============================] - 39s - loss: 0.9187 - categorical_accuracy: 0.6820 - val_loss: 1.6188 - val_categorical_accuracy: 0.5062\n", "Epoch 94/100\n", "73/73 [==============================] - 40s - loss: 0.9446 - categorical_accuracy: 0.6715 - val_loss: 1.6241 - val_categorical_accuracy: 0.5023\n", "Epoch 95/100\n", "73/73 [==============================] - 40s - loss: 0.8856 - categorical_accuracy: 0.6969 - val_loss: 1.6290 - val_categorical_accuracy: 0.4992\n", "Epoch 96/100\n", "73/73 [==============================] - 39s - loss: 0.9008 - categorical_accuracy: 0.6897 - val_loss: 1.6228 - val_categorical_accuracy: 0.5055\n", "Epoch 97/100\n", "73/73 [==============================] - 39s - loss: 0.9108 - categorical_accuracy: 0.6909 - val_loss: 1.6242 - val_categorical_accuracy: 0.5062\n", "Epoch 98/100\n", "73/73 [==============================] - 38s - loss: 0.9187 - categorical_accuracy: 0.6823 - val_loss: 1.6266 - val_categorical_accuracy: 0.5047\n", "Epoch 99/100\n", "73/73 [==============================] - 38s - loss: 0.8133 - categorical_accuracy: 0.7239 - val_loss: 1.6296 - val_categorical_accuracy: 0.5047\n", "Epoch 100/100\n", "73/73 [==============================] - 40s - loss: 0.8241 - categorical_accuracy: 0.7219 - val_loss: 1.6335 - val_categorical_accuracy: 0.5016\n", "\n", "Training for epochs 101 to 110...\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Epoch 101/110\n", "73/73 [==============================] - 41s - loss: 0.8714 - categorical_accuracy: 0.7029 - val_loss: 1.6303 - val_categorical_accuracy: 0.4992\n", "Epoch 102/110\n", 
"73/73 [==============================] - 40s - loss: 0.8434 - categorical_accuracy: 0.7143 - val_loss: 1.6408 - val_categorical_accuracy: 0.5055\n", "Epoch 103/110\n", "73/73 [==============================] - 39s - loss: 0.8921 - categorical_accuracy: 0.6925 - val_loss: 1.6366 - val_categorical_accuracy: 0.5023\n", "Epoch 104/110\n", "73/73 [==============================] - 39s - loss: 0.9264 - categorical_accuracy: 0.6818 - val_loss: 1.6445 - val_categorical_accuracy: 0.4992\n", "Epoch 105/110\n", "73/73 [==============================] - 39s - loss: 0.8718 - categorical_accuracy: 0.6990 - val_loss: 1.6486 - val_categorical_accuracy: 0.4969\n", "Epoch 106/110\n", "73/73 [==============================] - 40s - loss: 0.8705 - categorical_accuracy: 0.7025 - val_loss: 1.6399 - val_categorical_accuracy: 0.5039\n", "Epoch 107/110\n", "73/73 [==============================] - 40s - loss: 0.8862 - categorical_accuracy: 0.6961 - val_loss: 1.6381 - val_categorical_accuracy: 0.5031\n", "Epoch 108/110\n", "73/73 [==============================] - 39s - loss: 0.8862 - categorical_accuracy: 0.6911 - val_loss: 1.6452 - val_categorical_accuracy: 0.5008\n", "Epoch 109/110\n", "73/73 [==============================] - 40s - loss: 0.7888 - categorical_accuracy: 0.7304 - val_loss: 1.6493 - val_categorical_accuracy: 0.4992\n", "Epoch 110/110\n", "73/73 [==============================] - 40s - loss: 0.7983 - categorical_accuracy: 0.7287 - val_loss: 1.6563 - val_categorical_accuracy: 0.4961\n", "\n", "Training for epochs 111 to 120...\n", "Epoch 111/120\n", "73/73 [==============================] - 41s - loss: 0.8532 - categorical_accuracy: 0.7112 - val_loss: 1.6503 - val_categorical_accuracy: 0.4961\n", "Epoch 112/120\n", "73/73 [==============================] - 39s - loss: 0.8188 - categorical_accuracy: 0.7204 - val_loss: 1.6632 - val_categorical_accuracy: 0.4945\n", "Epoch 113/120\n", "73/73 [==============================] - 40s - loss: 0.8702 - categorical_accuracy: 0.7014 - 
val_loss: 1.6632 - val_categorical_accuracy: 0.4984\n", "Epoch 114/120\n", "73/73 [==============================] - 40s - loss: 0.9092 - categorical_accuracy: 0.6882 - val_loss: 1.6654 - val_categorical_accuracy: 0.4984\n", "Epoch 115/120\n", "73/73 [==============================] - 40s - loss: 0.8470 - categorical_accuracy: 0.7092 - val_loss: 1.6677 - val_categorical_accuracy: 0.4945\n", "Epoch 116/120\n", "73/73 [==============================] - 40s - loss: 0.8483 - categorical_accuracy: 0.7135 - val_loss: 1.6613 - val_categorical_accuracy: 0.4984\n", "Epoch 117/120\n", "73/73 [==============================] - 40s - loss: 0.8659 - categorical_accuracy: 0.7023 - val_loss: 1.6582 - val_categorical_accuracy: 0.4984\n", "Epoch 118/120\n", "73/73 [==============================] - 39s - loss: 0.8694 - categorical_accuracy: 0.7042 - val_loss: 1.6667 - val_categorical_accuracy: 0.5008\n", "Epoch 119/120\n", "73/73 [==============================] - 39s - loss: 0.7555 - categorical_accuracy: 0.7456 - val_loss: 1.6690 - val_categorical_accuracy: 0.4953\n", "Epoch 120/120\n", "73/73 [==============================] - 39s - loss: 0.7759 - categorical_accuracy: 0.7343 - val_loss: 1.6703 - val_categorical_accuracy: 0.4938\n", "\n", "01:20:03 for Two-Layer Network to yield 73.4% training accuracy and 49.4% validation accuracy in 20 \n", "epochs (x3 training phases).\n", "\n", "Fully connected run complete at Wednesday, 2017 October 18, 1:11 PM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 36, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Now rerun all 5 models, starting with FCNN:\n", "run_fcnn()\n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "# Must be after Alert() call and in a separate cell for both audio and 
pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "Xception run begun at Saturday, 2017 October 21, 1:44 AM.\n", "\t[20 epochs (x6 passes) on extended FMA on GPU takes\n", "\tunknown (no similar runs found).]\n", "\n", "First-round training (training the classifier)...\n", "\n", "Training for epochs 1 to 7...\n", "Epoch 1/7\n", "73/73 [==============================] - 281s - loss: 1.6094 - categorical_accuracy: 0.4272 - val_loss: 1.7768 - val_categorical_accuracy: 0.3867\n", "Epoch 2/7\n", "73/73 [==============================] - 281s - loss: 1.5034 - categorical_accuracy: 0.4721 - val_loss: 1.7590 - val_categorical_accuracy: 0.4188\n", "Epoch 3/7\n", "73/73 [==============================] - 281s - loss: 1.4911 - categorical_accuracy: 0.4714 - val_loss: 1.6319 - val_categorical_accuracy: 0.4320\n", "Epoch 4/7\n", "73/73 [==============================] - 281s - loss: 1.4762 - categorical_accuracy: 0.4802 - val_loss: 1.5202 - val_categorical_accuracy: 0.4914\n", "Epoch 5/7\n", "73/73 [==============================] - 281s - loss: 1.4288 - categorical_accuracy: 0.4960 - val_loss: 1.4934 - val_categorical_accuracy: 0.4891\n", "Epoch 6/7\n", "73/73 [==============================] - 280s - loss: 1.4222 - categorical_accuracy: 0.4984 - val_loss: 1.4963 - val_categorical_accuracy: 0.4844\n", "Epoch 7/7\n", "73/73 [==============================] - 281s - loss: 1.4508 - categorical_accuracy: 0.4873 - val_loss: 1.4903 - val_categorical_accuracy: 0.4883\n", "\n", "Training for epochs 8 to 14...\n", "Epoch 8/14\n", "73/73 [==============================] - 281s - loss: 1.3781 - categorical_accuracy: 0.5134 - val_loss: 1.4869 - 
val_categorical_accuracy: 0.4914\n", "Epoch 9/14\n", "73/73 [==============================] - 280s - loss: 1.3735 - categorical_accuracy: 0.5198 - val_loss: 1.4846 - val_categorical_accuracy: 0.4820\n", "Epoch 10/14\n", "73/73 [==============================] - 281s - loss: 1.3911 - categorical_accuracy: 0.5138 - val_loss: 1.4884 - val_categorical_accuracy: 0.4922\n", "Epoch 11/14\n", "73/73 [==============================] - 279s - loss: 1.3893 - categorical_accuracy: 0.5117 - val_loss: 1.4971 - val_categorical_accuracy: 0.4883\n", "Epoch 12/14\n", "73/73 [==============================] - 281s - loss: 1.3665 - categorical_accuracy: 0.5171 - val_loss: 1.4839 - val_categorical_accuracy: 0.5000\n", "Epoch 13/14\n", "73/73 [==============================] - 281s - loss: 1.3636 - categorical_accuracy: 0.5240 - val_loss: 1.4831 - val_categorical_accuracy: 0.4984\n", "Epoch 14/14\n", "73/73 [==============================] - 281s - loss: 1.3920 - categorical_accuracy: 0.5101 - val_loss: 1.4768 - val_categorical_accuracy: 0.4945\n", "\n", "Training for epochs 15 to 20...\n", "Epoch 15/20\n", "73/73 [==============================] - 282s - loss: 1.3428 - categorical_accuracy: 0.5286 - val_loss: 1.4776 - val_categorical_accuracy: 0.4922\n", "Epoch 16/20\n", "73/73 [==============================] - 282s - loss: 1.3326 - categorical_accuracy: 0.5339 - val_loss: 1.4767 - val_categorical_accuracy: 0.4914\n", "Epoch 17/20\n", "73/73 [==============================] - 282s - loss: 1.3520 - categorical_accuracy: 0.5238 - val_loss: 1.4826 - val_categorical_accuracy: 0.4922\n", "Epoch 18/20\n", "73/73 [==============================] - 280s - loss: 1.3644 - categorical_accuracy: 0.5241 - val_loss: 1.4797 - val_categorical_accuracy: 0.4906\n", "Epoch 19/20\n", "73/73 [==============================] - 281s - loss: 1.3226 - categorical_accuracy: 0.5346 - val_loss: 1.4784 - val_categorical_accuracy: 0.5039\n", "Epoch 20/20\n", "73/73 [==============================] - 281s - loss: 
1.3339 - categorical_accuracy: 0.5322 - val_loss: 1.4766 - val_categorical_accuracy: 0.4984\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 122)...\n", "\n", "Training for epochs 21 to 27...\n", "Epoch 21/27\n", "73/73 [==============================] - 296s - loss: 1.3146 - categorical_accuracy: 0.5403 - val_loss: 1.4754 - val_categorical_accuracy: 0.4977\n", "Epoch 22/27\n", "73/73 [==============================] - 295s - loss: 1.3091 - categorical_accuracy: 0.5407 - val_loss: 1.4741 - val_categorical_accuracy: 0.5016\n", "Epoch 23/27\n", "73/73 [==============================] - 293s - loss: 1.3231 - categorical_accuracy: 0.5287 - val_loss: 1.4737 - val_categorical_accuracy: 0.5008\n", "Epoch 24/27\n", "73/73 [==============================] - 293s - loss: 1.3399 - categorical_accuracy: 0.5328 - val_loss: 1.4746 - val_categorical_accuracy: 0.5008\n", "Epoch 25/27\n", "73/73 [==============================] - 295s - loss: 1.3016 - categorical_accuracy: 0.5378 - val_loss: 1.4723 - val_categorical_accuracy: 0.5016\n", "Epoch 26/27\n", "73/73 [==============================] - 294s - loss: 1.3041 - categorical_accuracy: 0.5438 - val_loss: 1.4712 - val_categorical_accuracy: 0.4992\n", "Epoch 27/27\n", "73/73 [==============================] - 295s - loss: 1.3565 - categorical_accuracy: 0.5209 - val_loss: 1.4675 - val_categorical_accuracy: 0.4984\n", "\n", "Training for epochs 28 to 34...\n", "Epoch 28/34\n", "73/73 [==============================] - 295s - loss: 1.3048 - categorical_accuracy: 0.5431 - val_loss: 1.4681 - val_categorical_accuracy: 0.4992\n", "Epoch 29/34\n", "73/73 [==============================] - 295s - loss: 1.2986 - categorical_accuracy: 0.5427 - val_loss: 1.4673 - val_categorical_accuracy: 0.5008\n", "Epoch 30/34\n", "73/73 [==============================] - 295s - loss: 1.3155 - categorical_accuracy: 0.5365 - val_loss: 1.4672 - val_categorical_accuracy: 0.5031\n", "Epoch 31/34\n", "73/73 
[==============================] - 294s - loss: 1.3355 - categorical_accuracy: 0.5315 - val_loss: 1.4684 - val_categorical_accuracy: 0.4984\n", "Epoch 32/34\n", "73/73 [==============================] - 295s - loss: 1.2933 - categorical_accuracy: 0.5432 - val_loss: 1.4658 - val_categorical_accuracy: 0.5008\n", "Epoch 33/34\n", "73/73 [==============================] - 295s - loss: 1.2973 - categorical_accuracy: 0.5435 - val_loss: 1.4650 - val_categorical_accuracy: 0.4977\n", "Epoch 34/34\n", "73/73 [==============================] - 295s - loss: 1.3490 - categorical_accuracy: 0.5260 - val_loss: 1.4620 - val_categorical_accuracy: 0.5016\n", "\n", "Training for epochs 35 to 40...\n", "Epoch 35/40\n", "73/73 [==============================] - 295s - loss: 1.3097 - categorical_accuracy: 0.5464 - val_loss: 1.4627 - val_categorical_accuracy: 0.4992\n", "Epoch 36/40\n", "73/73 [==============================] - 295s - loss: 1.2965 - categorical_accuracy: 0.5430 - val_loss: 1.4622 - val_categorical_accuracy: 0.4977\n", "Epoch 37/40\n", "73/73 [==============================] - 294s - loss: 1.3082 - categorical_accuracy: 0.5398 - val_loss: 1.4620 - val_categorical_accuracy: 0.5016\n", "Epoch 38/40\n", "73/73 [==============================] - 292s - loss: 1.3174 - categorical_accuracy: 0.5359 - val_loss: 1.4635 - val_categorical_accuracy: 0.4984\n", "Epoch 39/40\n", "73/73 [==============================] - 295s - loss: 1.2963 - categorical_accuracy: 0.5437 - val_loss: 1.4611 - val_categorical_accuracy: 0.5008\n", "Epoch 40/40\n", "73/73 [==============================] - 294s - loss: 1.2948 - categorical_accuracy: 0.5491 - val_loss: 1.4601 - val_categorical_accuracy: 0.4984\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 105)...\n", "\n", "Training for epochs 41 to 47...\n", "Epoch 41/47\n", "73/73 [==============================] - 336s - loss: 1.2977 - categorical_accuracy: 0.5482 - val_loss: 1.4595 - val_categorical_accuracy: 
0.5008\n", "Epoch 42/47\n", "73/73 [==============================] - 335s - loss: 1.2908 - categorical_accuracy: 0.5471 - val_loss: 1.4581 - val_categorical_accuracy: 0.5039\n", "Epoch 43/47\n", "73/73 [==============================] - 335s - loss: 1.3012 - categorical_accuracy: 0.5397 - val_loss: 1.4566 - val_categorical_accuracy: 0.5008\n", "Epoch 44/47\n", "73/73 [==============================] - 334s - loss: 1.3130 - categorical_accuracy: 0.5415 - val_loss: 1.4582 - val_categorical_accuracy: 0.5023\n", "Epoch 45/47\n", "73/73 [==============================] - 335s - loss: 1.2824 - categorical_accuracy: 0.5519 - val_loss: 1.4555 - val_categorical_accuracy: 0.5039\n", "Epoch 46/47\n", "73/73 [==============================] - 336s - loss: 1.2819 - categorical_accuracy: 0.5578 - val_loss: 1.4545 - val_categorical_accuracy: 0.5023\n", "Epoch 47/47\n", "73/73 [==============================] - 336s - loss: 1.3320 - categorical_accuracy: 0.5315 - val_loss: 1.4512 - val_categorical_accuracy: 0.5055\n", "\n", "Training for epochs 48 to 54...\n", "Epoch 48/54\n", "73/73 [==============================] - 337s - loss: 1.2911 - categorical_accuracy: 0.5492 - val_loss: 1.4520 - val_categorical_accuracy: 0.5047\n", "Epoch 49/54\n", "73/73 [==============================] - 336s - loss: 1.2821 - categorical_accuracy: 0.5519 - val_loss: 1.4513 - val_categorical_accuracy: 0.5102\n", "Epoch 50/54\n", "73/73 [==============================] - 336s - loss: 1.2940 - categorical_accuracy: 0.5448 - val_loss: 1.4503 - val_categorical_accuracy: 0.5078\n", "Epoch 51/54\n", "73/73 [==============================] - 334s - loss: 1.3019 - categorical_accuracy: 0.5466 - val_loss: 1.4521 - val_categorical_accuracy: 0.5070\n", "Epoch 52/54\n", "73/73 [==============================] - 336s - loss: 1.2757 - categorical_accuracy: 0.5500 - val_loss: 1.4499 - val_categorical_accuracy: 0.5086\n", "Epoch 53/54\n", "73/73 [==============================] - 336s - loss: 1.2746 - 
categorical_accuracy: 0.5555 - val_loss: 1.4490 - val_categorical_accuracy: 0.5062\n", "Epoch 54/54\n", "73/73 [==============================] - 336s - loss: 1.3223 - categorical_accuracy: 0.5355 - val_loss: 1.4461 - val_categorical_accuracy: 0.5086\n", "\n", "Training for epochs 55 to 60...\n", "Epoch 55/60\n", "73/73 [==============================] - 336s - loss: 1.2806 - categorical_accuracy: 0.5542 - val_loss: 1.4469 - val_categorical_accuracy: 0.5062\n", "Epoch 56/60\n", "73/73 [==============================] - 336s - loss: 1.2692 - categorical_accuracy: 0.5575 - val_loss: 1.4467 - val_categorical_accuracy: 0.5117\n", "Epoch 57/60\n", "73/73 [==============================] - 337s - loss: 1.2858 - categorical_accuracy: 0.5460 - val_loss: 1.4460 - val_categorical_accuracy: 0.5094\n", "Epoch 58/60\n", "73/73 [==============================] - 334s - loss: 1.3019 - categorical_accuracy: 0.5430 - val_loss: 1.4479 - val_categorical_accuracy: 0.5086\n", "Epoch 59/60\n", "73/73 [==============================] - 335s - loss: 1.2699 - categorical_accuracy: 0.5519 - val_loss: 1.4454 - val_categorical_accuracy: 0.5086\n", "Epoch 60/60\n", "73/73 [==============================] - 336s - loss: 1.2769 - categorical_accuracy: 0.5534 - val_loss: 1.4446 - val_categorical_accuracy: 0.5062\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 95)...\n", "\n", "Training for epochs 61 to 67...\n", "Epoch 61/67\n", "73/73 [==============================] - 364s - loss: 1.2794 - categorical_accuracy: 0.5559 - val_loss: 1.4442 - val_categorical_accuracy: 0.5078\n", "Epoch 62/67\n", "73/73 [==============================] - 362s - loss: 1.2627 - categorical_accuracy: 0.5540 - val_loss: 1.4434 - val_categorical_accuracy: 0.5125\n", "Epoch 63/67\n", "73/73 [==============================] - 362s - loss: 1.2768 - categorical_accuracy: 0.5503 - val_loss: 1.4421 - val_categorical_accuracy: 0.5117\n", "Epoch 64/67\n", "73/73 
[==============================] - 359s - loss: 1.2976 - categorical_accuracy: 0.5497 - val_loss: 1.4440 - val_categorical_accuracy: 0.5117\n", "Epoch 65/67\n", "73/73 [==============================] - 361s - loss: 1.2572 - categorical_accuracy: 0.5618 - val_loss: 1.4416 - val_categorical_accuracy: 0.5125\n", "Epoch 66/67\n", "73/73 [==============================] - 363s - loss: 1.2700 - categorical_accuracy: 0.5551 - val_loss: 1.4405 - val_categorical_accuracy: 0.5078\n", "Epoch 67/67\n", "73/73 [==============================] - 361s - loss: 1.3115 - categorical_accuracy: 0.5401 - val_loss: 1.4373 - val_categorical_accuracy: 0.5086\n", "\n", "Training for epochs 68 to 74...\n", "Epoch 68/74\n", "73/73 [==============================] - 362s - loss: 1.2645 - categorical_accuracy: 0.5614 - val_loss: 1.4383 - val_categorical_accuracy: 0.5086\n", "Epoch 69/74\n", "73/73 [==============================] - 361s - loss: 1.2526 - categorical_accuracy: 0.5580 - val_loss: 1.4384 - val_categorical_accuracy: 0.5141\n", "Epoch 70/74\n", "73/73 [==============================] - 361s - loss: 1.2644 - categorical_accuracy: 0.5574 - val_loss: 1.4375 - val_categorical_accuracy: 0.5117\n", "Epoch 71/74\n", "73/73 [==============================] - 360s - loss: 1.2811 - categorical_accuracy: 0.5527 - val_loss: 1.4396 - val_categorical_accuracy: 0.5141\n", "Epoch 72/74\n", "73/73 [==============================] - 363s - loss: 1.2479 - categorical_accuracy: 0.5646 - val_loss: 1.4373 - val_categorical_accuracy: 0.5125\n", "Epoch 73/74\n", "73/73 [==============================] - 362s - loss: 1.2501 - categorical_accuracy: 0.5659 - val_loss: 1.4365 - val_categorical_accuracy: 0.5094\n", "Epoch 74/74\n", "73/73 [==============================] - 362s - loss: 1.3011 - categorical_accuracy: 0.5428 - val_loss: 1.4335 - val_categorical_accuracy: 0.5117\n", "\n", "Training for epochs 75 to 80...\n", "Epoch 75/80\n", "73/73 [==============================] - 361s - loss: 1.2565 - 
categorical_accuracy: 0.5637 - val_loss: 1.4345 - val_categorical_accuracy: 0.5102\n", "Epoch 76/80\n", "73/73 [==============================] - 360s - loss: 1.2439 - categorical_accuracy: 0.5681 - val_loss: 1.4346 - val_categorical_accuracy: 0.5141\n", "Epoch 77/80\n", "73/73 [==============================] - 362s - loss: 1.2592 - categorical_accuracy: 0.5588 - val_loss: 1.4337 - val_categorical_accuracy: 0.5148\n", "Epoch 78/80\n", "73/73 [==============================] - 360s - loss: 1.2673 - categorical_accuracy: 0.5618 - val_loss: 1.4359 - val_categorical_accuracy: 0.5148\n", "Epoch 79/80\n", "73/73 [==============================] - 363s - loss: 1.2399 - categorical_accuracy: 0.5643 - val_loss: 1.4334 - val_categorical_accuracy: 0.5125\n", "Epoch 80/80\n", "73/73 [==============================] - 361s - loss: 1.2451 - categorical_accuracy: 0.5682 - val_loss: 1.4325 - val_categorical_accuracy: 0.5117\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 85)...\n", "\n", "Training for epochs 81 to 87...\n", "Epoch 81/87\n", "73/73 [==============================] - 388s - loss: 1.2527 - categorical_accuracy: 0.5632 - val_loss: 1.4323 - val_categorical_accuracy: 0.5102\n", "Epoch 82/87\n", "73/73 [==============================] - 387s - loss: 1.2344 - categorical_accuracy: 0.5644 - val_loss: 1.4319 - val_categorical_accuracy: 0.5141\n", "Epoch 83/87\n", "73/73 [==============================] - 387s - loss: 1.2529 - categorical_accuracy: 0.5685 - val_loss: 1.4308 - val_categorical_accuracy: 0.5156\n", "Epoch 84/87\n", "73/73 [==============================] - 384s - loss: 1.2629 - categorical_accuracy: 0.5569 - val_loss: 1.4329 - val_categorical_accuracy: 0.5164\n", "Epoch 85/87\n", "73/73 [==============================] - 386s - loss: 1.2262 - categorical_accuracy: 0.5703 - val_loss: 1.4301 - val_categorical_accuracy: 0.5141\n", "Epoch 86/87\n", "73/73 [==============================] - 385s - loss: 1.2309 - 
categorical_accuracy: 0.5696 - val_loss: 1.4291 - val_categorical_accuracy: 0.5125\n", "Epoch 87/87\n", "73/73 [==============================] - 385s - loss: 1.2810 - categorical_accuracy: 0.5499 - val_loss: 1.4266 - val_categorical_accuracy: 0.5172\n", "\n", "Training for epochs 88 to 94...\n", "Epoch 88/94\n", "73/73 [==============================] - 387s - loss: 1.2395 - categorical_accuracy: 0.5668 - val_loss: 1.4276 - val_categorical_accuracy: 0.5141\n", "Epoch 89/94\n", "73/73 [==============================] - 386s - loss: 1.2220 - categorical_accuracy: 0.5719 - val_loss: 1.4281 - val_categorical_accuracy: 0.5148\n", "Epoch 90/94\n", "73/73 [==============================] - 385s - loss: 1.2402 - categorical_accuracy: 0.5649 - val_loss: 1.4271 - val_categorical_accuracy: 0.5164\n", "Epoch 91/94\n", "73/73 [==============================] - 384s - loss: 1.2565 - categorical_accuracy: 0.5655 - val_loss: 1.4291 - val_categorical_accuracy: 0.5172\n", "Epoch 92/94\n", "73/73 [==============================] - 385s - loss: 1.2277 - categorical_accuracy: 0.5698 - val_loss: 1.4264 - val_categorical_accuracy: 0.5156\n", "Epoch 93/94\n", "73/73 [==============================] - 386s - loss: 1.2203 - categorical_accuracy: 0.5757 - val_loss: 1.4256 - val_categorical_accuracy: 0.5164\n", "Epoch 94/94\n", "73/73 [==============================] - 386s - loss: 1.2728 - categorical_accuracy: 0.5557 - val_loss: 1.4232 - val_categorical_accuracy: 0.5164\n", "\n", "Training for epochs 95 to 100...\n", "Epoch 95/100\n", "73/73 [==============================] - 387s - loss: 1.2350 - categorical_accuracy: 0.5730 - val_loss: 1.4243 - val_categorical_accuracy: 0.5156\n", "Epoch 96/100\n", "73/73 [==============================] - 386s - loss: 1.2163 - categorical_accuracy: 0.5763 - val_loss: 1.4246 - val_categorical_accuracy: 0.5180\n", "Epoch 97/100\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "73/73 [==============================] - 386s - loss: 1.2316 - 
categorical_accuracy: 0.5693 - val_loss: 1.4237 - val_categorical_accuracy: 0.5164\n", "Epoch 98/100\n", "73/73 [==============================] - 385s - loss: 1.2457 - categorical_accuracy: 0.5638 - val_loss: 1.4265 - val_categorical_accuracy: 0.5172\n", "Epoch 99/100\n", "73/73 [==============================] - 387s - loss: 1.2043 - categorical_accuracy: 0.5750 - val_loss: 1.4236 - val_categorical_accuracy: 0.5141\n", "Epoch 100/100\n", "73/73 [==============================] - 388s - loss: 1.2155 - categorical_accuracy: 0.5747 - val_loss: 1.4229 - val_categorical_accuracy: 0.5156\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 75)...\n", "\n", "Training for epochs 101 to 107...\n", "Epoch 101/107\n", "73/73 [==============================] - 411s - loss: 1.2226 - categorical_accuracy: 0.5716 - val_loss: 1.4228 - val_categorical_accuracy: 0.5148\n", "Epoch 102/107\n", "73/73 [==============================] - 411s - loss: 1.2079 - categorical_accuracy: 0.5783 - val_loss: 1.4228 - val_categorical_accuracy: 0.5195\n", "Epoch 103/107\n", "73/73 [==============================] - 411s - loss: 1.2191 - categorical_accuracy: 0.5784 - val_loss: 1.4218 - val_categorical_accuracy: 0.5148\n", "Epoch 104/107\n", "73/73 [==============================] - 408s - loss: 1.2311 - categorical_accuracy: 0.5692 - val_loss: 1.4239 - val_categorical_accuracy: 0.5203\n", "Epoch 105/107\n", "73/73 [==============================] - 411s - loss: 1.2014 - categorical_accuracy: 0.5764 - val_loss: 1.4209 - val_categorical_accuracy: 0.5180\n", "Epoch 106/107\n", "73/73 [==============================] - 410s - loss: 1.2027 - categorical_accuracy: 0.5858 - val_loss: 1.4203 - val_categorical_accuracy: 0.5133\n", "Epoch 107/107\n", "73/73 [==============================] - 411s - loss: 1.2582 - categorical_accuracy: 0.5624 - val_loss: 1.4179 - val_categorical_accuracy: 0.5141\n", "\n", "Training for epochs 108 to 114...\n", "Epoch 108/114\n", 
"73/73 [==============================] - 412s - loss: 1.2126 - categorical_accuracy: 0.5741 - val_loss: 1.4186 - val_categorical_accuracy: 0.5203\n", "Epoch 109/114\n", "73/73 [==============================] - 411s - loss: 1.1921 - categorical_accuracy: 0.5856 - val_loss: 1.4191 - val_categorical_accuracy: 0.5180\n", "Epoch 110/114\n", "73/73 [==============================] - 411s - loss: 1.2105 - categorical_accuracy: 0.5762 - val_loss: 1.4180 - val_categorical_accuracy: 0.5164\n", "Epoch 111/114\n", "73/73 [==============================] - 408s - loss: 1.2225 - categorical_accuracy: 0.5692 - val_loss: 1.4205 - val_categorical_accuracy: 0.5211\n", "Epoch 112/114\n", "73/73 [==============================] - 411s - loss: 1.1842 - categorical_accuracy: 0.5833 - val_loss: 1.4177 - val_categorical_accuracy: 0.5195\n", "Epoch 113/114\n", "73/73 [==============================] - 411s - loss: 1.1857 - categorical_accuracy: 0.5875 - val_loss: 1.4170 - val_categorical_accuracy: 0.5180\n", "Epoch 114/114\n", "73/73 [==============================] - 411s - loss: 1.2405 - categorical_accuracy: 0.5654 - val_loss: 1.4149 - val_categorical_accuracy: 0.5180\n", "\n", "Training for epochs 115 to 120...\n", "Epoch 115/120\n", "73/73 [==============================] - 412s - loss: 1.2038 - categorical_accuracy: 0.5795 - val_loss: 1.4157 - val_categorical_accuracy: 0.5195\n", "Epoch 116/120\n", "73/73 [==============================] - 411s - loss: 1.1859 - categorical_accuracy: 0.5859 - val_loss: 1.4162 - val_categorical_accuracy: 0.5188\n", "Epoch 117/120\n", "73/73 [==============================] - 411s - loss: 1.2045 - categorical_accuracy: 0.5808 - val_loss: 1.4156 - val_categorical_accuracy: 0.5164\n", "Epoch 118/120\n", "73/73 [==============================] - 407s - loss: 1.2092 - categorical_accuracy: 0.5822 - val_loss: 1.4180 - val_categorical_accuracy: 0.5195\n", "Epoch 119/120\n", "73/73 [==============================] - 412s - loss: 1.1724 - 
categorical_accuracy: 0.5878 - val_loss: 1.4149 - val_categorical_accuracy: 0.5203\n", "Epoch 120/120\n", "73/73 [==============================] - 411s - loss: 1.1777 - categorical_accuracy: 0.5905 - val_loss: 1.4142 - val_categorical_accuracy: 0.5180\n", "\n", "11:30:54 for Xception to yield 59.1% training accuracy and 51.8% validation accuracy in 20 \n", "epochs (x6 training phases).\n", "\n", "Xception run complete at Saturday, 2017 October 21, 1:15 PM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Xception:\n", "run_xception()\n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "code", "execution_count": 38, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "Inception V3 run begun at Wednesday, 2017 October 18, 1:33 PM.\n", "\t[20 epochs (x6 passes) on extended FMA on GPU takes\n", "\tunknown (no similar runs found).]\n", "\n", "First-round training (training the classifier)...\n", "\n", "Training for epochs 1 to 10...\n", "Epoch 1/10\n", "73/73 [==============================] - 176s - loss: 1.6641 - categorical_accuracy: 0.4037 - val_loss: 3.6289 - val_categorical_accuracy: 0.1938\n", "Epoch 2/10\n", "73/73 [==============================] - 174s - loss: 1.5677 - categorical_accuracy: 0.4429 - val_loss: 
2.2224 - val_categorical_accuracy: 0.2102\n", "Epoch 3/10\n", "73/73 [==============================] - 174s - loss: 1.5564 - categorical_accuracy: 0.4467 - val_loss: 1.6468 - val_categorical_accuracy: 0.4266\n", "Epoch 4/10\n", "73/73 [==============================] - 175s - loss: 1.5341 - categorical_accuracy: 0.4494 - val_loss: 1.5183 - val_categorical_accuracy: 0.4695\n", "Epoch 5/10\n", "73/73 [==============================] - 174s - loss: 1.4915 - categorical_accuracy: 0.4609 - val_loss: 1.5560 - val_categorical_accuracy: 0.4617\n", "Epoch 6/10\n", "73/73 [==============================] - 175s - loss: 1.5033 - categorical_accuracy: 0.4610 - val_loss: 1.5308 - val_categorical_accuracy: 0.4750\n", "Epoch 7/10\n", "73/73 [==============================] - 175s - loss: 1.4973 - categorical_accuracy: 0.4705 - val_loss: 1.5492 - val_categorical_accuracy: 0.4641\n", "Epoch 8/10\n", "73/73 [==============================] - 173s - loss: 1.4854 - categorical_accuracy: 0.4714 - val_loss: 1.5072 - val_categorical_accuracy: 0.4711\n", "Epoch 9/10\n", "73/73 [==============================] - 174s - loss: 1.4670 - categorical_accuracy: 0.4728 - val_loss: 1.5029 - val_categorical_accuracy: 0.4773\n", "Epoch 10/10\n", "73/73 [==============================] - 174s - loss: 1.4839 - categorical_accuracy: 0.4792 - val_loss: 1.4970 - val_categorical_accuracy: 0.4750\n", "\n", "Training for epochs 11 to 20...\n", "Epoch 11/20\n", "73/73 [==============================] - 175s - loss: 1.4394 - categorical_accuracy: 0.4927 - val_loss: 1.5009 - val_categorical_accuracy: 0.4883\n", "Epoch 12/20\n", "73/73 [==============================] - 175s - loss: 1.4126 - categorical_accuracy: 0.4989 - val_loss: 1.5019 - val_categorical_accuracy: 0.4859\n", "Epoch 13/20\n", "73/73 [==============================] - 174s - loss: 1.4379 - categorical_accuracy: 0.4863 - val_loss: 1.4963 - val_categorical_accuracy: 0.4859\n", "Epoch 14/20\n", "73/73 [==============================] - 173s - 
loss: 1.4444 - categorical_accuracy: 0.4826 - val_loss: 1.4858 - val_categorical_accuracy: 0.4906\n", "Epoch 15/20\n", "73/73 [==============================] - 174s - loss: 1.4245 - categorical_accuracy: 0.4875 - val_loss: 1.4998 - val_categorical_accuracy: 0.4906\n", "Epoch 16/20\n", "73/73 [==============================] - 175s - loss: 1.4387 - categorical_accuracy: 0.4844 - val_loss: 1.4956 - val_categorical_accuracy: 0.4898\n", "Epoch 17/20\n", "73/73 [==============================] - 175s - loss: 1.4444 - categorical_accuracy: 0.4859 - val_loss: 1.5111 - val_categorical_accuracy: 0.4891\n", "Epoch 18/20\n", "73/73 [==============================] - 175s - loss: 1.4446 - categorical_accuracy: 0.4869 - val_loss: 1.4971 - val_categorical_accuracy: 0.4906\n", "Epoch 19/20\n", "73/73 [==============================] - 176s - loss: 1.4212 - categorical_accuracy: 0.4923 - val_loss: 1.4949 - val_categorical_accuracy: 0.4867\n", "Epoch 20/20\n", "73/73 [==============================] - 175s - loss: 1.4332 - categorical_accuracy: 0.4924 - val_loss: 1.4849 - val_categorical_accuracy: 0.4914\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 249)...\n", "\n", "Training for epochs 21 to 30...\n", "Epoch 21/30\n", "73/73 [==============================] - 200s - loss: 1.4119 - categorical_accuracy: 0.4955 - val_loss: 1.4865 - val_categorical_accuracy: 0.4906\n", "Epoch 22/30\n", "73/73 [==============================] - 196s - loss: 1.3815 - categorical_accuracy: 0.5059 - val_loss: 1.4866 - val_categorical_accuracy: 0.4914\n", "Epoch 23/30\n", "73/73 [==============================] - 198s - loss: 1.4034 - categorical_accuracy: 0.4972 - val_loss: 1.4855 - val_categorical_accuracy: 0.4914\n", "Epoch 24/30\n", "73/73 [==============================] - 196s - loss: 1.4014 - categorical_accuracy: 0.4923 - val_loss: 1.4839 - val_categorical_accuracy: 0.4891\n", "Epoch 25/30\n", "73/73 [==============================] - 199s - loss: 
1.3901 - categorical_accuracy: 0.5100 - val_loss: 1.4822 - val_categorical_accuracy: 0.4891\n", "Epoch 26/30\n", "73/73 [==============================] - 198s - loss: 1.4045 - categorical_accuracy: 0.5030 - val_loss: 1.4816 - val_categorical_accuracy: 0.4898\n", "Epoch 27/30\n", "73/73 [==============================] - 197s - loss: 1.4042 - categorical_accuracy: 0.4948 - val_loss: 1.4796 - val_categorical_accuracy: 0.4914\n", "Epoch 28/30\n", "73/73 [==============================] - 197s - loss: 1.4008 - categorical_accuracy: 0.5055 - val_loss: 1.4751 - val_categorical_accuracy: 0.4914\n", "Epoch 29/30\n", "73/73 [==============================] - 198s - loss: 1.3845 - categorical_accuracy: 0.5056 - val_loss: 1.4730 - val_categorical_accuracy: 0.4930\n", "Epoch 30/30\n", "73/73 [==============================] - 198s - loss: 1.3931 - categorical_accuracy: 0.5050 - val_loss: 1.4725 - val_categorical_accuracy: 0.4930\n", "\n", "Training for epochs 31 to 40...\n", "Epoch 31/40\n", "73/73 [==============================] - 198s - loss: 1.3710 - categorical_accuracy: 0.5149 - val_loss: 1.4734 - val_categorical_accuracy: 0.4922\n", "Epoch 32/40\n", "73/73 [==============================] - 198s - loss: 1.3469 - categorical_accuracy: 0.5235 - val_loss: 1.4711 - val_categorical_accuracy: 0.4930\n", "Epoch 33/40\n", "73/73 [==============================] - 198s - loss: 1.3660 - categorical_accuracy: 0.5177 - val_loss: 1.4695 - val_categorical_accuracy: 0.4945\n", "Epoch 34/40\n", "73/73 [==============================] - 195s - loss: 1.3668 - categorical_accuracy: 0.5087 - val_loss: 1.4687 - val_categorical_accuracy: 0.4898\n", "Epoch 35/40\n", "73/73 [==============================] - 198s - loss: 1.3536 - categorical_accuracy: 0.5205 - val_loss: 1.4676 - val_categorical_accuracy: 0.4938\n", "Epoch 36/40\n", "73/73 [==============================] - 198s - loss: 1.3671 - categorical_accuracy: 0.5126 - val_loss: 1.4672 - val_categorical_accuracy: 0.4891\n", "Epoch 
37/40\n", "73/73 [==============================] - 199s - loss: 1.3734 - categorical_accuracy: 0.5121 - val_loss: 1.4655 - val_categorical_accuracy: 0.4938\n", "Epoch 38/40\n", "73/73 [==============================] - 196s - loss: 1.3749 - categorical_accuracy: 0.5097 - val_loss: 1.4616 - val_categorical_accuracy: 0.4938\n", "Epoch 39/40\n", "73/73 [==============================] - 197s - loss: 1.3472 - categorical_accuracy: 0.5193 - val_loss: 1.4600 - val_categorical_accuracy: 0.4953\n", "Epoch 40/40\n", "73/73 [==============================] - 197s - loss: 1.3559 - categorical_accuracy: 0.5223 - val_loss: 1.4613 - val_categorical_accuracy: 0.4945\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 232)...\n", "\n", "Training for epochs 41 to 50...\n", "Epoch 41/50\n", "73/73 [==============================] - 212s - loss: 1.3456 - categorical_accuracy: 0.5202 - val_loss: 1.4618 - val_categorical_accuracy: 0.4938\n", "Epoch 42/50\n", "73/73 [==============================] - 209s - loss: 1.3162 - categorical_accuracy: 0.5365 - val_loss: 1.4596 - val_categorical_accuracy: 0.4930\n", "Epoch 43/50\n", "73/73 [==============================] - 208s - loss: 1.3380 - categorical_accuracy: 0.5263 - val_loss: 1.4572 - val_categorical_accuracy: 0.4961\n", "Epoch 44/50\n", "73/73 [==============================] - 208s - loss: 1.3406 - categorical_accuracy: 0.5192 - val_loss: 1.4561 - val_categorical_accuracy: 0.4922\n", "Epoch 45/50\n", "73/73 [==============================] - 210s - loss: 1.3260 - categorical_accuracy: 0.5275 - val_loss: 1.4555 - val_categorical_accuracy: 0.4930\n", "Epoch 46/50\n", "73/73 [==============================] - 210s - loss: 1.3383 - categorical_accuracy: 0.5232 - val_loss: 1.4552 - val_categorical_accuracy: 0.4945\n", "Epoch 47/50\n", "73/73 [==============================] - 209s - loss: 1.3428 - categorical_accuracy: 0.5251 - val_loss: 1.4528 - val_categorical_accuracy: 0.4953\n", "Epoch 
48/50\n", "73/73 [==============================] - 208s - loss: 1.3383 - categorical_accuracy: 0.5263 - val_loss: 1.4492 - val_categorical_accuracy: 0.4930\n", "Epoch 49/50\n", "73/73 [==============================] - 210s - loss: 1.3132 - categorical_accuracy: 0.5296 - val_loss: 1.4474 - val_categorical_accuracy: 0.4969\n", "Epoch 50/50\n", "73/73 [==============================] - 210s - loss: 1.3225 - categorical_accuracy: 0.5374 - val_loss: 1.4486 - val_categorical_accuracy: 0.4930\n", "\n", "Training for epochs 51 to 60...\n", "Epoch 51/60\n", "73/73 [==============================] - 210s - loss: 1.3102 - categorical_accuracy: 0.5378 - val_loss: 1.4492 - val_categorical_accuracy: 0.4961\n", "Epoch 52/60\n", "73/73 [==============================] - 210s - loss: 1.2801 - categorical_accuracy: 0.5487 - val_loss: 1.4489 - val_categorical_accuracy: 0.4953\n", "Epoch 53/60\n", "73/73 [==============================] - 210s - loss: 1.3033 - categorical_accuracy: 0.5384 - val_loss: 1.4472 - val_categorical_accuracy: 0.4914\n", "Epoch 54/60\n", "73/73 [==============================] - 209s - loss: 1.3055 - categorical_accuracy: 0.5302 - val_loss: 1.4467 - val_categorical_accuracy: 0.4914\n", "Epoch 55/60\n", "73/73 [==============================] - 209s - loss: 1.2969 - categorical_accuracy: 0.5469 - val_loss: 1.4466 - val_categorical_accuracy: 0.4906\n", "Epoch 56/60\n", "73/73 [==============================] - 211s - loss: 1.3102 - categorical_accuracy: 0.5350 - val_loss: 1.4470 - val_categorical_accuracy: 0.4945\n", "Epoch 57/60\n", "73/73 [==============================] - 211s - loss: 1.3093 - categorical_accuracy: 0.5437 - val_loss: 1.4445 - val_categorical_accuracy: 0.4945\n", "Epoch 58/60\n", "73/73 [==============================] - 210s - loss: 1.3062 - categorical_accuracy: 0.5425 - val_loss: 1.4416 - val_categorical_accuracy: 0.4922\n", "Epoch 59/60\n", "73/73 [==============================] - 210s - loss: 1.2772 - categorical_accuracy: 0.5416 - 
val_loss: 1.4406 - val_categorical_accuracy: 0.4969\n", "Epoch 60/60\n", "73/73 [==============================] - 211s - loss: 1.2888 - categorical_accuracy: 0.5456 - val_loss: 1.4418 - val_categorical_accuracy: 0.4938\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 229)...\n", "\n", "Training for epochs 61 to 70...\n", "Epoch 61/70\n", "73/73 [==============================] - 215s - loss: 1.2783 - categorical_accuracy: 0.5504 - val_loss: 1.4429 - val_categorical_accuracy: 0.4977\n", "Epoch 62/70\n", "73/73 [==============================] - 212s - loss: 1.2486 - categorical_accuracy: 0.5623 - val_loss: 1.4427 - val_categorical_accuracy: 0.4961\n", "Epoch 63/70\n", "73/73 [==============================] - 212s - loss: 1.2636 - categorical_accuracy: 0.5532 - val_loss: 1.4418 - val_categorical_accuracy: 0.4922\n", "Epoch 64/70\n", "73/73 [==============================] - 212s - loss: 1.2741 - categorical_accuracy: 0.5477 - val_loss: 1.4411 - val_categorical_accuracy: 0.4953\n", "Epoch 65/70\n", "73/73 [==============================] - 212s - loss: 1.2624 - categorical_accuracy: 0.5553 - val_loss: 1.4412 - val_categorical_accuracy: 0.4945\n", "Epoch 66/70\n", "73/73 [==============================] - 213s - loss: 1.2726 - categorical_accuracy: 0.5499 - val_loss: 1.4415 - val_categorical_accuracy: 0.4969\n", "Epoch 67/70\n", "73/73 [==============================] - 213s - loss: 1.2776 - categorical_accuracy: 0.5558 - val_loss: 1.4395 - val_categorical_accuracy: 0.4953\n", "Epoch 68/70\n", "73/73 [==============================] - 212s - loss: 1.2710 - categorical_accuracy: 0.5565 - val_loss: 1.4375 - val_categorical_accuracy: 0.4945\n", "Epoch 69/70\n", "73/73 [==============================] - 213s - loss: 1.2407 - categorical_accuracy: 0.5558 - val_loss: 1.4361 - val_categorical_accuracy: 0.4969\n", "Epoch 70/70\n", "73/73 [==============================] - 212s - loss: 1.2558 - categorical_accuracy: 0.5636 - 
val_loss: 1.4391 - val_categorical_accuracy: 0.4938\n", "\n", "Training for epochs 71 to 80...\n", "Epoch 71/80\n", "73/73 [==============================] - 214s - loss: 1.2466 - categorical_accuracy: 0.5603 - val_loss: 1.4389 - val_categorical_accuracy: 0.4977\n", "Epoch 72/80\n", "73/73 [==============================] - 212s - loss: 1.2188 - categorical_accuracy: 0.5708 - val_loss: 1.4408 - val_categorical_accuracy: 0.4930\n", "Epoch 73/80\n", "73/73 [==============================] - 212s - loss: 1.2362 - categorical_accuracy: 0.5623 - val_loss: 1.4388 - val_categorical_accuracy: 0.4930\n", "Epoch 74/80\n", "73/73 [==============================] - 212s - loss: 1.2361 - categorical_accuracy: 0.5630 - val_loss: 1.4394 - val_categorical_accuracy: 0.4938\n", "Epoch 75/80\n", "73/73 [==============================] - 213s - loss: 1.2298 - categorical_accuracy: 0.5729 - val_loss: 1.4389 - val_categorical_accuracy: 0.4953\n", "Epoch 76/80\n", "73/73 [==============================] - 213s - loss: 1.2350 - categorical_accuracy: 0.5712 - val_loss: 1.4390 - val_categorical_accuracy: 0.4977\n", "Epoch 77/80\n", "73/73 [==============================] - 211s - loss: 1.2415 - categorical_accuracy: 0.5647 - val_loss: 1.4372 - val_categorical_accuracy: 0.4984\n", "Epoch 78/80\n", "73/73 [==============================] - 210s - loss: 1.2374 - categorical_accuracy: 0.5646 - val_loss: 1.4359 - val_categorical_accuracy: 0.4984\n", "Epoch 79/80\n", "73/73 [==============================] - 213s - loss: 1.2070 - categorical_accuracy: 0.5743 - val_loss: 1.4350 - val_categorical_accuracy: 0.4977\n", "Epoch 80/80\n", "73/73 [==============================] - 211s - loss: 1.2155 - categorical_accuracy: 0.5751 - val_loss: 1.4365 - val_categorical_accuracy: 0.4961\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 200)...\n", "\n", "Training for epochs 81 to 90...\n", "Epoch 81/90\n", "73/73 [==============================] - 234s - loss: 
1.2088 - categorical_accuracy: 0.5782 - val_loss: 1.4365 - val_categorical_accuracy: 0.5000\n", "Epoch 82/90\n", "73/73 [==============================] - 232s - loss: 1.1882 - categorical_accuracy: 0.5856 - val_loss: 1.4379 - val_categorical_accuracy: 0.4969\n", "Epoch 83/90\n", "73/73 [==============================] - 230s - loss: 1.1995 - categorical_accuracy: 0.5791 - val_loss: 1.4367 - val_categorical_accuracy: 0.4992\n", "Epoch 84/90\n", "73/73 [==============================] - 231s - loss: 1.1973 - categorical_accuracy: 0.5717 - val_loss: 1.4380 - val_categorical_accuracy: 0.4992\n", "Epoch 85/90\n", "73/73 [==============================] - 232s - loss: 1.1922 - categorical_accuracy: 0.5821 - val_loss: 1.4382 - val_categorical_accuracy: 0.4992\n", "Epoch 86/90\n", "73/73 [==============================] - 231s - loss: 1.1981 - categorical_accuracy: 0.5798 - val_loss: 1.4366 - val_categorical_accuracy: 0.5008\n", "Epoch 87/90\n", "73/73 [==============================] - 231s - loss: 1.1979 - categorical_accuracy: 0.5801 - val_loss: 1.4355 - val_categorical_accuracy: 0.4984\n", "Epoch 88/90\n", "73/73 [==============================] - 231s - loss: 1.2042 - categorical_accuracy: 0.5807 - val_loss: 1.4345 - val_categorical_accuracy: 0.5016\n", "Epoch 89/90\n", "73/73 [==============================] - 232s - loss: 1.1686 - categorical_accuracy: 0.5872 - val_loss: 1.4330 - val_categorical_accuracy: 0.5055\n", "Epoch 90/90\n", "73/73 [==============================] - 232s - loss: 1.1744 - categorical_accuracy: 0.5920 - val_loss: 1.4358 - val_categorical_accuracy: 0.5016\n", "\n", "Training for epochs 91 to 100...\n", "Epoch 91/100\n", "73/73 [==============================] - 233s - loss: 1.1611 - categorical_accuracy: 0.5934 - val_loss: 1.4359 - val_categorical_accuracy: 0.5047\n", "Epoch 92/100\n", "73/73 [==============================] - 233s - loss: 1.1386 - categorical_accuracy: 0.6028 - val_loss: 1.4377 - val_categorical_accuracy: 0.4969\n", "Epoch 
93/100\n", "73/73 [==============================] - 232s - loss: 1.1571 - categorical_accuracy: 0.5969 - val_loss: 1.4367 - val_categorical_accuracy: 0.4992\n", "Epoch 94/100\n", "73/73 [==============================] - 231s - loss: 1.1479 - categorical_accuracy: 0.5921 - val_loss: 1.4373 - val_categorical_accuracy: 0.4961\n", "Epoch 95/100\n", "73/73 [==============================] - 232s - loss: 1.1426 - categorical_accuracy: 0.6028 - val_loss: 1.4375 - val_categorical_accuracy: 0.4969\n", "Epoch 96/100\n", "73/73 [==============================] - 232s - loss: 1.1477 - categorical_accuracy: 0.6008 - val_loss: 1.4370 - val_categorical_accuracy: 0.5000\n", "Epoch 97/100\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "73/73 [==============================] - 231s - loss: 1.1561 - categorical_accuracy: 0.5899 - val_loss: 1.4359 - val_categorical_accuracy: 0.4992\n", "Epoch 98/100\n", "73/73 [==============================] - 229s - loss: 1.1564 - categorical_accuracy: 0.5989 - val_loss: 1.4355 - val_categorical_accuracy: 0.5023\n", "Epoch 99/100\n", "73/73 [==============================] - 231s - loss: 1.1069 - categorical_accuracy: 0.6113 - val_loss: 1.4348 - val_categorical_accuracy: 0.5031\n", "Epoch 100/100\n", "73/73 [==============================] - 232s - loss: 1.1158 - categorical_accuracy: 0.6128 - val_loss: 1.4374 - val_categorical_accuracy: 0.5023\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 187)...\n", "\n", "Training for epochs 101 to 110...\n", "Epoch 101/110\n", "73/73 [==============================] - 242s - loss: 1.1076 - categorical_accuracy: 0.6123 - val_loss: 1.4377 - val_categorical_accuracy: 0.5039\n", "Epoch 102/110\n", "73/73 [==============================] - 238s - loss: 1.0882 - categorical_accuracy: 0.6231 - val_loss: 1.4409 - val_categorical_accuracy: 0.4969\n", "Epoch 103/110\n", "73/73 [==============================] - 234s - loss: 1.0994 - categorical_accuracy: 
0.6148 - val_loss: 1.4396 - val_categorical_accuracy: 0.5000\n", "Epoch 104/110\n", "73/73 [==============================] - 235s - loss: 1.0984 - categorical_accuracy: 0.6124 - val_loss: 1.4422 - val_categorical_accuracy: 0.4984\n", "Epoch 105/110\n", "73/73 [==============================] - 238s - loss: 1.0922 - categorical_accuracy: 0.6192 - val_loss: 1.4429 - val_categorical_accuracy: 0.5000\n", "Epoch 106/110\n", "73/73 [==============================] - 237s - loss: 1.0910 - categorical_accuracy: 0.6159 - val_loss: 1.4420 - val_categorical_accuracy: 0.5016\n", "Epoch 107/110\n", "73/73 [==============================] - 237s - loss: 1.0961 - categorical_accuracy: 0.6215 - val_loss: 1.4426 - val_categorical_accuracy: 0.5023\n", "Epoch 108/110\n", "73/73 [==============================] - 236s - loss: 1.1021 - categorical_accuracy: 0.6196 - val_loss: 1.4411 - val_categorical_accuracy: 0.5031\n", "Epoch 109/110\n", "73/73 [==============================] - 237s - loss: 1.0472 - categorical_accuracy: 0.6360 - val_loss: 1.4402 - val_categorical_accuracy: 0.5031\n", "Epoch 110/110\n", "73/73 [==============================] - 237s - loss: 1.0618 - categorical_accuracy: 0.6309 - val_loss: 1.4431 - val_categorical_accuracy: 0.5062\n", "\n", "Training for epochs 111 to 120...\n", "Epoch 111/120\n", "73/73 [==============================] - 239s - loss: 1.0586 - categorical_accuracy: 0.6313 - val_loss: 1.4436 - val_categorical_accuracy: 0.5047\n", "Epoch 112/120\n", "73/73 [==============================] - 237s - loss: 1.0320 - categorical_accuracy: 0.6458 - val_loss: 1.4468 - val_categorical_accuracy: 0.5000\n", "Epoch 113/120\n", "73/73 [==============================] - 238s - loss: 1.0536 - categorical_accuracy: 0.6355 - val_loss: 1.4454 - val_categorical_accuracy: 0.5016\n", "Epoch 114/120\n", "73/73 [==============================] - 236s - loss: 1.0405 - categorical_accuracy: 0.6369 - val_loss: 1.4505 - val_categorical_accuracy: 0.4977\n", "Epoch 115/120\n", 
"73/73 [==============================] - 238s - loss: 1.0423 - categorical_accuracy: 0.6394 - val_loss: 1.4509 - val_categorical_accuracy: 0.4992\n", "Epoch 116/120\n", "73/73 [==============================] - 238s - loss: 1.0360 - categorical_accuracy: 0.6428 - val_loss: 1.4498 - val_categorical_accuracy: 0.4969\n", "Epoch 117/120\n", "73/73 [==============================] - 238s - loss: 1.0357 - categorical_accuracy: 0.6448 - val_loss: 1.4496 - val_categorical_accuracy: 0.5047\n", "Epoch 118/120\n", "73/73 [==============================] - 237s - loss: 1.0387 - categorical_accuracy: 0.6435 - val_loss: 1.4486 - val_categorical_accuracy: 0.4984\n", "Epoch 119/120\n", "73/73 [==============================] - 239s - loss: 0.9940 - categorical_accuracy: 0.6574 - val_loss: 1.4492 - val_categorical_accuracy: 0.5008\n", "Epoch 120/120\n", "73/73 [==============================] - 239s - loss: 0.9968 - categorical_accuracy: 0.6576 - val_loss: 1.4544 - val_categorical_accuracy: 0.5055\n", "\n", "07:02:48 for Inception V3 to yield 65.8% training accuracy and 50.5% validation accuracy in 20 \n", "epochs (x6 training phases).\n", "\n", "Inception V3 run complete at Wednesday, 2017 October 18, 8:36 PM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 38, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# InceptionV3:\n", "run_inception_v3()\n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": 39, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "code", "execution_count": 40, "metadata": { 
"scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "ResNet50 run begun at Wednesday, 2017 October 18, 8:36 PM.\n", "\t[20 epochs (x6 passes) on extended FMA on GPU takes\n", "\tunknown (no similar runs found).]\n", "\n", "First-round training (training the classifier)...\n", "\n", "Training for epochs 1 to 7...\n", "Epoch 1/7\n", "73/73 [==============================] - 221s - loss: 1.5158 - categorical_accuracy: 0.4727 - val_loss: 1.6949 - val_categorical_accuracy: 0.4422\n", "Epoch 2/7\n", "73/73 [==============================] - 218s - loss: 1.4046 - categorical_accuracy: 0.5134 - val_loss: 1.5776 - val_categorical_accuracy: 0.4414\n", "Epoch 3/7\n", "73/73 [==============================] - 217s - loss: 1.3760 - categorical_accuracy: 0.5222 - val_loss: 1.4455 - val_categorical_accuracy: 0.5070\n", "Epoch 4/7\n", "73/73 [==============================] - 218s - loss: 1.3786 - categorical_accuracy: 0.5180 - val_loss: 1.3972 - val_categorical_accuracy: 0.5492\n", "Epoch 5/7\n", "73/73 [==============================] - 220s - loss: 1.3204 - categorical_accuracy: 0.5418 - val_loss: 1.4144 - val_categorical_accuracy: 0.5414\n", "Epoch 6/7\n", "73/73 [==============================] - 218s - loss: 1.2957 - categorical_accuracy: 0.5502 - val_loss: 1.4014 - val_categorical_accuracy: 0.5422\n", "Epoch 7/7\n", "73/73 [==============================] - 219s - loss: 1.3257 - categorical_accuracy: 0.5432 - val_loss: 1.3827 - val_categorical_accuracy: 0.5359\n", "\n", "Training for epochs 8 to 14...\n", "Epoch 8/14\n", "73/73 [==============================] - 220s - loss: 1.2802 - categorical_accuracy: 0.5574 - val_loss: 1.3939 - val_categorical_accuracy: 0.5328\n", "Epoch 9/14\n", "73/73 [==============================] - 219s - loss: 1.2576 - categorical_accuracy: 0.5635 - val_loss: 1.3770 - val_categorical_accuracy: 
0.5430\n", "Epoch 10/14\n", "73/73 [==============================] - 220s - loss: 1.2705 - categorical_accuracy: 0.5649 - val_loss: 1.3739 - val_categorical_accuracy: 0.5414\n", "Epoch 11/14\n", "73/73 [==============================] - 217s - loss: 1.2934 - categorical_accuracy: 0.5517 - val_loss: 1.3709 - val_categorical_accuracy: 0.5430\n", "Epoch 12/14\n", "73/73 [==============================] - 219s - loss: 1.2497 - categorical_accuracy: 0.5687 - val_loss: 1.3780 - val_categorical_accuracy: 0.5422\n", "Epoch 13/14\n", "73/73 [==============================] - 220s - loss: 1.2357 - categorical_accuracy: 0.5692 - val_loss: 1.3805 - val_categorical_accuracy: 0.5383\n", "Epoch 14/14\n", "73/73 [==============================] - 218s - loss: 1.2686 - categorical_accuracy: 0.5631 - val_loss: 1.3765 - val_categorical_accuracy: 0.5375\n", "\n", "Training for epochs 15 to 20...\n", "Epoch 15/20\n", "73/73 [==============================] - 220s - loss: 1.2248 - categorical_accuracy: 0.5796 - val_loss: 1.3826 - val_categorical_accuracy: 0.5359\n", "Epoch 16/20\n", "73/73 [==============================] - 218s - loss: 1.2122 - categorical_accuracy: 0.5781 - val_loss: 1.3742 - val_categorical_accuracy: 0.5344\n", "Epoch 17/20\n", "73/73 [==============================] - 218s - loss: 1.2246 - categorical_accuracy: 0.5747 - val_loss: 1.3768 - val_categorical_accuracy: 0.5367\n", "Epoch 18/20\n", "73/73 [==============================] - 217s - loss: 1.2682 - categorical_accuracy: 0.5568 - val_loss: 1.3742 - val_categorical_accuracy: 0.5359\n", "Epoch 19/20\n", "73/73 [==============================] - 219s - loss: 1.2155 - categorical_accuracy: 0.5775 - val_loss: 1.3760 - val_categorical_accuracy: 0.5352\n", "Epoch 20/20\n", "73/73 [==============================] - 218s - loss: 1.2044 - categorical_accuracy: 0.5897 - val_loss: 1.3835 - val_categorical_accuracy: 0.5359\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 
161)...\n", "\n", "Training for epochs 21 to 27...\n", "Epoch 21/27\n", "73/73 [==============================] - 234s - loss: 1.2029 - categorical_accuracy: 0.5873 - val_loss: 1.3807 - val_categorical_accuracy: 0.5383\n", "Epoch 22/27\n", "73/73 [==============================] - 233s - loss: 1.1916 - categorical_accuracy: 0.5882 - val_loss: 1.3789 - val_categorical_accuracy: 0.5375\n", "Epoch 23/27\n", "73/73 [==============================] - 230s - loss: 1.2073 - categorical_accuracy: 0.5880 - val_loss: 1.3773 - val_categorical_accuracy: 0.5344\n", "Epoch 24/27\n", "73/73 [==============================] - 231s - loss: 1.2310 - categorical_accuracy: 0.5741 - val_loss: 1.3773 - val_categorical_accuracy: 0.5359\n", "Epoch 25/27\n", "73/73 [==============================] - 231s - loss: 1.1920 - categorical_accuracy: 0.5865 - val_loss: 1.3776 - val_categorical_accuracy: 0.5375\n", "Epoch 26/27\n", "73/73 [==============================] - 231s - loss: 1.1832 - categorical_accuracy: 0.5898 - val_loss: 1.3773 - val_categorical_accuracy: 0.5383\n", "Epoch 27/27\n", "73/73 [==============================] - 231s - loss: 1.2319 - categorical_accuracy: 0.5769 - val_loss: 1.3754 - val_categorical_accuracy: 0.5352\n", "\n", "Training for epochs 28 to 34...\n", "Epoch 28/34\n", "73/73 [==============================] - 231s - loss: 1.2088 - categorical_accuracy: 0.5825 - val_loss: 1.3755 - val_categorical_accuracy: 0.5367\n", "Epoch 29/34\n", "73/73 [==============================] - 232s - loss: 1.1791 - categorical_accuracy: 0.5891 - val_loss: 1.3748 - val_categorical_accuracy: 0.5375\n", "Epoch 30/34\n", "73/73 [==============================] - 230s - loss: 1.1994 - categorical_accuracy: 0.5859 - val_loss: 1.3744 - val_categorical_accuracy: 0.5352\n", "Epoch 31/34\n", "73/73 [==============================] - 230s - loss: 1.2215 - categorical_accuracy: 0.5780 - val_loss: 1.3742 - val_categorical_accuracy: 0.5359\n", "Epoch 32/34\n", "73/73 
[==============================] - 230s - loss: 1.1920 - categorical_accuracy: 0.5901 - val_loss: 1.3745 - val_categorical_accuracy: 0.5367\n", "Epoch 33/34\n", "73/73 [==============================] - 230s - loss: 1.1718 - categorical_accuracy: 0.5895 - val_loss: 1.3746 - val_categorical_accuracy: 0.5352\n", "Epoch 34/34\n", "73/73 [==============================] - 229s - loss: 1.2295 - categorical_accuracy: 0.5760 - val_loss: 1.3733 - val_categorical_accuracy: 0.5359\n", "\n", "Training for epochs 35 to 40...\n", "Epoch 35/40\n", "73/73 [==============================] - 230s - loss: 1.1920 - categorical_accuracy: 0.5889 - val_loss: 1.3736 - val_categorical_accuracy: 0.5383\n", "Epoch 36/40\n", "73/73 [==============================] - 230s - loss: 1.1775 - categorical_accuracy: 0.5898 - val_loss: 1.3729 - val_categorical_accuracy: 0.5359\n", "Epoch 37/40\n", "73/73 [==============================] - 229s - loss: 1.1967 - categorical_accuracy: 0.5876 - val_loss: 1.3728 - val_categorical_accuracy: 0.5375\n", "Epoch 38/40\n", "73/73 [==============================] - 227s - loss: 1.2170 - categorical_accuracy: 0.5751 - val_loss: 1.3728 - val_categorical_accuracy: 0.5359\n", "Epoch 39/40\n", "73/73 [==============================] - 229s - loss: 1.1839 - categorical_accuracy: 0.5897 - val_loss: 1.3728 - val_categorical_accuracy: 0.5359\n", "Epoch 40/40\n", "73/73 [==============================] - 230s - loss: 1.1702 - categorical_accuracy: 0.5977 - val_loss: 1.3729 - val_categorical_accuracy: 0.5375\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 151)...\n", "\n", "Training for epochs 41 to 47...\n", "Epoch 41/47\n", "73/73 [==============================] - 247s - loss: 1.1870 - categorical_accuracy: 0.5895 - val_loss: 1.3728 - val_categorical_accuracy: 0.5367\n", "Epoch 42/47\n", "73/73 [==============================] - 244s - loss: 1.1780 - categorical_accuracy: 0.5898 - val_loss: 1.3714 - val_categorical_accuracy: 
0.5359\n", "Epoch 43/47\n", "73/73 [==============================] - 245s - loss: 1.1931 - categorical_accuracy: 0.5899 - val_loss: 1.3717 - val_categorical_accuracy: 0.5375\n", "Epoch 44/47\n", "73/73 [==============================] - 243s - loss: 1.2146 - categorical_accuracy: 0.5763 - val_loss: 1.3711 - val_categorical_accuracy: 0.5367\n", "Epoch 45/47\n", "73/73 [==============================] - 245s - loss: 1.1744 - categorical_accuracy: 0.5867 - val_loss: 1.3710 - val_categorical_accuracy: 0.5367\n", "Epoch 46/47\n", "73/73 [==============================] - 245s - loss: 1.1610 - categorical_accuracy: 0.5961 - val_loss: 1.3700 - val_categorical_accuracy: 0.5367\n", "Epoch 47/47\n", "73/73 [==============================] - 245s - loss: 1.2121 - categorical_accuracy: 0.5833 - val_loss: 1.3691 - val_categorical_accuracy: 0.5367\n", "\n", "Training for epochs 48 to 54...\n", "Epoch 48/54\n", "73/73 [==============================] - 246s - loss: 1.1788 - categorical_accuracy: 0.5954 - val_loss: 1.3696 - val_categorical_accuracy: 0.5344\n", "Epoch 49/54\n", "73/73 [==============================] - 245s - loss: 1.1637 - categorical_accuracy: 0.5931 - val_loss: 1.3686 - val_categorical_accuracy: 0.5367\n", "Epoch 50/54\n", "73/73 [==============================] - 245s - loss: 1.1723 - categorical_accuracy: 0.6012 - val_loss: 1.3691 - val_categorical_accuracy: 0.5352\n", "Epoch 51/54\n", "73/73 [==============================] - 243s - loss: 1.1961 - categorical_accuracy: 0.5822 - val_loss: 1.3687 - val_categorical_accuracy: 0.5352\n", "Epoch 52/54\n", "73/73 [==============================] - 244s - loss: 1.1580 - categorical_accuracy: 0.5967 - val_loss: 1.3692 - val_categorical_accuracy: 0.5367\n", "Epoch 53/54\n", "73/73 [==============================] - 245s - loss: 1.1485 - categorical_accuracy: 0.6032 - val_loss: 1.3685 - val_categorical_accuracy: 0.5359\n", "Epoch 54/54\n", "73/73 [==============================] - 246s - loss: 1.2030 - 
categorical_accuracy: 0.5895 - val_loss: 1.3674 - val_categorical_accuracy: 0.5352\n", "\n", "Training for epochs 55 to 60...\n", "Epoch 55/60\n", "73/73 [==============================] - 245s - loss: 1.1651 - categorical_accuracy: 0.5970 - val_loss: 1.3683 - val_categorical_accuracy: 0.5359\n", "Epoch 56/60\n", "73/73 [==============================] - 246s - loss: 1.1470 - categorical_accuracy: 0.6053 - val_loss: 1.3676 - val_categorical_accuracy: 0.5367\n", "Epoch 57/60\n", "73/73 [==============================] - 245s - loss: 1.1689 - categorical_accuracy: 0.5963 - val_loss: 1.3680 - val_categorical_accuracy: 0.5336\n", "Epoch 58/60\n", "73/73 [==============================] - 242s - loss: 1.1883 - categorical_accuracy: 0.5900 - val_loss: 1.3679 - val_categorical_accuracy: 0.5359\n", "Epoch 59/60\n", "73/73 [==============================] - 243s - loss: 1.1513 - categorical_accuracy: 0.6007 - val_loss: 1.3691 - val_categorical_accuracy: 0.5352\n", "Epoch 60/60\n", "73/73 [==============================] - 244s - loss: 1.1377 - categorical_accuracy: 0.6042 - val_loss: 1.3686 - val_categorical_accuracy: 0.5352\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 139)...\n", "\n", "Training for epochs 61 to 67...\n", "Epoch 61/67\n", "73/73 [==============================] - 261s - loss: 1.1568 - categorical_accuracy: 0.5997 - val_loss: 1.3693 - val_categorical_accuracy: 0.5336\n", "Epoch 62/67\n", "73/73 [==============================] - 259s - loss: 1.1383 - categorical_accuracy: 0.6078 - val_loss: 1.3675 - val_categorical_accuracy: 0.5359\n", "Epoch 63/67\n", "73/73 [==============================] - 259s - loss: 1.1598 - categorical_accuracy: 0.5988 - val_loss: 1.3673 - val_categorical_accuracy: 0.5328\n", "Epoch 64/67\n", "73/73 [==============================] - 257s - loss: 1.1758 - categorical_accuracy: 0.5930 - val_loss: 1.3663 - val_categorical_accuracy: 0.5328\n", "Epoch 65/67\n", "73/73 
[==============================] - 259s - loss: 1.1333 - categorical_accuracy: 0.6028 - val_loss: 1.3670 - val_categorical_accuracy: 0.5352\n", "Epoch 66/67\n", "73/73 [==============================] - 258s - loss: 1.1268 - categorical_accuracy: 0.6084 - val_loss: 1.3648 - val_categorical_accuracy: 0.5367\n", "Epoch 67/67\n", "73/73 [==============================] - 257s - loss: 1.1753 - categorical_accuracy: 0.5946 - val_loss: 1.3641 - val_categorical_accuracy: 0.5359\n", "\n", "Training for epochs 68 to 74...\n", "Epoch 68/74\n", "73/73 [==============================] - 260s - loss: 1.1348 - categorical_accuracy: 0.6058 - val_loss: 1.3652 - val_categorical_accuracy: 0.5336\n", "Epoch 69/74\n", "73/73 [==============================] - 259s - loss: 1.1206 - categorical_accuracy: 0.6097 - val_loss: 1.3636 - val_categorical_accuracy: 0.5375\n", "Epoch 70/74\n", "73/73 [==============================] - 258s - loss: 1.1301 - categorical_accuracy: 0.6061 - val_loss: 1.3639 - val_categorical_accuracy: 0.5344\n", "Epoch 71/74\n", "73/73 [==============================] - 256s - loss: 1.1581 - categorical_accuracy: 0.6003 - val_loss: 1.3642 - val_categorical_accuracy: 0.5328\n", "Epoch 72/74\n", "73/73 [==============================] - 257s - loss: 1.1132 - categorical_accuracy: 0.6142 - val_loss: 1.3649 - val_categorical_accuracy: 0.5359\n", "Epoch 73/74\n", "73/73 [==============================] - 259s - loss: 1.0994 - categorical_accuracy: 0.6223 - val_loss: 1.3641 - val_categorical_accuracy: 0.5383\n", "Epoch 74/74\n", "73/73 [==============================] - 259s - loss: 1.1448 - categorical_accuracy: 0.6042 - val_loss: 1.3634 - val_categorical_accuracy: 0.5352\n", "\n", "Training for epochs 75 to 80...\n", "Epoch 75/80\n", "73/73 [==============================] - 260s - loss: 1.1178 - categorical_accuracy: 0.6156 - val_loss: 1.3637 - val_categorical_accuracy: 0.5352\n", "Epoch 76/80\n", "73/73 [==============================] - 258s - loss: 1.0998 - 
categorical_accuracy: 0.6184 - val_loss: 1.3633 - val_categorical_accuracy: 0.5352\n", "Epoch 77/80\n", "73/73 [==============================] - 258s - loss: 1.1157 - categorical_accuracy: 0.6180 - val_loss: 1.3637 - val_categorical_accuracy: 0.5328\n", "Epoch 78/80\n", "73/73 [==============================] - 256s - loss: 1.1341 - categorical_accuracy: 0.6058 - val_loss: 1.3639 - val_categorical_accuracy: 0.5328\n", "Epoch 79/80\n", "73/73 [==============================] - 257s - loss: 1.0857 - categorical_accuracy: 0.6232 - val_loss: 1.3643 - val_categorical_accuracy: 0.5320\n", "Epoch 80/80\n", "73/73 [==============================] - 258s - loss: 1.0813 - categorical_accuracy: 0.6285 - val_loss: 1.3633 - val_categorical_accuracy: 0.5352\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 129)...\n", "\n", "Training for epochs 81 to 87...\n", "Epoch 81/87\n", "73/73 [==============================] - 275s - loss: 1.0983 - categorical_accuracy: 0.6222 - val_loss: 1.3630 - val_categorical_accuracy: 0.5344\n", "Epoch 82/87\n", "73/73 [==============================] - 272s - loss: 1.0801 - categorical_accuracy: 0.6276 - val_loss: 1.3621 - val_categorical_accuracy: 0.5336\n", "Epoch 83/87\n", "73/73 [==============================] - 272s - loss: 1.0939 - categorical_accuracy: 0.6247 - val_loss: 1.3629 - val_categorical_accuracy: 0.5320\n", "Epoch 84/87\n", "73/73 [==============================] - 270s - loss: 1.1144 - categorical_accuracy: 0.6167 - val_loss: 1.3630 - val_categorical_accuracy: 0.5320\n", "Epoch 85/87\n", "73/73 [==============================] - 272s - loss: 1.0674 - categorical_accuracy: 0.6298 - val_loss: 1.3658 - val_categorical_accuracy: 0.5336\n", "Epoch 86/87\n", "73/73 [==============================] - 272s - loss: 1.0593 - categorical_accuracy: 0.6342 - val_loss: 1.3641 - val_categorical_accuracy: 0.5344\n", "Epoch 87/87\n", "73/73 [==============================] - 273s - loss: 1.1162 - 
categorical_accuracy: 0.6110 - val_loss: 1.3630 - val_categorical_accuracy: 0.5352\n", "\n", "Training for epochs 88 to 94...\n", "Epoch 88/94\n", "73/73 [==============================] - 272s - loss: 1.0672 - categorical_accuracy: 0.6359 - val_loss: 1.3627 - val_categorical_accuracy: 0.5305\n", "Epoch 89/94\n", "73/73 [==============================] - 271s - loss: 1.0597 - categorical_accuracy: 0.6326 - val_loss: 1.3636 - val_categorical_accuracy: 0.5336\n", "Epoch 90/94\n", "73/73 [==============================] - 271s - loss: 1.0665 - categorical_accuracy: 0.6346 - val_loss: 1.3638 - val_categorical_accuracy: 0.5336\n", "Epoch 91/94\n", "73/73 [==============================] - 271s - loss: 1.0942 - categorical_accuracy: 0.6253 - val_loss: 1.3639 - val_categorical_accuracy: 0.5312\n", "Epoch 92/94\n", "73/73 [==============================] - 272s - loss: 1.0416 - categorical_accuracy: 0.6393 - val_loss: 1.3645 - val_categorical_accuracy: 0.5305\n", "Epoch 93/94\n", "73/73 [==============================] - 272s - loss: 1.0277 - categorical_accuracy: 0.6468 - val_loss: 1.3639 - val_categorical_accuracy: 0.5336\n", "Epoch 94/94\n", "73/73 [==============================] - 272s - loss: 1.0882 - categorical_accuracy: 0.6265 - val_loss: 1.3641 - val_categorical_accuracy: 0.5312\n", "\n", "Training for epochs 95 to 100...\n", "Epoch 95/100\n", "73/73 [==============================] - 272s - loss: 1.0430 - categorical_accuracy: 0.6439 - val_loss: 1.3661 - val_categorical_accuracy: 0.5273\n", "Epoch 96/100\n", "73/73 [==============================] - 273s - loss: 1.0356 - categorical_accuracy: 0.6366 - val_loss: 1.3652 - val_categorical_accuracy: 0.5305\n", "Epoch 97/100\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "73/73 [==============================] - 272s - loss: 1.0499 - categorical_accuracy: 0.6387 - val_loss: 1.3664 - val_categorical_accuracy: 0.5320\n", "Epoch 98/100\n", "73/73 [==============================] - 271s - loss: 1.0697 - 
categorical_accuracy: 0.6323 - val_loss: 1.3669 - val_categorical_accuracy: 0.5297\n", "Epoch 99/100\n", "73/73 [==============================] - 271s - loss: 1.0101 - categorical_accuracy: 0.6514 - val_loss: 1.3684 - val_categorical_accuracy: 0.5305\n", "Epoch 100/100\n", "73/73 [==============================] - 273s - loss: 0.9987 - categorical_accuracy: 0.6565 - val_loss: 1.3684 - val_categorical_accuracy: 0.5281\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 119)...\n", "\n", "Training for epochs 101 to 107...\n", "Epoch 101/107\n", "73/73 [==============================] - 288s - loss: 1.0243 - categorical_accuracy: 0.6459 - val_loss: 1.3666 - val_categorical_accuracy: 0.5297\n", "Epoch 102/107\n", "73/73 [==============================] - 283s - loss: 1.0090 - categorical_accuracy: 0.6549 - val_loss: 1.3672 - val_categorical_accuracy: 0.5312\n", "Epoch 103/107\n", "73/73 [==============================] - 285s - loss: 1.0227 - categorical_accuracy: 0.6460 - val_loss: 1.3682 - val_categorical_accuracy: 0.5320\n", "Epoch 104/107\n", "73/73 [==============================] - 284s - loss: 1.0393 - categorical_accuracy: 0.6399 - val_loss: 1.3678 - val_categorical_accuracy: 0.5336\n", "Epoch 105/107\n", "73/73 [==============================] - 284s - loss: 0.9903 - categorical_accuracy: 0.6597 - val_loss: 1.3711 - val_categorical_accuracy: 0.5305\n", "Epoch 106/107\n", "73/73 [==============================] - 286s - loss: 0.9753 - categorical_accuracy: 0.6650 - val_loss: 1.3689 - val_categorical_accuracy: 0.5312\n", "Epoch 107/107\n", "73/73 [==============================] - 285s - loss: 1.0433 - categorical_accuracy: 0.6379 - val_loss: 1.3674 - val_categorical_accuracy: 0.5289\n", "\n", "Training for epochs 108 to 114...\n", "Epoch 108/114\n", "73/73 [==============================] - 285s - loss: 0.9963 - categorical_accuracy: 0.6595 - val_loss: 1.3685 - val_categorical_accuracy: 0.5320\n", "Epoch 109/114\n", 
"73/73 [==============================] - 284s - loss: 0.9758 - categorical_accuracy: 0.6655 - val_loss: 1.3692 - val_categorical_accuracy: 0.5289\n", "Epoch 110/114\n", "73/73 [==============================] - 285s - loss: 0.9974 - categorical_accuracy: 0.6597 - val_loss: 1.3694 - val_categorical_accuracy: 0.5320\n", "Epoch 111/114\n", "73/73 [==============================] - 284s - loss: 1.0082 - categorical_accuracy: 0.6549 - val_loss: 1.3710 - val_categorical_accuracy: 0.5312\n", "Epoch 112/114\n", "73/73 [==============================] - 283s - loss: 0.9569 - categorical_accuracy: 0.6727 - val_loss: 1.3740 - val_categorical_accuracy: 0.5305\n", "Epoch 113/114\n", "73/73 [==============================] - 284s - loss: 0.9399 - categorical_accuracy: 0.6801 - val_loss: 1.3716 - val_categorical_accuracy: 0.5320\n", "Epoch 114/114\n", "73/73 [==============================] - 288s - loss: 1.0004 - categorical_accuracy: 0.6562 - val_loss: 1.3721 - val_categorical_accuracy: 0.5312\n", "\n", "Training for epochs 115 to 120...\n", "Epoch 115/120\n", "73/73 [==============================] - 284s - loss: 0.9650 - categorical_accuracy: 0.6682 - val_loss: 1.3724 - val_categorical_accuracy: 0.5312\n", "Epoch 116/120\n", "73/73 [==============================] - 283s - loss: 0.9429 - categorical_accuracy: 0.6787 - val_loss: 1.3748 - val_categorical_accuracy: 0.5297\n", "Epoch 117/120\n", "73/73 [==============================] - 284s - loss: 0.9621 - categorical_accuracy: 0.6731 - val_loss: 1.3753 - val_categorical_accuracy: 0.5328\n", "Epoch 118/120\n", "73/73 [==============================] - 283s - loss: 0.9763 - categorical_accuracy: 0.6643 - val_loss: 1.3774 - val_categorical_accuracy: 0.5344\n", "Epoch 119/120\n", "73/73 [==============================] - 286s - loss: 0.9222 - categorical_accuracy: 0.6821 - val_loss: 1.3800 - val_categorical_accuracy: 0.5312\n", "Epoch 120/120\n", "73/73 [==============================] - 283s - loss: 0.9129 - 
categorical_accuracy: 0.6922 - val_loss: 1.3757 - val_categorical_accuracy: 0.5336\n", "\n", "08:24:11 for ResNet50 to yield 69.2% training accuracy and 53.4% validation accuracy in 20 \n", "epochs (x6 training phases).\n", "\n", "ResNet50 run complete at Thursday, 2017 October 19, 5:01 AM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 40, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# ResNet50:\n", "run_resnet50()\n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": 41, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "VGG16 run begun at Saturday, 2017 October 21, 1:41 PM.\n", "\t[20 epochs (x6 passes) on extended FMA on GPU takes\n", "\tunknown (no similar runs found).]\n", "\n", "First-round training (training the classifier)...\n", "\n", "Training for epochs 1 to 3...\n", "Epoch 1/3\n", "73/73 [==============================] - 367s - loss: 1.5393 - categorical_accuracy: 0.4618 - val_loss: 1.9379 - val_categorical_accuracy: 0.4820\n", "Epoch 2/3\n", "73/73 [==============================] - 354s - loss: 1.4202 - categorical_accuracy: 0.5056 - val_loss: 1.6293 - val_categorical_accuracy: 0.4062\n", "Epoch 3/3\n", "73/73 [==============================] - 358s - loss: 1.3949 - categorical_accuracy: 0.5205 - val_loss: 1.4364 - val_categorical_accuracy: 0.4992\n", "\n", "Training for epochs 4 to 6...\n", "Epoch 4/6\n", 
"73/73 [==============================] - 366s - loss: 1.3942 - categorical_accuracy: 0.5196 - val_loss: 1.4910 - val_categorical_accuracy: 0.4883\n", "Epoch 5/6\n", "73/73 [==============================] - 364s - loss: 1.3593 - categorical_accuracy: 0.5264 - val_loss: 1.3907 - val_categorical_accuracy: 0.5234\n", "Epoch 6/6\n", "73/73 [==============================] - 363s - loss: 1.3557 - categorical_accuracy: 0.5309 - val_loss: 1.3800 - val_categorical_accuracy: 0.5305\n", "\n", "Training for epochs 7 to 9...\n", "Epoch 7/9\n", "73/73 [==============================] - 364s - loss: 1.3629 - categorical_accuracy: 0.5310 - val_loss: 1.4563 - val_categorical_accuracy: 0.5039\n", "Epoch 8/9\n", "73/73 [==============================] - 364s - loss: 1.3314 - categorical_accuracy: 0.5351 - val_loss: 1.3726 - val_categorical_accuracy: 0.5375\n", "Epoch 9/9\n", "73/73 [==============================] - 364s - loss: 1.3375 - categorical_accuracy: 0.5423 - val_loss: 1.3670 - val_categorical_accuracy: 0.5336\n", "\n", "Training for epochs 10 to 12...\n", "Epoch 10/12\n", "73/73 [==============================] - 364s - loss: 1.3452 - categorical_accuracy: 0.5384 - val_loss: 1.4051 - val_categorical_accuracy: 0.5148\n", "Epoch 11/12\n", "73/73 [==============================] - 364s - loss: 1.3159 - categorical_accuracy: 0.5448 - val_loss: 1.3567 - val_categorical_accuracy: 0.5352\n", "Epoch 12/12\n", "73/73 [==============================] - 364s - loss: 1.3222 - categorical_accuracy: 0.5414 - val_loss: 1.3550 - val_categorical_accuracy: 0.5391\n", "\n", "Training for epochs 13 to 15...\n", "Epoch 13/15\n", "73/73 [==============================] - 365s - loss: 1.3305 - categorical_accuracy: 0.5443 - val_loss: 1.3755 - val_categorical_accuracy: 0.5234\n", "Epoch 14/15\n", "73/73 [==============================] - 365s - loss: 1.3058 - categorical_accuracy: 0.5516 - val_loss: 1.3521 - val_categorical_accuracy: 0.5391\n", "Epoch 15/15\n", "73/73 
[==============================] - 364s - loss: 1.3185 - categorical_accuracy: 0.5456 - val_loss: 1.3547 - val_categorical_accuracy: 0.5375\n", "\n", "Training for epochs 16 to 18...\n", "Epoch 16/18\n", "73/73 [==============================] - 365s - loss: 1.3270 - categorical_accuracy: 0.5432 - val_loss: 1.3628 - val_categorical_accuracy: 0.5297\n", "Epoch 17/18\n", "73/73 [==============================] - 363s - loss: 1.3095 - categorical_accuracy: 0.5481 - val_loss: 1.3480 - val_categorical_accuracy: 0.5344\n", "Epoch 18/18\n", "73/73 [==============================] - 364s - loss: 1.3073 - categorical_accuracy: 0.5439 - val_loss: 1.3501 - val_categorical_accuracy: 0.5414\n", "\n", "Training for epochs 19 to 20...\n", "Epoch 19/20\n", "73/73 [==============================] - 365s - loss: 1.3161 - categorical_accuracy: 0.5467 - val_loss: 1.3493 - val_categorical_accuracy: 0.5367\n", "Epoch 20/20\n", "73/73 [==============================] - 364s - loss: 1.2976 - categorical_accuracy: 0.5490 - val_loss: 1.3450 - val_categorical_accuracy: 0.5430\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 17)...\n", "\n", "Training for epochs 21 to 23...\n", "Epoch 21/23\n", "73/73 [==============================] - 374s - loss: 1.3175 - categorical_accuracy: 0.5512 - val_loss: 1.3603 - val_categorical_accuracy: 0.5359\n", "Epoch 22/23\n", "73/73 [==============================] - 373s - loss: 1.2825 - categorical_accuracy: 0.5586 - val_loss: 1.3627 - val_categorical_accuracy: 0.5281\n", "Epoch 23/23\n", "73/73 [==============================] - 373s - loss: 1.2921 - categorical_accuracy: 0.5542 - val_loss: 1.3500 - val_categorical_accuracy: 0.5414\n", "\n", "Training for epochs 24 to 26...\n", "Epoch 24/26\n", "73/73 [==============================] - 374s - loss: 1.2931 - categorical_accuracy: 0.5590 - val_loss: 1.3482 - val_categorical_accuracy: 0.5320\n", "Epoch 25/26\n", "73/73 [==============================] - 373s - loss: 
1.2573 - categorical_accuracy: 0.5632 - val_loss: 1.3366 - val_categorical_accuracy: 0.5422\n", "Epoch 26/26\n", "73/73 [==============================] - 374s - loss: 1.2782 - categorical_accuracy: 0.5616 - val_loss: 1.3516 - val_categorical_accuracy: 0.5297\n", "\n", "Training for epochs 27 to 29...\n", "Epoch 27/29\n", "73/73 [==============================] - 372s - loss: 1.2798 - categorical_accuracy: 0.5626 - val_loss: 1.3371 - val_categorical_accuracy: 0.5375\n", "Epoch 28/29\n", "73/73 [==============================] - 372s - loss: 1.2481 - categorical_accuracy: 0.5677 - val_loss: 1.3096 - val_categorical_accuracy: 0.5508\n", "Epoch 29/29\n", "73/73 [==============================] - 373s - loss: 1.2667 - categorical_accuracy: 0.5607 - val_loss: 1.3212 - val_categorical_accuracy: 0.5539\n", "\n", "Training for epochs 30 to 32...\n", "Epoch 30/32\n", "73/73 [==============================] - 373s - loss: 1.2705 - categorical_accuracy: 0.5666 - val_loss: 1.3315 - val_categorical_accuracy: 0.5477\n", "Epoch 31/32\n", "73/73 [==============================] - 372s - loss: 1.2374 - categorical_accuracy: 0.5732 - val_loss: 1.3166 - val_categorical_accuracy: 0.5492\n", "Epoch 32/32\n", "73/73 [==============================] - 371s - loss: 1.2565 - categorical_accuracy: 0.5687 - val_loss: 1.3200 - val_categorical_accuracy: 0.5508\n", "\n", "Training for epochs 33 to 35...\n", "Epoch 33/35\n", "73/73 [==============================] - 373s - loss: 1.2671 - categorical_accuracy: 0.5684 - val_loss: 1.3207 - val_categorical_accuracy: 0.5477\n", "Epoch 34/35\n", "73/73 [==============================] - 374s - loss: 1.2277 - categorical_accuracy: 0.5790 - val_loss: 1.3070 - val_categorical_accuracy: 0.5484\n", "Epoch 35/35\n", "73/73 [==============================] - 372s - loss: 1.2551 - categorical_accuracy: 0.5656 - val_loss: 1.3038 - val_categorical_accuracy: 0.5648\n", "\n", "Training for epochs 36 to 38...\n", "Epoch 36/38\n", "73/73 
[==============================] - 372s - loss: 1.2530 - categorical_accuracy: 0.5734 - val_loss: 1.3325 - val_categorical_accuracy: 0.5469\n", "Epoch 37/38\n", "73/73 [==============================] - 372s - loss: 1.2239 - categorical_accuracy: 0.5804 - val_loss: 1.2933 - val_categorical_accuracy: 0.5531\n", "Epoch 38/38\n", "73/73 [==============================] - 372s - loss: 1.2442 - categorical_accuracy: 0.5725 - val_loss: 1.3099 - val_categorical_accuracy: 0.5469\n", "\n", "Training for epochs 39 to 40...\n", "Epoch 39/40\n", "73/73 [==============================] - 373s - loss: 1.2500 - categorical_accuracy: 0.5753 - val_loss: 1.3183 - val_categorical_accuracy: 0.5539\n", "Epoch 40/40\n", "73/73 [==============================] - 373s - loss: 1.2112 - categorical_accuracy: 0.5853 - val_loss: 1.3110 - val_categorical_accuracy: 0.5539\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 15)...\n", "\n", "Training for epochs 41 to 43...\n", "Epoch 41/43\n", "73/73 [==============================] - 405s - loss: 1.2509 - categorical_accuracy: 0.5681 - val_loss: 1.3721 - val_categorical_accuracy: 0.5305\n", "Epoch 42/43\n", "73/73 [==============================] - 404s - loss: 1.2168 - categorical_accuracy: 0.5835 - val_loss: 1.3221 - val_categorical_accuracy: 0.5500\n", "Epoch 43/43\n", "73/73 [==============================] - 404s - loss: 1.2399 - categorical_accuracy: 0.5751 - val_loss: 1.3009 - val_categorical_accuracy: 0.5555\n", "\n", "Training for epochs 44 to 46...\n", "Epoch 44/46\n", "73/73 [==============================] - 404s - loss: 1.2214 - categorical_accuracy: 0.5826 - val_loss: 1.3545 - val_categorical_accuracy: 0.5328\n", "Epoch 45/46\n", "73/73 [==============================] - 404s - loss: 1.1985 - categorical_accuracy: 0.5902 - val_loss: 1.3164 - val_categorical_accuracy: 0.5570\n", "Epoch 46/46\n", "73/73 [==============================] - 404s - loss: 1.2256 - categorical_accuracy: 0.5823 - 
val_loss: 1.2867 - val_categorical_accuracy: 0.5570\n", "\n", "Training for epochs 47 to 49...\n", "Epoch 47/49\n", "73/73 [==============================] - 405s - loss: 1.2140 - categorical_accuracy: 0.5867 - val_loss: 1.3160 - val_categorical_accuracy: 0.5539\n", "Epoch 48/49\n", "73/73 [==============================] - 404s - loss: 1.1766 - categorical_accuracy: 0.6023 - val_loss: 1.3273 - val_categorical_accuracy: 0.5531\n", "Epoch 49/49\n", "73/73 [==============================] - 404s - loss: 1.2018 - categorical_accuracy: 0.5864 - val_loss: 1.2920 - val_categorical_accuracy: 0.5625\n", "\n", "Training for epochs 50 to 52...\n", "Epoch 50/52\n", "73/73 [==============================] - 404s - loss: 1.2014 - categorical_accuracy: 0.5933 - val_loss: 1.3058 - val_categorical_accuracy: 0.5492\n", "Epoch 51/52\n", "73/73 [==============================] - 405s - loss: 1.1665 - categorical_accuracy: 0.6033 - val_loss: 1.3187 - val_categorical_accuracy: 0.5609\n", "Epoch 52/52\n", "73/73 [==============================] - 406s - loss: 1.1858 - categorical_accuracy: 0.5952 - val_loss: 1.2757 - val_categorical_accuracy: 0.5594\n", "\n", "Training for epochs 53 to 55...\n", "Epoch 53/55\n", "73/73 [==============================] - 405s - loss: 1.1778 - categorical_accuracy: 0.5978 - val_loss: 1.3238 - val_categorical_accuracy: 0.5383\n", "Epoch 54/55\n", "73/73 [==============================] - 404s - loss: 1.1486 - categorical_accuracy: 0.6052 - val_loss: 1.2942 - val_categorical_accuracy: 0.5680\n", "Epoch 55/55\n", "73/73 [==============================] - 405s - loss: 1.1783 - categorical_accuracy: 0.5920 - val_loss: 1.2948 - val_categorical_accuracy: 0.5586\n", "\n", "Training for epochs 56 to 58...\n", "Epoch 56/58\n", "73/73 [==============================] - 406s - loss: 1.1648 - categorical_accuracy: 0.6053 - val_loss: 1.3394 - val_categorical_accuracy: 0.5414\n", "Epoch 57/58\n", "73/73 [==============================] - 405s - loss: 1.1374 - 
categorical_accuracy: 0.6101 - val_loss: 1.3161 - val_categorical_accuracy: 0.5680\n", "Epoch 58/58\n", "73/73 [==============================] - 405s - loss: 1.1628 - categorical_accuracy: 0.5993 - val_loss: 1.3000 - val_categorical_accuracy: 0.5523\n", "\n", "Training for epochs 59 to 60...\n", "Epoch 59/60\n", "73/73 [==============================] - 406s - loss: 1.1631 - categorical_accuracy: 0.6052 - val_loss: 1.3410 - val_categorical_accuracy: 0.5367\n", "Epoch 60/60\n", "73/73 [==============================] - 405s - loss: 1.1273 - categorical_accuracy: 0.6196 - val_loss: 1.2838 - val_categorical_accuracy: 0.5719\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 13)...\n", "\n", "Training for epochs 61 to 63...\n", "Epoch 61/63\n", "73/73 [==============================] - 449s - loss: 1.1517 - categorical_accuracy: 0.6106 - val_loss: 1.3564 - val_categorical_accuracy: 0.5477\n", "Epoch 62/63\n", "73/73 [==============================] - 448s - loss: 1.1230 - categorical_accuracy: 0.6144 - val_loss: 1.3192 - val_categorical_accuracy: 0.5555\n", "Epoch 63/63\n", "73/73 [==============================] - 448s - loss: 1.1604 - categorical_accuracy: 0.6015 - val_loss: 1.2819 - val_categorical_accuracy: 0.5656\n", "\n", "Training for epochs 64 to 66...\n", "Epoch 64/66\n", "73/73 [==============================] - 448s - loss: 1.1371 - categorical_accuracy: 0.6111 - val_loss: 1.3615 - val_categorical_accuracy: 0.5297\n", "Epoch 65/66\n", "73/73 [==============================] - 449s - loss: 1.1073 - categorical_accuracy: 0.6237 - val_loss: 1.2943 - val_categorical_accuracy: 0.5570\n", "Epoch 66/66\n", "73/73 [==============================] - 448s - loss: 1.1427 - categorical_accuracy: 0.6113 - val_loss: 1.2793 - val_categorical_accuracy: 0.5617\n", "\n", "Training for epochs 67 to 69...\n", "Epoch 67/69\n", "73/73 [==============================] - 449s - loss: 1.1253 - categorical_accuracy: 0.6182 - val_loss: 1.3848 
- val_categorical_accuracy: 0.5578\n", "Epoch 68/69\n", "73/73 [==============================] - 448s - loss: 1.0943 - categorical_accuracy: 0.6310 - val_loss: 1.3418 - val_categorical_accuracy: 0.5430\n", "Epoch 69/69\n", "73/73 [==============================] - 447s - loss: 1.1283 - categorical_accuracy: 0.6113 - val_loss: 1.2854 - val_categorical_accuracy: 0.5695\n", "\n", "Training for epochs 70 to 72...\n", "Epoch 70/72\n", "73/73 [==============================] - 448s - loss: 1.1036 - categorical_accuracy: 0.6218 - val_loss: 1.3295 - val_categorical_accuracy: 0.5617\n", "Epoch 71/72\n", "73/73 [==============================] - 448s - loss: 1.0776 - categorical_accuracy: 0.6318 - val_loss: 1.3105 - val_categorical_accuracy: 0.5555\n", "Epoch 72/72\n", "73/73 [==============================] - 448s - loss: 1.1083 - categorical_accuracy: 0.6194 - val_loss: 1.3031 - val_categorical_accuracy: 0.5602\n", "\n", "Training for epochs 73 to 75...\n", "Epoch 73/75\n", "73/73 [==============================] - 448s - loss: 1.0935 - categorical_accuracy: 0.6295 - val_loss: 1.2996 - val_categorical_accuracy: 0.5570\n", "Epoch 74/75\n", "73/73 [==============================] - 448s - loss: 1.0658 - categorical_accuracy: 0.6394 - val_loss: 1.3089 - val_categorical_accuracy: 0.5586\n", "Epoch 75/75\n", "73/73 [==============================] - 448s - loss: 1.0923 - categorical_accuracy: 0.6250 - val_loss: 1.3243 - val_categorical_accuracy: 0.5531\n", "\n", "Training for epochs 76 to 78...\n", "Epoch 76/78\n", "73/73 [==============================] - 448s - loss: 1.0775 - categorical_accuracy: 0.6298 - val_loss: 1.3058 - val_categorical_accuracy: 0.5648\n", "Epoch 77/78\n", "73/73 [==============================] - 448s - loss: 1.0473 - categorical_accuracy: 0.6444 - val_loss: 1.3216 - val_categorical_accuracy: 0.5563\n", "Epoch 78/78\n", "73/73 [==============================] - 448s - loss: 1.0779 - categorical_accuracy: 0.6316 - val_loss: 1.2840 - 
val_categorical_accuracy: 0.5633\n", "\n", "Training for epochs 79 to 80...\n", "Epoch 79/80\n", "73/73 [==============================] - 449s - loss: 1.0695 - categorical_accuracy: 0.6402 - val_loss: 1.2955 - val_categorical_accuracy: 0.5555\n", "Epoch 80/80\n", "73/73 [==============================] - 448s - loss: 1.0364 - categorical_accuracy: 0.6460 - val_loss: 1.3065 - val_categorical_accuracy: 0.5547\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 11)...\n", "\n", "Training for epochs 81 to 83...\n", "Epoch 81/83\n", "73/73 [==============================] - 570s - loss: 1.0516 - categorical_accuracy: 0.6407 - val_loss: 1.3650 - val_categorical_accuracy: 0.5398\n", "Epoch 82/83\n", "73/73 [==============================] - 566s - loss: 1.0406 - categorical_accuracy: 0.6458 - val_loss: 1.2925 - val_categorical_accuracy: 0.5719\n", "Epoch 83/83\n", "73/73 [==============================] - 566s - loss: 1.0861 - categorical_accuracy: 0.6260 - val_loss: 1.2695 - val_categorical_accuracy: 0.5656\n", "\n", "Training for epochs 84 to 86...\n", "Epoch 84/86\n", "73/73 [==============================] - 568s - loss: 1.0483 - categorical_accuracy: 0.6433 - val_loss: 1.2992 - val_categorical_accuracy: 0.5617\n", "Epoch 85/86\n", "73/73 [==============================] - 568s - loss: 1.0158 - categorical_accuracy: 0.6544 - val_loss: 1.2911 - val_categorical_accuracy: 0.5680\n", "Epoch 86/86\n", "73/73 [==============================] - 566s - loss: 1.0663 - categorical_accuracy: 0.6351 - val_loss: 1.3150 - val_categorical_accuracy: 0.5641\n", "\n", "Training for epochs 87 to 89...\n", "Epoch 87/89\n", "73/73 [==============================] - 567s - loss: 1.0316 - categorical_accuracy: 0.6506 - val_loss: 1.3436 - val_categorical_accuracy: 0.5586\n", "Epoch 88/89\n", "73/73 [==============================] - 567s - loss: 1.0018 - categorical_accuracy: 0.6589 - val_loss: 1.2938 - val_categorical_accuracy: 0.5625\n", "Epoch 
89/89\n", "73/73 [==============================] - 567s - loss: 1.0414 - categorical_accuracy: 0.6450 - val_loss: 1.2752 - val_categorical_accuracy: 0.5687\n", "\n", "Training for epochs 90 to 92...\n", "Epoch 90/92\n", "73/73 [==============================] - 567s - loss: 1.0092 - categorical_accuracy: 0.6561 - val_loss: 1.5115 - val_categorical_accuracy: 0.5227\n", "Epoch 91/92\n", "73/73 [==============================] - 566s - loss: 0.9841 - categorical_accuracy: 0.6606 - val_loss: 1.3161 - val_categorical_accuracy: 0.5594\n", "Epoch 92/92\n", "73/73 [==============================] - 566s - loss: 1.0207 - categorical_accuracy: 0.6509 - val_loss: 1.3627 - val_categorical_accuracy: 0.5453\n", "\n", "Training for epochs 93 to 95...\n", "Epoch 93/95\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "73/73 [==============================] - 567s - loss: 0.9914 - categorical_accuracy: 0.6663 - val_loss: 1.3479 - val_categorical_accuracy: 0.5445\n", "Epoch 94/95\n", "73/73 [==============================] - 566s - loss: 0.9632 - categorical_accuracy: 0.6736 - val_loss: 1.3339 - val_categorical_accuracy: 0.5508\n", "Epoch 95/95\n", "73/73 [==============================] - 567s - loss: 1.0041 - categorical_accuracy: 0.6550 - val_loss: 1.3064 - val_categorical_accuracy: 0.5687\n", "\n", "Training for epochs 96 to 98...\n", "Epoch 96/98\n", "73/73 [==============================] - 567s - loss: 0.9732 - categorical_accuracy: 0.6729 - val_loss: 1.3797 - val_categorical_accuracy: 0.5477\n", "Epoch 97/98\n", "73/73 [==============================] - 566s - loss: 0.9456 - categorical_accuracy: 0.6771 - val_loss: 1.3242 - val_categorical_accuracy: 0.5500\n", "Epoch 98/98\n", "73/73 [==============================] - 567s - loss: 0.9834 - categorical_accuracy: 0.6614 - val_loss: 1.4140 - val_categorical_accuracy: 0.5258\n", "\n", "Training for epochs 99 to 100...\n", "Epoch 99/100\n", "73/73 [==============================] - 566s - loss: 0.9537 - 
categorical_accuracy: 0.6794 - val_loss: 1.3526 - val_categorical_accuracy: 0.5578\n", "Epoch 100/100\n", "73/73 [==============================] - 566s - loss: 0.9276 - categorical_accuracy: 0.6853 - val_loss: 1.3440 - val_categorical_accuracy: 0.5391\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 8)...\n", "\n", "Training for epochs 101 to 103...\n", "Epoch 101/103\n", "73/73 [==============================] - 703s - loss: 0.9346 - categorical_accuracy: 0.6841 - val_loss: 1.3771 - val_categorical_accuracy: 0.5570\n", "Epoch 102/103\n", "73/73 [==============================] - 697s - loss: 0.9097 - categorical_accuracy: 0.6884 - val_loss: 1.3348 - val_categorical_accuracy: 0.5500\n", "Epoch 103/103\n", "73/73 [==============================] - 698s - loss: 0.9838 - categorical_accuracy: 0.6607 - val_loss: 1.3459 - val_categorical_accuracy: 0.5539\n", "\n", "Training for epochs 104 to 106...\n", "Epoch 104/106\n", "73/73 [==============================] - 697s - loss: 0.9121 - categorical_accuracy: 0.6942 - val_loss: 1.3570 - val_categorical_accuracy: 0.5484\n", "Epoch 105/106\n", "73/73 [==============================] - 697s - loss: 0.8959 - categorical_accuracy: 0.6957 - val_loss: 1.3903 - val_categorical_accuracy: 0.5320\n", "Epoch 106/106\n", "73/73 [==============================] - 697s - loss: 0.9539 - categorical_accuracy: 0.6734 - val_loss: 1.3440 - val_categorical_accuracy: 0.5539\n", "\n", "Training for epochs 107 to 109...\n", "Epoch 107/109\n", "73/73 [==============================] - 698s - loss: 0.8957 - categorical_accuracy: 0.6924 - val_loss: 1.4126 - val_categorical_accuracy: 0.5406\n", "Epoch 108/109\n", "73/73 [==============================] - 696s - loss: 0.8665 - categorical_accuracy: 0.7045 - val_loss: 1.3795 - val_categorical_accuracy: 0.5445\n", "Epoch 109/109\n", "73/73 [==============================] - 697s - loss: 0.9263 - categorical_accuracy: 0.6811 - val_loss: 1.5197 - 
val_categorical_accuracy: 0.5125\n", "\n", "Training for epochs 110 to 112...\n", "Epoch 110/112\n", "73/73 [==============================] - 697s - loss: 0.8774 - categorical_accuracy: 0.7048 - val_loss: 1.3917 - val_categorical_accuracy: 0.5500\n", "Epoch 111/112\n", "73/73 [==============================] - 697s - loss: 0.8515 - categorical_accuracy: 0.7120 - val_loss: 1.4399 - val_categorical_accuracy: 0.5289\n", "Epoch 112/112\n", "73/73 [==============================] - 697s - loss: 0.9146 - categorical_accuracy: 0.6856 - val_loss: 1.4821 - val_categorical_accuracy: 0.5211\n", "\n", "Training for epochs 113 to 115...\n", "Epoch 113/115\n", "73/73 [==============================] - 697s - loss: 0.8632 - categorical_accuracy: 0.7083 - val_loss: 1.4567 - val_categorical_accuracy: 0.5492\n", "Epoch 114/115\n", "73/73 [==============================] - 697s - loss: 0.8336 - categorical_accuracy: 0.7232 - val_loss: 1.3690 - val_categorical_accuracy: 0.5320\n", "Epoch 115/115\n", "73/73 [==============================] - 696s - loss: 0.8807 - categorical_accuracy: 0.7032 - val_loss: 1.7046 - val_categorical_accuracy: 0.4945\n", "\n", "Training for epochs 116 to 118...\n", "Epoch 116/118\n", "73/73 [==============================] - 697s - loss: 0.8417 - categorical_accuracy: 0.7163 - val_loss: 1.4763 - val_categorical_accuracy: 0.5336\n", "Epoch 117/118\n", "73/73 [==============================] - 697s - loss: 0.8130 - categorical_accuracy: 0.7266 - val_loss: 1.4147 - val_categorical_accuracy: 0.5258\n", "Epoch 118/118\n", "73/73 [==============================] - 696s - loss: 0.8606 - categorical_accuracy: 0.7040 - val_loss: 1.5192 - val_categorical_accuracy: 0.5367\n", "\n", "Training for epochs 119 to 120...\n", "Epoch 119/120\n", "73/73 [==============================] - 698s - loss: 0.8170 - categorical_accuracy: 0.7232 - val_loss: 1.3931 - val_categorical_accuracy: 0.5453\n", "Epoch 120/120\n", "73/73 [==============================] - 697s - loss: 0.7938 - 
categorical_accuracy: 0.7382 - val_loss: 1.5565 - val_categorical_accuracy: 0.5086\n", "\n", "15:52:12 for VGG16 to yield 73.8% training accuracy and 50.9% validation accuracy in 20 \n", "epochs (x6 training phases).\n", "\n", "VGG16 run complete at Sunday, 2017 October 22, 5:34 AM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# VGG16:\n", "run_vgg16()\n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Section-Specific Setup (33% Augmentation)" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Creating generators with batch size 128...\n", "Loading mean and standard deviation for the training set from file 'saved_objects/fma_extended_dwt_stats.npz'.\n", "\n", "Using up to 33.0% horizontal shift to augment training data.\n", "Found 37316 images belonging to 8 classes.\n", "Found 4350 images belonging to 8 classes.\n", "Found 4651 images belonging to 8 classes.\n" ] } ], "source": [ "param_dict[\"augmentation\"] = 0.33 \n", "param_dict[\"which_size\"] = \"extended\" \n", "\n", "real_dataset_size = 37316 # There are 37316 training images in the data set\n", "epoch_ratio = 1/4 # This is to keep the number of steps per epoch under control, especially\n", " # for large networks like VGG16/VGG19. 
It necessitates multiplying the \n", " # number of epochs by the inverse to get an accurate number of \"real\"\n", " # epochs.\n", "param_dict[\"dataset_size\"] = math.ceil(epoch_ratio*real_dataset_size) # Round up\n", "\n", "# Reconfigure the GPU-available/CPU-only specific options:\n", "if using_gpu:\n", " # Large training runs are OK to process on the GPU:\n", " param_dict[\"spu\"] = \"gpu\"\n", " param_dict[\"pass_epochs\"] = math.ceil(5/epoch_ratio) # Run for more \"epochs\" to make up\n", " # for dividing epochs into parts\n", " # Adjust so each epoch sees every image about once:\n", " param_dict[\"batch_size\"] = 128 # 192 is too high for even one epoch of VGG19 on GCE.\n", " # Note: going higher than 64 will sometimes lead to \n", " # memory issues on VGG16, b/c garbage collection isn't \n", " # instantaneous and VGG16 has a huge number of \n", " # parameters, but we want this as large as possible. \n", " # This problem is ameliorated somewhat by specifying\n", " # a small epoch_batch_size in the call to \n", " # run_pretrained_model(), which will checkpoint the \n", " # training every epoch_batch_size epochs to clean up\n", " # memory fragmentation (see also http://bit.ly/2hDHJay )\n", " param_dict[\"steps_per_epoch\"] = math.ceil(param_dict[\"dataset_size\"]/\n", " param_dict[\"batch_size\"]) \n", " param_dict[\"validation_steps\"] = math.ceil(param_dict[\"dataset_size\"]/\n", " (8*param_dict[\"batch_size\"])) \n", " \n", "# Update the path for images:\n", "param_dict[\"img_dir\"] = os.path.join(\"data\",\n", " os.path.join(\"fma_images\",\n", " os.path.join(\"byclass\", \n", " os.path.join(param_dict[\"which_size\"], \n", " param_dict[\"which_wavelet\"])\n", " )\n", " )\n", " )\n", " \n", "# Update ETAs:\n", "calc_etas(param_dict)\n", "\n", "# Reconfigure the generators based on the specified parameters:\n", "generators = {}\n", "(generators[\"train\"], \n", " generators[\"val\"], \n", " generators[\"test\"]) = cku.set_up_generators(param_dict)" ] }, {
"cell_type": "markdown", "metadata": {}, "source": [ "### Model Reruns" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using hidden size 128 and optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "Fully connected network run begun at Thursday, 2017 October 19, 12:47 PM.\n", "\t[120 epochs on extended FMA on GPU takes\n", "\tunknown (no similar runs found).]\n", "\n", "\n", "Training for epochs 1 to 10...\n", "Epoch 1/10\n", "73/73 [==============================] - 135s - loss: 1.5870 - categorical_accuracy: 0.4331 - val_loss: 1.5440 - val_categorical_accuracy: 0.4945\n", "Epoch 2/10\n", "73/73 [==============================] - 129s - loss: 1.4728 - categorical_accuracy: 0.4890 - val_loss: 1.4833 - val_categorical_accuracy: 0.4898\n", "Epoch 3/10\n", "73/73 [==============================] - 129s - loss: 1.4476 - categorical_accuracy: 0.4892 - val_loss: 1.4524 - val_categorical_accuracy: 0.4969\n", "Epoch 4/10\n", "73/73 [==============================] - 128s - loss: 1.4422 - categorical_accuracy: 0.4966 - val_loss: 1.4277 - val_categorical_accuracy: 0.4938\n", "Epoch 5/10\n", "73/73 [==============================] - 128s - loss: 1.4342 - categorical_accuracy: 0.4976 - val_loss: 1.4255 - val_categorical_accuracy: 0.5078\n", "Epoch 6/10\n", "73/73 [==============================] - 131s - loss: 1.4230 - categorical_accuracy: 0.5003 - val_loss: 1.4172 - val_categorical_accuracy: 0.5047\n", "Epoch 7/10\n", "73/73 [==============================] - 131s - loss: 1.4419 - categorical_accuracy: 0.5015 - val_loss: 1.4093 - val_categorical_accuracy: 0.5109\n", "Epoch 8/10\n", "73/73 [==============================] - 129s - loss: 1.4090 - categorical_accuracy: 0.5121 - val_loss: 1.4022 - val_categorical_accuracy: 0.5156\n", "Epoch 9/10\n", "73/73 [==============================] - 130s - loss: 1.4179 - 
categorical_accuracy: 0.5071 - val_loss: 1.4063 - val_categorical_accuracy: 0.5133\n", "Epoch 10/10\n", "73/73 [==============================] - 130s - loss: 1.4234 - categorical_accuracy: 0.5073 - val_loss: 1.4058 - val_categorical_accuracy: 0.5094\n", "\n", "Training for epochs 11 to 20...\n", "Epoch 11/20\n", "73/73 [==============================] - 136s - loss: 1.3675 - categorical_accuracy: 0.5300 - val_loss: 1.4029 - val_categorical_accuracy: 0.5227\n", "Epoch 12/20\n", "73/73 [==============================] - 129s - loss: 1.3350 - categorical_accuracy: 0.5451 - val_loss: 1.4046 - val_categorical_accuracy: 0.5164\n", "Epoch 13/20\n", "73/73 [==============================] - 128s - loss: 1.3446 - categorical_accuracy: 0.5325 - val_loss: 1.4057 - val_categorical_accuracy: 0.5164\n", "Epoch 14/20\n", "73/73 [==============================] - 130s - loss: 1.3619 - categorical_accuracy: 0.5293 - val_loss: 1.3999 - val_categorical_accuracy: 0.5172\n", "Epoch 15/20\n", "73/73 [==============================] - 130s - loss: 1.3631 - categorical_accuracy: 0.5231 - val_loss: 1.3974 - val_categorical_accuracy: 0.5211\n", "Epoch 16/20\n", "73/73 [==============================] - 130s - loss: 1.3685 - categorical_accuracy: 0.5180 - val_loss: 1.3949 - val_categorical_accuracy: 0.5203\n", "Epoch 17/20\n", "73/73 [==============================] - 129s - loss: 1.3925 - categorical_accuracy: 0.5148 - val_loss: 1.3893 - val_categorical_accuracy: 0.5203\n", "Epoch 18/20\n", "73/73 [==============================] - 128s - loss: 1.3652 - categorical_accuracy: 0.5305 - val_loss: 1.3885 - val_categorical_accuracy: 0.5281\n", "Epoch 19/20\n", "73/73 [==============================] - 130s - loss: 1.3802 - categorical_accuracy: 0.5186 - val_loss: 1.3859 - val_categorical_accuracy: 0.5281\n", "Epoch 20/20\n", "73/73 [==============================] - 130s - loss: 1.3893 - categorical_accuracy: 0.5199 - val_loss: 1.3884 - val_categorical_accuracy: 0.5289\n", "\n", "Training for 
epochs 21 to 30...\n", "Epoch 21/30\n", "73/73 [==============================] - 137s - loss: 1.3241 - categorical_accuracy: 0.5424 - val_loss: 1.3845 - val_categorical_accuracy: 0.5312\n", "Epoch 22/30\n", "73/73 [==============================] - 129s - loss: 1.2962 - categorical_accuracy: 0.5564 - val_loss: 1.3922 - val_categorical_accuracy: 0.5227\n", "Epoch 23/30\n", "73/73 [==============================] - 132s - loss: 1.3156 - categorical_accuracy: 0.5425 - val_loss: 1.3918 - val_categorical_accuracy: 0.5188\n", "Epoch 24/30\n", "73/73 [==============================] - 130s - loss: 1.3397 - categorical_accuracy: 0.5382 - val_loss: 1.3929 - val_categorical_accuracy: 0.5195\n", "Epoch 25/30\n", "73/73 [==============================] - 128s - loss: 1.3369 - categorical_accuracy: 0.5355 - val_loss: 1.3912 - val_categorical_accuracy: 0.5188\n", "Epoch 26/30\n", "73/73 [==============================] - 128s - loss: 1.3443 - categorical_accuracy: 0.5310 - val_loss: 1.3867 - val_categorical_accuracy: 0.5242\n", "Epoch 27/30\n", "73/73 [==============================] - 130s - loss: 1.3732 - categorical_accuracy: 0.5285 - val_loss: 1.3817 - val_categorical_accuracy: 0.5273\n", "Epoch 28/30\n", "73/73 [==============================] - 130s - loss: 1.3406 - categorical_accuracy: 0.5413 - val_loss: 1.3830 - val_categorical_accuracy: 0.5281\n", "Epoch 29/30\n", "73/73 [==============================] - 130s - loss: 1.3485 - categorical_accuracy: 0.5333 - val_loss: 1.3812 - val_categorical_accuracy: 0.5266\n", "Epoch 30/30\n", "73/73 [==============================] - 128s - loss: 1.3577 - categorical_accuracy: 0.5292 - val_loss: 1.3818 - val_categorical_accuracy: 0.5281\n", "\n", "Training for epochs 31 to 40...\n", "Epoch 31/40\n", "73/73 [==============================] - 135s - loss: 1.3037 - categorical_accuracy: 0.5505 - val_loss: 1.3815 - val_categorical_accuracy: 0.5258\n", "Epoch 32/40\n", "73/73 [==============================] - 129s - loss: 1.2736 - 
categorical_accuracy: 0.5618 - val_loss: 1.3881 - val_categorical_accuracy: 0.5219\n", "Epoch 33/40\n", "73/73 [==============================] - 132s - loss: 1.2887 - categorical_accuracy: 0.5518 - val_loss: 1.3891 - val_categorical_accuracy: 0.5219\n", "Epoch 34/40\n", "73/73 [==============================] - 131s - loss: 1.3179 - categorical_accuracy: 0.5403 - val_loss: 1.3897 - val_categorical_accuracy: 0.5227\n", "Epoch 35/40\n", "73/73 [==============================] - 131s - loss: 1.3248 - categorical_accuracy: 0.5432 - val_loss: 1.3909 - val_categorical_accuracy: 0.5258\n", "Epoch 36/40\n", "73/73 [==============================] - 131s - loss: 1.3231 - categorical_accuracy: 0.5385 - val_loss: 1.3867 - val_categorical_accuracy: 0.5273\n", "Epoch 37/40\n", "73/73 [==============================] - 131s - loss: 1.3528 - categorical_accuracy: 0.5331 - val_loss: 1.3819 - val_categorical_accuracy: 0.5297\n", "Epoch 38/40\n", "73/73 [==============================] - 128s - loss: 1.3319 - categorical_accuracy: 0.5468 - val_loss: 1.3808 - val_categorical_accuracy: 0.5266\n", "Epoch 39/40\n", "73/73 [==============================] - 131s - loss: 1.3370 - categorical_accuracy: 0.5369 - val_loss: 1.3816 - val_categorical_accuracy: 0.5250\n", "Epoch 40/40\n", "73/73 [==============================] - 129s - loss: 1.3506 - categorical_accuracy: 0.5364 - val_loss: 1.3805 - val_categorical_accuracy: 0.5281\n", "\n", "Training for epochs 41 to 50...\n", "Epoch 41/50\n", "73/73 [==============================] - 137s - loss: 1.2802 - categorical_accuracy: 0.5595 - val_loss: 1.3787 - val_categorical_accuracy: 0.5258\n", "Epoch 42/50\n", "73/73 [==============================] - 130s - loss: 1.2572 - categorical_accuracy: 0.5707 - val_loss: 1.3843 - val_categorical_accuracy: 0.5219\n", "Epoch 43/50\n", "73/73 [==============================] - 130s - loss: 1.2790 - categorical_accuracy: 0.5595 - val_loss: 1.3873 - val_categorical_accuracy: 0.5211\n", "Epoch 44/50\n", 
"73/73 [==============================] - 130s - loss: 1.3047 - categorical_accuracy: 0.5438 - val_loss: 1.3867 - val_categorical_accuracy: 0.5281\n", "Epoch 45/50\n", "73/73 [==============================] - 131s - loss: 1.3036 - categorical_accuracy: 0.5459 - val_loss: 1.3860 - val_categorical_accuracy: 0.5227\n", "Epoch 46/50\n", "73/73 [==============================] - 130s - loss: 1.3070 - categorical_accuracy: 0.5505 - val_loss: 1.3841 - val_categorical_accuracy: 0.5273\n", "Epoch 47/50\n", "73/73 [==============================] - 132s - loss: 1.3431 - categorical_accuracy: 0.5385 - val_loss: 1.3799 - val_categorical_accuracy: 0.5297\n", "Epoch 48/50\n", "73/73 [==============================] - 127s - loss: 1.3165 - categorical_accuracy: 0.5537 - val_loss: 1.3777 - val_categorical_accuracy: 0.5273\n", "Epoch 49/50\n", "73/73 [==============================] - 132s - loss: 1.3344 - categorical_accuracy: 0.5433 - val_loss: 1.3768 - val_categorical_accuracy: 0.5312\n", "Epoch 50/50\n", "73/73 [==============================] - 128s - loss: 1.3333 - categorical_accuracy: 0.5456 - val_loss: 1.3783 - val_categorical_accuracy: 0.5266\n", "\n", "Training for epochs 51 to 60...\n", "Epoch 51/60\n", "73/73 [==============================] - 136s - loss: 1.2718 - categorical_accuracy: 0.5644 - val_loss: 1.3784 - val_categorical_accuracy: 0.5289\n", "Epoch 52/60\n", "73/73 [==============================] - 133s - loss: 1.2418 - categorical_accuracy: 0.5733 - val_loss: 1.3827 - val_categorical_accuracy: 0.5258\n", "Epoch 53/60\n", "73/73 [==============================] - 131s - loss: 1.2649 - categorical_accuracy: 0.5645 - val_loss: 1.3839 - val_categorical_accuracy: 0.5242\n", "Epoch 54/60\n", "73/73 [==============================] - 128s - loss: 1.2883 - categorical_accuracy: 0.5572 - val_loss: 1.3838 - val_categorical_accuracy: 0.5297\n", "Epoch 55/60\n", "73/73 [==============================] - 129s - loss: 1.2916 - categorical_accuracy: 0.5504 - val_loss: 
1.3839 - val_categorical_accuracy: 0.5258\n", "Epoch 56/60\n", "73/73 [==============================] - 130s - loss: 1.2980 - categorical_accuracy: 0.5515 - val_loss: 1.3843 - val_categorical_accuracy: 0.5289\n", "Epoch 57/60\n", "73/73 [==============================] - 130s - loss: 1.3262 - categorical_accuracy: 0.5431 - val_loss: 1.3798 - val_categorical_accuracy: 0.5312\n", "Epoch 58/60\n", "73/73 [==============================] - 128s - loss: 1.3033 - categorical_accuracy: 0.5483 - val_loss: 1.3786 - val_categorical_accuracy: 0.5328\n", "Epoch 59/60\n", "73/73 [==============================] - 130s - loss: 1.3238 - categorical_accuracy: 0.5435 - val_loss: 1.3779 - val_categorical_accuracy: 0.5305\n", "Epoch 60/60\n", "73/73 [==============================] - 131s - loss: 1.3258 - categorical_accuracy: 0.5431 - val_loss: 1.3778 - val_categorical_accuracy: 0.5344\n", "\n", "Training for epochs 61 to 70...\n", "Epoch 61/70\n", "73/73 [==============================] - 137s - loss: 1.2542 - categorical_accuracy: 0.5647 - val_loss: 1.3788 - val_categorical_accuracy: 0.5328\n", "Epoch 62/70\n", "73/73 [==============================] - 131s - loss: 1.2281 - categorical_accuracy: 0.5817 - val_loss: 1.3834 - val_categorical_accuracy: 0.5258\n", "Epoch 63/70\n", "73/73 [==============================] - 130s - loss: 1.2537 - categorical_accuracy: 0.5674 - val_loss: 1.3842 - val_categorical_accuracy: 0.5312\n", "Epoch 64/70\n", "73/73 [==============================] - 130s - loss: 1.2788 - categorical_accuracy: 0.5567 - val_loss: 1.3857 - val_categorical_accuracy: 0.5367\n", "Epoch 65/70\n", "73/73 [==============================] - 130s - loss: 1.2828 - categorical_accuracy: 0.5595 - val_loss: 1.3879 - val_categorical_accuracy: 0.5289\n", "Epoch 66/70\n", "73/73 [==============================] - 132s - loss: 1.2958 - categorical_accuracy: 0.5522 - val_loss: 1.3838 - val_categorical_accuracy: 0.5328\n", "Epoch 67/70\n", "73/73 [==============================] - 
129s - loss: 1.3196 - categorical_accuracy: 0.5477 - val_loss: 1.3795 - val_categorical_accuracy: 0.5391\n", "Epoch 68/70\n", "73/73 [==============================] - 129s - loss: 1.3008 - categorical_accuracy: 0.5562 - val_loss: 1.3810 - val_categorical_accuracy: 0.5336\n", "Epoch 69/70\n", "73/73 [==============================] - 130s - loss: 1.3147 - categorical_accuracy: 0.5451 - val_loss: 1.3798 - val_categorical_accuracy: 0.5359\n", "Epoch 70/70\n", "73/73 [==============================] - 130s - loss: 1.3204 - categorical_accuracy: 0.5463 - val_loss: 1.3797 - val_categorical_accuracy: 0.5352\n", "\n", "Training for epochs 71 to 80...\n", "Epoch 71/80\n", "73/73 [==============================] - 134s - loss: 1.2492 - categorical_accuracy: 0.5687 - val_loss: 1.3816 - val_categorical_accuracy: 0.5289\n", "Epoch 72/80\n", "73/73 [==============================] - 131s - loss: 1.2138 - categorical_accuracy: 0.5854 - val_loss: 1.3856 - val_categorical_accuracy: 0.5281\n", "Epoch 73/80\n", "73/73 [==============================] - 126s - loss: 1.2433 - categorical_accuracy: 0.5679 - val_loss: 1.3861 - val_categorical_accuracy: 0.5305\n", "Epoch 74/80\n", "73/73 [==============================] - 126s - loss: 1.2694 - categorical_accuracy: 0.5630 - val_loss: 1.3859 - val_categorical_accuracy: 0.5297\n", "Epoch 75/80\n", "73/73 [==============================] - 130s - loss: 1.2743 - categorical_accuracy: 0.5588 - val_loss: 1.3878 - val_categorical_accuracy: 0.5289\n", "Epoch 76/80\n", "73/73 [==============================] - 129s - loss: 1.2837 - categorical_accuracy: 0.5561 - val_loss: 1.3832 - val_categorical_accuracy: 0.5344\n", "Epoch 77/80\n", "73/73 [==============================] - 126s - loss: 1.3100 - categorical_accuracy: 0.5473 - val_loss: 1.3824 - val_categorical_accuracy: 0.5375\n", "Epoch 78/80\n", "73/73 [==============================] - 127s - loss: 1.2881 - categorical_accuracy: 0.5597 - val_loss: 1.3830 - val_categorical_accuracy: 0.5375\n", 
"Epoch 79/80\n", "73/73 [==============================] - 132s - loss: 1.3120 - categorical_accuracy: 0.5457 - val_loss: 1.3839 - val_categorical_accuracy: 0.5352\n", "Epoch 80/80\n", "73/73 [==============================] - 128s - loss: 1.3140 - categorical_accuracy: 0.5436 - val_loss: 1.3821 - val_categorical_accuracy: 0.5344\n", "\n", "Training for epochs 81 to 90...\n", "Epoch 81/90\n", "73/73 [==============================] - 133s - loss: 1.2301 - categorical_accuracy: 0.5763 - val_loss: 1.3832 - val_categorical_accuracy: 0.5328\n", "Epoch 82/90\n", "73/73 [==============================] - 128s - loss: 1.2093 - categorical_accuracy: 0.5857 - val_loss: 1.3875 - val_categorical_accuracy: 0.5312\n", "Epoch 83/90\n", "73/73 [==============================] - 128s - loss: 1.2326 - categorical_accuracy: 0.5722 - val_loss: 1.3875 - val_categorical_accuracy: 0.5352\n", "Epoch 84/90\n", "73/73 [==============================] - 129s - loss: 1.2612 - categorical_accuracy: 0.5665 - val_loss: 1.3894 - val_categorical_accuracy: 0.5297\n", "Epoch 85/90\n", "73/73 [==============================] - 129s - loss: 1.2688 - categorical_accuracy: 0.5573 - val_loss: 1.3892 - val_categorical_accuracy: 0.5281\n", "Epoch 86/90\n", "73/73 [==============================] - 128s - loss: 1.2736 - categorical_accuracy: 0.5631 - val_loss: 1.3863 - val_categorical_accuracy: 0.5352\n", "Epoch 87/90\n", "73/73 [==============================] - 128s - loss: 1.3060 - categorical_accuracy: 0.5491 - val_loss: 1.3854 - val_categorical_accuracy: 0.5359\n", "Epoch 88/90\n", "73/73 [==============================] - 131s - loss: 1.2843 - categorical_accuracy: 0.5634 - val_loss: 1.3861 - val_categorical_accuracy: 0.5320\n", "Epoch 89/90\n", "73/73 [==============================] - 131s - loss: 1.3041 - categorical_accuracy: 0.5515 - val_loss: 1.3860 - val_categorical_accuracy: 0.5344\n", "Epoch 90/90\n", "73/73 [==============================] - 130s - loss: 1.3027 - categorical_accuracy: 
0.5554 - val_loss: 1.3846 - val_categorical_accuracy: 0.5344\n", "\n", "Training for epochs 91 to 100...\n", "Epoch 91/100\n", "73/73 [==============================] - 136s - loss: 1.2318 - categorical_accuracy: 0.5703 - val_loss: 1.3851 - val_categorical_accuracy: 0.5352\n", "Epoch 92/100\n", "73/73 [==============================] - 130s - loss: 1.1964 - categorical_accuracy: 0.5926 - val_loss: 1.3883 - val_categorical_accuracy: 0.5359\n", "Epoch 93/100\n", "73/73 [==============================] - 131s - loss: 1.2243 - categorical_accuracy: 0.5774 - val_loss: 1.3890 - val_categorical_accuracy: 0.5336\n", "Epoch 94/100\n", "73/73 [==============================] - 127s - loss: 1.2509 - categorical_accuracy: 0.5657 - val_loss: 1.3913 - val_categorical_accuracy: 0.5320\n", "Epoch 95/100\n", "73/73 [==============================] - 129s - loss: 1.2693 - categorical_accuracy: 0.5616 - val_loss: 1.3920 - val_categorical_accuracy: 0.5328\n", "Epoch 96/100\n", "73/73 [==============================] - 130s - loss: 1.2674 - categorical_accuracy: 0.5623 - val_loss: 1.3900 - val_categorical_accuracy: 0.5367\n", "Epoch 97/100\n", "73/73 [==============================] - 129s - loss: 1.3037 - categorical_accuracy: 0.5492 - val_loss: 1.3880 - val_categorical_accuracy: 0.5352\n", "Epoch 98/100\n", "73/73 [==============================] - 128s - loss: 1.2718 - categorical_accuracy: 0.5694 - val_loss: 1.3877 - val_categorical_accuracy: 0.5352\n", "Epoch 99/100\n", "73/73 [==============================] - 129s - loss: 1.2906 - categorical_accuracy: 0.5528 - val_loss: 1.3866 - val_categorical_accuracy: 0.5375\n", "Epoch 100/100\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "73/73 [==============================] - 127s - loss: 1.3044 - categorical_accuracy: 0.5515 - val_loss: 1.3872 - val_categorical_accuracy: 0.5383\n", "\n", "Training for epochs 101 to 110...\n", "Epoch 101/110\n", "73/73 [==============================] - 134s - loss: 1.2217 - 
categorical_accuracy: 0.5771 - val_loss: 1.3857 - val_categorical_accuracy: 0.5406\n", "Epoch 102/110\n", "73/73 [==============================] - 130s - loss: 1.1861 - categorical_accuracy: 0.5947 - val_loss: 1.3920 - val_categorical_accuracy: 0.5320\n", "Epoch 103/110\n", "73/73 [==============================] - 130s - loss: 1.2125 - categorical_accuracy: 0.5761 - val_loss: 1.3931 - val_categorical_accuracy: 0.5344\n", "Epoch 104/110\n", "73/73 [==============================] - 129s - loss: 1.2451 - categorical_accuracy: 0.5723 - val_loss: 1.3943 - val_categorical_accuracy: 0.5312\n", "Epoch 105/110\n", "73/73 [==============================] - 129s - loss: 1.2536 - categorical_accuracy: 0.5628 - val_loss: 1.3943 - val_categorical_accuracy: 0.5320\n", "Epoch 106/110\n", "73/73 [==============================] - 132s - loss: 1.2586 - categorical_accuracy: 0.5641 - val_loss: 1.3932 - val_categorical_accuracy: 0.5352\n", "Epoch 107/110\n", "73/73 [==============================] - 131s - loss: 1.2961 - categorical_accuracy: 0.5592 - val_loss: 1.3903 - val_categorical_accuracy: 0.5320\n", "Epoch 108/110\n", "73/73 [==============================] - 131s - loss: 1.2637 - categorical_accuracy: 0.5672 - val_loss: 1.3909 - val_categorical_accuracy: 0.5336\n", "Epoch 109/110\n", "73/73 [==============================] - 128s - loss: 1.2820 - categorical_accuracy: 0.5603 - val_loss: 1.3891 - val_categorical_accuracy: 0.5375\n", "Epoch 110/110\n", "73/73 [==============================] - 131s - loss: 1.2966 - categorical_accuracy: 0.5518 - val_loss: 1.3882 - val_categorical_accuracy: 0.5406\n", "\n", "Training for epochs 111 to 120...\n", "Epoch 111/120\n", "73/73 [==============================] - 134s - loss: 1.2185 - categorical_accuracy: 0.5765 - val_loss: 1.3882 - val_categorical_accuracy: 0.5336\n", "Epoch 112/120\n", "73/73 [==============================] - 129s - loss: 1.1806 - categorical_accuracy: 0.5948 - val_loss: 1.3925 - val_categorical_accuracy: 
0.5320\n", "Epoch 113/120\n", "73/73 [==============================] - 131s - loss: 1.2105 - categorical_accuracy: 0.5834 - val_loss: 1.3936 - val_categorical_accuracy: 0.5367\n", "Epoch 114/120\n", "73/73 [==============================] - 129s - loss: 1.2405 - categorical_accuracy: 0.5669 - val_loss: 1.3959 - val_categorical_accuracy: 0.5289\n", "Epoch 115/120\n", "73/73 [==============================] - 131s - loss: 1.2556 - categorical_accuracy: 0.5693 - val_loss: 1.3954 - val_categorical_accuracy: 0.5336\n", "Epoch 116/120\n", "73/73 [==============================] - 130s - loss: 1.2591 - categorical_accuracy: 0.5639 - val_loss: 1.3944 - val_categorical_accuracy: 0.5336\n", "Epoch 117/120\n", "73/73 [==============================] - 133s - loss: 1.2865 - categorical_accuracy: 0.5585 - val_loss: 1.3907 - val_categorical_accuracy: 0.5328\n", "Epoch 118/120\n", "73/73 [==============================] - 129s - loss: 1.2585 - categorical_accuracy: 0.5666 - val_loss: 1.3903 - val_categorical_accuracy: 0.5359\n", "Epoch 119/120\n", "73/73 [==============================] - 129s - loss: 1.2809 - categorical_accuracy: 0.5571 - val_loss: 1.3910 - val_categorical_accuracy: 0.5367\n", "Epoch 120/120\n", "73/73 [==============================] - 129s - loss: 1.2846 - categorical_accuracy: 0.5581 - val_loss: 1.3898 - val_categorical_accuracy: 0.5391\n", "\n", "04:21:25 for Two-Layer Network to yield 55.8% training accuracy and 53.9% validation accuracy in 20 \n", "epochs (x3 training phases).\n", "\n", "Fully connected run complete at Thursday, 2017 October 19, 5:08 PM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Now rerun all 5 models, starting with FCNN:\n", "run_fcnn()\n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "scrolled": 
true }, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "Xception run begun at Thursday, 2017 October 19, 5:09 PM.\n", "\t[20 epochs (x6 passes) on extended FMA on GPU takes\n", "\tunknown (no similar runs found).]\n", "\n", "First-round training (training the classifier)...\n", "\n", "Training for epochs 1 to 7...\n", "Epoch 1/7\n", "73/73 [==============================] - 285s - loss: 1.6108 - categorical_accuracy: 0.4238 - val_loss: 2.0718 - val_categorical_accuracy: 0.3852\n", "Epoch 2/7\n", "73/73 [==============================] - 282s - loss: 1.5077 - categorical_accuracy: 0.4724 - val_loss: 1.8523 - val_categorical_accuracy: 0.4055\n", "Epoch 3/7\n", "73/73 [==============================] - 282s - loss: 1.5083 - categorical_accuracy: 0.4670 - val_loss: 1.5743 - val_categorical_accuracy: 0.4539\n", "Epoch 4/7\n", "73/73 [==============================] - 280s - loss: 1.4882 - categorical_accuracy: 0.4714 - val_loss: 1.5026 - val_categorical_accuracy: 0.4766\n", "Epoch 5/7\n", "73/73 [==============================] - 282s - loss: 1.4667 - categorical_accuracy: 0.4815 - val_loss: 1.5070 - val_categorical_accuracy: 0.5016\n", "Epoch 6/7\n", "73/73 [==============================] - 282s - loss: 1.4562 - categorical_accuracy: 0.4832 - val_loss: 1.5044 - val_categorical_accuracy: 0.4953\n", "Epoch 7/7\n", "73/73 [==============================] - 282s - loss: 1.4670 - categorical_accuracy: 0.4757 - val_loss: 1.4842 - 
val_categorical_accuracy: 0.5023\n", "\n", "Training for epochs 8 to 14...\n", "Epoch 8/14\n", "73/73 [==============================] - 284s - loss: 1.4117 - categorical_accuracy: 0.5041 - val_loss: 1.4941 - val_categorical_accuracy: 0.4984\n", "Epoch 9/14\n", "73/73 [==============================] - 282s - loss: 1.3861 - categorical_accuracy: 0.5136 - val_loss: 1.4954 - val_categorical_accuracy: 0.4914\n", "Epoch 10/14\n", "73/73 [==============================] - 282s - loss: 1.4113 - categorical_accuracy: 0.5058 - val_loss: 1.4925 - val_categorical_accuracy: 0.4945\n", "Epoch 11/14\n", "73/73 [==============================] - 281s - loss: 1.4169 - categorical_accuracy: 0.4987 - val_loss: 1.4856 - val_categorical_accuracy: 0.5008\n", "Epoch 12/14\n", "73/73 [==============================] - 282s - loss: 1.4112 - categorical_accuracy: 0.5024 - val_loss: 1.4896 - val_categorical_accuracy: 0.4984\n", "Epoch 13/14\n", "73/73 [==============================] - 282s - loss: 1.4117 - categorical_accuracy: 0.5046 - val_loss: 1.4909 - val_categorical_accuracy: 0.4930\n", "Epoch 14/14\n", "73/73 [==============================] - 281s - loss: 1.4283 - categorical_accuracy: 0.4921 - val_loss: 1.4783 - val_categorical_accuracy: 0.5023\n", "\n", "Training for epochs 15 to 20...\n", "Epoch 15/20\n", "73/73 [==============================] - 284s - loss: 1.3830 - categorical_accuracy: 0.5156 - val_loss: 1.4765 - val_categorical_accuracy: 0.5055\n", "Epoch 16/20\n", "73/73 [==============================] - 281s - loss: 1.3599 - categorical_accuracy: 0.5173 - val_loss: 1.4807 - val_categorical_accuracy: 0.5047\n", "Epoch 17/20\n", "73/73 [==============================] - 282s - loss: 1.3810 - categorical_accuracy: 0.5156 - val_loss: 1.4820 - val_categorical_accuracy: 0.4992\n", "Epoch 18/20\n", "73/73 [==============================] - 281s - loss: 1.3918 - categorical_accuracy: 0.5111 - val_loss: 1.4794 - val_categorical_accuracy: 0.5070\n", "Epoch 19/20\n", "73/73 
[==============================] - 281s - loss: 1.3894 - categorical_accuracy: 0.5156 - val_loss: 1.4811 - val_categorical_accuracy: 0.5062\n", "Epoch 20/20\n", "73/73 [==============================] - 283s - loss: 1.3876 - categorical_accuracy: 0.5123 - val_loss: 1.4872 - val_categorical_accuracy: 0.5031\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 122)...\n", "\n", "Training for epochs 21 to 27...\n", "Epoch 21/27\n", "73/73 [==============================] - 297s - loss: 1.3628 - categorical_accuracy: 0.5224 - val_loss: 1.4785 - val_categorical_accuracy: 0.5047\n", "Epoch 22/27\n", "73/73 [==============================] - 294s - loss: 1.3428 - categorical_accuracy: 0.5274 - val_loss: 1.4761 - val_categorical_accuracy: 0.5078\n", "Epoch 23/27\n", "73/73 [==============================] - 294s - loss: 1.3586 - categorical_accuracy: 0.5232 - val_loss: 1.4727 - val_categorical_accuracy: 0.5062\n", "Epoch 24/27\n", "73/73 [==============================] - 293s - loss: 1.3691 - categorical_accuracy: 0.5177 - val_loss: 1.4720 - val_categorical_accuracy: 0.5055\n", "Epoch 25/27\n", "73/73 [==============================] - 294s - loss: 1.3683 - categorical_accuracy: 0.5210 - val_loss: 1.4706 - val_categorical_accuracy: 0.5086\n", "Epoch 26/27\n", "73/73 [==============================] - 294s - loss: 1.3673 - categorical_accuracy: 0.5208 - val_loss: 1.4685 - val_categorical_accuracy: 0.5094\n", "Epoch 27/27\n", "73/73 [==============================] - 293s - loss: 1.4043 - categorical_accuracy: 0.5005 - val_loss: 1.4651 - val_categorical_accuracy: 0.5102\n", "\n", "Training for epochs 28 to 34...\n", "Epoch 28/34\n", "73/73 [==============================] - 296s - loss: 1.3530 - categorical_accuracy: 0.5300 - val_loss: 1.4637 - val_categorical_accuracy: 0.5094\n", "Epoch 29/34\n", "73/73 [==============================] - 293s - loss: 1.3387 - categorical_accuracy: 0.5284 - val_loss: 1.4641 - val_categorical_accuracy: 
0.5141\n", "Epoch 30/34\n", "73/73 [==============================] - 294s - loss: 1.3549 - categorical_accuracy: 0.5259 - val_loss: 1.4625 - val_categorical_accuracy: 0.5125\n", "Epoch 31/34\n", "73/73 [==============================] - 293s - loss: 1.3602 - categorical_accuracy: 0.5216 - val_loss: 1.4627 - val_categorical_accuracy: 0.5141\n", "Epoch 32/34\n", "73/73 [==============================] - 294s - loss: 1.3572 - categorical_accuracy: 0.5255 - val_loss: 1.4621 - val_categorical_accuracy: 0.5141\n", "Epoch 33/34\n", "73/73 [==============================] - 294s - loss: 1.3601 - categorical_accuracy: 0.5209 - val_loss: 1.4606 - val_categorical_accuracy: 0.5141\n", "Epoch 34/34\n", "73/73 [==============================] - 294s - loss: 1.3965 - categorical_accuracy: 0.5040 - val_loss: 1.4580 - val_categorical_accuracy: 0.5133\n", "\n", "Training for epochs 35 to 40...\n", "Epoch 35/40\n", "73/73 [==============================] - 296s - loss: 1.3517 - categorical_accuracy: 0.5227 - val_loss: 1.4571 - val_categorical_accuracy: 0.5125\n", "Epoch 36/40\n", "73/73 [==============================] - 294s - loss: 1.3279 - categorical_accuracy: 0.5307 - val_loss: 1.4571 - val_categorical_accuracy: 0.5180\n", "Epoch 37/40\n", "73/73 [==============================] - 293s - loss: 1.3518 - categorical_accuracy: 0.5270 - val_loss: 1.4558 - val_categorical_accuracy: 0.5141\n", "Epoch 38/40\n", "73/73 [==============================] - 291s - loss: 1.3529 - categorical_accuracy: 0.5240 - val_loss: 1.4562 - val_categorical_accuracy: 0.5164\n", "Epoch 39/40\n", "73/73 [==============================] - 292s - loss: 1.3563 - categorical_accuracy: 0.5273 - val_loss: 1.4558 - val_categorical_accuracy: 0.5148\n", "Epoch 40/40\n", "73/73 [==============================] - 293s - loss: 1.3488 - categorical_accuracy: 0.5257 - val_loss: 1.4546 - val_categorical_accuracy: 0.5156\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 
105)...\n", "\n", "Training for epochs 41 to 47...\n", "Epoch 41/47\n", "73/73 [==============================] - 338s - loss: 1.3381 - categorical_accuracy: 0.5293 - val_loss: 1.4529 - val_categorical_accuracy: 0.5148\n", "Epoch 42/47\n", "73/73 [==============================] - 336s - loss: 1.3273 - categorical_accuracy: 0.5306 - val_loss: 1.4519 - val_categorical_accuracy: 0.5180\n", "Epoch 43/47\n", "73/73 [==============================] - 336s - loss: 1.3353 - categorical_accuracy: 0.5322 - val_loss: 1.4497 - val_categorical_accuracy: 0.5172\n", "Epoch 44/47\n", "73/73 [==============================] - 334s - loss: 1.3477 - categorical_accuracy: 0.5322 - val_loss: 1.4495 - val_categorical_accuracy: 0.5148\n", "Epoch 45/47\n", "73/73 [==============================] - 335s - loss: 1.3480 - categorical_accuracy: 0.5312 - val_loss: 1.4490 - val_categorical_accuracy: 0.5164\n", "Epoch 46/47\n", "73/73 [==============================] - 335s - loss: 1.3456 - categorical_accuracy: 0.5253 - val_loss: 1.4473 - val_categorical_accuracy: 0.5156\n", "Epoch 47/47\n", "73/73 [==============================] - 336s - loss: 1.3806 - categorical_accuracy: 0.5128 - val_loss: 1.4446 - val_categorical_accuracy: 0.5172\n", "\n", "Training for epochs 48 to 54...\n", "Epoch 48/54\n", "73/73 [==============================] - 338s - loss: 1.3391 - categorical_accuracy: 0.5317 - val_loss: 1.4441 - val_categorical_accuracy: 0.5172\n", "Epoch 49/54\n", "73/73 [==============================] - 336s - loss: 1.3146 - categorical_accuracy: 0.5355 - val_loss: 1.4438 - val_categorical_accuracy: 0.5195\n", "Epoch 50/54\n", "73/73 [==============================] - 336s - loss: 1.3368 - categorical_accuracy: 0.5339 - val_loss: 1.4421 - val_categorical_accuracy: 0.5195\n", "Epoch 51/54\n", "73/73 [==============================] - 334s - loss: 1.3397 - categorical_accuracy: 0.5328 - val_loss: 1.4423 - val_categorical_accuracy: 0.5203\n", "Epoch 52/54\n", "73/73 
[==============================] - 336s - loss: 1.3442 - categorical_accuracy: 0.5337 - val_loss: 1.4425 - val_categorical_accuracy: 0.5172\n", "Epoch 53/54\n", "73/73 [==============================] - 337s - loss: 1.3392 - categorical_accuracy: 0.5284 - val_loss: 1.4410 - val_categorical_accuracy: 0.5164\n", "Epoch 54/54\n", "73/73 [==============================] - 336s - loss: 1.3798 - categorical_accuracy: 0.5107 - val_loss: 1.4387 - val_categorical_accuracy: 0.5141\n", "\n", "Training for epochs 55 to 60...\n", "Epoch 55/60\n", "73/73 [==============================] - 337s - loss: 1.3310 - categorical_accuracy: 0.5367 - val_loss: 1.4385 - val_categorical_accuracy: 0.5172\n", "Epoch 56/60\n", "73/73 [==============================] - 335s - loss: 1.3098 - categorical_accuracy: 0.5422 - val_loss: 1.4384 - val_categorical_accuracy: 0.5172\n", "Epoch 57/60\n", "73/73 [==============================] - 336s - loss: 1.3252 - categorical_accuracy: 0.5401 - val_loss: 1.4370 - val_categorical_accuracy: 0.5172\n", "Epoch 58/60\n", "73/73 [==============================] - 334s - loss: 1.3307 - categorical_accuracy: 0.5302 - val_loss: 1.4375 - val_categorical_accuracy: 0.5156\n", "Epoch 59/60\n", "73/73 [==============================] - 336s - loss: 1.3368 - categorical_accuracy: 0.5332 - val_loss: 1.4373 - val_categorical_accuracy: 0.5172\n", "Epoch 60/60\n", "73/73 [==============================] - 336s - loss: 1.3313 - categorical_accuracy: 0.5351 - val_loss: 1.4360 - val_categorical_accuracy: 0.5164\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 95)...\n", "\n", "Training for epochs 61 to 67...\n", "Epoch 61/67\n", "73/73 [==============================] - 364s - loss: 1.3235 - categorical_accuracy: 0.5387 - val_loss: 1.4348 - val_categorical_accuracy: 0.5180\n", "Epoch 62/67\n", "73/73 [==============================] - 360s - loss: 1.3000 - categorical_accuracy: 0.5432 - val_loss: 1.4344 - val_categorical_accuracy: 
0.5141\n", "Epoch 63/67\n", "73/73 [==============================] - 360s - loss: 1.3179 - categorical_accuracy: 0.5355 - val_loss: 1.4328 - val_categorical_accuracy: 0.5188\n", "Epoch 64/67\n", "73/73 [==============================] - 358s - loss: 1.3328 - categorical_accuracy: 0.5357 - val_loss: 1.4331 - val_categorical_accuracy: 0.5141\n", "Epoch 65/67\n", "73/73 [==============================] - 360s - loss: 1.3238 - categorical_accuracy: 0.5398 - val_loss: 1.4330 - val_categorical_accuracy: 0.5188\n", "Epoch 66/67\n", "73/73 [==============================] - 360s - loss: 1.3255 - categorical_accuracy: 0.5355 - val_loss: 1.4316 - val_categorical_accuracy: 0.5195\n", "Epoch 67/67\n", "73/73 [==============================] - 360s - loss: 1.3648 - categorical_accuracy: 0.5156 - val_loss: 1.4291 - val_categorical_accuracy: 0.5156\n", "\n", "Training for epochs 68 to 74...\n", "Epoch 68/74\n", "73/73 [==============================] - 362s - loss: 1.3122 - categorical_accuracy: 0.5420 - val_loss: 1.4286 - val_categorical_accuracy: 0.5172\n", "Epoch 69/74\n", "73/73 [==============================] - 359s - loss: 1.2991 - categorical_accuracy: 0.5444 - val_loss: 1.4286 - val_categorical_accuracy: 0.5172\n", "Epoch 70/74\n", "73/73 [==============================] - 360s - loss: 1.3103 - categorical_accuracy: 0.5426 - val_loss: 1.4270 - val_categorical_accuracy: 0.5188\n", "Epoch 71/74\n", "73/73 [==============================] - 358s - loss: 1.3286 - categorical_accuracy: 0.5388 - val_loss: 1.4276 - val_categorical_accuracy: 0.5148\n", "Epoch 72/74\n", "73/73 [==============================] - 359s - loss: 1.3199 - categorical_accuracy: 0.5408 - val_loss: 1.4278 - val_categorical_accuracy: 0.5172\n", "Epoch 73/74\n", "73/73 [==============================] - 360s - loss: 1.3211 - categorical_accuracy: 0.5368 - val_loss: 1.4262 - val_categorical_accuracy: 0.5180\n", "Epoch 74/74\n", "73/73 [==============================] - 360s - loss: 1.3509 - 
categorical_accuracy: 0.5272 - val_loss: 1.4238 - val_categorical_accuracy: 0.5188\n", "\n", "Training for epochs 75 to 80...\n", "Epoch 75/80\n", "73/73 [==============================] - 362s - loss: 1.3057 - categorical_accuracy: 0.5463 - val_loss: 1.4235 - val_categorical_accuracy: 0.5195\n", "Epoch 76/80\n", "73/73 [==============================] - 359s - loss: 1.2928 - categorical_accuracy: 0.5455 - val_loss: 1.4236 - val_categorical_accuracy: 0.5188\n", "Epoch 77/80\n", "73/73 [==============================] - 360s - loss: 1.2992 - categorical_accuracy: 0.5476 - val_loss: 1.4220 - val_categorical_accuracy: 0.5219\n", "Epoch 78/80\n", "73/73 [==============================] - 359s - loss: 1.3119 - categorical_accuracy: 0.5455 - val_loss: 1.4224 - val_categorical_accuracy: 0.5180\n", "Epoch 79/80\n", "73/73 [==============================] - 360s - loss: 1.3112 - categorical_accuracy: 0.5439 - val_loss: 1.4225 - val_categorical_accuracy: 0.5219\n", "Epoch 80/80\n", "73/73 [==============================] - 360s - loss: 1.3118 - categorical_accuracy: 0.5424 - val_loss: 1.4212 - val_categorical_accuracy: 0.5195\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 85)...\n", "\n", "Training for epochs 81 to 87...\n", "Epoch 81/87\n", "73/73 [==============================] - 388s - loss: 1.3008 - categorical_accuracy: 0.5428 - val_loss: 1.4204 - val_categorical_accuracy: 0.5219\n", "Epoch 82/87\n", "73/73 [==============================] - 385s - loss: 1.2823 - categorical_accuracy: 0.5514 - val_loss: 1.4207 - val_categorical_accuracy: 0.5195\n", "Epoch 83/87\n", "73/73 [==============================] - 386s - loss: 1.2993 - categorical_accuracy: 0.5461 - val_loss: 1.4191 - val_categorical_accuracy: 0.5203\n", "Epoch 84/87\n", "73/73 [==============================] - 384s - loss: 1.3090 - categorical_accuracy: 0.5348 - val_loss: 1.4195 - val_categorical_accuracy: 0.5188\n", "Epoch 85/87\n", "73/73 
[==============================] - 386s - loss: 1.3109 - categorical_accuracy: 0.5460 - val_loss: 1.4196 - val_categorical_accuracy: 0.5211\n", "Epoch 86/87\n", "73/73 [==============================] - 388s - loss: 1.3072 - categorical_accuracy: 0.5403 - val_loss: 1.4182 - val_categorical_accuracy: 0.5219\n", "Epoch 87/87\n", "73/73 [==============================] - 386s - loss: 1.3470 - categorical_accuracy: 0.5262 - val_loss: 1.4158 - val_categorical_accuracy: 0.5195\n", "\n", "Training for epochs 88 to 94...\n", "Epoch 88/94\n", "73/73 [==============================] - 389s - loss: 1.2953 - categorical_accuracy: 0.5498 - val_loss: 1.4154 - val_categorical_accuracy: 0.5234\n", "Epoch 89/94\n", "73/73 [==============================] - 386s - loss: 1.2764 - categorical_accuracy: 0.5498 - val_loss: 1.4157 - val_categorical_accuracy: 0.5195\n", "Epoch 90/94\n", "73/73 [==============================] - 385s - loss: 1.2883 - categorical_accuracy: 0.5500 - val_loss: 1.4143 - val_categorical_accuracy: 0.5195\n", "Epoch 91/94\n", "73/73 [==============================] - 383s - loss: 1.2999 - categorical_accuracy: 0.5427 - val_loss: 1.4150 - val_categorical_accuracy: 0.5180\n", "Epoch 92/94\n", "73/73 [==============================] - 386s - loss: 1.3055 - categorical_accuracy: 0.5440 - val_loss: 1.4153 - val_categorical_accuracy: 0.5211\n", "Epoch 93/94\n", "73/73 [==============================] - 386s - loss: 1.3040 - categorical_accuracy: 0.5468 - val_loss: 1.4139 - val_categorical_accuracy: 0.5227\n", "Epoch 94/94\n", "73/73 [==============================] - 385s - loss: 1.3390 - categorical_accuracy: 0.5303 - val_loss: 1.4118 - val_categorical_accuracy: 0.5219\n", "\n", "Training for epochs 95 to 100...\n", "Epoch 95/100\n", "73/73 [==============================] - 388s - loss: 1.2806 - categorical_accuracy: 0.5494 - val_loss: 1.4115 - val_categorical_accuracy: 0.5211\n", "Epoch 96/100\n", "73/73 [==============================] - 386s - loss: 1.2722 - 
categorical_accuracy: 0.5536 - val_loss: 1.4119 - val_categorical_accuracy: 0.5195\n", "Epoch 97/100\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "73/73 [==============================] - 386s - loss: 1.2868 - categorical_accuracy: 0.5514 - val_loss: 1.4103 - val_categorical_accuracy: 0.5180\n", "Epoch 98/100\n", "73/73 [==============================] - 384s - loss: 1.2848 - categorical_accuracy: 0.5519 - val_loss: 1.4106 - val_categorical_accuracy: 0.5180\n", "Epoch 99/100\n", "73/73 [==============================] - 386s - loss: 1.2960 - categorical_accuracy: 0.5490 - val_loss: 1.4110 - val_categorical_accuracy: 0.5211\n", "Epoch 100/100\n", "73/73 [==============================] - 386s - loss: 1.2885 - categorical_accuracy: 0.5522 - val_loss: 1.4097 - val_categorical_accuracy: 0.5234\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 75)...\n", "\n", "Training for epochs 101 to 107...\n", "Epoch 101/107\n", "73/73 [==============================] - 412s - loss: 1.2820 - categorical_accuracy: 0.5534 - val_loss: 1.4088 - val_categorical_accuracy: 0.5227\n", "Epoch 102/107\n", "73/73 [==============================] - 408s - loss: 1.2595 - categorical_accuracy: 0.5596 - val_loss: 1.4087 - val_categorical_accuracy: 0.5219\n", "Epoch 103/107\n", "73/73 [==============================] - 409s - loss: 1.2727 - categorical_accuracy: 0.5577 - val_loss: 1.4068 - val_categorical_accuracy: 0.5219\n", "Epoch 104/107\n", "73/73 [==============================] - 408s - loss: 1.2872 - categorical_accuracy: 0.5517 - val_loss: 1.4072 - val_categorical_accuracy: 0.5188\n", "Epoch 105/107\n", "73/73 [==============================] - 410s - loss: 1.2890 - categorical_accuracy: 0.5545 - val_loss: 1.4075 - val_categorical_accuracy: 0.5219\n", "Epoch 106/107\n", "73/73 [==============================] - 409s - loss: 1.2845 - categorical_accuracy: 0.5500 - val_loss: 1.4064 - val_categorical_accuracy: 0.5219\n", "Epoch 
107/107\n", "73/73 [==============================] - 409s - loss: 1.3277 - categorical_accuracy: 0.5394 - val_loss: 1.4039 - val_categorical_accuracy: 0.5219\n", "\n", "Training for epochs 108 to 114...\n", "Epoch 108/114\n", "73/73 [==============================] - 411s - loss: 1.2650 - categorical_accuracy: 0.5551 - val_loss: 1.4036 - val_categorical_accuracy: 0.5195\n", "Epoch 109/114\n", "73/73 [==============================] - 408s - loss: 1.2523 - categorical_accuracy: 0.5605 - val_loss: 1.4038 - val_categorical_accuracy: 0.5195\n", "Epoch 110/114\n", "73/73 [==============================] - 409s - loss: 1.2625 - categorical_accuracy: 0.5605 - val_loss: 1.4024 - val_categorical_accuracy: 0.5180\n", "Epoch 111/114\n", "73/73 [==============================] - 406s - loss: 1.2773 - categorical_accuracy: 0.5541 - val_loss: 1.4030 - val_categorical_accuracy: 0.5188\n", "Epoch 112/114\n", "73/73 [==============================] - 408s - loss: 1.2751 - categorical_accuracy: 0.5562 - val_loss: 1.4035 - val_categorical_accuracy: 0.5219\n", "Epoch 113/114\n", "73/73 [==============================] - 408s - loss: 1.2728 - categorical_accuracy: 0.5551 - val_loss: 1.4022 - val_categorical_accuracy: 0.5219\n", "Epoch 114/114\n", "73/73 [==============================] - 409s - loss: 1.3124 - categorical_accuracy: 0.5332 - val_loss: 1.3998 - val_categorical_accuracy: 0.5250\n", "\n", "Training for epochs 115 to 120...\n", "Epoch 115/120\n", "73/73 [==============================] - 410s - loss: 1.2605 - categorical_accuracy: 0.5560 - val_loss: 1.3998 - val_categorical_accuracy: 0.5211\n", "Epoch 116/120\n", "73/73 [==============================] - 408s - loss: 1.2451 - categorical_accuracy: 0.5669 - val_loss: 1.4002 - val_categorical_accuracy: 0.5227\n", "Epoch 117/120\n", "73/73 [==============================] - 409s - loss: 1.2582 - categorical_accuracy: 0.5618 - val_loss: 1.3987 - val_categorical_accuracy: 0.5219\n", "Epoch 118/120\n", "73/73 
[==============================] - 405s - loss: 1.2695 - categorical_accuracy: 0.5591 - val_loss: 1.3996 - val_categorical_accuracy: 0.5227\n", "Epoch 119/120\n", "73/73 [==============================] - 409s - loss: 1.2715 - categorical_accuracy: 0.5577 - val_loss: 1.3999 - val_categorical_accuracy: 0.5242\n", "Epoch 120/120\n", "73/73 [==============================] - 409s - loss: 1.2701 - categorical_accuracy: 0.5566 - val_loss: 1.3987 - val_categorical_accuracy: 0.5234\n", "\n", "11:30:07 for Xception to yield 55.7% training accuracy and 52.3% validation accuracy in 20 \n", "epochs (x6 training phases).\n", "\n", "Xception run complete at Friday, 2017 October 20, 4:40 AM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Xception:\n", "run_xception()\n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "code", "execution_count": 22, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "Inception V3 run begun at Friday, 2017 October 20, 4:40 AM.\n", "\t[20 epochs (x6 passes) on extended FMA on GPU takes\n", "\tunknown (no similar runs found).]\n", "\n", "First-round training (training the classifier)...\n", "\n", "Training for epochs 1 to 10...\n", "Epoch 1/10\n", "73/73 [==============================] - 186s - loss: 1.6805 - categorical_accuracy: 0.3983 - val_loss: 3.6134 - 
val_categorical_accuracy: 0.1938\n", "Epoch 2/10\n", "73/73 [==============================] - 180s - loss: 1.5790 - categorical_accuracy: 0.4361 - val_loss: 1.9119 - val_categorical_accuracy: 0.3117\n", "Epoch 3/10\n", "73/73 [==============================] - 180s - loss: 1.5715 - categorical_accuracy: 0.4350 - val_loss: 1.6578 - val_categorical_accuracy: 0.4094\n", "Epoch 4/10\n", "73/73 [==============================] - 180s - loss: 1.5504 - categorical_accuracy: 0.4449 - val_loss: 1.5751 - val_categorical_accuracy: 0.4484\n", "Epoch 5/10\n", "73/73 [==============================] - 180s - loss: 1.5330 - categorical_accuracy: 0.4507 - val_loss: 1.5557 - val_categorical_accuracy: 0.4664\n", "Epoch 6/10\n", "73/73 [==============================] - 179s - loss: 1.5228 - categorical_accuracy: 0.4556 - val_loss: 1.5655 - val_categorical_accuracy: 0.4609\n", "Epoch 7/10\n", "73/73 [==============================] - 179s - loss: 1.5280 - categorical_accuracy: 0.4544 - val_loss: 1.5287 - val_categorical_accuracy: 0.4797\n", "Epoch 8/10\n", "73/73 [==============================] - 178s - loss: 1.5224 - categorical_accuracy: 0.4615 - val_loss: 1.5240 - val_categorical_accuracy: 0.4750\n", "Epoch 9/10\n", "73/73 [==============================] - 178s - loss: 1.5297 - categorical_accuracy: 0.4516 - val_loss: 1.5149 - val_categorical_accuracy: 0.4813\n", "Epoch 10/10\n", "73/73 [==============================] - 178s - loss: 1.5183 - categorical_accuracy: 0.4504 - val_loss: 1.5109 - val_categorical_accuracy: 0.4797\n", "\n", "Training for epochs 11 to 20...\n", "Epoch 11/20\n", "73/73 [==============================] - 180s - loss: 1.4683 - categorical_accuracy: 0.4767 - val_loss: 1.5057 - val_categorical_accuracy: 0.4867\n", "Epoch 12/20\n", "73/73 [==============================] - 176s - loss: 1.4588 - categorical_accuracy: 0.4773 - val_loss: 1.5093 - val_categorical_accuracy: 0.4773\n", "Epoch 13/20\n", "73/73 [==============================] - 176s - loss: 1.4786 
- categorical_accuracy: 0.4755 - val_loss: 1.5106 - val_categorical_accuracy: 0.4773\n", "Epoch 14/20\n", "73/73 [==============================] - 175s - loss: 1.4791 - categorical_accuracy: 0.4674 - val_loss: 1.5071 - val_categorical_accuracy: 0.4703\n", "Epoch 15/20\n", "73/73 [==============================] - 176s - loss: 1.4727 - categorical_accuracy: 0.4763 - val_loss: 1.5090 - val_categorical_accuracy: 0.4813\n", "Epoch 16/20\n", "73/73 [==============================] - 177s - loss: 1.4788 - categorical_accuracy: 0.4721 - val_loss: 1.5141 - val_categorical_accuracy: 0.4836\n", "Epoch 17/20\n", "73/73 [==============================] - 176s - loss: 1.4854 - categorical_accuracy: 0.4700 - val_loss: 1.4997 - val_categorical_accuracy: 0.4883\n", "Epoch 18/20\n", "73/73 [==============================] - 176s - loss: 1.4868 - categorical_accuracy: 0.4709 - val_loss: 1.5013 - val_categorical_accuracy: 0.4828\n", "Epoch 19/20\n", "73/73 [==============================] - 176s - loss: 1.4973 - categorical_accuracy: 0.4593 - val_loss: 1.5012 - val_categorical_accuracy: 0.4930\n", "Epoch 20/20\n", "73/73 [==============================] - 177s - loss: 1.4886 - categorical_accuracy: 0.4675 - val_loss: 1.4971 - val_categorical_accuracy: 0.4867\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 249)...\n", "\n", "Training for epochs 21 to 30...\n", "Epoch 21/30\n", "73/73 [==============================] - 203s - loss: 1.4423 - categorical_accuracy: 0.4877 - val_loss: 1.4928 - val_categorical_accuracy: 0.4898\n", "Epoch 22/30\n", "73/73 [==============================] - 199s - loss: 1.4283 - categorical_accuracy: 0.4890 - val_loss: 1.4890 - val_categorical_accuracy: 0.4906\n", "Epoch 23/30\n", "73/73 [==============================] - 199s - loss: 1.4433 - categorical_accuracy: 0.4857 - val_loss: 1.4883 - val_categorical_accuracy: 0.4883\n", "Epoch 24/30\n", "73/73 [==============================] - 198s - loss: 1.4443 - 
categorical_accuracy: 0.4824 - val_loss: 1.4868 - val_categorical_accuracy: 0.4875\n", "Epoch 25/30\n", "73/73 [==============================] - 199s - loss: 1.4403 - categorical_accuracy: 0.4888 - val_loss: 1.4853 - val_categorical_accuracy: 0.4867\n", "Epoch 26/30\n", "73/73 [==============================] - 199s - loss: 1.4500 - categorical_accuracy: 0.4805 - val_loss: 1.4845 - val_categorical_accuracy: 0.4844\n", "Epoch 27/30\n", "73/73 [==============================] - 199s - loss: 1.4560 - categorical_accuracy: 0.4833 - val_loss: 1.4810 - val_categorical_accuracy: 0.4859\n", "Epoch 28/30\n", "73/73 [==============================] - 196s - loss: 1.4475 - categorical_accuracy: 0.4836 - val_loss: 1.4788 - val_categorical_accuracy: 0.4875\n", "Epoch 29/30\n", "73/73 [==============================] - 199s - loss: 1.4580 - categorical_accuracy: 0.4730 - val_loss: 1.4764 - val_categorical_accuracy: 0.4930\n", "Epoch 30/30\n", "73/73 [==============================] - 199s - loss: 1.4490 - categorical_accuracy: 0.4848 - val_loss: 1.4755 - val_categorical_accuracy: 0.4883\n", "\n", "Training for epochs 31 to 40...\n", "Epoch 31/40\n", "73/73 [==============================] - 200s - loss: 1.4181 - categorical_accuracy: 0.4973 - val_loss: 1.4740 - val_categorical_accuracy: 0.4930\n", "Epoch 32/40\n", "73/73 [==============================] - 198s - loss: 1.4023 - categorical_accuracy: 0.5025 - val_loss: 1.4712 - val_categorical_accuracy: 0.4898\n", "Epoch 33/40\n", "73/73 [==============================] - 198s - loss: 1.4156 - categorical_accuracy: 0.4942 - val_loss: 1.4711 - val_categorical_accuracy: 0.4969\n", "Epoch 34/40\n", "73/73 [==============================] - 196s - loss: 1.4179 - categorical_accuracy: 0.4925 - val_loss: 1.4701 - val_categorical_accuracy: 0.4930\n", "Epoch 35/40\n", "73/73 [==============================] - 198s - loss: 1.4182 - categorical_accuracy: 0.4940 - val_loss: 1.4695 - val_categorical_accuracy: 0.4906\n", "Epoch 36/40\n", 
"73/73 [==============================] - 198s - loss: 1.4183 - categorical_accuracy: 0.4919 - val_loss: 1.4690 - val_categorical_accuracy: 0.4914\n", "Epoch 37/40\n", "73/73 [==============================] - 199s - loss: 1.4330 - categorical_accuracy: 0.4935 - val_loss: 1.4662 - val_categorical_accuracy: 0.4922\n", "Epoch 38/40\n", "73/73 [==============================] - 198s - loss: 1.4239 - categorical_accuracy: 0.4976 - val_loss: 1.4644 - val_categorical_accuracy: 0.4930\n", "Epoch 39/40\n", "73/73 [==============================] - 199s - loss: 1.4259 - categorical_accuracy: 0.4894 - val_loss: 1.4629 - val_categorical_accuracy: 0.4969\n", "Epoch 40/40\n", "73/73 [==============================] - 198s - loss: 1.4278 - categorical_accuracy: 0.4911 - val_loss: 1.4631 - val_categorical_accuracy: 0.4945\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 232)...\n", "\n", "Training for epochs 41 to 50...\n", "Epoch 41/50\n", "73/73 [==============================] - 211s - loss: 1.3914 - categorical_accuracy: 0.5070 - val_loss: 1.4608 - val_categorical_accuracy: 0.5008\n", "Epoch 42/50\n", "73/73 [==============================] - 207s - loss: 1.3788 - categorical_accuracy: 0.5107 - val_loss: 1.4581 - val_categorical_accuracy: 0.4922\n", "Epoch 43/50\n", "73/73 [==============================] - 207s - loss: 1.3940 - categorical_accuracy: 0.5034 - val_loss: 1.4584 - val_categorical_accuracy: 0.5000\n", "Epoch 44/50\n", "73/73 [==============================] - 206s - loss: 1.3953 - categorical_accuracy: 0.5067 - val_loss: 1.4567 - val_categorical_accuracy: 0.4969\n", "Epoch 45/50\n", "73/73 [==============================] - 206s - loss: 1.3939 - categorical_accuracy: 0.5048 - val_loss: 1.4562 - val_categorical_accuracy: 0.5000\n", "Epoch 46/50\n", "73/73 [==============================] - 206s - loss: 1.3961 - categorical_accuracy: 0.5070 - val_loss: 1.4553 - val_categorical_accuracy: 0.4977\n", "Epoch 47/50\n", "73/73 
[==============================] - 207s - loss: 1.4065 - categorical_accuracy: 0.5007 - val_loss: 1.4528 - val_categorical_accuracy: 0.4969\n", "Epoch 48/50\n", "73/73 [==============================] - 206s - loss: 1.3914 - categorical_accuracy: 0.5064 - val_loss: 1.4504 - val_categorical_accuracy: 0.5031\n", "Epoch 49/50\n", "73/73 [==============================] - 208s - loss: 1.4086 - categorical_accuracy: 0.4972 - val_loss: 1.4489 - val_categorical_accuracy: 0.5070\n", "Epoch 50/50\n", "73/73 [==============================] - 208s - loss: 1.4116 - categorical_accuracy: 0.5004 - val_loss: 1.4497 - val_categorical_accuracy: 0.4969\n", "\n", "Training for epochs 51 to 60...\n", "Epoch 51/60\n", "73/73 [==============================] - 211s - loss: 1.3667 - categorical_accuracy: 0.5213 - val_loss: 1.4485 - val_categorical_accuracy: 0.5055\n", "Epoch 52/60\n", "73/73 [==============================] - 208s - loss: 1.3502 - categorical_accuracy: 0.5238 - val_loss: 1.4465 - val_categorical_accuracy: 0.4977\n", "Epoch 53/60\n", "73/73 [==============================] - 208s - loss: 1.3734 - categorical_accuracy: 0.5148 - val_loss: 1.4468 - val_categorical_accuracy: 0.5016\n", "Epoch 54/60\n", "73/73 [==============================] - 206s - loss: 1.3710 - categorical_accuracy: 0.5136 - val_loss: 1.4455 - val_categorical_accuracy: 0.5000\n", "Epoch 55/60\n", "73/73 [==============================] - 207s - loss: 1.3719 - categorical_accuracy: 0.5161 - val_loss: 1.4448 - val_categorical_accuracy: 0.5016\n", "Epoch 56/60\n", "73/73 [==============================] - 207s - loss: 1.3748 - categorical_accuracy: 0.5035 - val_loss: 1.4447 - val_categorical_accuracy: 0.5016\n", "Epoch 57/60\n", "73/73 [==============================] - 206s - loss: 1.3909 - categorical_accuracy: 0.5075 - val_loss: 1.4431 - val_categorical_accuracy: 0.4992\n", "Epoch 58/60\n", "73/73 [==============================] - 206s - loss: 1.3742 - categorical_accuracy: 0.5135 - val_loss: 1.4412 - 
val_categorical_accuracy: 0.5047\n", "Epoch 59/60\n", "73/73 [==============================] - 207s - loss: 1.3829 - categorical_accuracy: 0.5039 - val_loss: 1.4400 - val_categorical_accuracy: 0.5117\n", "Epoch 60/60\n", "73/73 [==============================] - 208s - loss: 1.3789 - categorical_accuracy: 0.5119 - val_loss: 1.4409 - val_categorical_accuracy: 0.4977\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 229)...\n", "\n", "Training for epochs 61 to 70...\n", "Epoch 61/70\n", "73/73 [==============================] - 213s - loss: 1.3438 - categorical_accuracy: 0.5321 - val_loss: 1.4401 - val_categorical_accuracy: 0.5078\n", "Epoch 62/70\n", "73/73 [==============================] - 209s - loss: 1.3305 - categorical_accuracy: 0.5300 - val_loss: 1.4384 - val_categorical_accuracy: 0.5000\n", "Epoch 63/70\n", "73/73 [==============================] - 209s - loss: 1.3447 - categorical_accuracy: 0.5229 - val_loss: 1.4394 - val_categorical_accuracy: 0.5031\n", "Epoch 64/70\n", "73/73 [==============================] - 208s - loss: 1.3526 - categorical_accuracy: 0.5213 - val_loss: 1.4382 - val_categorical_accuracy: 0.5023\n", "Epoch 65/70\n", "73/73 [==============================] - 210s - loss: 1.3478 - categorical_accuracy: 0.5203 - val_loss: 1.4372 - val_categorical_accuracy: 0.5047\n", "Epoch 66/70\n", "73/73 [==============================] - 209s - loss: 1.3567 - categorical_accuracy: 0.5176 - val_loss: 1.4366 - val_categorical_accuracy: 0.5031\n", "Epoch 67/70\n", "73/73 [==============================] - 209s - loss: 1.3622 - categorical_accuracy: 0.5187 - val_loss: 1.4349 - val_categorical_accuracy: 0.5016\n", "Epoch 68/70\n", "73/73 [==============================] - 207s - loss: 1.3560 - categorical_accuracy: 0.5229 - val_loss: 1.4337 - val_categorical_accuracy: 0.5047\n", "Epoch 69/70\n", "73/73 [==============================] - 209s - loss: 1.3645 - categorical_accuracy: 0.5147 - val_loss: 1.4329 - 
val_categorical_accuracy: 0.5133\n", "Epoch 70/70\n", "73/73 [==============================] - 209s - loss: 1.3674 - categorical_accuracy: 0.5140 - val_loss: 1.4344 - val_categorical_accuracy: 0.5023\n", "\n", "Training for epochs 71 to 80...\n", "Epoch 71/80\n", "73/73 [==============================] - 211s - loss: 1.3248 - categorical_accuracy: 0.5353 - val_loss: 1.4338 - val_categorical_accuracy: 0.5078\n", "Epoch 72/80\n", "73/73 [==============================] - 209s - loss: 1.3085 - categorical_accuracy: 0.5368 - val_loss: 1.4316 - val_categorical_accuracy: 0.5016\n", "Epoch 73/80\n", "73/73 [==============================] - 210s - loss: 1.3204 - categorical_accuracy: 0.5334 - val_loss: 1.4318 - val_categorical_accuracy: 0.5094\n", "Epoch 74/80\n", "73/73 [==============================] - 208s - loss: 1.3263 - categorical_accuracy: 0.5278 - val_loss: 1.4313 - val_categorical_accuracy: 0.5070\n", "Epoch 75/80\n", "73/73 [==============================] - 209s - loss: 1.3277 - categorical_accuracy: 0.5264 - val_loss: 1.4305 - val_categorical_accuracy: 0.5062\n", "Epoch 76/80\n", "73/73 [==============================] - 210s - loss: 1.3254 - categorical_accuracy: 0.5355 - val_loss: 1.4308 - val_categorical_accuracy: 0.5055\n", "Epoch 77/80\n", "73/73 [==============================] - 209s - loss: 1.3399 - categorical_accuracy: 0.5294 - val_loss: 1.4288 - val_categorical_accuracy: 0.5031\n", "Epoch 78/80\n", "73/73 [==============================] - 209s - loss: 1.3328 - categorical_accuracy: 0.5326 - val_loss: 1.4278 - val_categorical_accuracy: 0.5070\n", "Epoch 79/80\n", "73/73 [==============================] - 209s - loss: 1.3439 - categorical_accuracy: 0.5235 - val_loss: 1.4271 - val_categorical_accuracy: 0.5117\n", "Epoch 80/80\n", "73/73 [==============================] - 210s - loss: 1.3424 - categorical_accuracy: 0.5244 - val_loss: 1.4295 - val_categorical_accuracy: 0.5023\n", "\n", "\n", "Further training (refining convolutional blocks, starting 
with\n", "\tlayer 200)...\n", "\n", "Training for epochs 81 to 90...\n", "Epoch 81/90\n", "73/73 [==============================] - 233s - loss: 1.3015 - categorical_accuracy: 0.5457 - val_loss: 1.4286 - val_categorical_accuracy: 0.5078\n", "Epoch 82/90\n", "73/73 [==============================] - 228s - loss: 1.2931 - categorical_accuracy: 0.5394 - val_loss: 1.4260 - val_categorical_accuracy: 0.5062\n", "Epoch 83/90\n", "73/73 [==============================] - 228s - loss: 1.2989 - categorical_accuracy: 0.5428 - val_loss: 1.4272 - val_categorical_accuracy: 0.5102\n", "Epoch 84/90\n", "73/73 [==============================] - 226s - loss: 1.3099 - categorical_accuracy: 0.5375 - val_loss: 1.4260 - val_categorical_accuracy: 0.5102\n", "Epoch 85/90\n", "73/73 [==============================] - 228s - loss: 1.3039 - categorical_accuracy: 0.5403 - val_loss: 1.4259 - val_categorical_accuracy: 0.5117\n", "Epoch 86/90\n", "73/73 [==============================] - 227s - loss: 1.3091 - categorical_accuracy: 0.5363 - val_loss: 1.4254 - val_categorical_accuracy: 0.5117\n", "Epoch 87/90\n", "73/73 [==============================] - 228s - loss: 1.3149 - categorical_accuracy: 0.5341 - val_loss: 1.4229 - val_categorical_accuracy: 0.5070\n", "Epoch 88/90\n", "73/73 [==============================] - 226s - loss: 1.3013 - categorical_accuracy: 0.5392 - val_loss: 1.4222 - val_categorical_accuracy: 0.5102\n", "Epoch 89/90\n", "73/73 [==============================] - 227s - loss: 1.3230 - categorical_accuracy: 0.5312 - val_loss: 1.4222 - val_categorical_accuracy: 0.5117\n", "Epoch 90/90\n", "73/73 [==============================] - 227s - loss: 1.3224 - categorical_accuracy: 0.5334 - val_loss: 1.4236 - val_categorical_accuracy: 0.5062\n", "\n", "Training for epochs 91 to 100...\n", "Epoch 91/100\n", "73/73 [==============================] - 229s - loss: 1.2777 - categorical_accuracy: 0.5552 - val_loss: 1.4224 - val_categorical_accuracy: 0.5117\n", "Epoch 92/100\n", "73/73 
[==============================] - 227s - loss: 1.2622 - categorical_accuracy: 0.5552 - val_loss: 1.4207 - val_categorical_accuracy: 0.5109\n", "Epoch 93/100\n", "73/73 [==============================] - 228s - loss: 1.2722 - categorical_accuracy: 0.5527 - val_loss: 1.4214 - val_categorical_accuracy: 0.5141\n", "Epoch 94/100\n", "73/73 [==============================] - 226s - loss: 1.2847 - categorical_accuracy: 0.5450 - val_loss: 1.4212 - val_categorical_accuracy: 0.5125\n", "Epoch 95/100\n", "73/73 [==============================] - 227s - loss: 1.2731 - categorical_accuracy: 0.5483 - val_loss: 1.4214 - val_categorical_accuracy: 0.5133\n", "Epoch 96/100\n", "73/73 [==============================] - 226s - loss: 1.2813 - categorical_accuracy: 0.5463 - val_loss: 1.4209 - val_categorical_accuracy: 0.5133\n", "Epoch 97/100\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "73/73 [==============================] - 228s - loss: 1.2928 - categorical_accuracy: 0.5425 - val_loss: 1.4183 - val_categorical_accuracy: 0.5109\n", "Epoch 98/100\n", "73/73 [==============================] - 227s - loss: 1.2785 - categorical_accuracy: 0.5521 - val_loss: 1.4177 - val_categorical_accuracy: 0.5133\n", "Epoch 99/100\n", "73/73 [==============================] - 227s - loss: 1.2820 - categorical_accuracy: 0.5428 - val_loss: 1.4175 - val_categorical_accuracy: 0.5109\n", "Epoch 100/100\n", "73/73 [==============================] - 228s - loss: 1.2881 - categorical_accuracy: 0.5481 - val_loss: 1.4192 - val_categorical_accuracy: 0.5086\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 187)...\n", "\n", "Training for epochs 101 to 110...\n", "Epoch 101/110\n", "73/73 [==============================] - 240s - loss: 1.2437 - categorical_accuracy: 0.5620 - val_loss: 1.4182 - val_categorical_accuracy: 0.5133\n", "Epoch 102/110\n", "73/73 [==============================] - 233s - loss: 1.2335 - categorical_accuracy: 0.5639 - val_loss: 
1.4171 - val_categorical_accuracy: 0.5133\n", "Epoch 103/110\n", "73/73 [==============================] - 233s - loss: 1.2427 - categorical_accuracy: 0.5596 - val_loss: 1.4171 - val_categorical_accuracy: 0.5148\n", "Epoch 104/110\n", "73/73 [==============================] - 232s - loss: 1.2496 - categorical_accuracy: 0.5600 - val_loss: 1.4160 - val_categorical_accuracy: 0.5164\n", "Epoch 105/110\n", "73/73 [==============================] - 233s - loss: 1.2516 - categorical_accuracy: 0.5601 - val_loss: 1.4162 - val_categorical_accuracy: 0.5172\n", "Epoch 106/110\n", "73/73 [==============================] - 234s - loss: 1.2472 - categorical_accuracy: 0.5641 - val_loss: 1.4153 - val_categorical_accuracy: 0.5203\n", "Epoch 107/110\n", "73/73 [==============================] - 234s - loss: 1.2652 - categorical_accuracy: 0.5577 - val_loss: 1.4138 - val_categorical_accuracy: 0.5156\n", "Epoch 108/110\n", "73/73 [==============================] - 232s - loss: 1.2488 - categorical_accuracy: 0.5619 - val_loss: 1.4146 - val_categorical_accuracy: 0.5188\n", "Epoch 109/110\n", "73/73 [==============================] - 234s - loss: 1.2595 - categorical_accuracy: 0.5484 - val_loss: 1.4136 - val_categorical_accuracy: 0.5164\n", "Epoch 110/110\n", "73/73 [==============================] - 234s - loss: 1.2697 - categorical_accuracy: 0.5497 - val_loss: 1.4155 - val_categorical_accuracy: 0.5172\n", "\n", "Training for epochs 111 to 120...\n", "Epoch 111/120\n", "73/73 [==============================] - 237s - loss: 1.2112 - categorical_accuracy: 0.5798 - val_loss: 1.4146 - val_categorical_accuracy: 0.5180\n", "Epoch 112/120\n", "73/73 [==============================] - 234s - loss: 1.2030 - categorical_accuracy: 0.5754 - val_loss: 1.4141 - val_categorical_accuracy: 0.5172\n", "Epoch 113/120\n", "73/73 [==============================] - 234s - loss: 1.2120 - categorical_accuracy: 0.5742 - val_loss: 1.4149 - val_categorical_accuracy: 0.5188\n", "Epoch 114/120\n", "73/73 
[==============================] - 233s - loss: 1.2199 - categorical_accuracy: 0.5650 - val_loss: 1.4148 - val_categorical_accuracy: 0.5219\n", "Epoch 115/120\n", "73/73 [==============================] - 234s - loss: 1.2177 - categorical_accuracy: 0.5708 - val_loss: 1.4146 - val_categorical_accuracy: 0.5172\n", "Epoch 116/120\n", "73/73 [==============================] - 235s - loss: 1.2198 - categorical_accuracy: 0.5771 - val_loss: 1.4148 - val_categorical_accuracy: 0.5156\n", "Epoch 117/120\n", "73/73 [==============================] - 234s - loss: 1.2343 - categorical_accuracy: 0.5674 - val_loss: 1.4129 - val_categorical_accuracy: 0.5203\n", "Epoch 118/120\n", "73/73 [==============================] - 234s - loss: 1.2179 - categorical_accuracy: 0.5760 - val_loss: 1.4137 - val_categorical_accuracy: 0.5180\n", "Epoch 119/120\n", "73/73 [==============================] - 236s - loss: 1.2263 - categorical_accuracy: 0.5688 - val_loss: 1.4128 - val_categorical_accuracy: 0.5188\n", "Epoch 120/120\n", "73/73 [==============================] - 235s - loss: 1.2378 - categorical_accuracy: 0.5644 - val_loss: 1.4140 - val_categorical_accuracy: 0.5172\n", "\n", "06:59:59 for Inception V3 to yield 56.4% training accuracy and 51.7% validation accuracy in 20 \n", "epochs (x6 training phases).\n", "\n", "Inception V3 run complete at Friday, 2017 October 20, 11:40 AM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# InceptionV3:\n", "run_inception_v3()\n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# 
allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "code", "execution_count": 24, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "ResNet50 run begun at Friday, 2017 October 20, 11:40 AM.\n", "\t[20 epochs (x6 passes) on extended FMA on GPU takes\n", "\tunknown (no similar runs found).]\n", "\n", "First-round training (training the classifier)...\n", "\n", "Training for epochs 1 to 7...\n", "Epoch 1/7\n", "73/73 [==============================] - 226s - loss: 1.5162 - categorical_accuracy: 0.4662 - val_loss: 2.6943 - val_categorical_accuracy: 0.2852\n", "Epoch 2/7\n", "73/73 [==============================] - 223s - loss: 1.4102 - categorical_accuracy: 0.5054 - val_loss: 1.6660 - val_categorical_accuracy: 0.4109\n", "Epoch 3/7\n", "73/73 [==============================] - 222s - loss: 1.3872 - categorical_accuracy: 0.5210 - val_loss: 1.4792 - val_categorical_accuracy: 0.5000\n", "Epoch 4/7\n", "73/73 [==============================] - 222s - loss: 1.3853 - categorical_accuracy: 0.5182 - val_loss: 1.4241 - val_categorical_accuracy: 0.5281\n", "Epoch 5/7\n", "73/73 [==============================] - 222s - loss: 1.3489 - categorical_accuracy: 0.5286 - val_loss: 1.4238 - val_categorical_accuracy: 0.5281\n", "Epoch 6/7\n", "73/73 [==============================] - 222s - loss: 1.3476 - categorical_accuracy: 0.5342 - val_loss: 1.4117 - val_categorical_accuracy: 0.5422\n", "Epoch 7/7\n", "73/73 [==============================] - 221s - loss: 1.3651 - categorical_accuracy: 0.5269 - val_loss: 1.4020 - val_categorical_accuracy: 0.5359\n", "\n", "Training for epochs 8 to 14...\n", "Epoch 8/14\n", "73/73 [==============================] - 225s - loss: 1.2910 - categorical_accuracy: 0.5561 - val_loss: 1.3920 - val_categorical_accuracy: 0.5359\n", 
"Epoch 9/14\n", "73/73 [==============================] - 219s - loss: 1.2743 - categorical_accuracy: 0.5594 - val_loss: 1.3816 - val_categorical_accuracy: 0.5312\n", "Epoch 10/14\n", "73/73 [==============================] - 222s - loss: 1.2846 - categorical_accuracy: 0.5563 - val_loss: 1.3969 - val_categorical_accuracy: 0.5320\n", "Epoch 11/14\n", "73/73 [==============================] - 218s - loss: 1.3081 - categorical_accuracy: 0.5466 - val_loss: 1.3830 - val_categorical_accuracy: 0.5328\n", "Epoch 12/14\n", "73/73 [==============================] - 220s - loss: 1.2914 - categorical_accuracy: 0.5515 - val_loss: 1.3866 - val_categorical_accuracy: 0.5266\n", "Epoch 13/14\n", "73/73 [==============================] - 220s - loss: 1.2916 - categorical_accuracy: 0.5545 - val_loss: 1.3975 - val_categorical_accuracy: 0.5398\n", "Epoch 14/14\n", "73/73 [==============================] - 219s - loss: 1.3161 - categorical_accuracy: 0.5422 - val_loss: 1.3797 - val_categorical_accuracy: 0.5359\n", "\n", "Training for epochs 15 to 20...\n", "Epoch 15/20\n", "73/73 [==============================] - 223s - loss: 1.2539 - categorical_accuracy: 0.5675 - val_loss: 1.3778 - val_categorical_accuracy: 0.5367\n", "Epoch 16/20\n", "73/73 [==============================] - 218s - loss: 1.2435 - categorical_accuracy: 0.5700 - val_loss: 1.3783 - val_categorical_accuracy: 0.5305\n", "Epoch 17/20\n", "73/73 [==============================] - 221s - loss: 1.2586 - categorical_accuracy: 0.5639 - val_loss: 1.3842 - val_categorical_accuracy: 0.5297\n", "Epoch 18/20\n", "73/73 [==============================] - 218s - loss: 1.2788 - categorical_accuracy: 0.5591 - val_loss: 1.3788 - val_categorical_accuracy: 0.5336\n", "Epoch 19/20\n", "73/73 [==============================] - 217s - loss: 1.2765 - categorical_accuracy: 0.5567 - val_loss: 1.3804 - val_categorical_accuracy: 0.5312\n", "Epoch 20/20\n", "73/73 [==============================] - 219s - loss: 1.2685 - categorical_accuracy: 0.5631 
- val_loss: 1.3852 - val_categorical_accuracy: 0.5383\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 161)...\n", "\n", "Training for epochs 21 to 27...\n", "Epoch 21/27\n", "73/73 [==============================] - 235s - loss: 1.2342 - categorical_accuracy: 0.5754 - val_loss: 1.3797 - val_categorical_accuracy: 0.5367\n", "Epoch 22/27\n", "73/73 [==============================] - 232s - loss: 1.2166 - categorical_accuracy: 0.5787 - val_loss: 1.3765 - val_categorical_accuracy: 0.5375\n", "Epoch 23/27\n", "73/73 [==============================] - 232s - loss: 1.2385 - categorical_accuracy: 0.5728 - val_loss: 1.3752 - val_categorical_accuracy: 0.5375\n", "Epoch 24/27\n", "73/73 [==============================] - 232s - loss: 1.2590 - categorical_accuracy: 0.5668 - val_loss: 1.3746 - val_categorical_accuracy: 0.5367\n", "Epoch 25/27\n", "73/73 [==============================] - 232s - loss: 1.2551 - categorical_accuracy: 0.5615 - val_loss: 1.3743 - val_categorical_accuracy: 0.5344\n", "Epoch 26/27\n", "73/73 [==============================] - 232s - loss: 1.2456 - categorical_accuracy: 0.5683 - val_loss: 1.3746 - val_categorical_accuracy: 0.5328\n", "Epoch 27/27\n", "73/73 [==============================] - 232s - loss: 1.2964 - categorical_accuracy: 0.5522 - val_loss: 1.3713 - val_categorical_accuracy: 0.5375\n", "\n", "Training for epochs 28 to 34...\n", "Epoch 28/34\n", "73/73 [==============================] - 235s - loss: 1.2314 - categorical_accuracy: 0.5765 - val_loss: 1.3721 - val_categorical_accuracy: 0.5352\n", "Epoch 29/34\n", "73/73 [==============================] - 232s - loss: 1.2185 - categorical_accuracy: 0.5767 - val_loss: 1.3712 - val_categorical_accuracy: 0.5352\n", "Epoch 30/34\n", "73/73 [==============================] - 233s - loss: 1.2337 - categorical_accuracy: 0.5722 - val_loss: 1.3710 - val_categorical_accuracy: 0.5398\n", "Epoch 31/34\n", "73/73 [==============================] - 231s - loss: 
1.2573 - categorical_accuracy: 0.5670 - val_loss: 1.3709 - val_categorical_accuracy: 0.5359\n", "Epoch 32/34\n", "73/73 [==============================] - 232s - loss: 1.2429 - categorical_accuracy: 0.5676 - val_loss: 1.3715 - val_categorical_accuracy: 0.5359\n", "Epoch 33/34\n", "73/73 [==============================] - 232s - loss: 1.2388 - categorical_accuracy: 0.5781 - val_loss: 1.3717 - val_categorical_accuracy: 0.5344\n", "Epoch 34/34\n", "73/73 [==============================] - 232s - loss: 1.2882 - categorical_accuracy: 0.5500 - val_loss: 1.3689 - val_categorical_accuracy: 0.5398\n", "\n", "Training for epochs 35 to 40...\n", "Epoch 35/40\n", "73/73 [==============================] - 234s - loss: 1.2246 - categorical_accuracy: 0.5841 - val_loss: 1.3698 - val_categorical_accuracy: 0.5352\n", "Epoch 36/40\n", "73/73 [==============================] - 232s - loss: 1.2123 - categorical_accuracy: 0.5805 - val_loss: 1.3688 - val_categorical_accuracy: 0.5391\n", "Epoch 37/40\n", "73/73 [==============================] - 233s - loss: 1.2296 - categorical_accuracy: 0.5741 - val_loss: 1.3694 - val_categorical_accuracy: 0.5414\n", "Epoch 38/40\n", "73/73 [==============================] - 233s - loss: 1.2547 - categorical_accuracy: 0.5692 - val_loss: 1.3690 - val_categorical_accuracy: 0.5367\n", "Epoch 39/40\n", "73/73 [==============================] - 233s - loss: 1.2440 - categorical_accuracy: 0.5704 - val_loss: 1.3694 - val_categorical_accuracy: 0.5383\n", "Epoch 40/40\n", "73/73 [==============================] - 233s - loss: 1.2340 - categorical_accuracy: 0.5735 - val_loss: 1.3694 - val_categorical_accuracy: 0.5367\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 151)...\n", "\n", "Training for epochs 41 to 47...\n", "Epoch 41/47\n", "73/73 [==============================] - 246s - loss: 1.2198 - categorical_accuracy: 0.5841 - val_loss: 1.3691 - val_categorical_accuracy: 0.5359\n", "Epoch 42/47\n", "73/73 
[==============================] - 242s - loss: 1.2140 - categorical_accuracy: 0.5757 - val_loss: 1.3677 - val_categorical_accuracy: 0.5391\n", "Epoch 43/47\n", "73/73 [==============================] - 243s - loss: 1.2264 - categorical_accuracy: 0.5731 - val_loss: 1.3673 - val_categorical_accuracy: 0.5406\n", "Epoch 44/47\n", "73/73 [==============================] - 241s - loss: 1.2433 - categorical_accuracy: 0.5710 - val_loss: 1.3665 - val_categorical_accuracy: 0.5406\n", "Epoch 45/47\n", "73/73 [==============================] - 243s - loss: 1.2415 - categorical_accuracy: 0.5711 - val_loss: 1.3664 - val_categorical_accuracy: 0.5391\n", "Epoch 46/47\n", "73/73 [==============================] - 243s - loss: 1.2348 - categorical_accuracy: 0.5732 - val_loss: 1.3663 - val_categorical_accuracy: 0.5383\n", "Epoch 47/47\n", "73/73 [==============================] - 243s - loss: 1.2836 - categorical_accuracy: 0.5543 - val_loss: 1.3632 - val_categorical_accuracy: 0.5437\n", "\n", "Training for epochs 48 to 54...\n", "Epoch 48/54\n", "73/73 [==============================] - 245s - loss: 1.2179 - categorical_accuracy: 0.5807 - val_loss: 1.3642 - val_categorical_accuracy: 0.5383\n", "Epoch 49/54\n", "73/73 [==============================] - 243s - loss: 1.2080 - categorical_accuracy: 0.5865 - val_loss: 1.3638 - val_categorical_accuracy: 0.5414\n", "Epoch 50/54\n", "73/73 [==============================] - 243s - loss: 1.2165 - categorical_accuracy: 0.5752 - val_loss: 1.3641 - val_categorical_accuracy: 0.5437\n", "Epoch 51/54\n", "73/73 [==============================] - 241s - loss: 1.2401 - categorical_accuracy: 0.5656 - val_loss: 1.3638 - val_categorical_accuracy: 0.5437\n", "Epoch 52/54\n", "73/73 [==============================] - 242s - loss: 1.2246 - categorical_accuracy: 0.5796 - val_loss: 1.3643 - val_categorical_accuracy: 0.5430\n", "Epoch 53/54\n", "73/73 [==============================] - 243s - loss: 1.2266 - categorical_accuracy: 0.5767 - val_loss: 1.3644 - 
val_categorical_accuracy: 0.5414\n", "Epoch 54/54\n", "73/73 [==============================] - 244s - loss: 1.2804 - categorical_accuracy: 0.5575 - val_loss: 1.3617 - val_categorical_accuracy: 0.5461\n", "\n", "Training for epochs 55 to 60...\n", "Epoch 55/60\n", "73/73 [==============================] - 245s - loss: 1.2087 - categorical_accuracy: 0.5860 - val_loss: 1.3622 - val_categorical_accuracy: 0.5422\n", "Epoch 56/60\n", "73/73 [==============================] - 242s - loss: 1.1972 - categorical_accuracy: 0.5854 - val_loss: 1.3621 - val_categorical_accuracy: 0.5437\n", "Epoch 57/60\n", "73/73 [==============================] - 243s - loss: 1.2164 - categorical_accuracy: 0.5808 - val_loss: 1.3622 - val_categorical_accuracy: 0.5453\n", "Epoch 58/60\n", "73/73 [==============================] - 242s - loss: 1.2359 - categorical_accuracy: 0.5742 - val_loss: 1.3626 - val_categorical_accuracy: 0.5422\n", "Epoch 59/60\n", "73/73 [==============================] - 244s - loss: 1.2245 - categorical_accuracy: 0.5791 - val_loss: 1.3630 - val_categorical_accuracy: 0.5430\n", "Epoch 60/60\n", "73/73 [==============================] - 243s - loss: 1.2162 - categorical_accuracy: 0.5794 - val_loss: 1.3630 - val_categorical_accuracy: 0.5422\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 139)...\n", "\n", "Training for epochs 61 to 67...\n", "Epoch 61/67\n", "73/73 [==============================] - 260s - loss: 1.2000 - categorical_accuracy: 0.5905 - val_loss: 1.3622 - val_categorical_accuracy: 0.5414\n", "Epoch 62/67\n", "73/73 [==============================] - 257s - loss: 1.1889 - categorical_accuracy: 0.5890 - val_loss: 1.3612 - val_categorical_accuracy: 0.5414\n", "Epoch 63/67\n", "73/73 [==============================] - 257s - loss: 1.2048 - categorical_accuracy: 0.5788 - val_loss: 1.3600 - val_categorical_accuracy: 0.5453\n", "Epoch 64/67\n", "73/73 [==============================] - 255s - loss: 1.2223 - 
categorical_accuracy: 0.5784 - val_loss: 1.3595 - val_categorical_accuracy: 0.5422\n", "Epoch 65/67\n", "73/73 [==============================] - 256s - loss: 1.2191 - categorical_accuracy: 0.5782 - val_loss: 1.3606 - val_categorical_accuracy: 0.5406\n", "Epoch 66/67\n", "73/73 [==============================] - 256s - loss: 1.2093 - categorical_accuracy: 0.5796 - val_loss: 1.3601 - val_categorical_accuracy: 0.5414\n", "Epoch 67/67\n", "73/73 [==============================] - 257s - loss: 1.2654 - categorical_accuracy: 0.5618 - val_loss: 1.3568 - val_categorical_accuracy: 0.5437\n", "\n", "Training for epochs 68 to 74...\n", "Epoch 68/74\n", "73/73 [==============================] - 258s - loss: 1.1852 - categorical_accuracy: 0.5941 - val_loss: 1.3572 - val_categorical_accuracy: 0.5453\n", "Epoch 69/74\n", "73/73 [==============================] - 256s - loss: 1.1719 - categorical_accuracy: 0.5960 - val_loss: 1.3571 - val_categorical_accuracy: 0.5430\n", "Epoch 70/74\n", "73/73 [==============================] - 257s - loss: 1.1857 - categorical_accuracy: 0.5895 - val_loss: 1.3571 - val_categorical_accuracy: 0.5445\n", "Epoch 71/74\n", "73/73 [==============================] - 255s - loss: 1.2170 - categorical_accuracy: 0.5788 - val_loss: 1.3570 - val_categorical_accuracy: 0.5414\n", "Epoch 72/74\n", "73/73 [==============================] - 257s - loss: 1.1990 - categorical_accuracy: 0.5850 - val_loss: 1.3579 - val_categorical_accuracy: 0.5430\n", "Epoch 73/74\n", "73/73 [==============================] - 258s - loss: 1.1920 - categorical_accuracy: 0.5889 - val_loss: 1.3566 - val_categorical_accuracy: 0.5422\n", "Epoch 74/74\n", "73/73 [==============================] - 259s - loss: 1.2553 - categorical_accuracy: 0.5652 - val_loss: 1.3537 - val_categorical_accuracy: 0.5477\n", "\n", "Training for epochs 75 to 80...\n", "Epoch 75/80\n", "73/73 [==============================] - 259s - loss: 1.1746 - categorical_accuracy: 0.5975 - val_loss: 1.3544 - 
val_categorical_accuracy: 0.5430\n", "Epoch 76/80\n", "73/73 [==============================] - 256s - loss: 1.1616 - categorical_accuracy: 0.5977 - val_loss: 1.3545 - val_categorical_accuracy: 0.5453\n", "Epoch 77/80\n", "73/73 [==============================] - 257s - loss: 1.1806 - categorical_accuracy: 0.5890 - val_loss: 1.3541 - val_categorical_accuracy: 0.5453\n", "Epoch 78/80\n", "73/73 [==============================] - 255s - loss: 1.1971 - categorical_accuracy: 0.5823 - val_loss: 1.3541 - val_categorical_accuracy: 0.5445\n", "Epoch 79/80\n", "73/73 [==============================] - 257s - loss: 1.1871 - categorical_accuracy: 0.5864 - val_loss: 1.3545 - val_categorical_accuracy: 0.5422\n", "Epoch 80/80\n", "73/73 [==============================] - 257s - loss: 1.1788 - categorical_accuracy: 0.5939 - val_loss: 1.3541 - val_categorical_accuracy: 0.5453\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 129)...\n", "\n", "Training for epochs 81 to 87...\n", "Epoch 81/87\n", "73/73 [==============================] - 277s - loss: 1.1616 - categorical_accuracy: 0.6008 - val_loss: 1.3540 - val_categorical_accuracy: 0.5430\n", "Epoch 82/87\n", "73/73 [==============================] - 271s - loss: 1.1514 - categorical_accuracy: 0.6048 - val_loss: 1.3529 - val_categorical_accuracy: 0.5469\n", "Epoch 83/87\n", "73/73 [==============================] - 269s - loss: 1.1651 - categorical_accuracy: 0.5995 - val_loss: 1.3528 - val_categorical_accuracy: 0.5461\n", "Epoch 84/87\n", "73/73 [==============================] - 270s - loss: 1.1826 - categorical_accuracy: 0.5909 - val_loss: 1.3530 - val_categorical_accuracy: 0.5445\n", "Epoch 85/87\n", "73/73 [==============================] - 270s - loss: 1.1720 - categorical_accuracy: 0.5950 - val_loss: 1.3533 - val_categorical_accuracy: 0.5422\n", "Epoch 86/87\n", "73/73 [==============================] - 270s - loss: 1.1760 - categorical_accuracy: 0.5946 - val_loss: 1.3524 - 
val_categorical_accuracy: 0.5453\n", "Epoch 87/87\n", "73/73 [==============================] - 271s - loss: 1.2346 - categorical_accuracy: 0.5689 - val_loss: 1.3493 - val_categorical_accuracy: 0.5508\n", "\n", "Training for epochs 88 to 94...\n", "Epoch 88/94\n", "73/73 [==============================] - 273s - loss: 1.1494 - categorical_accuracy: 0.6098 - val_loss: 1.3498 - val_categorical_accuracy: 0.5453\n", "Epoch 89/94\n", "73/73 [==============================] - 272s - loss: 1.1345 - categorical_accuracy: 0.6087 - val_loss: 1.3507 - val_categorical_accuracy: 0.5492\n", "Epoch 90/94\n", "73/73 [==============================] - 271s - loss: 1.1449 - categorical_accuracy: 0.6055 - val_loss: 1.3502 - val_categorical_accuracy: 0.5445\n", "Epoch 91/94\n", "73/73 [==============================] - 269s - loss: 1.1619 - categorical_accuracy: 0.5955 - val_loss: 1.3502 - val_categorical_accuracy: 0.5461\n", "Epoch 92/94\n", "73/73 [==============================] - 271s - loss: 1.1521 - categorical_accuracy: 0.6012 - val_loss: 1.3513 - val_categorical_accuracy: 0.5437\n", "Epoch 93/94\n", "73/73 [==============================] - 272s - loss: 1.1552 - categorical_accuracy: 0.5964 - val_loss: 1.3511 - val_categorical_accuracy: 0.5445\n", "Epoch 94/94\n", "73/73 [==============================] - 272s - loss: 1.2141 - categorical_accuracy: 0.5764 - val_loss: 1.3478 - val_categorical_accuracy: 0.5477\n", "\n", "Training for epochs 95 to 100...\n", "Epoch 95/100\n", "73/73 [==============================] - 272s - loss: 1.1279 - categorical_accuracy: 0.6102 - val_loss: 1.3486 - val_categorical_accuracy: 0.5453\n", "Epoch 96/100\n", "73/73 [==============================] - 271s - loss: 1.1178 - categorical_accuracy: 0.6121 - val_loss: 1.3489 - val_categorical_accuracy: 0.5461\n", "Epoch 97/100\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "73/73 [==============================] - 272s - loss: 1.1322 - categorical_accuracy: 0.6094 - val_loss: 1.3491 - 
val_categorical_accuracy: 0.5469\n", "Epoch 98/100\n", "73/73 [==============================] - 270s - loss: 1.1515 - categorical_accuracy: 0.6032 - val_loss: 1.3488 - val_categorical_accuracy: 0.5437\n", "Epoch 99/100\n", "73/73 [==============================] - 271s - loss: 1.1381 - categorical_accuracy: 0.6088 - val_loss: 1.3496 - val_categorical_accuracy: 0.5422\n", "Epoch 100/100\n", "73/73 [==============================] - 271s - loss: 1.1362 - categorical_accuracy: 0.6093 - val_loss: 1.3487 - val_categorical_accuracy: 0.5469\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 119)...\n", "\n", "Training for epochs 101 to 107...\n", "Epoch 101/107\n", "73/73 [==============================] - 287s - loss: 1.1094 - categorical_accuracy: 0.6183 - val_loss: 1.3496 - val_categorical_accuracy: 0.5469\n", "Epoch 102/107\n", "73/73 [==============================] - 282s - loss: 1.1053 - categorical_accuracy: 0.6174 - val_loss: 1.3492 - val_categorical_accuracy: 0.5445\n", "Epoch 103/107\n", "73/73 [==============================] - 282s - loss: 1.1163 - categorical_accuracy: 0.6194 - val_loss: 1.3493 - val_categorical_accuracy: 0.5437\n", "Epoch 104/107\n", "73/73 [==============================] - 282s - loss: 1.1324 - categorical_accuracy: 0.6087 - val_loss: 1.3491 - val_categorical_accuracy: 0.5414\n", "Epoch 105/107\n", "73/73 [==============================] - 283s - loss: 1.1238 - categorical_accuracy: 0.6097 - val_loss: 1.3491 - val_categorical_accuracy: 0.5422\n", "Epoch 106/107\n", "73/73 [==============================] - 283s - loss: 1.1196 - categorical_accuracy: 0.6110 - val_loss: 1.3482 - val_categorical_accuracy: 0.5453\n", "Epoch 107/107\n", "73/73 [==============================] - 283s - loss: 1.1954 - categorical_accuracy: 0.5850 - val_loss: 1.3440 - val_categorical_accuracy: 0.5445\n", "\n", "Training for epochs 108 to 114...\n", "Epoch 108/114\n", "73/73 [==============================] - 285s - loss: 
1.0873 - categorical_accuracy: 0.6281 - val_loss: 1.3454 - val_categorical_accuracy: 0.5453\n", "Epoch 109/114\n", "73/73 [==============================] - 282s - loss: 1.0868 - categorical_accuracy: 0.6302 - val_loss: 1.3463 - val_categorical_accuracy: 0.5430\n", "Epoch 110/114\n", "73/73 [==============================] - 284s - loss: 1.0954 - categorical_accuracy: 0.6208 - val_loss: 1.3470 - val_categorical_accuracy: 0.5437\n", "Epoch 111/114\n", "73/73 [==============================] - 281s - loss: 1.1157 - categorical_accuracy: 0.6176 - val_loss: 1.3467 - val_categorical_accuracy: 0.5406\n", "Epoch 112/114\n", "73/73 [==============================] - 284s - loss: 1.1052 - categorical_accuracy: 0.6146 - val_loss: 1.3474 - val_categorical_accuracy: 0.5437\n", "Epoch 113/114\n", "73/73 [==============================] - 283s - loss: 1.0998 - categorical_accuracy: 0.6195 - val_loss: 1.3471 - val_categorical_accuracy: 0.5461\n", "Epoch 114/114\n", "73/73 [==============================] - 283s - loss: 1.1775 - categorical_accuracy: 0.5868 - val_loss: 1.3437 - val_categorical_accuracy: 0.5430\n", "\n", "Training for epochs 115 to 120...\n", "Epoch 115/120\n", "73/73 [==============================] - 286s - loss: 1.0714 - categorical_accuracy: 0.6315 - val_loss: 1.3459 - val_categorical_accuracy: 0.5461\n", "Epoch 116/120\n", "73/73 [==============================] - 282s - loss: 1.0624 - categorical_accuracy: 0.6333 - val_loss: 1.3469 - val_categorical_accuracy: 0.5469\n", "Epoch 117/120\n", "73/73 [==============================] - 284s - loss: 1.0759 - categorical_accuracy: 0.6279 - val_loss: 1.3473 - val_categorical_accuracy: 0.5469\n", "Epoch 118/120\n", "73/73 [==============================] - 283s - loss: 1.1022 - categorical_accuracy: 0.6179 - val_loss: 1.3480 - val_categorical_accuracy: 0.5453\n", "Epoch 119/120\n", "73/73 [==============================] - 282s - loss: 1.0894 - categorical_accuracy: 0.6222 - val_loss: 1.3493 - val_categorical_accuracy: 
0.5437\n", "Epoch 120/120\n", "73/73 [==============================] - 282s - loss: 1.0849 - categorical_accuracy: 0.6254 - val_loss: 1.3490 - val_categorical_accuracy: 0.5461\n", "\n", "08:24:01 for ResNet50 to yield 62.5% training accuracy and 54.6% validation accuracy in 20 \n", "epochs (x6 training phases).\n", "\n", "ResNet50 run complete at Friday, 2017 October 20, 8:04 PM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# ResNet50:\n", "run_resnet50()\n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Best optimizer (0.01, 0.02, 0.85, 1e-10) loaded from file.\n", "Using optimizer (0.01, 0.02, 0.85, 1e-10)...\n", "VGG16 run begun at Sunday, 2017 October 22, 5:34 AM.\n", "\t[20 epochs (x6 passes) on extended FMA on GPU takes\n", "\tunknown (no similar runs found).]\n", "\n", "First-round training (training the classifier)...\n", "\n", "Training for epochs 1 to 3...\n", "Epoch 1/3\n", "73/73 [==============================] - 364s - loss: 1.5456 - categorical_accuracy: 0.4560 - val_loss: 1.9779 - val_categorical_accuracy: 0.4820\n", "Epoch 2/3\n", "73/73 [==============================] - 364s - loss: 1.4396 - categorical_accuracy: 0.4991 - val_loss: 1.5883 - val_categorical_accuracy: 0.4617\n", "Epoch 3/3\n", "73/73 
[==============================] - 364s - loss: 1.4133 - categorical_accuracy: 0.5094 - val_loss: 1.4578 - val_categorical_accuracy: 0.4844\n", "\n", "Training for epochs 4 to 6...\n", "Epoch 4/6\n", "73/73 [==============================] - 363s - loss: 1.4040 - categorical_accuracy: 0.5117 - val_loss: 1.5342 - val_categorical_accuracy: 0.4789\n", "Epoch 5/6\n", "73/73 [==============================] - 363s - loss: 1.3774 - categorical_accuracy: 0.5261 - val_loss: 1.4043 - val_categorical_accuracy: 0.5156\n", "Epoch 6/6\n", "73/73 [==============================] - 362s - loss: 1.3658 - categorical_accuracy: 0.5287 - val_loss: 1.3991 - val_categorical_accuracy: 0.5203\n", "\n", "Training for epochs 7 to 9...\n", "Epoch 7/9\n", "73/73 [==============================] - 365s - loss: 1.3705 - categorical_accuracy: 0.5244 - val_loss: 1.4731 - val_categorical_accuracy: 0.4883\n", "Epoch 8/9\n", "73/73 [==============================] - 363s - loss: 1.3485 - categorical_accuracy: 0.5308 - val_loss: 1.3847 - val_categorical_accuracy: 0.5133\n", "Epoch 9/9\n", "73/73 [==============================] - 364s - loss: 1.3561 - categorical_accuracy: 0.5338 - val_loss: 1.3864 - val_categorical_accuracy: 0.5250\n", "\n", "Training for epochs 10 to 12...\n", "Epoch 10/12\n", "73/73 [==============================] - 364s - loss: 1.3536 - categorical_accuracy: 0.5302 - val_loss: 1.4296 - val_categorical_accuracy: 0.4969\n", "Epoch 11/12\n", "73/73 [==============================] - 364s - loss: 1.3384 - categorical_accuracy: 0.5275 - val_loss: 1.3827 - val_categorical_accuracy: 0.5203\n", "Epoch 12/12\n", "73/73 [==============================] - 362s - loss: 1.3462 - categorical_accuracy: 0.5366 - val_loss: 1.3867 - val_categorical_accuracy: 0.5188\n", "\n", "Training for epochs 13 to 15...\n", "Epoch 13/15\n", "73/73 [==============================] - 365s - loss: 1.3460 - categorical_accuracy: 0.5372 - val_loss: 1.4258 - val_categorical_accuracy: 0.5039\n", "Epoch 14/15\n", 
"73/73 [==============================] - 362s - loss: 1.3236 - categorical_accuracy: 0.5406 - val_loss: 1.3714 - val_categorical_accuracy: 0.5266\n", "Epoch 15/15\n", "73/73 [==============================] - 363s - loss: 1.3371 - categorical_accuracy: 0.5381 - val_loss: 1.3707 - val_categorical_accuracy: 0.5305\n", "\n", "Training for epochs 16 to 18...\n", "Epoch 16/18\n", "73/73 [==============================] - 363s - loss: 1.3439 - categorical_accuracy: 0.5365 - val_loss: 1.3917 - val_categorical_accuracy: 0.5203\n", "Epoch 17/18\n", "73/73 [==============================] - 360s - loss: 1.3220 - categorical_accuracy: 0.5410 - val_loss: 1.3630 - val_categorical_accuracy: 0.5414\n", "Epoch 18/18\n", "73/73 [==============================] - 359s - loss: 1.3239 - categorical_accuracy: 0.5469 - val_loss: 1.3711 - val_categorical_accuracy: 0.5367\n", "\n", "Training for epochs 19 to 20...\n", "Epoch 19/20\n", "73/73 [==============================] - 363s - loss: 1.3310 - categorical_accuracy: 0.5398 - val_loss: 1.3862 - val_categorical_accuracy: 0.5211\n", "Epoch 20/20\n", "73/73 [==============================] - 363s - loss: 1.3168 - categorical_accuracy: 0.5458 - val_loss: 1.3605 - val_categorical_accuracy: 0.5414\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 17)...\n", "\n", "Training for epochs 21 to 23...\n", "Epoch 21/23\n", "73/73 [==============================] - 376s - loss: 1.3273 - categorical_accuracy: 0.5401 - val_loss: 1.3944 - val_categorical_accuracy: 0.5297\n", "Epoch 22/23\n", "73/73 [==============================] - 373s - loss: 1.3003 - categorical_accuracy: 0.5489 - val_loss: 1.3569 - val_categorical_accuracy: 0.5367\n", "Epoch 23/23\n", "73/73 [==============================] - 373s - loss: 1.3062 - categorical_accuracy: 0.5490 - val_loss: 1.3725 - val_categorical_accuracy: 0.5250\n", "\n", "Training for epochs 24 to 26...\n", "Epoch 24/26\n", "73/73 [==============================] - 375s 
- loss: 1.3071 - categorical_accuracy: 0.5474 - val_loss: 1.3748 - val_categorical_accuracy: 0.5328\n", "Epoch 25/26\n", "73/73 [==============================] - 373s - loss: 1.2768 - categorical_accuracy: 0.5601 - val_loss: 1.3412 - val_categorical_accuracy: 0.5391\n", "Epoch 26/26\n", "73/73 [==============================] - 374s - loss: 1.2906 - categorical_accuracy: 0.5588 - val_loss: 1.3364 - val_categorical_accuracy: 0.5437\n", "\n", "Training for epochs 27 to 29...\n", "Epoch 27/29\n", "73/73 [==============================] - 375s - loss: 1.2933 - categorical_accuracy: 0.5564 - val_loss: 1.3678 - val_categorical_accuracy: 0.5305\n", "Epoch 28/29\n", "73/73 [==============================] - 373s - loss: 1.2675 - categorical_accuracy: 0.5605 - val_loss: 1.3359 - val_categorical_accuracy: 0.5422\n", "Epoch 29/29\n", "73/73 [==============================] - 373s - loss: 1.2771 - categorical_accuracy: 0.5635 - val_loss: 1.3476 - val_categorical_accuracy: 0.5391\n", "\n", "Training for epochs 30 to 32...\n", "Epoch 30/32\n", "73/73 [==============================] - 375s - loss: 1.2838 - categorical_accuracy: 0.5580 - val_loss: 1.3477 - val_categorical_accuracy: 0.5453\n", "Epoch 31/32\n", "73/73 [==============================] - 373s - loss: 1.2543 - categorical_accuracy: 0.5729 - val_loss: 1.3501 - val_categorical_accuracy: 0.5453\n", "Epoch 32/32\n", "73/73 [==============================] - 374s - loss: 1.2759 - categorical_accuracy: 0.5626 - val_loss: 1.3592 - val_categorical_accuracy: 0.5359\n", "\n", "Training for epochs 33 to 35...\n", "Epoch 33/35\n", "73/73 [==============================] - 374s - loss: 1.2730 - categorical_accuracy: 0.5623 - val_loss: 1.3487 - val_categorical_accuracy: 0.5328\n", "Epoch 34/35\n", "73/73 [==============================] - 373s - loss: 1.2457 - categorical_accuracy: 0.5732 - val_loss: 1.3487 - val_categorical_accuracy: 0.5430\n", "Epoch 35/35\n", "73/73 [==============================] - 373s - loss: 1.2650 - 
categorical_accuracy: 0.5676 - val_loss: 1.3229 - val_categorical_accuracy: 0.5508\n", "\n", "Training for epochs 36 to 38...\n", "Epoch 36/38\n", "73/73 [==============================] - 375s - loss: 1.2705 - categorical_accuracy: 0.5613 - val_loss: 1.3720 - val_categorical_accuracy: 0.5375\n", "Epoch 37/38\n", "73/73 [==============================] - 373s - loss: 1.2323 - categorical_accuracy: 0.5790 - val_loss: 1.3380 - val_categorical_accuracy: 0.5500\n", "Epoch 38/38\n", "73/73 [==============================] - 374s - loss: 1.2610 - categorical_accuracy: 0.5656 - val_loss: 1.3161 - val_categorical_accuracy: 0.5586\n", "\n", "Training for epochs 39 to 40...\n", "Epoch 39/40\n", "73/73 [==============================] - 375s - loss: 1.2518 - categorical_accuracy: 0.5692 - val_loss: 1.3281 - val_categorical_accuracy: 0.5461\n", "Epoch 40/40\n", "73/73 [==============================] - 368s - loss: 1.2291 - categorical_accuracy: 0.5787 - val_loss: 1.3415 - val_categorical_accuracy: 0.5477\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 15)...\n", "\n", "Training for epochs 41 to 43...\n", "Epoch 41/43\n", "73/73 [==============================] - 406s - loss: 1.2598 - categorical_accuracy: 0.5664 - val_loss: 1.3812 - val_categorical_accuracy: 0.5352\n", "Epoch 42/43\n", "73/73 [==============================] - 404s - loss: 1.2292 - categorical_accuracy: 0.5753 - val_loss: 1.3835 - val_categorical_accuracy: 0.5523\n", "Epoch 43/43\n", "73/73 [==============================] - 404s - loss: 1.2463 - categorical_accuracy: 0.5702 - val_loss: 1.3095 - val_categorical_accuracy: 0.5523\n", "\n", "Training for epochs 44 to 46...\n", "Epoch 44/46\n", "73/73 [==============================] - 406s - loss: 1.2319 - categorical_accuracy: 0.5779 - val_loss: 1.3409 - val_categorical_accuracy: 0.5563\n", "Epoch 45/46\n", "73/73 [==============================] - 404s - loss: 1.2060 - categorical_accuracy: 0.5885 - val_loss: 1.3723 
- val_categorical_accuracy: 0.5516\n", "Epoch 46/46\n", "73/73 [==============================] - 404s - loss: 1.2288 - categorical_accuracy: 0.5787 - val_loss: 1.3202 - val_categorical_accuracy: 0.5625\n", "\n", "Training for epochs 47 to 49...\n", "Epoch 47/49\n", "73/73 [==============================] - 406s - loss: 1.2224 - categorical_accuracy: 0.5790 - val_loss: 1.3843 - val_categorical_accuracy: 0.5359\n", "Epoch 48/49\n", "73/73 [==============================] - 404s - loss: 1.1935 - categorical_accuracy: 0.5950 - val_loss: 1.3437 - val_categorical_accuracy: 0.5641\n", "Epoch 49/49\n", "73/73 [==============================] - 405s - loss: 1.2144 - categorical_accuracy: 0.5795 - val_loss: 1.2827 - val_categorical_accuracy: 0.5680\n", "\n", "Training for epochs 50 to 52...\n", "Epoch 50/52\n", "73/73 [==============================] - 406s - loss: 1.2074 - categorical_accuracy: 0.5876 - val_loss: 1.3719 - val_categorical_accuracy: 0.5508\n", "Epoch 51/52\n", "73/73 [==============================] - 405s - loss: 1.1807 - categorical_accuracy: 0.6004 - val_loss: 1.3722 - val_categorical_accuracy: 0.5570\n", "Epoch 52/52\n", "73/73 [==============================] - 404s - loss: 1.1986 - categorical_accuracy: 0.5899 - val_loss: 1.2894 - val_categorical_accuracy: 0.5695\n", "\n", "Training for epochs 53 to 55...\n", "Epoch 53/55\n", "73/73 [==============================] - 407s - loss: 1.1939 - categorical_accuracy: 0.5908 - val_loss: 1.3930 - val_categorical_accuracy: 0.5422\n", "Epoch 54/55\n", "73/73 [==============================] - 405s - loss: 1.1644 - categorical_accuracy: 0.6067 - val_loss: 1.3915 - val_categorical_accuracy: 0.5570\n", "Epoch 55/55\n", "73/73 [==============================] - 405s - loss: 1.1820 - categorical_accuracy: 0.5913 - val_loss: 1.2948 - val_categorical_accuracy: 0.5648\n", "\n", "Training for epochs 56 to 58...\n", "Epoch 56/58\n", "73/73 [==============================] - 406s - loss: 1.1803 - categorical_accuracy: 
0.6015 - val_loss: 1.3783 - val_categorical_accuracy: 0.5414\n", "Epoch 57/58\n", "73/73 [==============================] - 405s - loss: 1.1541 - categorical_accuracy: 0.6084 - val_loss: 1.3375 - val_categorical_accuracy: 0.5633\n", "Epoch 58/58\n", "73/73 [==============================] - 405s - loss: 1.1693 - categorical_accuracy: 0.5971 - val_loss: 1.2851 - val_categorical_accuracy: 0.5734\n", "\n", "Training for epochs 59 to 60...\n", "Epoch 59/60\n", "73/73 [==============================] - 406s - loss: 1.1747 - categorical_accuracy: 0.6001 - val_loss: 1.3736 - val_categorical_accuracy: 0.5430\n", "Epoch 60/60\n", "73/73 [==============================] - 405s - loss: 1.1346 - categorical_accuracy: 0.6140 - val_loss: 1.3426 - val_categorical_accuracy: 0.5523\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 13)...\n", "\n", "Training for epochs 61 to 63...\n", "Epoch 61/63\n", "73/73 [==============================] - 448s - loss: 1.1702 - categorical_accuracy: 0.6001 - val_loss: 1.3292 - val_categorical_accuracy: 0.5547\n", "Epoch 62/63\n", "73/73 [==============================] - 447s - loss: 1.1348 - categorical_accuracy: 0.6149 - val_loss: 1.3473 - val_categorical_accuracy: 0.5602\n", "Epoch 63/63\n", "73/73 [==============================] - 446s - loss: 1.1720 - categorical_accuracy: 0.6017 - val_loss: 1.2599 - val_categorical_accuracy: 0.5703\n", "\n", "Training for epochs 64 to 66...\n", "Epoch 64/66\n", "73/73 [==============================] - 448s - loss: 1.1516 - categorical_accuracy: 0.6061 - val_loss: 1.4122 - val_categorical_accuracy: 0.5469\n", "Epoch 65/66\n", "73/73 [==============================] - 446s - loss: 1.1134 - categorical_accuracy: 0.6230 - val_loss: 1.3086 - val_categorical_accuracy: 0.5625\n", "Epoch 66/66\n", "73/73 [==============================] - 447s - loss: 1.1489 - categorical_accuracy: 0.6074 - val_loss: 1.2611 - val_categorical_accuracy: 0.5742\n", "\n", "Training for 
epochs 67 to 69...\n", "Epoch 67/69\n", "73/73 [==============================] - 448s - loss: 1.1404 - categorical_accuracy: 0.6091 - val_loss: 1.3628 - val_categorical_accuracy: 0.5563\n", "Epoch 68/69\n", "73/73 [==============================] - 447s - loss: 1.1103 - categorical_accuracy: 0.6259 - val_loss: 1.4603 - val_categorical_accuracy: 0.5445\n", "Epoch 69/69\n", "73/73 [==============================] - 447s - loss: 1.1365 - categorical_accuracy: 0.6082 - val_loss: 1.2851 - val_categorical_accuracy: 0.5773\n", "\n", "Training for epochs 70 to 72...\n", "Epoch 70/72\n", "73/73 [==============================] - 448s - loss: 1.1280 - categorical_accuracy: 0.6173 - val_loss: 1.3243 - val_categorical_accuracy: 0.5523\n", "Epoch 71/72\n", "73/73 [==============================] - 446s - loss: 1.0875 - categorical_accuracy: 0.6281 - val_loss: 1.3683 - val_categorical_accuracy: 0.5594\n", "Epoch 72/72\n", "73/73 [==============================] - 447s - loss: 1.1169 - categorical_accuracy: 0.6133 - val_loss: 1.3283 - val_categorical_accuracy: 0.5555\n", "\n", "Training for epochs 73 to 75...\n", "Epoch 73/75\n", "73/73 [==============================] - 447s - loss: 1.1145 - categorical_accuracy: 0.6174 - val_loss: 1.4411 - val_categorical_accuracy: 0.5406\n", "Epoch 74/75\n", "73/73 [==============================] - 445s - loss: 1.0819 - categorical_accuracy: 0.6356 - val_loss: 1.4308 - val_categorical_accuracy: 0.5305\n", "Epoch 75/75\n", "73/73 [==============================] - 445s - loss: 1.1060 - categorical_accuracy: 0.6199 - val_loss: 1.2649 - val_categorical_accuracy: 0.5789\n", "\n", "Training for epochs 76 to 78...\n", "Epoch 76/78\n", "73/73 [==============================] - 447s - loss: 1.1039 - categorical_accuracy: 0.6229 - val_loss: 1.3409 - val_categorical_accuracy: 0.5453\n", "Epoch 77/78\n", "73/73 [==============================] - 446s - loss: 1.0633 - categorical_accuracy: 0.6406 - val_loss: 1.3626 - val_categorical_accuracy: 0.5570\n", 
"Epoch 78/78\n", "73/73 [==============================] - 447s - loss: 1.0913 - categorical_accuracy: 0.6232 - val_loss: 1.2672 - val_categorical_accuracy: 0.5789\n", "\n", "Training for epochs 79 to 80...\n", "Epoch 79/80\n", "73/73 [==============================] - 449s - loss: 1.0853 - categorical_accuracy: 0.6354 - val_loss: 1.3812 - val_categorical_accuracy: 0.5508\n", "Epoch 80/80\n", "73/73 [==============================] - 447s - loss: 1.0508 - categorical_accuracy: 0.6426 - val_loss: 1.3837 - val_categorical_accuracy: 0.5437\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 11)...\n", "\n", "Training for epochs 81 to 83...\n", "Epoch 81/83\n", "73/73 [==============================] - 566s - loss: 1.0752 - categorical_accuracy: 0.6382 - val_loss: 1.3113 - val_categorical_accuracy: 0.5633\n", "Epoch 82/83\n", "73/73 [==============================] - 563s - loss: 1.0483 - categorical_accuracy: 0.6474 - val_loss: 1.3727 - val_categorical_accuracy: 0.5477\n", "Epoch 83/83\n", "73/73 [==============================] - 564s - loss: 1.1045 - categorical_accuracy: 0.6201 - val_loss: 1.2712 - val_categorical_accuracy: 0.5609\n", "\n", "Training for epochs 84 to 86...\n", "Epoch 84/86\n", "73/73 [==============================] - 565s - loss: 1.0702 - categorical_accuracy: 0.6382 - val_loss: 1.3634 - val_categorical_accuracy: 0.5383\n", "Epoch 85/86\n", "73/73 [==============================] - 563s - loss: 1.0348 - categorical_accuracy: 0.6479 - val_loss: 1.3014 - val_categorical_accuracy: 0.5719\n", "Epoch 86/86\n", "73/73 [==============================] - 564s - loss: 1.0772 - categorical_accuracy: 0.6323 - val_loss: 1.2760 - val_categorical_accuracy: 0.5656\n", "\n", "Training for epochs 87 to 89...\n", "Epoch 87/89\n", "73/73 [==============================] - 566s - loss: 1.0578 - categorical_accuracy: 0.6418 - val_loss: 1.3790 - val_categorical_accuracy: 0.5375\n", "Epoch 88/89\n", "73/73 
[==============================] - 563s - loss: 1.0154 - categorical_accuracy: 0.6591 - val_loss: 1.3472 - val_categorical_accuracy: 0.5445\n", "Epoch 89/89\n", "73/73 [==============================] - 563s - loss: 1.0631 - categorical_accuracy: 0.6322 - val_loss: 1.2953 - val_categorical_accuracy: 0.5594\n", "\n", "Training for epochs 90 to 92...\n", "Epoch 90/92\n", "73/73 [==============================] - 565s - loss: 1.0367 - categorical_accuracy: 0.6463 - val_loss: 1.3295 - val_categorical_accuracy: 0.5492\n", "Epoch 91/92\n", "73/73 [==============================] - 562s - loss: 1.0014 - categorical_accuracy: 0.6620 - val_loss: 1.3224 - val_categorical_accuracy: 0.5687\n", "Epoch 92/92\n", "73/73 [==============================] - 563s - loss: 1.0431 - categorical_accuracy: 0.6438 - val_loss: 1.2833 - val_categorical_accuracy: 0.5719\n", "\n", "Training for epochs 93 to 95...\n", "Epoch 93/95\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "73/73 [==============================] - 566s - loss: 1.0268 - categorical_accuracy: 0.6535 - val_loss: 1.3800 - val_categorical_accuracy: 0.5406\n", "Epoch 94/95\n", "73/73 [==============================] - 563s - loss: 0.9832 - categorical_accuracy: 0.6707 - val_loss: 1.3812 - val_categorical_accuracy: 0.5367\n", "Epoch 95/95\n", "73/73 [==============================] - 564s - loss: 1.0237 - categorical_accuracy: 0.6480 - val_loss: 1.2899 - val_categorical_accuracy: 0.5734\n", "\n", "Training for epochs 96 to 98...\n", "Epoch 96/98\n", "73/73 [==============================] - 564s - loss: 1.0100 - categorical_accuracy: 0.6581 - val_loss: 1.3179 - val_categorical_accuracy: 0.5586\n", "Epoch 97/98\n", "73/73 [==============================] - 563s - loss: 0.9612 - categorical_accuracy: 0.6752 - val_loss: 1.3812 - val_categorical_accuracy: 0.5539\n", "Epoch 98/98\n", "73/73 [==============================] - 562s - loss: 1.0088 - categorical_accuracy: 0.6510 - val_loss: 1.2936 - 
val_categorical_accuracy: 0.5734\n", "\n", "Training for epochs 99 to 100...\n", "Epoch 99/100\n", "73/73 [==============================] - 565s - loss: 0.9865 - categorical_accuracy: 0.6676 - val_loss: 1.3716 - val_categorical_accuracy: 0.5398\n", "Epoch 100/100\n", "73/73 [==============================] - 563s - loss: 0.9456 - categorical_accuracy: 0.6809 - val_loss: 1.4441 - val_categorical_accuracy: 0.5469\n", "\n", "\n", "Further training (refining convolutional blocks, starting with\n", "\tlayer 8)...\n", "\n", "Training for epochs 101 to 103...\n", "Epoch 101/103\n", "73/73 [==============================] - 699s - loss: 0.9656 - categorical_accuracy: 0.6734 - val_loss: 1.4785 - val_categorical_accuracy: 0.5242\n", "Epoch 102/103\n", "73/73 [==============================] - 696s - loss: 0.9337 - categorical_accuracy: 0.6860 - val_loss: 1.3964 - val_categorical_accuracy: 0.5523\n", "Epoch 103/103\n", "73/73 [==============================] - 696s - loss: 1.0039 - categorical_accuracy: 0.6546 - val_loss: 1.3150 - val_categorical_accuracy: 0.5625\n", "\n", "Training for epochs 104 to 106...\n", "Epoch 104/106\n", "73/73 [==============================] - 698s - loss: 0.9474 - categorical_accuracy: 0.6800 - val_loss: 1.3328 - val_categorical_accuracy: 0.5656\n", "Epoch 105/106\n", "73/73 [==============================] - 697s - loss: 0.9111 - categorical_accuracy: 0.6949 - val_loss: 1.3300 - val_categorical_accuracy: 0.5609\n", "Epoch 106/106\n", "73/73 [==============================] - 697s - loss: 0.9804 - categorical_accuracy: 0.6614 - val_loss: 1.3433 - val_categorical_accuracy: 0.5523\n", "\n", "Training for epochs 107 to 109...\n", "Epoch 107/109\n", "73/73 [==============================] - 699s - loss: 0.9277 - categorical_accuracy: 0.6857 - val_loss: 1.3589 - val_categorical_accuracy: 0.5563\n", "Epoch 108/109\n", "73/73 [==============================] - 696s - loss: 0.8918 - categorical_accuracy: 0.7015 - val_loss: 1.5011 - 
val_categorical_accuracy: 0.5344\n", "Epoch 109/109\n", "73/73 [==============================] - 695s - loss: 0.9578 - categorical_accuracy: 0.6638 - val_loss: 1.3463 - val_categorical_accuracy: 0.5734\n", "\n", "Training for epochs 110 to 112...\n", "Epoch 110/112\n", "73/73 [==============================] - 698s - loss: 0.9142 - categorical_accuracy: 0.6873 - val_loss: 1.5554 - val_categorical_accuracy: 0.5227\n", "Epoch 111/112\n", "73/73 [==============================] - 696s - loss: 0.8778 - categorical_accuracy: 0.7040 - val_loss: 1.4354 - val_categorical_accuracy: 0.5437\n", "Epoch 112/112\n", "73/73 [==============================] - 698s - loss: 0.9329 - categorical_accuracy: 0.6771 - val_loss: 1.3542 - val_categorical_accuracy: 0.5609\n", "\n", "Training for epochs 113 to 115...\n", "Epoch 113/115\n", "73/73 [==============================] - 699s - loss: 0.8953 - categorical_accuracy: 0.6984 - val_loss: 1.4455 - val_categorical_accuracy: 0.5523\n", "Epoch 114/115\n", "73/73 [==============================] - 697s - loss: 0.8576 - categorical_accuracy: 0.7147 - val_loss: 1.4199 - val_categorical_accuracy: 0.5555\n", "Epoch 115/115\n", "73/73 [==============================] - 697s - loss: 0.9134 - categorical_accuracy: 0.6904 - val_loss: 1.3845 - val_categorical_accuracy: 0.5406\n", "\n", "Training for epochs 116 to 118...\n", "Epoch 116/118\n", "73/73 [==============================] - 698s - loss: 0.8770 - categorical_accuracy: 0.7017 - val_loss: 1.4537 - val_categorical_accuracy: 0.5492\n", "Epoch 117/118\n", "73/73 [==============================] - 697s - loss: 0.8312 - categorical_accuracy: 0.7227 - val_loss: 1.5334 - val_categorical_accuracy: 0.5414\n", "Epoch 118/118\n", "73/73 [==============================] - 697s - loss: 0.8983 - categorical_accuracy: 0.6909 - val_loss: 1.3750 - val_categorical_accuracy: 0.5453\n", "\n", "Training for epochs 119 to 120...\n", "Epoch 119/120\n", "73/73 [==============================] - 698s - loss: 0.8501 - 
categorical_accuracy: 0.7135 - val_loss: 1.4438 - val_categorical_accuracy: 0.5516\n", "Epoch 120/120\n", "73/73 [==============================] - 696s - loss: 0.8170 - categorical_accuracy: 0.7259 - val_loss: 1.4699 - val_categorical_accuracy: 0.5516\n", "\n", "15:50:57 for VGG16 to yield 72.6% training accuracy and 55.2% validation accuracy in 20 \n", "epochs (x6 training phases).\n", "\n", "VGG16 run complete at Sunday, 2017 October 22, 9:25 PM.\n", "Clearing keras's backend Tensorflow session...\n", "\n" ] }, { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# VGG16:\n", "run_vgg16()\n", "Audio(url=audio_file, autoplay=True)" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "# Must be after Alert() call and in a separate cell for both audio and pop-up; sleep \n", "# allows the audio to play before the pop-up alters HTML output on the page:\n", "delayed_popup()" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Backed up 'saved_objects/fma_results_gpu.pkl' to\n", "\t'saved_object_backups/fma_results_gpu-2017-10-22+2125.pkl.bak'.\n", "\n", "Backed up 'saved_objects/crossval_results_gpu.pkl' to\n", "\t'saved_object_backups/crossval_results_gpu-2017-10-22+2125.pkl.bak'.\n", "\n" ] } ], "source": [ "# Back up the results dataframes\n", "import shutil\n", "\n", "for key in [\"fma_results_name\", \"crossval_results_name\"]:\n", " src = os.path.join(\"saved_objects\", \"{}.pkl\".format(param_dict[key])) \n", " dst = os.path.join(\"saved_object_backups\", \n", " \"{}-{}.pkl.bak\".format(param_dict[key],\n", " timer.datetimepath()))\n", " directory = os.path.dirname(dst)\n", " if not os.path.exists(directory):\n", " os.makedirs(directory)\n", " shutil.copyfile(src, dst)\n", 
"\n", " print(\"Backed up '{}' to\\n\\t'{}'.\\n\".format(src, dst))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Test Set Evaluation\n", "\n", "After training, each trained model was evaluated on the test split of the relevant dataset. Because the final weights of every model are saved during training, this evaluation can be performed separately from the training phase and does not require cloud computing resources." ] }, { "cell_type": "code", "execution_count": 31, "metadata": { "scrolled": false }, "outputs": [ { "data": { "text/html": [ "
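The "(Test Predictions, ...)" columns of the results table tally, for each genre, the number of in-class test clips, the number classified correctly, and the two error types; in the table, the "correct" and "incorrect_thought_wasnt" tallies sum to "inclass", so per-genre recall is simply correct over inclass. A minimal sketch of that conversion, using the "correct" tallies from run 450 (the first fcnn row) and assuming the balanced small set's 100 test clips per genre, consistent with the table's truncated `[100.0, 100.0, ...]` display:

```python
# Per-genre recall from the "(Test Predictions, inclass)" and
# "(Test Predictions, correct)" tallies. "correct" values are from run 450;
# "inclass" assumes eight genres of 100 test clips each (the truncated
# display of the small-set rows suggests this, but it is an assumption).
inclass = [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0]
correct = [34.0, 21.0, 19.0, 47.0, 40.0, 29.0, 18.0, 33.0]

# Fraction of each genre's test clips classified correctly, and the
# overall test accuracy implied by the tallies:
recall = [c / n for c, n in zip(correct, inclass)]
overall = sum(correct) / sum(inclass)
```

Note that `inclass - correct` reproduces the row's "incorrect_thought_wasnt" tally, which is a quick consistency check when reading the table.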
Run Started | Source Processor | Source | Pass Epochs | Batch Size | Steps Per Epoch | Validation Steps Per Epoch | Data Augmentation Factor | Data Set Size | Wavelet | ... | Final Validation Accuracy | Training Loss History | Validation Loss History | Training Accuracy History | Validation Accuracy History | Weights File | (Test Predictions, inclass) | (Test Predictions, correct) | (Test Predictions, incorrect_thought_was) | (Test Predictions, incorrect_thought_wasnt)
450 | Monday, 2017 October 16, 10:30 PM | gpu | 7.257224e+13 | 5.0 | 128.0 | 50.0 | 7.0 | 0.00 | small | dwt | ... | 0.320000 | [1.82380256891, 1.6008654356, 1.40744186163, 1... | [1.8220331955, 1.66511300087, 1.65917759418, 1... | [0.3171875, 0.41875, 0.50671875, 0.6015625, 0.... | [0.36375, 0.36875, 0.36375, 0.395, 0.36, 0.375... | fcnn_0.01x0.02x0.85x1e-10_gpux72572239216642x5... | [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... | [34.0, 21.0, 19.0, 47.0, 40.0, 29.0, 18.0, 33.0] | [76.0, 98.0, 84.0, 52.0, 40.0, 67.0, 84.0, 58.0] | [66.0, 79.0, 81.0, 53.0, 60.0, 71.0, 82.0, 67.0]
465 | Wednesday, 2017 October 18, 4:35 AM | gpu | 7.257224e+13 | 5.0 | 128.0 | 50.0 | 7.0 | 0.33 | small | dwt | ... | 0.433750 | [1.82024466991, 1.71069197178, 1.67135448933, ... | [1.814739151, 1.67947849274, 1.63117892742, 1.... | [0.3175, 0.37109375, 0.38828125, 0.40578125, 0... | [0.35, 0.375, 0.39, 0.405, 0.39875, 0.41125, 0... | fcnn_0.01x0.02x0.85x1e-10_gpux72572239216642x5... | [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... | [51.0, 26.0, 30.0, 70.0, 40.0, 36.0, 13.0, 49.0] | [52.0, 76.0, 80.0, 65.0, 42.0, 74.0, 50.0, 46.0] | [49.0, 74.0, 70.0, 30.0, 60.0, 64.0, 87.0, 51.0]
461 | Tuesday, 2017 October 17, 11:16 PM | gpu | 7.257224e+13 | 5.0 | 128.0 | 50.0 | 7.0 | 0.50 | small | dwt | ... | 0.440000 | [1.82721065044, 1.71751484394, 1.67218647718, ... | [1.80424007416, 1.68645466328, 1.64975505829, ... | [0.31625, 0.364375, 0.3853125, 0.39640625, 0.4... | [0.36125, 0.38375, 0.375, 0.4025, 0.3975, 0.40... | fcnn_0.01x0.02x0.85x1e-10_gpux72572239216642x5... | [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... | [43.0, 25.0, 35.0, 68.0, 38.0, 35.0, 19.0, 48.0] | [47.0, 72.0, 87.0, 67.0, 44.0, 74.0, 49.0, 49.0] | [57.0, 75.0, 65.0, 32.0, 62.0, 65.0, 81.0, 52.0]
456 | Tuesday, 2017 October 17, 9:27 AM | gpu | 7.257224e+13 | 5.0 | 128.0 | 50.0 | 7.0 | 0.66 | small | dwt | ... | 0.415000 | [1.823697927, 1.71283866167, 1.68205006361, 1.... | [1.83449396133, 1.70822524071, 1.63681973457, ... | [0.31234375, 0.37171875, 0.38453125, 0.4017187... | [0.35125, 0.3775, 0.38375, 0.405, 0.405, 0.398... | fcnn_0.01x0.02x0.85x1e-10_gpux72572239216642x5... | [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... | [46.0, 24.0, 35.0, 66.0, 38.0, 39.0, 20.0, 45.0] | [54.0, 77.0, 81.0, 59.0, 44.0, 76.0, 49.0, 47.0] | [54.0, 76.0, 65.0, 34.0, 62.0, 61.0, 80.0, 55.0]
469 | Wednesday, 2017 October 18, 11:51 AM | gpu | 7.257224e+13 | 20.0 | 128.0 | 73.0 | 10.0 | 0.00 | extended | dwt | ... | 0.493750 | [1.58588705814, 1.46814873937, 1.44014323901, ... | [1.56640315056, 1.46727433205, 1.43934185505, ... | [0.434075342466, 0.494220890411, 0.49732448630... | [0.49140625, 0.5, 0.50625, 0.4921875, 0.511718... | fcnn_0.01x0.02x0.85x1e-10_gpux72572239216642x2... | [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... | [454.0, 442.0, 29.0, 148.0, 11.0, 5.0, 5.0, 11... | [568.0, 499.0, 97.0, 220.0, 15.0, 14.0, 97.0, ... | [385.0, 643.0, 270.0, 175.0, 298.0, 123.0, 199...
472 | Thursday, 2017 October 19, 12:47 PM | gpu | 7.257224e+13 | 20.0 | 128.0 | 73.0 | 10.0 | 0.33 | extended | dwt | ... | 0.539062 | [1.58695063199, 1.47280326608, 1.44755986945, ... | [1.54402760267, 1.48328278065, 1.45242022276, ... | [0.433112157534, 0.488976883562, 0.48919092465... | [0.49453125, 0.48984375, 0.496875, 0.49375, 0.... | fcnn_0.01x0.02x0.85x1e-10_gpux72572239216642x2... | [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... | [601.0, 397.0, 47.0, 171.0, 3.0, 7.0, 3.0, 116... | [822.0, 390.0, 108.0, 141.0, 2.0, 10.0, 26.0, ... | [238.0, 688.0, 252.0, 152.0, 306.0, 121.0, 201...
451 | Monday, 2017 October 16, 10:42 PM | gpu | 7.257224e+13 | 5.0 | 128.0 | 50.0 | 7.0 | 0.00 | small | dwt | ... | 0.362500 | [1.90809360027, 1.68670711994, 1.56818962097, ... | [2.14867657661, 2.05830182076, 1.9655592823, 1... | [0.285625, 0.38515625, 0.434375, 0.45453125, 0... | [0.1625, 0.25125, 0.28875, 0.26875, 0.31625, 0... | xception_0.01x0.02x0.85x1e-10_gpux725722392166... | [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... | [42.0, 23.0, 24.0, 57.0, 34.0, 24.0, 19.0, 47.0] | [71.0, 64.0, 59.0, 63.0, 52.0, 71.0, 66.0, 84.0] | [58.0, 77.0, 76.0, 43.0, 66.0, 76.0, 81.0, 53.0]
466 | Wednesday, 2017 October 18, 5:20 AM | gpu | 7.257224e+13 | 5.0 | 128.0 | 50.0 | 7.0 | 0.33 | small | dwt | ... | 0.396250 | [1.92064338446, 1.77885185957, 1.74191453695, ... | [2.26447115898, 2.14269234657, 2.07216393471, ... | [0.2734375, 0.3384375, 0.351875, 0.3684375, 0.... | [0.1525, 0.245, 0.21375, 0.30875, 0.31375, 0.3... | xception_0.01x0.02x0.85x1e-10_gpux725722392166... | [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... | [40.0, 16.0, 26.0, 66.0, 36.0, 41.0, 18.0, 45.0] | [53.0, 78.0, 61.0, 75.0, 64.0, 78.0, 26.0, 77.0] | [60.0, 84.0, 74.0, 34.0, 64.0, 59.0, 82.0, 55.0]
462 | Wednesday, 2017 October 18, 12:01 AM | gpu | 7.257224e+13 | 5.0 | 128.0 | 50.0 | 7.0 | 0.50 | small | dwt | ... | 0.390000 | [1.92605224848, 1.78636225224, 1.74515648603, ... | [2.32112989426, 2.78206655502, 1.97336621284, ... | [0.27171875, 0.329375, 0.3478125, 0.3575, 0.37... | [0.21, 0.1375, 0.225, 0.2825, 0.3175, 0.34, 0.... | xception_0.01x0.02x0.85x1e-10_gpux725722392166... | [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... | [38.0, 20.0, 27.0, 63.0, 30.0, 41.0, 16.0, 43.0] | [57.0, 72.0, 66.0, 82.0, 46.0, 85.0, 36.0, 78.0] | [62.0, 80.0, 73.0, 37.0, 70.0, 59.0, 84.0, 57.0]
457 | Tuesday, 2017 October 17, 10:13 AM | gpu | 7.257224e+13 | 5.0 | 128.0 | 50.0 | 7.0 | 0.66 | small | dwt | ... | 0.400000 | [1.9075643158, 1.76602378368, 1.73399444818, 1... | [2.18002383232, 2.07807783127, 2.00649788857, ... | [0.2753125, 0.348125, 0.3603125, 0.37171875, 0... | [0.16625, 0.20875, 0.26375, 0.2925, 0.32, 0.36... | xception_0.01x0.02x0.85x1e-10_gpux725722392166... | [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... | [39.0, 21.0, 27.0, 64.0, 29.0, 43.0, 19.0, 41.0] | [57.0, 81.0, 66.0, 75.0, 61.0, 88.0, 27.0, 62.0] | [61.0, 79.0, 73.0, 36.0, 71.0, 57.0, 81.0, 59.0]
478 | Saturday, 2017 October 21, 1:44 AM | gpu | 7.257224e+13 | 20.0 | 128.0 | 73.0 | 10.0 | 0.00 | extended | dwt | ... | 0.517969 | [1.60942099846, 1.50336504962, 1.49111029547, ... | [1.77678931952, 1.75901452303, 1.63194044828, ... | [0.427226027397, 0.472067636986, 0.47142551369... | [0.38671875, 0.41875, 0.43203125, 0.49140625, ... | xception_0.01x0.02x0.85x1e-10_gpux725722392166... | [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... | [303.0, 721.0, 1.0, 5.0, 0.0, 0.0, 0.0, 971.0] | [427.0, 1263.0, 4.0, 3.0, 0.0, 0.0, 46.0, 907.0] | [536.0, 364.0, 298.0, 318.0, 309.0, 128.0, 204...
473 | Thursday, 2017 October 19, 5:09 PM | gpu | 7.257224e+13 | 20.0 | 128.0 | 73.0 | 10.0 | 0.33 | extended | dwt | ... | 0.523438 | [1.61077199244, 1.50769974271, 1.50827520677, ... | [2.07184349298, 1.8523173213, 1.57434175014, 1... | [0.423801369863, 0.47238869863, 0.467037671233... | [0.38515625, 0.40546875, 0.45390625, 0.4765625... | xception_0.01x0.02x0.85x1e-10_gpux725722392166... | [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... | [251.0, 816.0, 3.0, 8.0, 0.0, 0.0, 1.0, 810.0] | [394.0, 1650.0, 9.0, 3.0, 3.0, 0.0, 6.0, 697.0] | [588.0, 269.0, 296.0, 315.0, 309.0, 128.0, 203...
452 | Tuesday, 2017 October 17, 12:40 AM | gpu | 7.257224e+13 | 5.0 | 128.0 | 50.0 | 7.0 | 0.00 | small | dwt | ... | 0.321250 | [1.95510603428, 1.76394223928, 1.67935840607, ... | [4.12246021271, 2.54523499489, 2.4867950058, 2... | [0.2590625, 0.34453125, 0.3853125, 0.41328125,... | [0.12375, 0.17, 0.1925, 0.25375, 0.28625, 0.30... | inception_v3_0.01x0.02x0.85x1e-10_gpux72572239... | [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... | [34.0, 15.0, 28.0, 49.0, 36.0, 29.0, 10.0, 38.0] | [88.0, 76.0, 61.0, 81.0, 60.0, 69.0, 49.0, 77.0] | [66.0, 85.0, 72.0, 51.0, 64.0, 71.0, 90.0, 62.0]
467 | Wednesday, 2017 October 18, 7:18 AM | gpu | 7.257224e+13 | 5.0 | 128.0 | 50.0 | 7.0 | 0.33 | small | dwt | ... | 0.375000 | [1.96890549421, 1.85074101686, 1.83232646942, ... | [2.58446475029, 2.32470556259, 2.24585523605, ... | [0.24, 0.3053125, 0.30859375, 0.33578125, 0.33... | [0.14375, 0.17375, 0.2225, 0.22625, 0.2875, 0.... | inception_v3_0.01x0.02x0.85x1e-10_gpux72572239... | [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... | [38.0, 11.0, 34.0, 59.0, 29.0, 36.0, 4.0, 46.0] | [77.0, 46.0, 67.0, 96.0, 53.0, 102.0, 27.0, 75.0] | [62.0, 89.0, 66.0, 41.0, 71.0, 64.0, 96.0, 54.0]
463 | Wednesday, 2017 October 18, 1:59 AM | gpu | 7.257224e+13 | 5.0 | 128.0 | 50.0 | 7.0 | 0.50 | small | dwt | ... | 0.365000 | [1.95656326056, 1.84989557028, 1.81713608265, ... | [4.47656255722, 2.75450143814, 2.37536446571, ... | [0.246875, 0.30140625, 0.32328125, 0.33078125,... | [0.12625, 0.1525, 0.18, 0.22375, 0.30625, 0.33... | inception_v3_0.01x0.02x0.85x1e-10_gpux72572239... | [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... | [42.0, 17.0, 28.0, 56.0, 31.0, 33.0, 8.0, 44.0] | [79.0, 54.0, 57.0, 94.0, 54.0, 105.0, 26.0, 72.0] | [58.0, 83.0, 72.0, 44.0, 69.0, 67.0, 92.0, 56.0]
458 | Tuesday, 2017 October 17, 12:11 PM | gpu | 7.257224e+13 | 5.0 | 128.0 | 50.0 | 7.0 | 0.66 | small | dwt | ... | 0.368750 | [1.95642371655, 1.85347248316, 1.81622013807, ... | [3.36360150337, 2.64108294487, 2.6109867382, 2... | [0.25796875, 0.30046875, 0.31515625, 0.3276562... | [0.125, 0.17375, 0.1825, 0.23125, 0.29625, 0.3... | inception_v3_0.01x0.02x0.85x1e-10_gpux72572239... | [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... | [38.0, 14.0, 27.0, 59.0, 29.0, 36.0, 5.0, 47.0] | [78.0, 50.0, 59.0, 95.0, 50.0, 109.0, 22.0, 82.0] | [62.0, 86.0, 73.0, 41.0, 71.0, 64.0, 95.0, 53.0]
470 | Wednesday, 2017 October 18, 1:33 PM | gpu | 7.257224e+13 | 20.0 | 128.0 | 73.0 | 10.0 | 0.00 | extended | dwt | ... | 0.505469 | [1.66405828032, 1.56770853637, 1.55640177041, ... | [3.62894377708, 2.22240310907, 1.64683238268, ... | [0.403681506849, 0.442851027397, 0.44670376712... | [0.19375, 0.21015625, 0.4265625, 0.46953125, 0... | inception_v3_0.01x0.02x0.85x1e-10_gpux72572239... | [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... | [350.0, 631.0, 0.0, 43.0, 0.0, 0.0, 3.0, 810.0] | [649.0, 1319.0, 1.0, 48.0, 0.0, 2.0, 55.0, 740.0] | [489.0, 454.0, 299.0, 280.0, 309.0, 128.0, 201...
474 | Friday, 2017 October 20, 4:40 AM | gpu | 7.257224e+13 | 20.0 | 128.0 | 73.0 | 10.0 | 0.33 | extended | dwt | ... | 0.517188 | [1.68054896511, 1.57903284243, 1.57147473505, ... | [3.61337640285, 1.91185925007, 1.65783749819, ... | [0.398330479452, 0.436108732877, 0.43503852739... | [0.19375, 0.31171875, 0.409375, 0.4484375, 0.4... | inception_v3_0.01x0.02x0.85x1e-10_gpux72572239... | [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... | [402.0, 619.0, 1.0, 27.0, 0.0, 0.0, 2.0, 866.0] | [680.0, 1158.0, 0.0, 14.0, 0.0, 0.0, 94.0, 788.0] | [437.0, 466.0, 298.0, 296.0, 309.0, 128.0, 202...
453 | Tuesday, 2017 October 17, 1:53 AM | gpu | 7.257224e+13 | 5.0 | 128.0 | 50.0 | 7.0 | 0.00 | small | dwt | ... | 0.422500 | [1.76757016897, 1.51766999722, 1.38178520441, ... | [3.45861001015, 2.54270760536, 2.19634951591, ... | [0.3503125, 0.45375, 0.51265625, 0.54, 0.57953... | [0.17125, 0.2525, 0.31875, 0.37375, 0.41, 0.41... | resnet50_0.01x0.02x0.85x1e-10_gpux725722392166... | [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... | [46.0, 26.0, 27.0, 70.0, 33.0, 37.0, 20.0, 45.0] | [51.0, 71.0, 83.0, 55.0, 53.0, 67.0, 57.0, 59.0] | [54.0, 74.0, 73.0, 30.0, 67.0, 63.0, 80.0, 55.0]
468 | Wednesday, 2017 October 18, 8:32 AM | gpu | 7.257224e+13 | 5.0 | 128.0 | 50.0 | 7.0 | 0.33 | small | dwt | ... | 0.430000 | [1.76855951548, 1.61201533318, 1.57018355846, ... | [3.1637541008, 2.74854323387, 2.28446052551, 1... | [0.34484375, 0.41640625, 0.430625, 0.43796875,... | [0.1425, 0.24625, 0.3075, 0.3425, 0.3825, 0.40... | resnet50_0.01x0.02x0.85x1e-10_gpux725722392166... | [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... | [51.0, 25.0, 29.0, 76.0, 27.0, 37.0, 21.0, 48.0] | [49.0, 56.0, 87.0, 70.0, 43.0, 87.0, 36.0, 58.0] | [49.0, 75.0, 71.0, 24.0, 73.0, 63.0, 79.0, 52.0]
464 | Wednesday, 2017 October 18, 3:11 AM | gpu | 7.257224e+13 | 5.0 | 128.0 | 50.0 | 7.0 | 0.50 | small | dwt | ... | 0.411250 | [1.7645502615, 1.61465699434, 1.56118855238, 1... | [3.98527582169, 3.07449765205, 2.29873632431, ... | [0.35546875, 0.40953125, 0.42703125, 0.4415625... | [0.13625, 0.2625, 0.3025, 0.3525, 0.3775, 0.38... | resnet50_0.01x0.02x0.85x1e-10_gpux725722392166... | [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... | [52.0, 21.0, 28.0, 68.0, 29.0, 42.0, 19.0, 50.0] | [48.0, 59.0, 77.0, 63.0, 44.0, 89.0, 43.0, 68.0] | [48.0, 79.0, 72.0, 32.0, 71.0, 58.0, 81.0, 50.0]
459 | Tuesday, 2017 October 17, 1:25 PM | gpu | 7.257224e+13 | 5.0 | 128.0 | 50.0 | 7.0 | 0.66 | small | dwt | ... | 0.410000 | [1.76656898975, 1.61333152056, 1.57016033649, ... | [3.12541369438, 2.38677270889, 2.02602273941, ... | [0.34875, 0.4153125, 0.425625, 0.439375, 0.457... | [0.1825, 0.2575, 0.3075, 0.35125, 0.39125, 0.4... | resnet50_0.01x0.02x0.85x1e-10_gpux725722392166... | [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... | [48.0, 21.0, 33.0, 70.0, 30.0, 47.0, 25.0, 50.0] | [57.0, 50.0, 73.0, 68.0, 39.0, 94.0, 34.0, 61.0] | [52.0, 79.0, 67.0, 30.0, 70.0, 53.0, 75.0, 50.0]
471 | Wednesday, 2017 October 18, 8:36 PM | gpu | 7.257224e+13 | 20.0 | 128.0 | 73.0 | 10.0 | 0.00 | extended | dwt | ... | 0.533594 | [1.51579548398, 1.40459283574, 1.37599181639, ... | [1.69485304356, 1.57757995129, 1.44549598694, ... | [0.472709760274, 0.513377568493, 0.52215325342... | [0.4421875, 0.44140625, 0.50703125, 0.54921875... | resnet50_0.01x0.02x0.85x1e-10_gpux725722392166... | [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... | [13.0, 575.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1104.0] | [30.0, 1016.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1910.0] | [826.0, 510.0, 299.0, 322.0, 309.0, 128.0, 204...
475 | Friday, 2017 October 20, 11:40 AM | gpu | 7.257224e+13 | 20.0 | 128.0 | 73.0 | 10.0 | 0.33 | extended | dwt | ... | 0.546094 | [1.51615725151, 1.41021734231, 1.38719483924, ... | [2.69430851936, 1.6660217762, 1.47920976877, 1... | [0.466181506849, 0.505351027397, 0.52097602739... | [0.28515625, 0.4109375, 0.5, 0.528125, 0.52812... | resnet50_0.01x0.02x0.85x1e-10_gpux725722392166... | [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... | [16.0, 305.0, 0.0, 8.0, 0.0, 0.0, 0.0, 1363.0] | [15.0, 377.0, 0.0, 4.0, 0.0, 0.0, 1.0, 2562.0] | [823.0, 780.0, 299.0, 315.0, 309.0, 128.0, 204...
454 | Tuesday, 2017 October 17, 3:16 AM | gpu | 7.257224e+13 | 5.0 | 128.0 | 50.0 | 7.0 | 0.00 | small | dwt | ... | 0.443750 | [1.79767173767, 1.64688575506, 1.59771779776, ... | [2.87258497238, 2.12235598564, 1.74181958675, ... | [0.338125, 0.40359375, 0.4303125, 0.4446875, 0... | [0.1775, 0.2525, 0.33625, 0.3625, 0.3975, 0.39... | vgg16_0.01x0.02x0.85x1e-10_gpux72572239216642x... | [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... | [39.0, 28.0, 37.0, 82.0, 42.0, 18.0, 20.0, 47.0] | [48.0, 79.0, 68.0, 140.0, 47.0, 22.0, 49.0, 34.0] | [61.0, 72.0, 63.0, 18.0, 58.0, 82.0, 80.0, 53.0]
477 | Friday, 2017 October 20, 11:02 PM | gpu | 7.257224e+13 | 5.0 | 128.0 | 50.0 | 7.0 | 0.33 | small | dwt | ... | 0.462500 | [1.8049201417, 1.66879448652, 1.62949069262, 1... | [2.64639645576, 1.85615677834, 1.74214244366, ... | [0.33828125, 0.3934375, 0.41375, 0.43125, 0.43... | [0.1925, 0.30875, 0.335, 0.385, 0.4025, 0.4012... | vgg16_0.01x0.02x0.85x1e-10_gpux72572239216642x... | [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... | [38.0, 29.0, 45.0, 71.0, 42.0, 16.0, 14.0, 57.0] | [43.0, 94.0, 88.0, 50.0, 71.0, 23.0, 34.0, 85.0] | [62.0, 71.0, 55.0, 29.0, 58.0, 84.0, 86.0, 43.0]
476 | Friday, 2017 October 20, 8:20 PM | gpu | 7.257224e+13 | 5.0 | 128.0 | 50.0 | 7.0 | 0.50 | small | dwt | ... | 0.470000 | [1.80697010756, 1.68164082766, 1.62566782713, ... | [2.92805818558, 1.8851287365, 1.81813882828, 1... | [0.33171875, 0.38765625, 0.4053125, 0.4228125,... | [0.2175, 0.28875, 0.2875, 0.3775, 0.38, 0.3925... | vgg16_0.01x0.02x0.85x1e-10_gpux72572239216642x... | [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... | [61.0, 34.0, 18.0, 57.0, 43.0, 38.0, 3.0, 56.0] | [73.0, 131.0, 36.0, 17.0, 63.0, 80.0, 16.0, 74.0] | [39.0, 66.0, 82.0, 43.0, 57.0, 62.0, 97.0, 44.0]
460Tuesday, 2017 October 17, 2:49 PMgpu7.257224e+135.0128.050.07.00.66smalldwt...0.445000[1.80523064852, 1.67642034292, 1.62540378571, ...[3.02111760139, 1.89962665558, 1.70370420456, ...[0.3428125, 0.39125, 0.408125, 0.428125, 0.430...[0.1825, 0.2975, 0.37, 0.385, 0.38625, 0.39125...vgg16_0.01x0.02x0.85x1e-10_gpux72572239216642x...[100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100...[34.0, 25.0, 13.0, 77.0, 32.0, 11.0, 6.0, 80.0][30.0, 86.0, 51.0, 64.0, 42.0, 24.0, 26.0, 199.0][66.0, 75.0, 87.0, 23.0, 68.0, 89.0, 94.0, 20.0]
479Saturday, 2017 October 21, 1:41 PMgpu7.257224e+1320.0128.073.010.00.00extendeddwt...0.508594[1.53931434677, 1.42019427966, 1.39485175642, ...[1.93787550926, 1.62927873135, 1.43638788462, ...[0.461793664384, 0.505565068493, 0.52054794520...[0.48203125, 0.40625, 0.49921875, 0.48828125, ...vgg16_0.01x0.02x0.85x1e-10_gpux72572239216642x...[839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20...[111.0, 66.0, 4.0, 2.0, 0.0, 0.0, 0.0, 1455.0][73.0, 41.0, 7.0, 1.0, 0.0, 0.0, 8.0, 2883.0][728.0, 1019.0, 295.0, 321.0, 309.0, 128.0, 20...
480Sunday, 2017 October 22, 5:34 AMgpu7.257224e+1320.0128.073.010.00.33extendeddwt...0.551562[1.5456171493, 1.43956860614, 1.41326133356, 1...[1.97792037725, 1.58831481934, 1.45778542757, ...[0.456014554795, 0.499143835616, 0.50941780821...[0.48203125, 0.46171875, 0.484375, 0.47890625,...vgg16_0.01x0.02x0.85x1e-10_gpux72572239216642x...[839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20...[303.0, 542.0, 0.0, 19.0, 0.0, 0.0, 1.0, 1345.0][282.0, 593.0, 0.0, 1.0, 0.0, 0.0, 8.0, 1557.0][536.0, 543.0, 299.0, 304.0, 309.0, 128.0, 203...
\n", "

30 rows × 27 columns

\n", "
" ], "text/plain": [ " Run Started Source Processor Source \\\n", "450 Monday, 2017 October 16, 10:30 PM gpu 7.257224e+13 \n", "465 Wednesday, 2017 October 18, 4:35 AM gpu 7.257224e+13 \n", "461 Tuesday, 2017 October 17, 11:16 PM gpu 7.257224e+13 \n", "456 Tuesday, 2017 October 17, 9:27 AM gpu 7.257224e+13 \n", "469 Wednesday, 2017 October 18, 11:51 AM gpu 7.257224e+13 \n", "472 Thursday, 2017 October 19, 12:47 PM gpu 7.257224e+13 \n", "451 Monday, 2017 October 16, 10:42 PM gpu 7.257224e+13 \n", "466 Wednesday, 2017 October 18, 5:20 AM gpu 7.257224e+13 \n", "462 Wednesday, 2017 October 18, 12:01 AM gpu 7.257224e+13 \n", "457 Tuesday, 2017 October 17, 10:13 AM gpu 7.257224e+13 \n", "478 Saturday, 2017 October 21, 1:44 AM gpu 7.257224e+13 \n", "473 Thursday, 2017 October 19, 5:09 PM gpu 7.257224e+13 \n", "452 Tuesday, 2017 October 17, 12:40 AM gpu 7.257224e+13 \n", "467 Wednesday, 2017 October 18, 7:18 AM gpu 7.257224e+13 \n", "463 Wednesday, 2017 October 18, 1:59 AM gpu 7.257224e+13 \n", "458 Tuesday, 2017 October 17, 12:11 PM gpu 7.257224e+13 \n", "470 Wednesday, 2017 October 18, 1:33 PM gpu 7.257224e+13 \n", "474 Friday, 2017 October 20, 4:40 AM gpu 7.257224e+13 \n", "453 Tuesday, 2017 October 17, 1:53 AM gpu 7.257224e+13 \n", "468 Wednesday, 2017 October 18, 8:32 AM gpu 7.257224e+13 \n", "464 Wednesday, 2017 October 18, 3:11 AM gpu 7.257224e+13 \n", "459 Tuesday, 2017 October 17, 1:25 PM gpu 7.257224e+13 \n", "471 Wednesday, 2017 October 18, 8:36 PM gpu 7.257224e+13 \n", "475 Friday, 2017 October 20, 11:40 AM gpu 7.257224e+13 \n", "454 Tuesday, 2017 October 17, 3:16 AM gpu 7.257224e+13 \n", "477 Friday, 2017 October 20, 11:02 PM gpu 7.257224e+13 \n", "476 Friday, 2017 October 20, 8:20 PM gpu 7.257224e+13 \n", "460 Tuesday, 2017 October 17, 2:49 PM gpu 7.257224e+13 \n", "479 Saturday, 2017 October 21, 1:41 PM gpu 7.257224e+13 \n", "480 Sunday, 2017 October 22, 5:34 AM gpu 7.257224e+13 \n", "\n", " Pass Epochs Batch Size Steps Per Epoch Validation Steps Per Epoch 
\\\n", "450 5.0 128.0 50.0 7.0 \n", "465 5.0 128.0 50.0 7.0 \n", "461 5.0 128.0 50.0 7.0 \n", "456 5.0 128.0 50.0 7.0 \n", "469 20.0 128.0 73.0 10.0 \n", "472 20.0 128.0 73.0 10.0 \n", "451 5.0 128.0 50.0 7.0 \n", "466 5.0 128.0 50.0 7.0 \n", "462 5.0 128.0 50.0 7.0 \n", "457 5.0 128.0 50.0 7.0 \n", "478 20.0 128.0 73.0 10.0 \n", "473 20.0 128.0 73.0 10.0 \n", "452 5.0 128.0 50.0 7.0 \n", "467 5.0 128.0 50.0 7.0 \n", "463 5.0 128.0 50.0 7.0 \n", "458 5.0 128.0 50.0 7.0 \n", "470 20.0 128.0 73.0 10.0 \n", "474 20.0 128.0 73.0 10.0 \n", "453 5.0 128.0 50.0 7.0 \n", "468 5.0 128.0 50.0 7.0 \n", "464 5.0 128.0 50.0 7.0 \n", "459 5.0 128.0 50.0 7.0 \n", "471 20.0 128.0 73.0 10.0 \n", "475 20.0 128.0 73.0 10.0 \n", "454 5.0 128.0 50.0 7.0 \n", "477 5.0 128.0 50.0 7.0 \n", "476 5.0 128.0 50.0 7.0 \n", "460 5.0 128.0 50.0 7.0 \n", "479 20.0 128.0 73.0 10.0 \n", "480 20.0 128.0 73.0 10.0 \n", "\n", " Data Augmentation Factor Data Set Size Wavelet \\\n", "450 0.00 small dwt \n", "465 0.33 small dwt \n", "461 0.50 small dwt \n", "456 0.66 small dwt \n", "469 0.00 extended dwt \n", "472 0.33 extended dwt \n", "451 0.00 small dwt \n", "466 0.33 small dwt \n", "462 0.50 small dwt \n", "457 0.66 small dwt \n", "478 0.00 extended dwt \n", "473 0.33 extended dwt \n", "452 0.00 small dwt \n", "467 0.33 small dwt \n", "463 0.50 small dwt \n", "458 0.66 small dwt \n", "470 0.00 extended dwt \n", "474 0.33 extended dwt \n", "453 0.00 small dwt \n", "468 0.33 small dwt \n", "464 0.50 small dwt \n", "459 0.66 small dwt \n", "471 0.00 extended dwt \n", "475 0.33 extended dwt \n", "454 0.00 small dwt \n", "477 0.33 small dwt \n", "476 0.50 small dwt \n", "460 0.66 small dwt \n", "479 0.00 extended dwt \n", "480 0.33 extended dwt \n", "\n", " ... \\\n", "450 ... \n", "465 ... \n", "461 ... \n", "456 ... \n", "469 ... \n", "472 ... \n", "451 ... \n", "466 ... \n", "462 ... \n", "457 ... \n", "478 ... \n", "473 ... \n", "452 ... \n", "467 ... \n", "463 ... \n", "458 ... \n", "470 ... 
\n", "474 ... \n", "453 ... \n", "468 ... \n", "464 ... \n", "459 ... \n", "471 ... \n", "475 ... \n", "454 ... \n", "477 ... \n", "476 ... \n", "460 ... \n", "479 ... \n", "480 ... \n", "\n", " Final Validation Accuracy \\\n", "450 0.320000 \n", "465 0.433750 \n", "461 0.440000 \n", "456 0.415000 \n", "469 0.493750 \n", "472 0.539062 \n", "451 0.362500 \n", "466 0.396250 \n", "462 0.390000 \n", "457 0.400000 \n", "478 0.517969 \n", "473 0.523438 \n", "452 0.321250 \n", "467 0.375000 \n", "463 0.365000 \n", "458 0.368750 \n", "470 0.505469 \n", "474 0.517188 \n", "453 0.422500 \n", "468 0.430000 \n", "464 0.411250 \n", "459 0.410000 \n", "471 0.533594 \n", "475 0.546094 \n", "454 0.443750 \n", "477 0.462500 \n", "476 0.470000 \n", "460 0.445000 \n", "479 0.508594 \n", "480 0.551562 \n", "\n", " Training Loss History \\\n", "450 [1.82380256891, 1.6008654356, 1.40744186163, 1... \n", "465 [1.82024466991, 1.71069197178, 1.67135448933, ... \n", "461 [1.82721065044, 1.71751484394, 1.67218647718, ... \n", "456 [1.823697927, 1.71283866167, 1.68205006361, 1.... \n", "469 [1.58588705814, 1.46814873937, 1.44014323901, ... \n", "472 [1.58695063199, 1.47280326608, 1.44755986945, ... \n", "451 [1.90809360027, 1.68670711994, 1.56818962097, ... \n", "466 [1.92064338446, 1.77885185957, 1.74191453695, ... \n", "462 [1.92605224848, 1.78636225224, 1.74515648603, ... \n", "457 [1.9075643158, 1.76602378368, 1.73399444818, 1... \n", "478 [1.60942099846, 1.50336504962, 1.49111029547, ... \n", "473 [1.61077199244, 1.50769974271, 1.50827520677, ... \n", "452 [1.95510603428, 1.76394223928, 1.67935840607, ... \n", "467 [1.96890549421, 1.85074101686, 1.83232646942, ... \n", "463 [1.95656326056, 1.84989557028, 1.81713608265, ... \n", "458 [1.95642371655, 1.85347248316, 1.81622013807, ... \n", "470 [1.66405828032, 1.56770853637, 1.55640177041, ... \n", "474 [1.68054896511, 1.57903284243, 1.57147473505, ... \n", "453 [1.76757016897, 1.51766999722, 1.38178520441, ... 
\n", "468 [1.76855951548, 1.61201533318, 1.57018355846, ... \n", "464 [1.7645502615, 1.61465699434, 1.56118855238, 1... \n", "459 [1.76656898975, 1.61333152056, 1.57016033649, ... \n", "471 [1.51579548398, 1.40459283574, 1.37599181639, ... \n", "475 [1.51615725151, 1.41021734231, 1.38719483924, ... \n", "454 [1.79767173767, 1.64688575506, 1.59771779776, ... \n", "477 [1.8049201417, 1.66879448652, 1.62949069262, 1... \n", "476 [1.80697010756, 1.68164082766, 1.62566782713, ... \n", "460 [1.80523064852, 1.67642034292, 1.62540378571, ... \n", "479 [1.53931434677, 1.42019427966, 1.39485175642, ... \n", "480 [1.5456171493, 1.43956860614, 1.41326133356, 1... \n", "\n", " Validation Loss History \\\n", "450 [1.8220331955, 1.66511300087, 1.65917759418, 1... \n", "465 [1.814739151, 1.67947849274, 1.63117892742, 1.... \n", "461 [1.80424007416, 1.68645466328, 1.64975505829, ... \n", "456 [1.83449396133, 1.70822524071, 1.63681973457, ... \n", "469 [1.56640315056, 1.46727433205, 1.43934185505, ... \n", "472 [1.54402760267, 1.48328278065, 1.45242022276, ... \n", "451 [2.14867657661, 2.05830182076, 1.9655592823, 1... \n", "466 [2.26447115898, 2.14269234657, 2.07216393471, ... \n", "462 [2.32112989426, 2.78206655502, 1.97336621284, ... \n", "457 [2.18002383232, 2.07807783127, 2.00649788857, ... \n", "478 [1.77678931952, 1.75901452303, 1.63194044828, ... \n", "473 [2.07184349298, 1.8523173213, 1.57434175014, 1... \n", "452 [4.12246021271, 2.54523499489, 2.4867950058, 2... \n", "467 [2.58446475029, 2.32470556259, 2.24585523605, ... \n", "463 [4.47656255722, 2.75450143814, 2.37536446571, ... \n", "458 [3.36360150337, 2.64108294487, 2.6109867382, 2... \n", "470 [3.62894377708, 2.22240310907, 1.64683238268, ... \n", "474 [3.61337640285, 1.91185925007, 1.65783749819, ... \n", "453 [3.45861001015, 2.54270760536, 2.19634951591, ... \n", "468 [3.1637541008, 2.74854323387, 2.28446052551, 1... \n", "464 [3.98527582169, 3.07449765205, 2.29873632431, ... 
\n", "459 [3.12541369438, 2.38677270889, 2.02602273941, ... \n", "471 [1.69485304356, 1.57757995129, 1.44549598694, ... \n", "475 [2.69430851936, 1.6660217762, 1.47920976877, 1... \n", "454 [2.87258497238, 2.12235598564, 1.74181958675, ... \n", "477 [2.64639645576, 1.85615677834, 1.74214244366, ... \n", "476 [2.92805818558, 1.8851287365, 1.81813882828, 1... \n", "460 [3.02111760139, 1.89962665558, 1.70370420456, ... \n", "479 [1.93787550926, 1.62927873135, 1.43638788462, ... \n", "480 [1.97792037725, 1.58831481934, 1.45778542757, ... \n", "\n", " Training Accuracy History \\\n", "450 [0.3171875, 0.41875, 0.50671875, 0.6015625, 0.... \n", "465 [0.3175, 0.37109375, 0.38828125, 0.40578125, 0... \n", "461 [0.31625, 0.364375, 0.3853125, 0.39640625, 0.4... \n", "456 [0.31234375, 0.37171875, 0.38453125, 0.4017187... \n", "469 [0.434075342466, 0.494220890411, 0.49732448630... \n", "472 [0.433112157534, 0.488976883562, 0.48919092465... \n", "451 [0.285625, 0.38515625, 0.434375, 0.45453125, 0... \n", "466 [0.2734375, 0.3384375, 0.351875, 0.3684375, 0.... \n", "462 [0.27171875, 0.329375, 0.3478125, 0.3575, 0.37... \n", "457 [0.2753125, 0.348125, 0.3603125, 0.37171875, 0... \n", "478 [0.427226027397, 0.472067636986, 0.47142551369... \n", "473 [0.423801369863, 0.47238869863, 0.467037671233... \n", "452 [0.2590625, 0.34453125, 0.3853125, 0.41328125,... \n", "467 [0.24, 0.3053125, 0.30859375, 0.33578125, 0.33... \n", "463 [0.246875, 0.30140625, 0.32328125, 0.33078125,... \n", "458 [0.25796875, 0.30046875, 0.31515625, 0.3276562... \n", "470 [0.403681506849, 0.442851027397, 0.44670376712... \n", "474 [0.398330479452, 0.436108732877, 0.43503852739... \n", "453 [0.3503125, 0.45375, 0.51265625, 0.54, 0.57953... \n", "468 [0.34484375, 0.41640625, 0.430625, 0.43796875,... \n", "464 [0.35546875, 0.40953125, 0.42703125, 0.4415625... \n", "459 [0.34875, 0.4153125, 0.425625, 0.439375, 0.457... \n", "471 [0.472709760274, 0.513377568493, 0.52215325342... 
\n", "475 [0.466181506849, 0.505351027397, 0.52097602739... \n", "454 [0.338125, 0.40359375, 0.4303125, 0.4446875, 0... \n", "477 [0.33828125, 0.3934375, 0.41375, 0.43125, 0.43... \n", "476 [0.33171875, 0.38765625, 0.4053125, 0.4228125,... \n", "460 [0.3428125, 0.39125, 0.408125, 0.428125, 0.430... \n", "479 [0.461793664384, 0.505565068493, 0.52054794520... \n", "480 [0.456014554795, 0.499143835616, 0.50941780821... \n", "\n", " Validation Accuracy History \\\n", "450 [0.36375, 0.36875, 0.36375, 0.395, 0.36, 0.375... \n", "465 [0.35, 0.375, 0.39, 0.405, 0.39875, 0.41125, 0... \n", "461 [0.36125, 0.38375, 0.375, 0.4025, 0.3975, 0.40... \n", "456 [0.35125, 0.3775, 0.38375, 0.405, 0.405, 0.398... \n", "469 [0.49140625, 0.5, 0.50625, 0.4921875, 0.511718... \n", "472 [0.49453125, 0.48984375, 0.496875, 0.49375, 0.... \n", "451 [0.1625, 0.25125, 0.28875, 0.26875, 0.31625, 0... \n", "466 [0.1525, 0.245, 0.21375, 0.30875, 0.31375, 0.3... \n", "462 [0.21, 0.1375, 0.225, 0.2825, 0.3175, 0.34, 0.... \n", "457 [0.16625, 0.20875, 0.26375, 0.2925, 0.32, 0.36... \n", "478 [0.38671875, 0.41875, 0.43203125, 0.49140625, ... \n", "473 [0.38515625, 0.40546875, 0.45390625, 0.4765625... \n", "452 [0.12375, 0.17, 0.1925, 0.25375, 0.28625, 0.30... \n", "467 [0.14375, 0.17375, 0.2225, 0.22625, 0.2875, 0.... \n", "463 [0.12625, 0.1525, 0.18, 0.22375, 0.30625, 0.33... \n", "458 [0.125, 0.17375, 0.1825, 0.23125, 0.29625, 0.3... \n", "470 [0.19375, 0.21015625, 0.4265625, 0.46953125, 0... \n", "474 [0.19375, 0.31171875, 0.409375, 0.4484375, 0.4... \n", "453 [0.17125, 0.2525, 0.31875, 0.37375, 0.41, 0.41... \n", "468 [0.1425, 0.24625, 0.3075, 0.3425, 0.3825, 0.40... \n", "464 [0.13625, 0.2625, 0.3025, 0.3525, 0.3775, 0.38... \n", "459 [0.1825, 0.2575, 0.3075, 0.35125, 0.39125, 0.4... \n", "471 [0.4421875, 0.44140625, 0.50703125, 0.54921875... \n", "475 [0.28515625, 0.4109375, 0.5, 0.528125, 0.52812... \n", "454 [0.1775, 0.2525, 0.33625, 0.3625, 0.3975, 0.39... 
\n", "477 [0.1925, 0.30875, 0.335, 0.385, 0.4025, 0.4012... \n", "476 [0.2175, 0.28875, 0.2875, 0.3775, 0.38, 0.3925... \n", "460 [0.1825, 0.2975, 0.37, 0.385, 0.38625, 0.39125... \n", "479 [0.48203125, 0.40625, 0.49921875, 0.48828125, ... \n", "480 [0.48203125, 0.46171875, 0.484375, 0.47890625,... \n", "\n", " Weights File \\\n", "450 fcnn_0.01x0.02x0.85x1e-10_gpux72572239216642x5... \n", "465 fcnn_0.01x0.02x0.85x1e-10_gpux72572239216642x5... \n", "461 fcnn_0.01x0.02x0.85x1e-10_gpux72572239216642x5... \n", "456 fcnn_0.01x0.02x0.85x1e-10_gpux72572239216642x5... \n", "469 fcnn_0.01x0.02x0.85x1e-10_gpux72572239216642x2... \n", "472 fcnn_0.01x0.02x0.85x1e-10_gpux72572239216642x2... \n", "451 xception_0.01x0.02x0.85x1e-10_gpux725722392166... \n", "466 xception_0.01x0.02x0.85x1e-10_gpux725722392166... \n", "462 xception_0.01x0.02x0.85x1e-10_gpux725722392166... \n", "457 xception_0.01x0.02x0.85x1e-10_gpux725722392166... \n", "478 xception_0.01x0.02x0.85x1e-10_gpux725722392166... \n", "473 xception_0.01x0.02x0.85x1e-10_gpux725722392166... \n", "452 inception_v3_0.01x0.02x0.85x1e-10_gpux72572239... \n", "467 inception_v3_0.01x0.02x0.85x1e-10_gpux72572239... \n", "463 inception_v3_0.01x0.02x0.85x1e-10_gpux72572239... \n", "458 inception_v3_0.01x0.02x0.85x1e-10_gpux72572239... \n", "470 inception_v3_0.01x0.02x0.85x1e-10_gpux72572239... \n", "474 inception_v3_0.01x0.02x0.85x1e-10_gpux72572239... \n", "453 resnet50_0.01x0.02x0.85x1e-10_gpux725722392166... \n", "468 resnet50_0.01x0.02x0.85x1e-10_gpux725722392166... \n", "464 resnet50_0.01x0.02x0.85x1e-10_gpux725722392166... \n", "459 resnet50_0.01x0.02x0.85x1e-10_gpux725722392166... \n", "471 resnet50_0.01x0.02x0.85x1e-10_gpux725722392166... \n", "475 resnet50_0.01x0.02x0.85x1e-10_gpux725722392166... \n", "454 vgg16_0.01x0.02x0.85x1e-10_gpux72572239216642x... \n", "477 vgg16_0.01x0.02x0.85x1e-10_gpux72572239216642x... \n", "476 vgg16_0.01x0.02x0.85x1e-10_gpux72572239216642x... 
\n", "460 vgg16_0.01x0.02x0.85x1e-10_gpux72572239216642x... \n", "479 vgg16_0.01x0.02x0.85x1e-10_gpux72572239216642x... \n", "480 vgg16_0.01x0.02x0.85x1e-10_gpux72572239216642x... \n", "\n", " (Test Predictions, inclass) \\\n", "450 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "465 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "461 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "456 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "469 [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... \n", "472 [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... \n", "451 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "466 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "462 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "457 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "478 [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... \n", "473 [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... \n", "452 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "467 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "463 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "458 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "470 [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... \n", "474 [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... \n", "453 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "468 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "464 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "459 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "471 [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... \n", "475 [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... \n", "454 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "477 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "476 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "460 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "479 [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... 
\n", "480 [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... \n", "\n", " (Test Predictions, correct) \\\n", "450 [34.0, 21.0, 19.0, 47.0, 40.0, 29.0, 18.0, 33.0] \n", "465 [51.0, 26.0, 30.0, 70.0, 40.0, 36.0, 13.0, 49.0] \n", "461 [43.0, 25.0, 35.0, 68.0, 38.0, 35.0, 19.0, 48.0] \n", "456 [46.0, 24.0, 35.0, 66.0, 38.0, 39.0, 20.0, 45.0] \n", "469 [454.0, 442.0, 29.0, 148.0, 11.0, 5.0, 5.0, 11... \n", "472 [601.0, 397.0, 47.0, 171.0, 3.0, 7.0, 3.0, 116... \n", "451 [42.0, 23.0, 24.0, 57.0, 34.0, 24.0, 19.0, 47.0] \n", "466 [40.0, 16.0, 26.0, 66.0, 36.0, 41.0, 18.0, 45.0] \n", "462 [38.0, 20.0, 27.0, 63.0, 30.0, 41.0, 16.0, 43.0] \n", "457 [39.0, 21.0, 27.0, 64.0, 29.0, 43.0, 19.0, 41.0] \n", "478 [303.0, 721.0, 1.0, 5.0, 0.0, 0.0, 0.0, 971.0] \n", "473 [251.0, 816.0, 3.0, 8.0, 0.0, 0.0, 1.0, 810.0] \n", "452 [34.0, 15.0, 28.0, 49.0, 36.0, 29.0, 10.0, 38.0] \n", "467 [38.0, 11.0, 34.0, 59.0, 29.0, 36.0, 4.0, 46.0] \n", "463 [42.0, 17.0, 28.0, 56.0, 31.0, 33.0, 8.0, 44.0] \n", "458 [38.0, 14.0, 27.0, 59.0, 29.0, 36.0, 5.0, 47.0] \n", "470 [350.0, 631.0, 0.0, 43.0, 0.0, 0.0, 3.0, 810.0] \n", "474 [402.0, 619.0, 1.0, 27.0, 0.0, 0.0, 2.0, 866.0] \n", "453 [46.0, 26.0, 27.0, 70.0, 33.0, 37.0, 20.0, 45.0] \n", "468 [51.0, 25.0, 29.0, 76.0, 27.0, 37.0, 21.0, 48.0] \n", "464 [52.0, 21.0, 28.0, 68.0, 29.0, 42.0, 19.0, 50.0] \n", "459 [48.0, 21.0, 33.0, 70.0, 30.0, 47.0, 25.0, 50.0] \n", "471 [13.0, 575.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1104.0] \n", "475 [16.0, 305.0, 0.0, 8.0, 0.0, 0.0, 0.0, 1363.0] \n", "454 [39.0, 28.0, 37.0, 82.0, 42.0, 18.0, 20.0, 47.0] \n", "477 [38.0, 29.0, 45.0, 71.0, 42.0, 16.0, 14.0, 57.0] \n", "476 [61.0, 34.0, 18.0, 57.0, 43.0, 38.0, 3.0, 56.0] \n", "460 [34.0, 25.0, 13.0, 77.0, 32.0, 11.0, 6.0, 80.0] \n", "479 [111.0, 66.0, 4.0, 2.0, 0.0, 0.0, 0.0, 1455.0] \n", "480 [303.0, 542.0, 0.0, 19.0, 0.0, 0.0, 1.0, 1345.0] \n", "\n", " (Test Predictions, incorrect_thought_was) \\\n", "450 [76.0, 98.0, 84.0, 52.0, 40.0, 67.0, 84.0, 58.0] \n", "465 [52.0, 76.0, 
80.0, 65.0, 42.0, 74.0, 50.0, 46.0] \n", "461 [47.0, 72.0, 87.0, 67.0, 44.0, 74.0, 49.0, 49.0] \n", "456 [54.0, 77.0, 81.0, 59.0, 44.0, 76.0, 49.0, 47.0] \n", "469 [568.0, 499.0, 97.0, 220.0, 15.0, 14.0, 97.0, ... \n", "472 [822.0, 390.0, 108.0, 141.0, 2.0, 10.0, 26.0, ... \n", "451 [71.0, 64.0, 59.0, 63.0, 52.0, 71.0, 66.0, 84.0] \n", "466 [53.0, 78.0, 61.0, 75.0, 64.0, 78.0, 26.0, 77.0] \n", "462 [57.0, 72.0, 66.0, 82.0, 46.0, 85.0, 36.0, 78.0] \n", "457 [57.0, 81.0, 66.0, 75.0, 61.0, 88.0, 27.0, 62.0] \n", "478 [427.0, 1263.0, 4.0, 3.0, 0.0, 0.0, 46.0, 907.0] \n", "473 [394.0, 1650.0, 9.0, 3.0, 3.0, 0.0, 6.0, 697.0] \n", "452 [88.0, 76.0, 61.0, 81.0, 60.0, 69.0, 49.0, 77.0] \n", "467 [77.0, 46.0, 67.0, 96.0, 53.0, 102.0, 27.0, 75.0] \n", "463 [79.0, 54.0, 57.0, 94.0, 54.0, 105.0, 26.0, 72.0] \n", "458 [78.0, 50.0, 59.0, 95.0, 50.0, 109.0, 22.0, 82.0] \n", "470 [649.0, 1319.0, 1.0, 48.0, 0.0, 2.0, 55.0, 740.0] \n", "474 [680.0, 1158.0, 0.0, 14.0, 0.0, 0.0, 94.0, 788.0] \n", "453 [51.0, 71.0, 83.0, 55.0, 53.0, 67.0, 57.0, 59.0] \n", "468 [49.0, 56.0, 87.0, 70.0, 43.0, 87.0, 36.0, 58.0] \n", "464 [48.0, 59.0, 77.0, 63.0, 44.0, 89.0, 43.0, 68.0] \n", "459 [57.0, 50.0, 73.0, 68.0, 39.0, 94.0, 34.0, 61.0] \n", "471 [30.0, 1016.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1910.0] \n", "475 [15.0, 377.0, 0.0, 4.0, 0.0, 0.0, 1.0, 2562.0] \n", "454 [48.0, 79.0, 68.0, 140.0, 47.0, 22.0, 49.0, 34.0] \n", "477 [43.0, 94.0, 88.0, 50.0, 71.0, 23.0, 34.0, 85.0] \n", "476 [73.0, 131.0, 36.0, 17.0, 63.0, 80.0, 16.0, 74.0] \n", "460 [30.0, 86.0, 51.0, 64.0, 42.0, 24.0, 26.0, 199.0] \n", "479 [73.0, 41.0, 7.0, 1.0, 0.0, 0.0, 8.0, 2883.0] \n", "480 [282.0, 593.0, 0.0, 1.0, 0.0, 0.0, 8.0, 1557.0] \n", "\n", " (Test Predictions, incorrect_thought_wasnt) \n", "450 [66.0, 79.0, 81.0, 53.0, 60.0, 71.0, 82.0, 67.0] \n", "465 [49.0, 74.0, 70.0, 30.0, 60.0, 64.0, 87.0, 51.0] \n", "461 [57.0, 75.0, 65.0, 32.0, 62.0, 65.0, 81.0, 52.0] \n", "456 [54.0, 76.0, 65.0, 34.0, 62.0, 61.0, 80.0, 55.0] \n", "469 
[385.0, 643.0, 270.0, 175.0, 298.0, 123.0, 199... \n", "472 [238.0, 688.0, 252.0, 152.0, 306.0, 121.0, 201... \n", "451 [58.0, 77.0, 76.0, 43.0, 66.0, 76.0, 81.0, 53.0] \n", "466 [60.0, 84.0, 74.0, 34.0, 64.0, 59.0, 82.0, 55.0] \n", "462 [62.0, 80.0, 73.0, 37.0, 70.0, 59.0, 84.0, 57.0] \n", "457 [61.0, 79.0, 73.0, 36.0, 71.0, 57.0, 81.0, 59.0] \n", "478 [536.0, 364.0, 298.0, 318.0, 309.0, 128.0, 204... \n", "473 [588.0, 269.0, 296.0, 315.0, 309.0, 128.0, 203... \n", "452 [66.0, 85.0, 72.0, 51.0, 64.0, 71.0, 90.0, 62.0] \n", "467 [62.0, 89.0, 66.0, 41.0, 71.0, 64.0, 96.0, 54.0] \n", "463 [58.0, 83.0, 72.0, 44.0, 69.0, 67.0, 92.0, 56.0] \n", "458 [62.0, 86.0, 73.0, 41.0, 71.0, 64.0, 95.0, 53.0] \n", "470 [489.0, 454.0, 299.0, 280.0, 309.0, 128.0, 201... \n", "474 [437.0, 466.0, 298.0, 296.0, 309.0, 128.0, 202... \n", "453 [54.0, 74.0, 73.0, 30.0, 67.0, 63.0, 80.0, 55.0] \n", "468 [49.0, 75.0, 71.0, 24.0, 73.0, 63.0, 79.0, 52.0] \n", "464 [48.0, 79.0, 72.0, 32.0, 71.0, 58.0, 81.0, 50.0] \n", "459 [52.0, 79.0, 67.0, 30.0, 70.0, 53.0, 75.0, 50.0] \n", "471 [826.0, 510.0, 299.0, 322.0, 309.0, 128.0, 204... \n", "475 [823.0, 780.0, 299.0, 315.0, 309.0, 128.0, 204... \n", "454 [61.0, 72.0, 63.0, 18.0, 58.0, 82.0, 80.0, 53.0] \n", "477 [62.0, 71.0, 55.0, 29.0, 58.0, 84.0, 86.0, 43.0] \n", "476 [39.0, 66.0, 82.0, 43.0, 57.0, 62.0, 97.0, 44.0] \n", "460 [66.0, 75.0, 87.0, 23.0, 68.0, 89.0, 94.0, 20.0] \n", "479 [728.0, 1019.0, 295.0, 321.0, 309.0, 128.0, 20... \n", "480 [536.0, 543.0, 299.0, 304.0, 309.0, 128.0, 203... 
\n", "\n", "[30 rows x 27 columns]" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "indices_of_runs = ut.load_obj(\"indices_of_runs\")\n", "try:\n", " fma_results = ut.load_obj(\"fma_results_numbercrunch\")\n", "except:\n", " fma_results = ut.load_obj(\"fma_results_gpu\")\n", " files = os.listdir(\"saved_weights\")\n", " \n", " # Set the earliest time for which runs are valid (i.e., exclude earlier debugging\n", " # runs):\n", " deftime = datetime.strptime('2017-10-15+0000', '%Y-%m-%d+%H%M') \n", "\n", " for item in indices_of_runs:\n", " latest_time = deftime\n", " latest_weights = None\n", " model_key = fma_results.loc[item][\"Model\"]\n", " run_key = cku.formatted(cku.recover_run_key(fma_results.loc[item]))\n", " for file in files:\n", " if file.startswith(model_key):\n", " if run_key in file:\n", " time_segment = file[-18:-3]\n", " time_parsed = datetime.strptime(time_segment, '%Y-%m-%d+%H%M')\n", " if time_parsed > latest_time:\n", " latest_time = time_parsed\n", " latest_weights = file\n", " fma_results.loc[item,\"Weights File\"] = latest_weights\n", "\n", " fma_results[\"(Test Predictions, inclass)\"] = pd.Series(index=fma_results.index, \n", " dtype = \"object\")\n", " fma_results[\"(Test Predictions, correct)\"] = pd.Series(index=fma_results.index, \n", " dtype = \"object\")\n", " fma_results[\"(Test Predictions, incorrect_thought_was)\"] = pd.Series(\n", " index=fma_results.index, \n", " dtype = \"object\")\n", " fma_results[\"(Test Predictions, incorrect_thought_wasnt)\"] = pd.Series(\n", " index=fma_results.index, \n", " dtype = \"object\")\n", "ut.save_obj(fma_results,\"fma_results_numbercrunch\")\n", " \n", "display(fma_results.loc[indices_of_runs])" ] }, { "cell_type": "code", "execution_count": 32, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Creating generators with batch size 4...\n", "Loading mean and standard deviation for the training set from file 
'saved_objects/fma_small_dwt_stats.npz'.\n", "\n", "Found 6400 images belonging to 8 classes.\n", "Found 800 images belonging to 8 classes.\n", "Found 800 images belonging to 8 classes.\n", "\n", "\n", "Creating generators with batch size 4...\n", "Loading mean and standard deviation for the training set from file 'saved_objects/fma_extended_dwt_stats.npz'.\n", "\n", "Found 37316 images belonging to 8 classes.\n", "Found 4350 images belonging to 8 classes.\n", "Found 4651 images belonging to 8 classes.\n", "\n", "\n" ] } ], "source": [ "test_gens = {}\n", "for size in [\"small\", \"extended\"]:\n", "    param_dict[\"which_size\"] = size\n", "    # Set up the path to the images for this data set size and wavelet\n", "    # (os.path.join is variadic):\n", "    param_dict[\"img_dir\"] = os.path.join(\"data\", \"fma_images\", \"byclass\",\n", "                                         param_dict[\"which_size\"],\n", "                                         param_dict[\"which_wavelet\"])\n", "    g = cku.set_up_generators(param_dict)\n", "    test_gens[size] = g[2]\n", "    print(\"\\n\")\n" ] }, { "cell_type": "code", "execution_count": 33, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "def generator_with_true_classes(model, generator):\n", "    # Wrap a Keras data generator so that each batch is yielded along with the\n", "    # model's predictions and the true labels:\n", "    while True:\n", "        x, y = generator.next()\n", "        try:\n", "            y_pred = model.predict(x)\n", "        except Exception:\n", "            y_pred = np.empty((0, 0))\n", "        yield x, y_pred, y" ] }, { "cell_type": "code", "execution_count": 34, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "def predict_model(num_classes, test_gens, fma_results, item, models_printed):\n", "    # Set up the generators\n", "    item_data_size = fma_results.loc[item, \"Data Set Size\"]\n", "    test_gen = test_gens[item_data_size]\n", "    test_set_size = 800 if item_data_size == \"small\" else 4651\n", "    \n", "    if fma_results.loc[item, \"Model\"] == \"fcnn\":\n", "        # Set up the input to accept FMA images:\n", "        inp = keras.layers.Input(shape=(256,256,3,))\n", "\n", "        # Add a flatten layer to make the input play nicely with these 
non-convolutional \n", "        # layers:\n", "        x = keras.layers.Flatten()(inp)\n", "\n", "        # Add an Affine/BatchNorm/ReLU/Dropout/Affine-softmax-categorization block:\n", "        predict = cku.stack_two_layer_block(param_dict, x)\n", "        \n", "        # Construct the model:\n", "        model = keras.models.Model(inp, predict)\n", "    else:\n", "        if fma_results.loc[item, \"Model\"] == \"xception\":\n", "            model_class = keras.applications.xception.Xception\n", "        elif fma_results.loc[item, \"Model\"] == \"inception_v3\":\n", "            model_class = keras.applications.inception_v3.InceptionV3\n", "        elif fma_results.loc[item, \"Model\"] == \"resnet50\":\n", "            model_class = keras.applications.resnet50.ResNet50\n", "        elif fma_results.loc[item, \"Model\"] == \"vgg16\":\n", "            model_class = keras.applications.vgg16.VGG16\n", "        else:\n", "            raise ValueError(\"Unrecognized model: \" + fma_results.loc[item, \"Model\"])\n", "        \n", "        # Set up the model:\n", "        basemodel = model_class(include_top=False, \n", "                                input_shape=param_dict[\"mean_img\"].shape)\n", "        x = basemodel.output\n", "        \n", "        # Add a global spatial average pooling layer at the output:\n", "        x = keras.layers.GlobalAveragePooling2D()(x)\n", "\n", "        # Add an Affine/BatchNorm/ReLU/Dropout/Affine-softmax-categorization block:\n", "        predict = cku.stack_two_layer_block(param_dict, x)\n", "\n", "        # Now make the model:\n", "        model = keras.models.Model(basemodel.input, predict)\n", "\n", "\n", "    # Describe the model, if one of this type hasn't been printed already - this code is\n", "    # included here and excluded elsewhere to prevent the notebook from getting overly \n", "    # cluttered:\n", "    if not models_printed[fma_results.loc[item, \"Model\"]]:\n", "        print(\"Model type: \", fma_results.loc[item, \"Model\"])\n", "        for i, layer in enumerate(model.layers):\n", "            print(\"\\t{:d}:\\t{:s}\".format(i, layer.name))\n", "        print()\n", "    \n", "        models_printed[fma_results.loc[item, \"Model\"]] = True\n", "    \n", "    # Load the weights:\n", "    
model.load_weights(os.path.join(\"saved_weights\", \n", "                                    fma_results.loc[item, \"Weights File\"]))\n", "    \n", "    # Predict:\n", "    # Flow data with correct labels, http://bit.ly/2iCudEJ\n", "    num_batches = 0\n", "    max_batches = int(np.ceil(test_set_size / test_gen.batch_size))\n", "    print(\"Max batches: \", max_batches)\n", "    preds = np.zeros((test_set_size, num_classes+1))\n", "    start_idx = 0\n", "    for x, y_pred, y_true in generator_with_true_classes(model, test_gen):\n", "        # Record this batch's true classes (column 0) and predicted class\n", "        # scores (columns 1 and up), truncating the final batch if needed:\n", "        slice_len = np.min([test_set_size-start_idx, \n", "                            y_true.shape[0],\n", "                            y_pred.shape[0]])\n", "        \n", "        preds[start_idx:(start_idx + slice_len), 0] = np.argmax(y_true[:slice_len], \n", "                                                                axis=1)\n", "        preds[start_idx:(start_idx + slice_len), 1:] = y_pred[:slice_len]\n", "        \n", "        start_idx += slice_len\n", "        num_batches += 1\n", "        if num_batches == max_batches:\n", "            break\n", "    \n", "    # Do some number crunching\n", "    correct_classes = preds[:,0]\n", "    predicted_classes = np.argmax(preds[:,1:], axis = 1)\n", "\n", "    inclass = np.zeros((param_dict[\"num_classes\"],1))\n", "    corr = np.zeros((param_dict[\"num_classes\"],1))\n", "    incorr_thought_was = np.zeros((param_dict[\"num_classes\"],1))\n", "    incorr_thought_wasnt = np.zeros((param_dict[\"num_classes\"],1))\n", "    for cls in np.arange(param_dict[\"num_classes\"]):\n", "        # How many items actually are in this class?\n", "        inclass[cls] = np.sum(correct_classes == cls)\n", "\n", "        # Correct:\n", "        corr_idxes = np.argwhere(correct_classes == cls)\n", "        corr[cls] = np.sum(correct_classes[corr_idxes] == \n", "                           predicted_classes[corr_idxes])\n", "\n", "        # Incorrectly identified as this class:\n", "        iden_idxes = np.argwhere(predicted_classes == cls) \n", "        # thought it was this class\n", "        incorr_thought_was[cls] = np.sum(correct_classes[iden_idxes] != cls) \n", "        # but it wasn't\n", "\n", "        # Incorrectly identified 
as not this class:\n", " iden_idxes = np.argwhere(predicted_classes != cls) \n", " # thought it wasn't this class\n", " incorr_thought_wasnt[cls] = np.sum(correct_classes[iden_idxes] == cls) \n", " # but it was\n", "\n", " fma_results.loc[item, \"(Test Predictions, inclass)\"] = inclass\n", " fma_results.loc[item, \"(Test Predictions, correct)\"] = corr\n", " fma_results.loc[item, \"(Test Predictions, incorrect_thought_was)\"] = (\n", " incorr_thought_was)\n", " fma_results.loc[item, \"(Test Predictions, incorrect_thought_wasnt)\"] = (\n", " incorr_thought_wasnt)\n", "\n", " return preds" ] }, { "cell_type": "code", "execution_count": 35, "metadata": { "scrolled": false }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "fcnn_0.01x0.02x0.85x1e-10_gpux72572239216642x5x128x50x7x0xsmallxdwt_2017-10-16+2242.h5\n", "Skipping 450; already calculated...\n", "\n", "fcnn_0.01x0.02x0.85x1e-10_gpux72572239216642x5x128x50x7x0.33xsmallxdwt_2017-10-18+0520.h5\n", "Skipping 465; already calculated...\n", "\n", "fcnn_0.01x0.02x0.85x1e-10_gpux72572239216642x5x128x50x7x0.5xsmallxdwt_2017-10-18+0001.h5\n", "Skipping 461; already calculated...\n", "\n", "fcnn_0.01x0.02x0.85x1e-10_gpux72572239216642x5x128x50x7x0.66xsmallxdwt_2017-10-17+1013.h5\n", "Skipping 456; already calculated...\n", "\n", "fcnn_0.01x0.02x0.85x1e-10_gpux72572239216642x20x128x73x10x0xextendedxdwt_2017-10-18+1311.h5\n", "Skipping 469; already calculated...\n", "\n", "fcnn_0.01x0.02x0.85x1e-10_gpux72572239216642x20x128x73x10x0.33xextendedxdwt_2017-10-19+1708.h5\n", "Skipping 472; already calculated...\n", "\n", "xception_0.01x0.02x0.85x1e-10_gpux72572239216642x5x128x50x7x0xsmallxdwt_2017-10-17+0040.h5\n", "Skipping 451; already calculated...\n", "\n", "xception_0.01x0.02x0.85x1e-10_gpux72572239216642x5x128x50x7x0.33xsmallxdwt_2017-10-18+0718.h5\n", "Skipping 466; already calculated...\n", "\n", "xception_0.01x0.02x0.85x1e-10_gpux72572239216642x5x128x50x7x0.5xsmallxdwt_2017-10-18+0159.h5\n", 
"Skipping 462; already calculated...\n", "\n", "xception_0.01x0.02x0.85x1e-10_gpux72572239216642x5x128x50x7x0.66xsmallxdwt_2017-10-17+1211.h5\n", "Skipping 457; already calculated...\n", "\n", "xception_0.01x0.02x0.85x1e-10_gpux72572239216642x20x128x73x10x0xextendedxdwt_2017-10-21+1315.h5\n", "Skipping 478; already calculated...\n", "\n", "xception_0.01x0.02x0.85x1e-10_gpux72572239216642x20x128x73x10x0.33xextendedxdwt_2017-10-20+0439.h5\n", "Skipping 473; already calculated...\n", "\n", "inception_v3_0.01x0.02x0.85x1e-10_gpux72572239216642x5x128x50x7x0xsmallxdwt_2017-10-17+0152.h5\n", "Skipping 452; already calculated...\n", "\n", "inception_v3_0.01x0.02x0.85x1e-10_gpux72572239216642x5x128x50x7x0.33xsmallxdwt_2017-10-18+0831.h5\n", "Skipping 467; already calculated...\n", "\n", "inception_v3_0.01x0.02x0.85x1e-10_gpux72572239216642x5x128x50x7x0.5xsmallxdwt_2017-10-18+0311.h5\n", "Skipping 463; already calculated...\n", "\n", "inception_v3_0.01x0.02x0.85x1e-10_gpux72572239216642x5x128x50x7x0.66xsmallxdwt_2017-10-17+1324.h5\n", "Skipping 458; already calculated...\n", "\n", "inception_v3_0.01x0.02x0.85x1e-10_gpux72572239216642x20x128x73x10x0xextendedxdwt_2017-10-18+2036.h5\n", "Skipping 470; already calculated...\n", "\n", "inception_v3_0.01x0.02x0.85x1e-10_gpux72572239216642x20x128x73x10x0.33xextendedxdwt_2017-10-20+1140.h5\n", "Skipping 474; already calculated...\n", "\n", "resnet50_0.01x0.02x0.85x1e-10_gpux72572239216642x5x128x50x7x0xsmallxdwt_2017-10-17+0316.h5\n", "Skipping 453; already calculated...\n", "\n", "resnet50_0.01x0.02x0.85x1e-10_gpux72572239216642x5x128x50x7x0.33xsmallxdwt_2017-10-18+0956.h5\n", "Skipping 468; already calculated...\n", "\n", "resnet50_0.01x0.02x0.85x1e-10_gpux72572239216642x5x128x50x7x0.5xsmallxdwt_2017-10-18+0435.h5\n", "Skipping 464; already calculated...\n", "\n", "resnet50_0.01x0.02x0.85x1e-10_gpux72572239216642x5x128x50x7x0.66xsmallxdwt_2017-10-17+1449.h5\n", "Skipping 459; already calculated...\n", "\n", 
"resnet50_0.01x0.02x0.85x1e-10_gpux72572239216642x20x128x73x10x0xextendedxdwt_2017-10-19+0500.h5\n", "Skipping 471; already calculated...\n", "\n", "resnet50_0.01x0.02x0.85x1e-10_gpux72572239216642x20x128x73x10x0.33xextendedxdwt_2017-10-20+2004.h5\n", "Skipping 475; already calculated...\n", "\n", "vgg16_0.01x0.02x0.85x1e-10_gpux72572239216642x5x128x50x7x0xsmallxdwt_2017-10-17+0601.h5\n", "Skipping 454; already calculated...\n", "\n", "vgg16_0.01x0.02x0.85x1e-10_gpux72572239216642x5x128x50x7x0.33xsmallxdwt_2017-10-21+0144.h5\n", "Skipping 477; already calculated...\n", "\n", "vgg16_0.01x0.02x0.85x1e-10_gpux72572239216642x5x128x50x7x0.5xsmallxdwt_2017-10-20+2302.h5\n", "Skipping 476; already calculated...\n", "\n", "vgg16_0.01x0.02x0.85x1e-10_gpux72572239216642x5x128x50x7x0.66xsmallxdwt_2017-10-17+1734.h5\n", "Skipping 460; already calculated...\n", "\n", "vgg16_0.01x0.02x0.85x1e-10_gpux72572239216642x20x128x73x10x0xextendedxdwt_2017-10-22+0534.h5\n", "Skipping 479; already calculated...\n", "\n", "vgg16_0.01x0.02x0.85x1e-10_gpux72572239216642x20x128x73x10x0.33xextendedxdwt_2017-10-22+2125.h5\n", "Skipping 480; already calculated...\n", "\n", "\n", "\n", "Complete!\n" ] } ], "source": [ "models_printed = {}\n", "for model in [\"fcnn\",\"xception\",\"inception_v3\",\"resnet50\",\"vgg16\"]:\n", " models_printed[model] = False\n", " \n", "last_start_by = datetime(2017, 11, 8, 9, 45, 0)\n", "\n", "for item in indices_of_runs:\n", " print()\n", " print(fma_results.loc[item, \"Weights File\"])\n", " if pd.isnull(fma_results[\"(Test Predictions, inclass)\"]).loc[item]:\n", " print(\"Calculation started: \", timer.datetimestamp())\n", " preds = predict_model(param_dict[\"num_classes\"], test_gens, fma_results, item,\n", " models_printed)\n", " ut.save_obj(fma_results,\"fma_results_numbercrunch\")\n", " print(\"Shape of predictions: \", preds.shape)\n", " print(\"Calculation complete: \", timer.datetimestamp())\n", " print()\n", " \n", " if datetime.now() > last_start_by 
:\n", " break\n", " else:\n", " print(\"Skipping {:d}; already calculated...\".format(item))\n", " \n", "\n", " \n", "print(\"\\n\\n\\nComplete!\")" ] }, { "cell_type": "code", "execution_count": 36, "metadata": { "scrolled": false }, "outputs": [ { "data": { "text/html": [ "
Results table: 30 rows × 27 columns (see the plain-text rendering below).
" ], "text/plain": [ " Run Started Source Processor Source \\\n", "450 Monday, 2017 October 16, 10:30 PM gpu 7.257224e+13 \n", "465 Wednesday, 2017 October 18, 4:35 AM gpu 7.257224e+13 \n", "461 Tuesday, 2017 October 17, 11:16 PM gpu 7.257224e+13 \n", "456 Tuesday, 2017 October 17, 9:27 AM gpu 7.257224e+13 \n", "469 Wednesday, 2017 October 18, 11:51 AM gpu 7.257224e+13 \n", "472 Thursday, 2017 October 19, 12:47 PM gpu 7.257224e+13 \n", "451 Monday, 2017 October 16, 10:42 PM gpu 7.257224e+13 \n", "466 Wednesday, 2017 October 18, 5:20 AM gpu 7.257224e+13 \n", "462 Wednesday, 2017 October 18, 12:01 AM gpu 7.257224e+13 \n", "457 Tuesday, 2017 October 17, 10:13 AM gpu 7.257224e+13 \n", "478 Saturday, 2017 October 21, 1:44 AM gpu 7.257224e+13 \n", "473 Thursday, 2017 October 19, 5:09 PM gpu 7.257224e+13 \n", "452 Tuesday, 2017 October 17, 12:40 AM gpu 7.257224e+13 \n", "467 Wednesday, 2017 October 18, 7:18 AM gpu 7.257224e+13 \n", "463 Wednesday, 2017 October 18, 1:59 AM gpu 7.257224e+13 \n", "458 Tuesday, 2017 October 17, 12:11 PM gpu 7.257224e+13 \n", "470 Wednesday, 2017 October 18, 1:33 PM gpu 7.257224e+13 \n", "474 Friday, 2017 October 20, 4:40 AM gpu 7.257224e+13 \n", "453 Tuesday, 2017 October 17, 1:53 AM gpu 7.257224e+13 \n", "468 Wednesday, 2017 October 18, 8:32 AM gpu 7.257224e+13 \n", "464 Wednesday, 2017 October 18, 3:11 AM gpu 7.257224e+13 \n", "459 Tuesday, 2017 October 17, 1:25 PM gpu 7.257224e+13 \n", "471 Wednesday, 2017 October 18, 8:36 PM gpu 7.257224e+13 \n", "475 Friday, 2017 October 20, 11:40 AM gpu 7.257224e+13 \n", "454 Tuesday, 2017 October 17, 3:16 AM gpu 7.257224e+13 \n", "477 Friday, 2017 October 20, 11:02 PM gpu 7.257224e+13 \n", "476 Friday, 2017 October 20, 8:20 PM gpu 7.257224e+13 \n", "460 Tuesday, 2017 October 17, 2:49 PM gpu 7.257224e+13 \n", "479 Saturday, 2017 October 21, 1:41 PM gpu 7.257224e+13 \n", "480 Sunday, 2017 October 22, 5:34 AM gpu 7.257224e+13 \n", "\n", " Pass Epochs Batch Size Steps Per Epoch Validation Steps Per Epoch 
\\\n", "450 5.0 128.0 50.0 7.0 \n", "465 5.0 128.0 50.0 7.0 \n", "461 5.0 128.0 50.0 7.0 \n", "456 5.0 128.0 50.0 7.0 \n", "469 20.0 128.0 73.0 10.0 \n", "472 20.0 128.0 73.0 10.0 \n", "451 5.0 128.0 50.0 7.0 \n", "466 5.0 128.0 50.0 7.0 \n", "462 5.0 128.0 50.0 7.0 \n", "457 5.0 128.0 50.0 7.0 \n", "478 20.0 128.0 73.0 10.0 \n", "473 20.0 128.0 73.0 10.0 \n", "452 5.0 128.0 50.0 7.0 \n", "467 5.0 128.0 50.0 7.0 \n", "463 5.0 128.0 50.0 7.0 \n", "458 5.0 128.0 50.0 7.0 \n", "470 20.0 128.0 73.0 10.0 \n", "474 20.0 128.0 73.0 10.0 \n", "453 5.0 128.0 50.0 7.0 \n", "468 5.0 128.0 50.0 7.0 \n", "464 5.0 128.0 50.0 7.0 \n", "459 5.0 128.0 50.0 7.0 \n", "471 20.0 128.0 73.0 10.0 \n", "475 20.0 128.0 73.0 10.0 \n", "454 5.0 128.0 50.0 7.0 \n", "477 5.0 128.0 50.0 7.0 \n", "476 5.0 128.0 50.0 7.0 \n", "460 5.0 128.0 50.0 7.0 \n", "479 20.0 128.0 73.0 10.0 \n", "480 20.0 128.0 73.0 10.0 \n", "\n", " Data Augmentation Factor Data Set Size Wavelet \\\n", "450 0.00 small dwt \n", "465 0.33 small dwt \n", "461 0.50 small dwt \n", "456 0.66 small dwt \n", "469 0.00 extended dwt \n", "472 0.33 extended dwt \n", "451 0.00 small dwt \n", "466 0.33 small dwt \n", "462 0.50 small dwt \n", "457 0.66 small dwt \n", "478 0.00 extended dwt \n", "473 0.33 extended dwt \n", "452 0.00 small dwt \n", "467 0.33 small dwt \n", "463 0.50 small dwt \n", "458 0.66 small dwt \n", "470 0.00 extended dwt \n", "474 0.33 extended dwt \n", "453 0.00 small dwt \n", "468 0.33 small dwt \n", "464 0.50 small dwt \n", "459 0.66 small dwt \n", "471 0.00 extended dwt \n", "475 0.33 extended dwt \n", "454 0.00 small dwt \n", "477 0.33 small dwt \n", "476 0.50 small dwt \n", "460 0.66 small dwt \n", "479 0.00 extended dwt \n", "480 0.33 extended dwt \n", "\n", " ... \\\n", "450 ... \n", "465 ... \n", "461 ... \n", "456 ... \n", "469 ... \n", "472 ... \n", "451 ... \n", "466 ... \n", "462 ... \n", "457 ... \n", "478 ... \n", "473 ... \n", "452 ... \n", "467 ... \n", "463 ... \n", "458 ... \n", "470 ... 
\n", "474 ... \n", "453 ... \n", "468 ... \n", "464 ... \n", "459 ... \n", "471 ... \n", "475 ... \n", "454 ... \n", "477 ... \n", "476 ... \n", "460 ... \n", "479 ... \n", "480 ... \n", "\n", " Final Validation Accuracy \\\n", "450 0.320000 \n", "465 0.433750 \n", "461 0.440000 \n", "456 0.415000 \n", "469 0.493750 \n", "472 0.539062 \n", "451 0.362500 \n", "466 0.396250 \n", "462 0.390000 \n", "457 0.400000 \n", "478 0.517969 \n", "473 0.523438 \n", "452 0.321250 \n", "467 0.375000 \n", "463 0.365000 \n", "458 0.368750 \n", "470 0.505469 \n", "474 0.517188 \n", "453 0.422500 \n", "468 0.430000 \n", "464 0.411250 \n", "459 0.410000 \n", "471 0.533594 \n", "475 0.546094 \n", "454 0.443750 \n", "477 0.462500 \n", "476 0.470000 \n", "460 0.445000 \n", "479 0.508594 \n", "480 0.551562 \n", "\n", " Training Loss History \\\n", "450 [1.82380256891, 1.6008654356, 1.40744186163, 1... \n", "465 [1.82024466991, 1.71069197178, 1.67135448933, ... \n", "461 [1.82721065044, 1.71751484394, 1.67218647718, ... \n", "456 [1.823697927, 1.71283866167, 1.68205006361, 1.... \n", "469 [1.58588705814, 1.46814873937, 1.44014323901, ... \n", "472 [1.58695063199, 1.47280326608, 1.44755986945, ... \n", "451 [1.90809360027, 1.68670711994, 1.56818962097, ... \n", "466 [1.92064338446, 1.77885185957, 1.74191453695, ... \n", "462 [1.92605224848, 1.78636225224, 1.74515648603, ... \n", "457 [1.9075643158, 1.76602378368, 1.73399444818, 1... \n", "478 [1.60942099846, 1.50336504962, 1.49111029547, ... \n", "473 [1.61077199244, 1.50769974271, 1.50827520677, ... \n", "452 [1.95510603428, 1.76394223928, 1.67935840607, ... \n", "467 [1.96890549421, 1.85074101686, 1.83232646942, ... \n", "463 [1.95656326056, 1.84989557028, 1.81713608265, ... \n", "458 [1.95642371655, 1.85347248316, 1.81622013807, ... \n", "470 [1.66405828032, 1.56770853637, 1.55640177041, ... \n", "474 [1.68054896511, 1.57903284243, 1.57147473505, ... \n", "453 [1.76757016897, 1.51766999722, 1.38178520441, ... 
\n", "468 [1.76855951548, 1.61201533318, 1.57018355846, ... \n", "464 [1.7645502615, 1.61465699434, 1.56118855238, 1... \n", "459 [1.76656898975, 1.61333152056, 1.57016033649, ... \n", "471 [1.51579548398, 1.40459283574, 1.37599181639, ... \n", "475 [1.51615725151, 1.41021734231, 1.38719483924, ... \n", "454 [1.79767173767, 1.64688575506, 1.59771779776, ... \n", "477 [1.8049201417, 1.66879448652, 1.62949069262, 1... \n", "476 [1.80697010756, 1.68164082766, 1.62566782713, ... \n", "460 [1.80523064852, 1.67642034292, 1.62540378571, ... \n", "479 [1.53931434677, 1.42019427966, 1.39485175642, ... \n", "480 [1.5456171493, 1.43956860614, 1.41326133356, 1... \n", "\n", " Validation Loss History \\\n", "450 [1.8220331955, 1.66511300087, 1.65917759418, 1... \n", "465 [1.814739151, 1.67947849274, 1.63117892742, 1.... \n", "461 [1.80424007416, 1.68645466328, 1.64975505829, ... \n", "456 [1.83449396133, 1.70822524071, 1.63681973457, ... \n", "469 [1.56640315056, 1.46727433205, 1.43934185505, ... \n", "472 [1.54402760267, 1.48328278065, 1.45242022276, ... \n", "451 [2.14867657661, 2.05830182076, 1.9655592823, 1... \n", "466 [2.26447115898, 2.14269234657, 2.07216393471, ... \n", "462 [2.32112989426, 2.78206655502, 1.97336621284, ... \n", "457 [2.18002383232, 2.07807783127, 2.00649788857, ... \n", "478 [1.77678931952, 1.75901452303, 1.63194044828, ... \n", "473 [2.07184349298, 1.8523173213, 1.57434175014, 1... \n", "452 [4.12246021271, 2.54523499489, 2.4867950058, 2... \n", "467 [2.58446475029, 2.32470556259, 2.24585523605, ... \n", "463 [4.47656255722, 2.75450143814, 2.37536446571, ... \n", "458 [3.36360150337, 2.64108294487, 2.6109867382, 2... \n", "470 [3.62894377708, 2.22240310907, 1.64683238268, ... \n", "474 [3.61337640285, 1.91185925007, 1.65783749819, ... \n", "453 [3.45861001015, 2.54270760536, 2.19634951591, ... \n", "468 [3.1637541008, 2.74854323387, 2.28446052551, 1... \n", "464 [3.98527582169, 3.07449765205, 2.29873632431, ... 
\n", "459 [3.12541369438, 2.38677270889, 2.02602273941, ... \n", "471 [1.69485304356, 1.57757995129, 1.44549598694, ... \n", "475 [2.69430851936, 1.6660217762, 1.47920976877, 1... \n", "454 [2.87258497238, 2.12235598564, 1.74181958675, ... \n", "477 [2.64639645576, 1.85615677834, 1.74214244366, ... \n", "476 [2.92805818558, 1.8851287365, 1.81813882828, 1... \n", "460 [3.02111760139, 1.89962665558, 1.70370420456, ... \n", "479 [1.93787550926, 1.62927873135, 1.43638788462, ... \n", "480 [1.97792037725, 1.58831481934, 1.45778542757, ... \n", "\n", " Training Accuracy History \\\n", "450 [0.3171875, 0.41875, 0.50671875, 0.6015625, 0.... \n", "465 [0.3175, 0.37109375, 0.38828125, 0.40578125, 0... \n", "461 [0.31625, 0.364375, 0.3853125, 0.39640625, 0.4... \n", "456 [0.31234375, 0.37171875, 0.38453125, 0.4017187... \n", "469 [0.434075342466, 0.494220890411, 0.49732448630... \n", "472 [0.433112157534, 0.488976883562, 0.48919092465... \n", "451 [0.285625, 0.38515625, 0.434375, 0.45453125, 0... \n", "466 [0.2734375, 0.3384375, 0.351875, 0.3684375, 0.... \n", "462 [0.27171875, 0.329375, 0.3478125, 0.3575, 0.37... \n", "457 [0.2753125, 0.348125, 0.3603125, 0.37171875, 0... \n", "478 [0.427226027397, 0.472067636986, 0.47142551369... \n", "473 [0.423801369863, 0.47238869863, 0.467037671233... \n", "452 [0.2590625, 0.34453125, 0.3853125, 0.41328125,... \n", "467 [0.24, 0.3053125, 0.30859375, 0.33578125, 0.33... \n", "463 [0.246875, 0.30140625, 0.32328125, 0.33078125,... \n", "458 [0.25796875, 0.30046875, 0.31515625, 0.3276562... \n", "470 [0.403681506849, 0.442851027397, 0.44670376712... \n", "474 [0.398330479452, 0.436108732877, 0.43503852739... \n", "453 [0.3503125, 0.45375, 0.51265625, 0.54, 0.57953... \n", "468 [0.34484375, 0.41640625, 0.430625, 0.43796875,... \n", "464 [0.35546875, 0.40953125, 0.42703125, 0.4415625... \n", "459 [0.34875, 0.4153125, 0.425625, 0.439375, 0.457... \n", "471 [0.472709760274, 0.513377568493, 0.52215325342... 
\n", "475 [0.466181506849, 0.505351027397, 0.52097602739... \n", "454 [0.338125, 0.40359375, 0.4303125, 0.4446875, 0... \n", "477 [0.33828125, 0.3934375, 0.41375, 0.43125, 0.43... \n", "476 [0.33171875, 0.38765625, 0.4053125, 0.4228125,... \n", "460 [0.3428125, 0.39125, 0.408125, 0.428125, 0.430... \n", "479 [0.461793664384, 0.505565068493, 0.52054794520... \n", "480 [0.456014554795, 0.499143835616, 0.50941780821... \n", "\n", " Validation Accuracy History \\\n", "450 [0.36375, 0.36875, 0.36375, 0.395, 0.36, 0.375... \n", "465 [0.35, 0.375, 0.39, 0.405, 0.39875, 0.41125, 0... \n", "461 [0.36125, 0.38375, 0.375, 0.4025, 0.3975, 0.40... \n", "456 [0.35125, 0.3775, 0.38375, 0.405, 0.405, 0.398... \n", "469 [0.49140625, 0.5, 0.50625, 0.4921875, 0.511718... \n", "472 [0.49453125, 0.48984375, 0.496875, 0.49375, 0.... \n", "451 [0.1625, 0.25125, 0.28875, 0.26875, 0.31625, 0... \n", "466 [0.1525, 0.245, 0.21375, 0.30875, 0.31375, 0.3... \n", "462 [0.21, 0.1375, 0.225, 0.2825, 0.3175, 0.34, 0.... \n", "457 [0.16625, 0.20875, 0.26375, 0.2925, 0.32, 0.36... \n", "478 [0.38671875, 0.41875, 0.43203125, 0.49140625, ... \n", "473 [0.38515625, 0.40546875, 0.45390625, 0.4765625... \n", "452 [0.12375, 0.17, 0.1925, 0.25375, 0.28625, 0.30... \n", "467 [0.14375, 0.17375, 0.2225, 0.22625, 0.2875, 0.... \n", "463 [0.12625, 0.1525, 0.18, 0.22375, 0.30625, 0.33... \n", "458 [0.125, 0.17375, 0.1825, 0.23125, 0.29625, 0.3... \n", "470 [0.19375, 0.21015625, 0.4265625, 0.46953125, 0... \n", "474 [0.19375, 0.31171875, 0.409375, 0.4484375, 0.4... \n", "453 [0.17125, 0.2525, 0.31875, 0.37375, 0.41, 0.41... \n", "468 [0.1425, 0.24625, 0.3075, 0.3425, 0.3825, 0.40... \n", "464 [0.13625, 0.2625, 0.3025, 0.3525, 0.3775, 0.38... \n", "459 [0.1825, 0.2575, 0.3075, 0.35125, 0.39125, 0.4... \n", "471 [0.4421875, 0.44140625, 0.50703125, 0.54921875... \n", "475 [0.28515625, 0.4109375, 0.5, 0.528125, 0.52812... \n", "454 [0.1775, 0.2525, 0.33625, 0.3625, 0.3975, 0.39... 
\n", "477 [0.1925, 0.30875, 0.335, 0.385, 0.4025, 0.4012... \n", "476 [0.2175, 0.28875, 0.2875, 0.3775, 0.38, 0.3925... \n", "460 [0.1825, 0.2975, 0.37, 0.385, 0.38625, 0.39125... \n", "479 [0.48203125, 0.40625, 0.49921875, 0.48828125, ... \n", "480 [0.48203125, 0.46171875, 0.484375, 0.47890625,... \n", "\n", " Weights File \\\n", "450 fcnn_0.01x0.02x0.85x1e-10_gpux72572239216642x5... \n", "465 fcnn_0.01x0.02x0.85x1e-10_gpux72572239216642x5... \n", "461 fcnn_0.01x0.02x0.85x1e-10_gpux72572239216642x5... \n", "456 fcnn_0.01x0.02x0.85x1e-10_gpux72572239216642x5... \n", "469 fcnn_0.01x0.02x0.85x1e-10_gpux72572239216642x2... \n", "472 fcnn_0.01x0.02x0.85x1e-10_gpux72572239216642x2... \n", "451 xception_0.01x0.02x0.85x1e-10_gpux725722392166... \n", "466 xception_0.01x0.02x0.85x1e-10_gpux725722392166... \n", "462 xception_0.01x0.02x0.85x1e-10_gpux725722392166... \n", "457 xception_0.01x0.02x0.85x1e-10_gpux725722392166... \n", "478 xception_0.01x0.02x0.85x1e-10_gpux725722392166... \n", "473 xception_0.01x0.02x0.85x1e-10_gpux725722392166... \n", "452 inception_v3_0.01x0.02x0.85x1e-10_gpux72572239... \n", "467 inception_v3_0.01x0.02x0.85x1e-10_gpux72572239... \n", "463 inception_v3_0.01x0.02x0.85x1e-10_gpux72572239... \n", "458 inception_v3_0.01x0.02x0.85x1e-10_gpux72572239... \n", "470 inception_v3_0.01x0.02x0.85x1e-10_gpux72572239... \n", "474 inception_v3_0.01x0.02x0.85x1e-10_gpux72572239... \n", "453 resnet50_0.01x0.02x0.85x1e-10_gpux725722392166... \n", "468 resnet50_0.01x0.02x0.85x1e-10_gpux725722392166... \n", "464 resnet50_0.01x0.02x0.85x1e-10_gpux725722392166... \n", "459 resnet50_0.01x0.02x0.85x1e-10_gpux725722392166... \n", "471 resnet50_0.01x0.02x0.85x1e-10_gpux725722392166... \n", "475 resnet50_0.01x0.02x0.85x1e-10_gpux725722392166... \n", "454 vgg16_0.01x0.02x0.85x1e-10_gpux72572239216642x... \n", "477 vgg16_0.01x0.02x0.85x1e-10_gpux72572239216642x... \n", "476 vgg16_0.01x0.02x0.85x1e-10_gpux72572239216642x... 
\n", "460 vgg16_0.01x0.02x0.85x1e-10_gpux72572239216642x... \n", "479 vgg16_0.01x0.02x0.85x1e-10_gpux72572239216642x... \n", "480 vgg16_0.01x0.02x0.85x1e-10_gpux72572239216642x... \n", "\n", " (Test Predictions, inclass) \\\n", "450 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "465 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "461 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "456 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "469 [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... \n", "472 [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... \n", "451 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "466 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "462 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "457 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "478 [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... \n", "473 [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... \n", "452 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "467 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "463 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "458 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "470 [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... \n", "474 [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... \n", "453 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "468 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "464 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "459 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "471 [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... \n", "475 [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... \n", "454 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "477 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "476 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "460 [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100... \n", "479 [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... 
\n", "480 [839.0, 1085.0, 299.0, 323.0, 309.0, 128.0, 20... \n", "\n", " (Test Predictions, correct) \\\n", "450 [34.0, 21.0, 19.0, 47.0, 40.0, 29.0, 18.0, 33.0] \n", "465 [51.0, 26.0, 30.0, 70.0, 40.0, 36.0, 13.0, 49.0] \n", "461 [43.0, 25.0, 35.0, 68.0, 38.0, 35.0, 19.0, 48.0] \n", "456 [46.0, 24.0, 35.0, 66.0, 38.0, 39.0, 20.0, 45.0] \n", "469 [454.0, 442.0, 29.0, 148.0, 11.0, 5.0, 5.0, 11... \n", "472 [601.0, 397.0, 47.0, 171.0, 3.0, 7.0, 3.0, 116... \n", "451 [42.0, 23.0, 24.0, 57.0, 34.0, 24.0, 19.0, 47.0] \n", "466 [40.0, 16.0, 26.0, 66.0, 36.0, 41.0, 18.0, 45.0] \n", "462 [38.0, 20.0, 27.0, 63.0, 30.0, 41.0, 16.0, 43.0] \n", "457 [39.0, 21.0, 27.0, 64.0, 29.0, 43.0, 19.0, 41.0] \n", "478 [303.0, 721.0, 1.0, 5.0, 0.0, 0.0, 0.0, 971.0] \n", "473 [251.0, 816.0, 3.0, 8.0, 0.0, 0.0, 1.0, 810.0] \n", "452 [34.0, 15.0, 28.0, 49.0, 36.0, 29.0, 10.0, 38.0] \n", "467 [38.0, 11.0, 34.0, 59.0, 29.0, 36.0, 4.0, 46.0] \n", "463 [42.0, 17.0, 28.0, 56.0, 31.0, 33.0, 8.0, 44.0] \n", "458 [38.0, 14.0, 27.0, 59.0, 29.0, 36.0, 5.0, 47.0] \n", "470 [350.0, 631.0, 0.0, 43.0, 0.0, 0.0, 3.0, 810.0] \n", "474 [402.0, 619.0, 1.0, 27.0, 0.0, 0.0, 2.0, 866.0] \n", "453 [46.0, 26.0, 27.0, 70.0, 33.0, 37.0, 20.0, 45.0] \n", "468 [51.0, 25.0, 29.0, 76.0, 27.0, 37.0, 21.0, 48.0] \n", "464 [52.0, 21.0, 28.0, 68.0, 29.0, 42.0, 19.0, 50.0] \n", "459 [48.0, 21.0, 33.0, 70.0, 30.0, 47.0, 25.0, 50.0] \n", "471 [13.0, 575.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1104.0] \n", "475 [16.0, 305.0, 0.0, 8.0, 0.0, 0.0, 0.0, 1363.0] \n", "454 [39.0, 28.0, 37.0, 82.0, 42.0, 18.0, 20.0, 47.0] \n", "477 [38.0, 29.0, 45.0, 71.0, 42.0, 16.0, 14.0, 57.0] \n", "476 [61.0, 34.0, 18.0, 57.0, 43.0, 38.0, 3.0, 56.0] \n", "460 [34.0, 25.0, 13.0, 77.0, 32.0, 11.0, 6.0, 80.0] \n", "479 [111.0, 66.0, 4.0, 2.0, 0.0, 0.0, 0.0, 1455.0] \n", "480 [303.0, 542.0, 0.0, 19.0, 0.0, 0.0, 1.0, 1345.0] \n", "\n", " (Test Predictions, incorrect_thought_was) \\\n", "450 [76.0, 98.0, 84.0, 52.0, 40.0, 67.0, 84.0, 58.0] \n", "465 [52.0, 76.0, 
80.0, 65.0, 42.0, 74.0, 50.0, 46.0] \n", "461 [47.0, 72.0, 87.0, 67.0, 44.0, 74.0, 49.0, 49.0] \n", "456 [54.0, 77.0, 81.0, 59.0, 44.0, 76.0, 49.0, 47.0] \n", "469 [568.0, 499.0, 97.0, 220.0, 15.0, 14.0, 97.0, ... \n", "472 [822.0, 390.0, 108.0, 141.0, 2.0, 10.0, 26.0, ... \n", "451 [71.0, 64.0, 59.0, 63.0, 52.0, 71.0, 66.0, 84.0] \n", "466 [53.0, 78.0, 61.0, 75.0, 64.0, 78.0, 26.0, 77.0] \n", "462 [57.0, 72.0, 66.0, 82.0, 46.0, 85.0, 36.0, 78.0] \n", "457 [57.0, 81.0, 66.0, 75.0, 61.0, 88.0, 27.0, 62.0] \n", "478 [427.0, 1263.0, 4.0, 3.0, 0.0, 0.0, 46.0, 907.0] \n", "473 [394.0, 1650.0, 9.0, 3.0, 3.0, 0.0, 6.0, 697.0] \n", "452 [88.0, 76.0, 61.0, 81.0, 60.0, 69.0, 49.0, 77.0] \n", "467 [77.0, 46.0, 67.0, 96.0, 53.0, 102.0, 27.0, 75.0] \n", "463 [79.0, 54.0, 57.0, 94.0, 54.0, 105.0, 26.0, 72.0] \n", "458 [78.0, 50.0, 59.0, 95.0, 50.0, 109.0, 22.0, 82.0] \n", "470 [649.0, 1319.0, 1.0, 48.0, 0.0, 2.0, 55.0, 740.0] \n", "474 [680.0, 1158.0, 0.0, 14.0, 0.0, 0.0, 94.0, 788.0] \n", "453 [51.0, 71.0, 83.0, 55.0, 53.0, 67.0, 57.0, 59.0] \n", "468 [49.0, 56.0, 87.0, 70.0, 43.0, 87.0, 36.0, 58.0] \n", "464 [48.0, 59.0, 77.0, 63.0, 44.0, 89.0, 43.0, 68.0] \n", "459 [57.0, 50.0, 73.0, 68.0, 39.0, 94.0, 34.0, 61.0] \n", "471 [30.0, 1016.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1910.0] \n", "475 [15.0, 377.0, 0.0, 4.0, 0.0, 0.0, 1.0, 2562.0] \n", "454 [48.0, 79.0, 68.0, 140.0, 47.0, 22.0, 49.0, 34.0] \n", "477 [43.0, 94.0, 88.0, 50.0, 71.0, 23.0, 34.0, 85.0] \n", "476 [73.0, 131.0, 36.0, 17.0, 63.0, 80.0, 16.0, 74.0] \n", "460 [30.0, 86.0, 51.0, 64.0, 42.0, 24.0, 26.0, 199.0] \n", "479 [73.0, 41.0, 7.0, 1.0, 0.0, 0.0, 8.0, 2883.0] \n", "480 [282.0, 593.0, 0.0, 1.0, 0.0, 0.0, 8.0, 1557.0] \n", "\n", " (Test Predictions, incorrect_thought_wasnt) \n", "450 [66.0, 79.0, 81.0, 53.0, 60.0, 71.0, 82.0, 67.0] \n", "465 [49.0, 74.0, 70.0, 30.0, 60.0, 64.0, 87.0, 51.0] \n", "461 [57.0, 75.0, 65.0, 32.0, 62.0, 65.0, 81.0, 52.0] \n", "456 [54.0, 76.0, 65.0, 34.0, 62.0, 61.0, 80.0, 55.0] \n", "469 
[385.0, 643.0, 270.0, 175.0, 298.0, 123.0, 199... \n", "472 [238.0, 688.0, 252.0, 152.0, 306.0, 121.0, 201... \n", "451 [58.0, 77.0, 76.0, 43.0, 66.0, 76.0, 81.0, 53.0] \n", "466 [60.0, 84.0, 74.0, 34.0, 64.0, 59.0, 82.0, 55.0] \n", "462 [62.0, 80.0, 73.0, 37.0, 70.0, 59.0, 84.0, 57.0] \n", "457 [61.0, 79.0, 73.0, 36.0, 71.0, 57.0, 81.0, 59.0] \n", "478 [536.0, 364.0, 298.0, 318.0, 309.0, 128.0, 204... \n", "473 [588.0, 269.0, 296.0, 315.0, 309.0, 128.0, 203... \n", "452 [66.0, 85.0, 72.0, 51.0, 64.0, 71.0, 90.0, 62.0] \n", "467 [62.0, 89.0, 66.0, 41.0, 71.0, 64.0, 96.0, 54.0] \n", "463 [58.0, 83.0, 72.0, 44.0, 69.0, 67.0, 92.0, 56.0] \n", "458 [62.0, 86.0, 73.0, 41.0, 71.0, 64.0, 95.0, 53.0] \n", "470 [489.0, 454.0, 299.0, 280.0, 309.0, 128.0, 201... \n", "474 [437.0, 466.0, 298.0, 296.0, 309.0, 128.0, 202... \n", "453 [54.0, 74.0, 73.0, 30.0, 67.0, 63.0, 80.0, 55.0] \n", "468 [49.0, 75.0, 71.0, 24.0, 73.0, 63.0, 79.0, 52.0] \n", "464 [48.0, 79.0, 72.0, 32.0, 71.0, 58.0, 81.0, 50.0] \n", "459 [52.0, 79.0, 67.0, 30.0, 70.0, 53.0, 75.0, 50.0] \n", "471 [826.0, 510.0, 299.0, 322.0, 309.0, 128.0, 204... \n", "475 [823.0, 780.0, 299.0, 315.0, 309.0, 128.0, 204... \n", "454 [61.0, 72.0, 63.0, 18.0, 58.0, 82.0, 80.0, 53.0] \n", "477 [62.0, 71.0, 55.0, 29.0, 58.0, 84.0, 86.0, 43.0] \n", "476 [39.0, 66.0, 82.0, 43.0, 57.0, 62.0, 97.0, 44.0] \n", "460 [66.0, 75.0, 87.0, 23.0, 68.0, 89.0, 94.0, 20.0] \n", "479 [728.0, 1019.0, 295.0, 321.0, 309.0, 128.0, 20... \n", "480 [536.0, 543.0, 299.0, 304.0, 309.0, 128.0, 203... 
\n", "\n", "[30 rows x 27 columns]" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "display(fma_results.loc[indices_of_runs])\n", "ut.save_obj(fma_results,\"fma_results_numbercrunch\")" ] }, { "cell_type": "code", "execution_count": 43, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "iterables_a = [[\"fcnn\",\"inception_v3\",\"xception\",\"resnet50\",\"vgg16\"], \n", " [\"small\"],\n", " [0, 0.33, 0.5, 0.66]]\n", "iterables_b = [[\"fcnn\",\"inception_v3\",\"xception\",\"resnet50\",\"vgg16\"], \n", " [\"extended\"],\n", " [0, 0.33]]\n", "hier_idx = pd.MultiIndex.from_product(iterables_a, names=['Model', \n", " 'Data Set Size', \n", " 'Data Augmentation Factor'])\n", "hier_idx = hier_idx.append(pd.MultiIndex.from_product(iterables_b, names=['Model', \n", " 'Data Set Size', \n", " 'Data Augmentation Factor']))\n", "iterables_cols = [[\"Overall\"],\n", " [\"count\", \"correct\", \"correct_pct\"]]\n", "hier_cols = pd.MultiIndex.from_product(iterables_cols, names=['Class', \n", " 'Statistic'])\n", "iterables_cols = [param_dict[\"classes\"],\n", " [\"in_class\", \n", " \"correct\", \"correct_pct\", \n", " \"incorrect_thought_was\", \"incorrect_thought_was_pct\", \n", " \"incorrect_thought_wasnt\", \"incorrect_thought_wasnt_pct\"]]\n", "hier_cols = hier_cols.append(pd.MultiIndex.from_product(iterables_cols, names=['Class', \n", " 'Statistic']))\n", "try:\n", " test_results = ut.load_obj(\"test_results\")\n", "except:\n", " test_results = pd.DataFrame(index = hier_idx, columns = hier_cols)" ] }, { "cell_type": "code", "execution_count": 44, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "for item in indices_of_runs:\n", " model = fma_results.loc[item, \"Model\"]\n", " datasize = fma_results.loc[item, \"Data Set Size\"]\n", " daf = fma_results.loc[item, \"Data Augmentation Factor\"]\n", " row = ((test_results.index.get_level_values(\"Model\") == model) & \n", " 
(test_results.index.get_level_values(\"Data Set Size\") == datasize) &\n", " (test_results.index.get_level_values(\"Data Augmentation Factor\") == daf))\n", " try:\n", " tot_count = 0\n", " corr_count = 0\n", " for idx in np.arange(param_dict[\"num_classes\"]):\n", " cls_name = param_dict[\"classes\"][idx]\n", " count = fma_results.loc[item, \"(Test Predictions, inclass)\"][idx]\n", " tot_count += count\n", "\n", " # Count\n", " test_results.loc[row,\n", " (test_results.columns.get_level_values(\"Class\") == cls_name) &\n", " (test_results.columns.get_level_values(\"Statistic\") == \"in_class\")] = (\n", " count\n", " )\n", " # Correct vals:\n", " test_results.loc[row,\n", " (test_results.columns.get_level_values(\"Class\") == cls_name) &\n", " (test_results.columns.get_level_values(\"Statistic\") == \"correct\")] = (\n", " fma_results.loc[item, \"(Test Predictions, correct)\"][idx]\n", " )\n", " corr_count += fma_results.loc[item, \"(Test Predictions, correct)\"][idx]\n", " test_results.loc[row,\n", " (test_results.columns.get_level_values(\"Class\") == cls_name) &\n", " (test_results.columns.get_level_values(\"Statistic\") == \"correct_pct\")] = (\n", " fma_results.loc[item, \"(Test Predictions, correct)\"][idx]/count\n", " )\n", " # Incorrect, thought was vals:\n", " test_results.loc[row,\n", " (test_results.columns.get_level_values(\"Class\") == cls_name) &\n", " (test_results.columns.get_level_values(\"Statistic\") == \n", " \"incorrect_thought_was\")] = (\n", " fma_results.loc[item, \n", " \"(Test Predictions, incorrect_thought_was)\"][idx]\n", " )\n", " test_results.loc[row,\n", " (test_results.columns.get_level_values(\"Class\") == cls_name) &\n", " (test_results.columns.get_level_values(\"Statistic\") == \n", " \"incorrect_thought_was_pct\")] = (\n", " fma_results.loc[item, \n", " \"(Test Predictions, incorrect_thought_was)\"][idx]/count\n", " )\n", "\n", " # Incorrect, thought wasn't vals:\n", " test_results.loc[row,\n", " 
(test_results.columns.get_level_values(\"Class\") == cls_name) &\n", " (test_results.columns.get_level_values(\"Statistic\") == \n", " \"incorrect_thought_wasnt\")] = (\n", " fma_results.loc[item, \n", " \"(Test Predictions, incorrect_thought_wasnt)\"][idx]\n", " )\n", " test_results.loc[row,\n", " (test_results.columns.get_level_values(\"Class\") == cls_name) &\n", " (test_results.columns.get_level_values(\"Statistic\") == \n", " \"incorrect_thought_wasnt_pct\")] = (\n", " fma_results.loc[item, \n", " \"(Test Predictions, incorrect_thought_wasnt)\"][idx]/count\n", " )\n", " # Model (row) statistics:\n", " test_results.loc[row,\n", " (test_results.columns.get_level_values(\"Class\") == \"Overall\") &\n", " (test_results.columns.get_level_values(\"Statistic\") == \"count\")] = (\n", " tot_count\n", " )\n", " test_results.loc[row,\n", " (test_results.columns.get_level_values(\"Class\") == \"Overall\") &\n", " (test_results.columns.get_level_values(\"Statistic\") == \"correct\")] = (\n", " corr_count\n", " )\n", " test_results.loc[row,\n", " (test_results.columns.get_level_values(\"Class\") == \"Overall\") &\n", " (test_results.columns.get_level_values(\"Statistic\") == \"correct_pct\")] = (\n", " corr_count/tot_count\n", " )\n", " except:\n", " pass" ] }, { "cell_type": "code", "execution_count": 46, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ClassOverallelectronic...poprock
Statisticcountcorrectcorrect_pctin_classcorrectcorrect_pctincorrect_thought_wasincorrect_thought_was_pctincorrect_thought_wasntincorrect_thought_wasnt_pct...incorrect_thought_was_pctincorrect_thought_wasntincorrect_thought_wasnt_pctin_classcorrectcorrect_pctincorrect_thought_wasincorrect_thought_was_pctincorrect_thought_wasntincorrect_thought_wasnt_pct
ModelData Set SizeData Augmentation Factor
fcnnsmall0.08002410.30125100340.34760.76660.66...0.84820.82100330.33580.58670.67
\n", "

1 rows × 59 columns

\n", "
" ], "text/plain": [ "Class Overall \\\n", "Statistic count correct correct_pct \n", "Model Data Set Size Data Augmentation Factor \n", "fcnn small 0.0 800 241 0.30125 \n", "\n", "Class electronic \\\n", "Statistic in_class correct correct_pct \n", "Model Data Set Size Data Augmentation Factor \n", "fcnn small 0.0 100 34 0.34 \n", "\n", "Class \\\n", "Statistic incorrect_thought_was \n", "Model Data Set Size Data Augmentation Factor \n", "fcnn small 0.0 76 \n", "\n", "Class \\\n", "Statistic incorrect_thought_was_pct \n", "Model Data Set Size Data Augmentation Factor \n", "fcnn small 0.0 0.76 \n", "\n", "Class \\\n", "Statistic incorrect_thought_wasnt \n", "Model Data Set Size Data Augmentation Factor \n", "fcnn small 0.0 66 \n", "\n", "Class \\\n", "Statistic incorrect_thought_wasnt_pct \n", "Model Data Set Size Data Augmentation Factor \n", "fcnn small 0.0 0.66 \n", "\n", "Class ... \\\n", "Statistic ... \n", "Model Data Set Size Data Augmentation Factor ... \n", "fcnn small 0.0 ... \n", "\n", "Class pop \\\n", "Statistic incorrect_thought_was_pct \n", "Model Data Set Size Data Augmentation Factor \n", "fcnn small 0.0 0.84 \n", "\n", "Class \\\n", "Statistic incorrect_thought_wasnt \n", "Model Data Set Size Data Augmentation Factor \n", "fcnn small 0.0 82 \n", "\n", "Class \\\n", "Statistic incorrect_thought_wasnt_pct \n", "Model Data Set Size Data Augmentation Factor \n", "fcnn small 0.0 0.82 \n", "\n", "Class rock \\\n", "Statistic in_class correct correct_pct \n", "Model Data Set Size Data Augmentation Factor \n", "fcnn small 0.0 100 33 0.33 \n", "\n", "Class \\\n", "Statistic incorrect_thought_was \n", "Model Data Set Size Data Augmentation Factor \n", "fcnn small 0.0 58 \n", "\n", "Class \\\n", "Statistic incorrect_thought_was_pct \n", "Model Data Set Size Data Augmentation Factor \n", "fcnn small 0.0 0.58 \n", "\n", "Class \\\n", "Statistic incorrect_thought_wasnt \n", "Model Data Set Size Data Augmentation Factor \n", "fcnn small 0.0 67 \n", "\n", "Class 
\n", "Statistic incorrect_thought_wasnt_pct \n", "Model Data Set Size Data Augmentation Factor \n", "fcnn small 0.0 0.67 \n", "\n", "[1 rows x 59 columns]" ] }, "execution_count": 46, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# # Show everything \n", "# display(test_results)\n", "\n", "# Show a single example\n", "test_results.loc[(test_results.index.get_level_values(\"Model\") == \"fcnn\") & \n", " (test_results.index.get_level_values(\"Data Set Size\") == \"small\") &\n", " (test_results.index.get_level_values(\"Data Augmentation Factor\") == 0)]" ] }, { "cell_type": "code", "execution_count": 47, "metadata": { "collapsed": true, "scrolled": true }, "outputs": [], "source": [ "ut.save_obj(test_results, \"test_results\")" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.1" } }, "nbformat": 4, "nbformat_minor": 2 }