Build a Machine Learning Framework
Great article by Florian Cäsar on how his team developed a new machine learning framework. From scratch. In 491 steps!
He sums up the entire process in this great quote:
| *From images, text files, or your cat videos, bits are fed to the
data pipeline that transforms them into usable data chunks and in
turn to data sets,*
| *which are then fed in small pieces to a trainer that manages all
the training and passes it right on to the underlying neural
network,*
| *which consists of many underlying neural network layers connected
through an arbitrarily linear or funky architecture,*
| *which consist of many underlying neurons that form the smallest
computational unit and are nudged in the right direction according
to the trainer’s optimiser,*
| *which takes the network and the transient training data in the
shape of layer buffers, marks the parameters it can improve, runs
every layer, and calculates a “how well did we do” score based on
the calculated and correct answers from the supplied small pieces
of the given dataset according to the optimiser’s settings,*
| *which computes the gradient of every parameter with respect to
the score and then nudges the individual neurons correspondingly,*
| *which then is run again and again until the optimiser reports
results that are good enough as set in a rich criteria and hook
system,*
| *which is based on global and local nested
parameter-identifier-registries that contain the shared parameters
and distribute them safely to all workers*
| *which are the actual workhorses of the training process that do
as their operator says using individual and separate mathematical
backends,*
| *which use the layer-defined placeholder computation graphs and
put in the raw data and then execute it on their computational
backend,*
| *which are all also managed by the operator that distributes the
worker’s work as needed and configured and also functions as a
coordinator to the owning trainer,*
| *which connects the network, the optimiser, the operator, the
initialisers,*
| *which tell the trainer with which distribution to initialise what
parameters, which work similar to hooks that act as a bridge
between them all and communicate with external things using the
Sigma environment,*
| *which is the container and laid-back manager to everything that
also supplies and runs these external things called monitors,*
| *which can be truly anything that makes use of the training data
and*
| *which finally display the learned funny cat image*
| *… from the hooks from the workers from their operator from
its assigned network from its dozens of layers from its millions
of individual neurons derived from some data records from data
chunks from data sets from data extractors.*
In other words, they created a new piece of software called Sigma.Core.
Sigma.Core
Sigma.Core appears to be Windows-based deep learning software. Its feature list is small but impressive:
- Supports different deep learning layers (e.g. dropout, recurrent, etc.)
- Supports both linear and nonlinear networks
- Four different optimizers
- Hooks for storing and restoring checkpoints and for recording CPU and runtime metrics
- Runs on single and multiple CPUs and on CUDA GPUs
- Native Windows GUI
- Functional automatic differentiation (sketched below)
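The last item deserves a closer look, since automatic differentiation is what makes "computes the gradient of every parameter with respect to the score" possible without hand-derived formulas. Below is a tiny reverse-mode sketch of the idea in Python; Sigma.Core's own implementation is written in C# and is surely more elaborate, so treat this purely as an illustration of recording operations and replaying them backwards.

```python
# Minimal reverse-mode automatic differentiation: each Var remembers how it
# was built, and backward() pushes gradients to its parents via the chain
# rule. Illustration only; not how Sigma.Core actually implements it.

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # pairs of (parent Var, local gradient)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # accumulate d(output)/d(self), then propagate along every edge
        self.grad += seed
        for parent, local_grad in self.parents:
            parent.backward(seed * local_grad)

# f(x, y) = x * y + x  =>  df/dx = y + 1 = 4, df/dy = x = 2
x, y = Var(2.0), Var(3.0)
f = x * y + x
f.backward()
print(x.grad, y.grad)   # 4.0 2.0
```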
How long did it take?
According to Florian, it took about 700 hours of intro/research, 2,000 hours of development, and 2 souls sold to the devil. That's 2,700 hours in total; at a standard 40-hour work week, that's over 67 weeks, well more than a full year of work for one person!