Feature Engineering in Driverless AI
Dmitry Larko, Kaggle Grandmaster and Senior Data Scientist at H2O.ai, goes into depth on how to apply feature engineering in general and in Driverless AI. This video is over a year old, and the version of Driverless AI shown is a beta; the current version is much more developed.
This is by far one of the best videos I’ve seen on feature engineering, not because I work for H2O.ai, but because it presents the concepts in an easy-to-understand way. Plus, Dmitry does an awesome job of helping viewers follow along with great examples.
The question and answer part is also very good, especially the discussion on overfitting. My notes from the video are below.
- Feature engineering is extremely important in model building
- “Coming up with features is difficult, time-consuming, requires expert knowledge. ‘Applied machine learning’ is basically feature engineering.” - Andrew Ng
- Common Machine Learning workflow (see image below)
- What is feature engineering? The example uses a polar coordinate conversion to make a classification problem linearly separable (code sketch below)
- Creating a target variable is NOT feature engineering
- Removing duplicates, handling missing values, scaling, normalization, and feature selection are NOT feature engineering
- Feature Selection should be done AFTER feature engineering
- Feature Engineering Cycle: Dataset > Hypothesis Set > Validate Hypothesis > Apply Hypothesis > Dataset
- Domain knowledge is key, so is prior experience
- EDA / ML model feedback is important
- Validation set: use cross validation. Be aware of data leakage
- Target encoding is powerful but can introduce leakage when applied wrong
- Feature engineering is hard and very, very time-consuming
- Feature engineering gives you better, simpler models
- In some situations, transform predictor/response variables toward a normal distribution, e.g. with a log transform (code sketch below)
- Feature Encoding turns categorical features into numerical features
- Label encoding and one-hot encoding (code sketch below)
- Label encoding is bad: it implies an order that isn’t really there
- One-hot encoding transforms each category into a binary column (dummy coding)
- One-hot encoding creates a very sparse dataset
- The number of columns BLOWS UP with one-hot encoding
- You can use frequency encoding instead of one-hot encoding (code sketch below)
- Frequency encoding is robust, but what about balanced data sets? Categories with similar counts collapse to nearly the same value
- Then you can use target mean encoding. The downfall is high-cardinality features, where per-category means overfit; this can cause leakage!
- To avoid leakage, you can use a ‘leave-one-out’ scheme
- Apply Bayesian smoothing: take a weighted average of the category mean and the global mean of the training set (code sketch below)
- What about numerical features? Encode them using binning with quantiles, PCA and SVD, or clustering (code sketch below)
- Great, then how do you find feature interactions?
- Apply domain knowledge / apply genetic programming / inspect ML model behavior (investigate model weights, feature importances, etc.) (code sketch below)
- You can also encode categorical features by statistics of a numeric column within each category (standard deviation, etc.) (code sketch below)
- Feature Extraction is about extracting the value hidden inside features, like a zip code
- Zip code can give you state and city information
- Day, week, holiday, etc. can be extracted from date-time features (code sketch below)
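
Below are a few quick code sketches for the notes above. They use synthetic data and made-up column names purely to illustrate the ideas; they are not Driverless AI’s actual transformers. First, the polar-coordinate example: two classes on concentric circles are not linearly separable in Cartesian (x, y) coordinates, but a simple threshold on the engineered radius feature separates them.

```python
import numpy as np

# Synthetic example: two classes on concentric circles (not linearly separable in x, y)
rng = np.random.default_rng(42)
angles = rng.uniform(0, 2 * np.pi, size=200)
radii = np.where(np.arange(200) < 100, 1.0, 3.0)  # class 0 at r=1, class 1 at r=3
x, y = radii * np.cos(angles), radii * np.sin(angles)

# Feature engineering: convert Cartesian (x, y) into polar (r, theta)
r = np.sqrt(x ** 2 + y ** 2)
theta = np.arctan2(y, x)

# A simple linear rule on the engineered radius feature now separates the classes
label = (np.arange(200) >= 100).astype(int)
predicted = (r > 2.0).astype(int)
print("accuracy:", (predicted == label).mean())
```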
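
For the log-transform note: a sketch of pulling a right-skewed variable toward a more symmetric shape with `np.log1p`, which handles zeros safely.

```python
import numpy as np
import pandas as pd

# A right-skewed variable, e.g. a purchase amount (synthetic data)
rng = np.random.default_rng(0)
amount = pd.Series(rng.lognormal(mean=3.0, sigma=1.0, size=1000))

# log1p(x) = log(1 + x) pulls in the long right tail and is safe for zeros
amount_log = np.log1p(amount)
print(f"skew before: {amount.skew():.2f}, skew after: {amount_log.skew():.2f}")
```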
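
For the label-encoding vs. one-hot-encoding notes, a pandas sketch (the `city` column is made up):

```python
import pandas as pd

df = pd.DataFrame({"city": ["NYC", "SF", "SF", "Austin", "NYC"]})

# Label encoding: each category becomes an integer, which implies a false ordering
df["city_label"] = df["city"].astype("category").cat.codes

# One-hot (dummy) encoding: one binary column per category; wide and sparse
# when the feature has many distinct values
df = pd.concat([df, pd.get_dummies(df["city"], prefix="city")], axis=1)
print(df)
```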
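
For the frequency-encoding note: each category is replaced by how often it occurs. When categories are balanced (similar counts), they collapse to nearly the same value, which is the limitation flagged above.

```python
import pandas as pd

df = pd.DataFrame({"city": ["NYC", "SF", "SF", "Austin", "NYC", "SF"]})

# Frequency encoding: replace each category with its relative frequency
freq = df["city"].value_counts(normalize=True)
df["city_freq"] = df["city"].map(freq)
print(df)
```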
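
For the target-mean-encoding notes, a sketch of a leave-one-out scheme with smoothing toward the global training mean. This is one common way to implement the idea; Driverless AI’s actual target-encoding transformer may work differently.

```python
import pandas as pd

def loo_target_encode(df, cat_col, target_col, smoothing=10.0):
    """Leave-one-out target mean encoding with smoothing toward the global mean."""
    global_mean = df[target_col].mean()
    grp = df.groupby(cat_col)[target_col]
    cat_sum = grp.transform("sum")
    cat_count = grp.transform("count")
    # Leave-one-out: exclude the current row from its own category's statistics
    loo_mean = (cat_sum - df[target_col]) / (cat_count - 1).clip(lower=1)
    # Bayesian-style smoothing: weighted average of category mean and global mean
    weight = (cat_count - 1) / ((cat_count - 1) + smoothing)
    return weight * loo_mean + (1 - weight) * global_mean

df = pd.DataFrame({
    "city":   ["NYC", "NYC", "SF", "SF", "SF", "Austin"],
    "bought": [1, 0, 1, 1, 0, 1],
})
df["city_target_enc"] = loo_target_encode(df, "city", "bought")
print(df)
```

On real data you would fit the encoding on the training folds only and apply it to the validation fold, which is where the leakage warning above comes in.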
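
For the numerical-feature note, a scikit-learn sketch of quantile binning, PCA, and clustering on synthetic data:

```python
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))  # four numeric features, purely synthetic

# Binning with quantiles: each feature becomes an ordinal bin index
X_binned = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="quantile").fit_transform(X)

# PCA: project the numeric features onto their top principal components
X_pca = PCA(n_components=2).fit_transform(X)

# Clustering: use the cluster id (or distance to each centroid) as a new feature
cluster_id = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
```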
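
For the feature-interaction notes, a sketch of using model behavior to judge a candidate interaction: the synthetic target here depends on the product of two features, and the engineered `f1_x_f2` column dominates the model’s feature importances.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
df = pd.DataFrame(rng.normal(size=(1000, 2)), columns=["f1", "f2"])
y = (df["f1"] * df["f2"] > 0).astype(int)  # target driven by the interaction

# Candidate interaction feature, found via domain knowledge or search
df["f1_x_f2"] = df["f1"] * df["f2"]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(df, y)
for name, importance in zip(df.columns, model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```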
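
For the encode-by-statistics note, a sketch that describes each category by statistics of a numeric column within that category (column names are made up):

```python
import pandas as pd

df = pd.DataFrame({
    "store": ["A", "A", "B", "B", "B", "C"],
    "sales": [10.0, 12.0, 30.0, 25.0, 35.0, 5.0],
})

# Encode each category by per-category statistics of a numeric column
# (note: singleton categories like "C" get NaN std, which would need handling)
stats = (df.groupby("store")["sales"]
           .agg(["mean", "std", "min", "max"])
           .add_prefix("store_sales_")
           .reset_index())
df = df.merge(stats, on="store", how="left")
print(df)
```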
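
Finally, for the feature-extraction notes, a sketch of pulling components out of a date-time column and mapping a zip code to coarser geography; the zip-to-state mapping here is a toy lookup and would come from a real reference table in practice.

```python
import pandas as pd

df = pd.DataFrame({
    "zip": ["10001", "94105", "10001"],
    "signup": pd.to_datetime(["2018-12-24", "2019-01-01", "2019-03-16"]),
})

# Date-time extraction: day, week, month, weekday, weekend flag, etc.
# (holiday flags would usually come from a calendar lookup)
df["signup_day"] = df["signup"].dt.day
df["signup_week"] = df["signup"].dt.isocalendar().week.astype(int)
df["signup_month"] = df["signup"].dt.month
df["signup_weekday"] = df["signup"].dt.dayofweek
df["signup_is_weekend"] = (df["signup_weekday"] >= 5).astype(int)

# Zip code extraction: map to state/city via a lookup table (toy mapping here)
zip_to_state = {"10001": "NY", "94105": "CA"}
df["state"] = df["zip"].map(zip_to_state)
print(df)
```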
Update: The H2O.ai documentation on the feature transformations applied is here. Check it out, it’s pretty intense.