Feature engineering
After exploring your data and building an understanding of it through visualizations and analysis, it’s time for feature engineering. Every unique attribute of the data is considered a feature. For example, when designing a solution for predicting customer churn, you start with customer data that has been collected over time. This data captures features (also known as attributes) such as customer location, age, income level, and recent purchases.

Figure 11: Feature engineering main components
Feature engineering is the process of selecting and transforming variables when creating a predictive model using machine learning or statistical modeling. It typically includes feature creation, feature transformation, feature extraction, and feature selection, as listed in Figure 11. With deep learning, feature engineering is automated as part of the algorithm’s learning.
- Feature creation is the process of deriving new features from existing data to improve predictions. Examples of feature creation techniques include one-hot encoding, binning, splitting, and calculated features; a short sketch follows this list.
- Feature transformation and imputation include steps for replacing missing or invalid feature values. Techniques include forming Cartesian products of features, applying non-linear transformations (such as binning numeric variables into categories), and creating domain-specific features; see the second sketch after this list.
- Feature extraction reduces the amount of data to be processed using dimensionality reduction techniques such as principal component analysis (PCA), independent component analysis (ICA), and linear discriminant analysis (LDA). This lowers the memory and compute power required while preserving the important characteristics of the original data; see the third sketch after this list.
- Feature selection is the process of selecting a subset of the extracted features that is relevant and contributes to minimizing the error rate of a trained model. Feature importance scores and a correlation matrix can be factors in selecting the most relevant features for model training; see the final sketch after this list.
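The sketches below illustrate each component in turn. First, feature creation: a minimal pandas example, assuming a hypothetical churn-style table whose column names (location, age, signup_date, monthly_spend, months_active) are illustrative only, not from the source.

```python
import pandas as pd

# Hypothetical customer-churn style data; column names are illustrative.
df = pd.DataFrame({
    "location": ["NY", "CA", "NY", "TX"],
    "age": [23, 45, 31, 62],
    "signup_date": ["2021-03-01", "2020-07-15", "2022-01-10", "2019-11-30"],
    "monthly_spend": [120.0, 80.5, 200.0, 55.0],
    "months_active": [12, 30, 6, 48],
})

# One-hot encoding: expand the categorical location into binary indicator columns.
df = pd.get_dummies(df, columns=["location"], prefix="loc")

# Binning: group continuous age values into discrete ranges.
df["age_bucket"] = pd.cut(df["age"], bins=[0, 30, 50, 120],
                          labels=["young", "middle", "senior"])

# Splitting: break a compound field (a date string) into its parts.
df[["signup_year", "signup_month", "signup_day"]] = (
    df["signup_date"].str.split("-", expand=True).astype(int)
)

# Calculated feature: derive total spend from two existing columns.
df["lifetime_spend"] = df["monthly_spend"] * df["months_active"]

print(df.head())
```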
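Next, feature transformation and imputation: a sketch using scikit-learn's SimpleImputer plus pandas, again with illustrative column names. The bin edges and the median strategy are assumptions for the example, not recommendations from the source.

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Hypothetical data with missing income values; names are illustrative.
df = pd.DataFrame({
    "income": [52000.0, np.nan, 71000.0, np.nan, 48000.0],
    "plan": ["basic", "premium", "basic", "premium", "basic"],
    "region": ["east", "west", "east", "east", "west"],
})

# Imputation: replace missing incomes with the column median.
imputer = SimpleImputer(strategy="median")
df["income"] = imputer.fit_transform(df[["income"]]).ravel()

# Non-linear transformation: bin the numeric income into categories.
df["income_band"] = pd.cut(df["income"], bins=[0, 50000, 65000, np.inf],
                           labels=["low", "mid", "high"])

# Cartesian product: cross two categorical features into one combined feature
# so a model can learn interactions such as "premium plan in the east".
df["plan_x_region"] = df["plan"] + "_" + df["region"]

print(df)
```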
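For feature extraction, a PCA sketch with scikit-learn on synthetic correlated data. The 95% explained-variance threshold is an assumed, commonly used setting rather than a value from the source.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Synthetic feature matrix: 100 samples, 10 columns driven by 3 hidden signals.
rng = np.random.default_rng(seed=0)
base = rng.normal(size=(100, 3))            # 3 underlying signals
X = base @ rng.normal(size=(3, 10))         # expanded to 10 correlated columns
X += 0.05 * rng.normal(size=X.shape)        # small noise

# PCA is sensitive to scale, so standardize first.
X_scaled = StandardScaler().fit_transform(X)

# Keep enough components to explain 95% of the variance.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X_scaled)

print(f"original shape: {X.shape}, reduced shape: {X_reduced.shape}")
print("explained variance ratio:", pca.explained_variance_ratio_.round(3))
```

Because the 10 columns are generated from 3 underlying signals, PCA recovers a much smaller representation while retaining most of the variance, which is the memory and compute saving the bullet describes.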
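Finally, feature selection: a sketch combining a correlation matrix (to flag redundant features) with a random forest's feature importance scores. The 0.9 correlation cutoff and the synthetic dataset are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic classification data: 8 features, only 3 of them informative.
X, y = make_classification(n_samples=500, n_features=8, n_informative=3,
                           n_redundant=2, random_state=0)
cols = [f"f{i}" for i in range(X.shape[1])]
df = pd.DataFrame(X, columns=cols)

# Correlation matrix: flag highly correlated (redundant) feature pairs,
# looking only at the upper triangle to avoid counting each pair twice.
corr = df.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
redundant = [c for c in cols if (upper[c] > 0.9).any()]
print("candidates to drop (|corr| > 0.9):", redundant)

# Feature importance: rank features by a tree ensemble's importance scores.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(df, y)
ranking = pd.Series(model.feature_importances_, index=cols)
print(ranking.sort_values(ascending=False))
```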