Incremental transductive workflows
While you can update model artifacts simply by re-running steps one through three (from Data export and configuration to Model transform), Neptune ML supports simpler ways to update your batch ML predictions using new data. One is to use an incremental-model workflow, and the other is model retraining with a warm start.
Incremental-model workflow
In this workflow, you update the ML predictions without retraining the ML model.
Note
You can use this workflow only when the graph data has been updated with new nodes and/or edges. It does not currently work when nodes are removed.
Data export and configuration – This step is the same as in the main workflow.
Incremental data preprocessing – This step is similar to the data preprocessing step in the main workflow, but it reuses the processing configuration used previously, which corresponds to a specific trained model.
Model transform – Instead of a model training step, this model-transform step takes the trained model from the main workflow and the results of the incremental data preprocessing step, and generates new model artifacts to use for inference. The model-transform step launches a SageMaker processing job to perform the computation that generates the updated model artifacts.
Update the Amazon SageMaker inference endpoint – Optionally, if you have an existing inference endpoint, this step updates the endpoint with the new model artifacts generated by the model-transform step. Alternatively, you can create a new inference endpoint with the new model artifacts.
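The steps above can be sketched as the JSON request bodies sent to the Neptune ML management API (POST requests to the `/ml/dataprocessing`, `/ml/modeltransform`, and `/ml/endpoints` resources on your cluster endpoint). This is a minimal sketch, not a complete client: the host, S3 locations, and job IDs are placeholder assumptions, and you should verify the parameter names against the Neptune ML API reference for your engine version.

```python
import json

# Placeholder values -- substitute your own cluster endpoint, job IDs,
# and S3 locations.
NEPTUNE_HOST = "https://your-neptune-endpoint:8182"


def incremental_dataprocessing_payload(previous_job_id, s3_input, s3_output):
    """Request body for POST {NEPTUNE_HOST}/ml/dataprocessing.

    Passing previousDataProcessingJobId tells Neptune ML to reuse the
    processing configuration from the earlier job, which the incremental
    workflow requires.
    """
    return {
        "inputDataS3Location": s3_input,
        "processedDataS3Location": s3_output,
        "previousDataProcessingJobId": previous_job_id,
    }


def model_transform_payload(dataprocessing_job_id, training_job_id, s3_output):
    """Request body for POST {NEPTUNE_HOST}/ml/modeltransform.

    Combines the previously trained model with the incremental
    preprocessing results to generate new model artifacts; Neptune ML
    runs the computation as a SageMaker processing job.
    """
    return {
        "dataProcessingJobId": dataprocessing_job_id,
        "mlModelTrainingJobId": training_job_id,
        "modelTransformOutputS3Location": s3_output,
    }


def endpoint_payload(transform_job_id, update_existing=True):
    """Request body for POST {NEPTUNE_HOST}/ml/endpoints.

    With update_existing=True the existing inference endpoint is updated
    in place with the new artifacts; otherwise a new endpoint is created.
    """
    body = {"mlModelTransformJobId": transform_job_id}
    if update_existing:
        body["update"] = True
    return body


if __name__ == "__main__":
    # Show the payload for the model-transform step.
    print(json.dumps(
        model_transform_payload("incr-dp-001", "training-001",
                                "s3://your-bucket/transform/"),
        indent=2))
```

Each payload would be POSTed to the corresponding `/ml/...` resource (for example with the `requests` library or `awscurl`), and the returned job ID polled for completion before starting the next step.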
Model retraining with a warm start
Using this workflow, you can train and deploy a new ML model for making predictions using the incremental graph data, but start from an existing model generated using the main workflow:
Data export and configuration – This step is the same as in the main workflow.
Incremental data preprocessing – This step is the same as in the incremental model inference workflow. The new graph data should be processed with the same processing method that was used previously for model training.
Model training with a warm start – Model training is similar to what happens in the main workflow, but you can speed up the model hyperparameter search by leveraging information from the previous model training task.
Update the Amazon SageMaker inference endpoint – This step is the same as in the incremental model inference workflow.
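The warm-start training step can likewise be sketched as the request body for POST `/ml/modeltraining` on your cluster endpoint. Again, this is a hedged sketch: the job IDs and S3 location are placeholders, and the parameter names (in particular `previousModelTrainingJobId`, which enables the warm start) should be confirmed against the Neptune ML API reference for your engine version.

```python
import json


def warm_start_training_payload(previous_training_job_id,
                                dataprocessing_job_id,
                                s3_output):
    """Request body for POST {cluster-endpoint}/ml/modeltraining.

    previousModelTrainingJobId points at the earlier training job so
    that Neptune ML can reuse information from its hyperparameter
    search instead of starting from scratch.
    """
    return {
        "dataProcessingJobId": dataprocessing_job_id,
        "trainModelS3Location": s3_output,
        "previousModelTrainingJobId": previous_training_job_id,
    }


if __name__ == "__main__":
    print(json.dumps(
        warm_start_training_payload("training-001", "incr-dp-001",
                                    "s3://your-bucket/training/"),
        indent=2))
```

Once the warm-start training job completes, its ID is passed to the endpoint-update step exactly as in the incremental-model workflow.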