Unloads a deep learning model and frees allocated memory.
void weaver::WEAVER_UnloadModel ( const fil::WeaverModelId& inModelId, const bool inLeaveForRedeploy )
|Name|Type|Default|Description|
|---|---|---|---|
|inModelId|const WeaverModelId&||Identifier of the deployed model|
|inLeaveForRedeploy|const bool|True|Do not unload the model completely, to speed up redeploying the same model in the future. All memory allocated on the device is freed regardless, but allocated system memory is not freed.|
This filter frees memory allocated for a model (mostly its weights). It does not free memory allocated for executing models on a specific device. Use this filter only when there is a risk of out-of-memory errors.
Setting inLeaveForRedeploy to false unloads the model completely, so deploying the same model again will take a comparable amount of time to deploying it for the first time. Setting inLeaveForRedeploy to true keeps some information about the model in RAM, which greatly speeds up subsequent deployments of the same model. Memory allocated on the device is freed regardless.
Set inLeaveForRedeploy to true if the same model may be deployed again within the same program execution.