# DL_DetectAnomalies1_Deploy

Loads a deep learning model and prepares its execution on a specific target device.

### Syntax

```
void fil::DL_DetectAnomalies1_Deploy
(
    const fil::DetectAnomalies1ModelDirectory& inModelDirectory,
    const ftl::Optional<fil::DeviceType::Type>& inTargetDevice,
    const bool inReconstructHint,
    fil::DetectAnomalies1ModelId& outModelId
)
```

### Parameters

| Name | Type | Default | Description |
|------|------|---------|-------------|
| inModelDirectory | const DetectAnomalies1ModelDirectory& | | A Detect Anomalies 1 model stored in a specific disk directory. |
| inTargetDevice | const Optional&lt;DeviceType::Type&gt;& | NIL | A device selected for deploying and executing the model. If not set, a device is selected depending on the installed version (CPU/GPU) of the Deep Learning Add-on. |
| inReconstructHint | const bool | True | Prepares the model for reconstruction computation in advance. |
| outModelId | DetectAnomalies1ModelId& | | Identifier of the deployed model. |

### Hints

• In most cases, this filter should be placed in the INITIALIZE section.
• Executing this filter may take several seconds.
• This filter should be connected to DL_DetectAnomalies1 through the ModelId ports.
• You can edit the model directly through inModelDirectory. Alternatively, use the Deep Learning Editor application and simply copy the path to the created model.
• If any subsequent DL_DetectAnomalies1 filter using the deployed model is set to compute a reconstruction, it is advisable to set inReconstructHint to True; otherwise, set inReconstructHint to False. Following these guidelines ensures optimal memory usage and avoids a performance hit on the first call to DL_DetectAnomalies1.
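The hints above can be sketched in code. This is a minimal, non-authoritative sketch: the umbrella header name, the string-to-`DetectAnomalies1ModelDirectory` construction, and the model path are assumptions for illustration; consult the library reference for the exact API.

```cpp
#include <AVL.h>  // assumed umbrella header of the library; actual name may differ

int main()
{
    // INITIALIZE section: deploy once, before the processing loop.
    // This call may take several seconds.
    fil::DetectAnomalies1ModelId modelId;
    fil::DL_DetectAnomalies1_Deploy(
        fil::DetectAnomalies1ModelDirectory("C:/models/anomalies1"), // hypothetical path
        ftl::NIL, // let the installed Add-on version (CPU/GPU) choose the device
        true,     // a later DL_DetectAnomalies1 call will compute a reconstruction
        modelId
    );

    // PROCESS section: pass modelId to DL_DetectAnomalies1 through its
    // ModelId port for each inspected image.
    return 0;
}
```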

### Remarks

• Passing NIL as inTargetDevice (the default) is identical to passing DeviceType::CUDA in the GPU version of the Deep Learning Add-on and DeviceType::CPU in the CPU version of the Deep Learning Add-on.
• The GPU version of the Deep Learning Add-on supports DeviceType::CUDA and DeviceType::CPU as the inTargetDevice value.
• The CPU version of the Deep Learning Add-on supports only DeviceType::CPU as the inTargetDevice value.
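Explicit device selection can be sketched as follows. This assumes the GPU version of the Add-on is installed and that `ftl::Optional` is constructible from its value type; header and path are placeholders, not confirmed API details.

```cpp
#include <AVL.h>  // assumed umbrella header; actual name may differ

int main()
{
    // Force CPU execution even in the GPU version of the Add-on,
    // e.g. to keep the GPU free for other workloads.
    fil::DetectAnomalies1ModelId cpuModelId;
    fil::DL_DetectAnomalies1_Deploy(
        fil::DetectAnomalies1ModelDirectory("C:/models/anomalies1"), // hypothetical path
        ftl::Optional<fil::DeviceType::Type>(fil::DeviceType::CPU),  // explicit device
        false,    // no reconstruction will be requested
        cpuModelId
    );
    return 0;
}
```

Note that in the CPU version of the Add-on, requesting DeviceType::CUDA here would be invalid, since only DeviceType::CPU is supported.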