Supported Hardware
Frigate supports multiple different detectors that work on different types of hardware:
Most Hardware
- Coral EdgeTPU: The Google Coral EdgeTPU is available in USB, Mini PCIe, and m.2 formats allowing for a wide range of compatibility with devices.
- Hailo: The Hailo-8 and Hailo-8L AI acceleration modules are available in m.2 format, with a HAT for RPi devices, offering a wide range of compatibility with devices.
- Community Supported MemryX: The MX3 Acceleration module is available in m.2 format, offering broad compatibility across various platforms.
- Community Supported DeGirum: A service for running detection on hardware devices locally or in the cloud. Cloud hardware and models are provided through their website.
AMD
- ROCm: ROCm can run on AMD Discrete GPUs to provide efficient object detection.
- ONNX: ROCm will automatically be detected and used as a detector in the -rocm Frigate image when a supported ONNX model is configured.
Apple Silicon
- Apple Silicon: Object detection can run on M1 and newer Apple Silicon devices.
Intel
- OpenVINO: OpenVINO can run on Intel Arc GPUs, Intel integrated GPUs, and Intel CPUs to provide efficient object detection.
- ONNX: OpenVINO will automatically be detected and used as a detector in the default Frigate image when a supported ONNX model is configured.
Nvidia GPU
- ONNX: Nvidia GPUs will automatically be detected and used as a detector in the -tensorrt Frigate image when a supported ONNX model is configured.
Nvidia Jetson Community Supported
- TensorRT: TensorRT can run on Jetson devices, using one of many default models.
- ONNX: TensorRT will automatically be detected and used as a detector in the -tensorrt-jp6 Frigate image when a supported ONNX model is configured.
Rockchip Community Supported
- RKNN: RKNN models can run on Rockchip devices with included NPUs.
Synaptics Community Supported
- Synaptics: SyNAP models can run on Synaptics devices (e.g. Astra Machina) with included NPUs.
AXERA Community Supported
- AXEngine: axmodel files can run on AXERA AI acceleration hardware.
For Testing
- CPU Detector (not recommended for actual use): Uses the CPU to run a TFLite model. This is not recommended; in most cases OpenVINO can be used in CPU mode with better results.
Multiple detector types cannot be mixed for object detection (e.g., OpenVINO and Coral EdgeTPU cannot be used for object detection at the same time). This does not affect using hardware for accelerating other tasks such as semantic search.
Officially Supported Detectors
Frigate provides a number of built-in detector types. By default, Frigate will use a single OpenVINO detector running on the CPU. Other detectors may require additional configuration as described below. When using multiple detectors, they will run in dedicated processes but pull from a common queue of detection requests from across all cameras.
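For reference, a minimal sketch of what that implicit default looks like when written out explicitly (the detector name ov is arbitrary):
detectors:
  ov:
    type: openvino
    device: CPU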
Edge TPU Detector
The Edge TPU detector type runs TensorFlow Lite models utilizing the Google Coral delegate for hardware acceleration. To configure an Edge TPU detector, set the "type" attribute to "edgetpu".
The Edge TPU device can be specified using the "device" attribute according to the Documentation for the TensorFlow Lite Python API. If not set, the delegate will use the first device it finds.
See common Edge TPU troubleshooting steps if the Edge TPU is not detected.
Single USB Coral
- Frigate UI
- YAML
Navigate to Settings → System → Detector hardware and select EdgeTPU from the detector type dropdown and click Add, then set device to usb.
detectors:
coral:
type: edgetpu
device: usb
Multiple USB Corals
- Frigate UI
- YAML
Navigate to Settings → System → Detector hardware and select EdgeTPU from the detector type dropdown and click Add to add multiple detectors, specifying usb:0 and usb:1 as the device for each.
detectors:
coral1:
type: edgetpu
device: usb:0
coral2:
type: edgetpu
device: usb:1
Native Coral (Dev Board)
Warning: may have compatibility issues after v0.9.x
- Frigate UI
- YAML
Navigate to Settings → System → Detector hardware and select EdgeTPU from the detector type dropdown and click Add, then leave the device field empty.
detectors:
coral:
type: edgetpu
device: ""
Single PCIE/M.2 Coral
- Frigate UI
- YAML
Navigate to Settings → System → Detector hardware and select EdgeTPU from the detector type dropdown and click Add, then set device to pci.
detectors:
coral:
type: edgetpu
device: pci
Multiple PCIE/M.2 Corals
- Frigate UI
- YAML
Navigate to Settings → System → Detector hardware and select EdgeTPU from the detector type dropdown and click Add to add multiple detectors, specifying pci:0 and pci:1 as the device for each.
detectors:
coral1:
type: edgetpu
device: pci:0
coral2:
type: edgetpu
device: pci:1
Mixing Corals
- Frigate UI
- YAML
Navigate to Settings → System → Detector hardware and select EdgeTPU from the detector type dropdown and click Add to add multiple detectors with different device types (e.g., usb and pci).
detectors:
coral_usb:
type: edgetpu
device: usb
coral_pci:
type: edgetpu
device: pci
EdgeTPU Supported Models
| Model | Notes |
|---|---|
| Mobiledet | Default model |
| YOLOv9 | More accurate but slower than default model |
Mobiledet
A TensorFlow Lite model is provided in the container at /edgetpu_model.tflite and is used by this detector type by default. To provide your own model, bind mount the file into the container and provide the path with model.path.
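For example, a hedged sketch of pointing the detector at your own model; the file name below is hypothetical and assumes you have bind mounted the model into /config/model_cache:
detectors:
  coral:
    type: edgetpu
    device: usb
model:
  # Hypothetical path to a bind-mounted TFLite model compiled for the Edge TPU
  path: /config/model_cache/my_model_edgetpu.tflite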
YOLOv9
YOLOv9 models that are compiled for TensorFlow Lite and properly quantized are supported, but not included by default. See the instructions for downloading a YOLOv9 model compiled with support for the Google Coral.
Frigate+ Users: Follow the instructions to set a model ID in your config file.
YOLOv9 Setup & Config
After placing the downloaded files for the tflite model and labels in your config folder, use the following configuration:
- Frigate UI
- YAML
Navigate to Settings → System → Detector hardware and select EdgeTPU from the detector type dropdown and click Add, then set device to usb. Then navigate to Settings → System → Detection model and configure the model settings:
| Field | Value |
|---|---|
| Object Detection Model Type | yolo-generic |
| Object detection model input width | 320 (should match the imgsize of the model) |
| Object detection model input height | 320 (should match the imgsize of the model) |
| Custom object detector model path | /config/model_cache/yolov9-s-relu6-best_320_int8_edgetpu.tflite |
| Label map for custom object detector | /config/labels-coco17.txt |
detectors:
coral:
type: edgetpu
device: usb
model:
model_type: yolo-generic
width: 320 # <--- should match the imgsize of the model, typically 320
height: 320 # <--- should match the imgsize of the model, typically 320
path: /config/model_cache/yolov9-s-relu6-best_320_int8_edgetpu.tflite
labelmap_path: /config/labels-coco17.txt
Note that due to hardware limitations of the Coral, the labelmap is a subset of the COCO labels and includes only 17 object classes.
Hailo-8
This detector is available for use with both Hailo-8 and Hailo-8L AI Acceleration Modules. The integration automatically detects your hardware architecture via the Hailo CLI and selects the appropriate default model if no custom model is specified.
See the installation docs for information on configuring the Hailo hardware.
If no custom model is provided, the Hailo detector downloads a default model from the Hailo Model Zoo on first startup. Once cached, the model works fully offline. See Network Requirements for details.
Configuration
When configuring the Hailo detector, you have two options to specify the model: a local path or a URL.
If both are provided, the detector will first check for the model at the given local path. If the file is not found, it will download the model from the specified URL. The model file is cached under /config/model_cache/hailo.
YOLO
Use this configuration for YOLO-based models. When no custom model path or URL is provided, the detector automatically downloads the default model based on the detected hardware:
- Hailo-8 hardware: Uses YOLOv6n (default: yolov6n.hef)
- Hailo-8L hardware: Uses YOLOv6n (default: yolov6n.hef)
- Frigate UI
- YAML
Navigate to Settings → System → Detector hardware and select Hailo-8/Hailo-8L from the detector type dropdown and click Add, then set device to PCIe. Then navigate to Settings → System → Detection model and configure the model settings:
| Field | Value |
|---|---|
| Object detection model input width | 320 |
| Object detection model input height | 320 |
| Model Input Tensor Shape | nhwc |
| Model Input Pixel Color Format | rgb |
| Model Input D Type | int |
| Object Detection Model Type | yolo-generic |
| Label map for custom object detector | /labelmap/coco-80.txt |
The detector automatically selects the default model based on your hardware. Optionally, specify a local model path or URL to override.
detectors:
hailo:
type: hailo8l
device: PCIe
model:
width: 320
height: 320
input_tensor: nhwc
input_pixel_format: rgb
input_dtype: int
model_type: yolo-generic
labelmap_path: /labelmap/coco-80.txt
# The detector automatically selects the default model based on your hardware:
# - For Hailo-8 hardware: YOLOv6n (default: yolov6n.hef)
# - For Hailo-8L hardware: YOLOv6n (default: yolov6n.hef)
#
# Optionally, you can specify a local model path to override the default.
# If a local path is provided and the file exists, it will be used instead of downloading.
# Example:
# path: /config/model_cache/hailo/yolov6n.hef
#
# You can also override using a custom URL:
# path: https://hailo-model-zoo.s3.eu-west-2.amazonaws.com/ModelZoo/Compiled/v2.14.0/hailo8/yolov6n.hef
# Just make sure to use the right configuration based on the model.
SSD
For SSD-based models, provide either a model path or URL to your compiled SSD model. The integration will first check the local path before downloading if necessary.
- Frigate UI
- YAML
Navigate to Settings → System → Detector hardware and select Hailo-8/Hailo-8L from the detector type dropdown and click Add, then set device to PCIe. Then navigate to Settings → System → Detection model and configure the model settings:
| Field | Value |
|---|---|
| Object detection model input width | 300 |
| Object detection model input height | 300 |
| Model Input Tensor Shape | nhwc |
| Model Input Pixel Color Format | rgb |
| Object Detection Model Type | ssd |
Specify the local model path or URL for SSD MobileNet v1.
detectors:
hailo:
type: hailo8l
device: PCIe
model:
width: 300
height: 300
input_tensor: nhwc
input_pixel_format: rgb
model_type: ssd
# Specify the local model path (if available) or URL for SSD MobileNet v1.
# Example with a local path:
# path: /config/model_cache/h8l_cache/ssd_mobilenet_v1.hef
#
# Or override using a custom URL:
# path: https://hailo-model-zoo.s3.eu-west-2.amazonaws.com/ModelZoo/Compiled/v2.14.0/hailo8l/ssd_mobilenet_v1.hef
Custom Models
The Hailo detector supports all YOLO models compiled for Hailo hardware that include post-processing. You can specify a custom URL or a local path to download or use your model directly. If both are provided, the detector checks the local path first.
- Frigate UI
- YAML
Navigate to Settings → System → Detector hardware and select Hailo-8/Hailo-8L from the detector type dropdown and click Add, then set device to PCIe. Then navigate to Settings → System → Detection model and configure the model settings to match your custom model dimensions and format.
detectors:
hailo:
type: hailo8l
device: PCIe
model:
width: 640
height: 640
input_tensor: nhwc
input_pixel_format: rgb
input_dtype: int
model_type: yolo-generic
labelmap_path: /labelmap/coco-80.txt
# Optional: Specify a local model path.
# path: /config/model_cache/hailo/custom_model.hef
#
# Alternatively, or as a fallback, provide a custom URL:
# path: https://custom-model-url.com/path/to/model.hef
For additional ready-to-use models, please visit: https://github.com/hailo-ai/hailo_model_zoo
Hailo8 supports all models in the Hailo Model Zoo that include HailoRT post-processing. You're welcome to choose any of these pre-configured models for your implementation.
Note: The config.path parameter can accept either a local file path or a URL ending with .hef. When provided, the detector will first check if the path is a local file path. If the file exists locally, it will use it directly. If the file is not found locally or if a URL was provided, it will attempt to download the model from the specified URL.
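For instance, both forms below target the same config.path parameter; the URL is the one used in the YOLO example above, and the local variant assumes the file has already been placed in the cache directory:
model:
  # Local form: used directly if the file exists on disk
  # path: /config/model_cache/hailo/yolov6n.hef
  # URL form: downloaded on first startup and cached under /config/model_cache/hailo
  path: https://hailo-model-zoo.s3.eu-west-2.amazonaws.com/ModelZoo/Compiled/v2.14.0/hailo8/yolov6n.hef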
OpenVINO Detector
The OpenVINO detector type runs an OpenVINO IR model on AMD and Intel CPUs, Intel GPUs and Intel NPUs. To configure an OpenVINO detector, set the "type" attribute to "openvino".
The OpenVINO device to be used is specified using the "device" attribute according to the naming conventions in the Device Documentation. The most common devices are CPU, GPU, or NPU.
OpenVINO is supported on 6th Gen Intel platforms (Skylake) and newer. It will also run on AMD CPUs despite having no official support for it. A supported Intel platform is required to use the GPU or NPU device with OpenVINO. For detailed system requirements, see the OpenVINO System Requirements.
NPU + GPU Systems: If you have both NPU and GPU available (Intel Core Ultra processors), use NPU for object detection and GPU for enrichments (semantic search, face recognition, etc.) for best performance and compatibility.
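A minimal sketch of that split, pinning the object detector to the NPU so the GPU stays free for enrichments:
detectors:
  ov:
    type: openvino
    device: NPU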
When using many cameras, one detector may not be enough to keep up. Multiple detectors can be defined, assuming GPU resources are available. An example configuration would be:
- Frigate UI
- YAML
Navigate to Settings → System → Detector hardware and select OpenVINO from the detector type dropdown and click Add to add multiple detectors, each targeting GPU or NPU.
detectors:
ov_0:
type: openvino
device: GPU # or NPU
ov_1:
type: openvino
device: GPU # or NPU
OpenVINO Supported Models
| Model | GPU | NPU | Notes |
|---|---|---|---|
| YOLOv9 | ✅ | ✅ | Recommended for GPU & NPU |
| RF-DETR | ✅ | ❌ | Requires XE iGPU or Arc |
| YOLO-NAS | ✅ | ✅ | |
| MobileNet v2 | ✅ | ✅ | Fast and lightweight model, less accurate than larger models |
| YOLOX | ✅ | ? | |
| D-FINE / DEIMv2 | ❌ | ❌ | Runs on OpenVINO in CPU mode only |
SSDLite MobileNet v2
An OpenVINO model is provided in the container at /openvino-model/ssdlite_mobilenet_v2.xml and is used by this detector type by default. The model comes from Intel's Open Model Zoo SSDLite MobileNet V2 and is converted to an FP16 precision IR model.
MobileNet v2 Config
Use the model configuration shown below when using the OpenVINO detector with the default OpenVINO model:
- Frigate UI
- YAML
Navigate to Settings → System → Detector hardware and select OpenVINO from the detector type dropdown and click Add, then set device to GPU (or NPU). Then navigate to Settings → System → Detection model and configure:
| Field | Value |
|---|---|
| Object detection model input width | 300 |
| Object detection model input height | 300 |
| Model Input Tensor Shape | nhwc |
| Model Input Pixel Color Format | bgr |
| Custom object detector model path | /openvino-model/ssdlite_mobilenet_v2.xml |
| Label map for custom object detector | /openvino-model/coco_91cl_bkgr.txt |
detectors:
ov:
type: openvino
device: GPU # Or NPU
model:
width: 300
height: 300
input_tensor: nhwc
input_pixel_format: bgr
path: /openvino-model/ssdlite_mobilenet_v2.xml
labelmap_path: /openvino-model/coco_91cl_bkgr.txt
YOLOX
This detector also supports YOLOX. Frigate does not come with any YOLOX models preloaded, so you will need to supply your own models.
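A hedged configuration sketch for a self-supplied YOLOX model: the 416x416 input size and model path are assumptions that must match your exported model, and the tensor layout and pixel format mirror the other OpenVINO examples on this page:
detectors:
  ov:
    type: openvino
    device: GPU
model:
  model_type: yolox
  width: 416 # <--- assumption: should match your model's input size
  height: 416 # <--- assumption: should match your model's input size
  input_tensor: nchw
  input_pixel_format: bgr
  path: /config/model_cache/yolox_tiny.onnx # hypothetical path
  labelmap_path: /labelmap/coco-80.txt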
YOLO-NAS
YOLO-NAS models are supported, but not included by default. See the models section for more information on downloading the YOLO-NAS model for use in Frigate.
YOLO-NAS Setup & Config
After placing the downloaded onnx model in your config folder, use the following configuration:
- Frigate UI
- YAML
Navigate to Settings → System → Detector hardware and select OpenVINO from the detector type dropdown and click Add, then set device to GPU. Then navigate to Settings → System → Detection model and configure:
| Field | Value |
|---|---|
| Object Detection Model Type | yolonas |
| Object detection model input width | 320 (should match whatever was set in notebook) |
| Object detection model input height | 320 (should match whatever was set in notebook) |
| Model Input Tensor Shape | nchw |
| Model Input Pixel Color Format | bgr |
| Custom object detector model path | /config/yolo_nas_s.onnx |
| Label map for custom object detector | /labelmap/coco-80.txt |
detectors:
ov:
type: openvino
device: GPU
model:
model_type: yolonas
width: 320 # <--- should match whatever was set in notebook
height: 320 # <--- should match whatever was set in notebook
input_tensor: nchw
input_pixel_format: bgr
path: /config/yolo_nas_s.onnx
labelmap_path: /labelmap/coco-80.txt
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
YOLO (v3, v4, v7, v9)
YOLOv3, YOLOv4, YOLOv7, and YOLOv9 models are supported, but not included by default.
The YOLO detector has been designed to support YOLOv3, YOLOv4, YOLOv7, and YOLOv9 models, but may support other YOLO model architectures as well.
YOLO Setup & Config
If you are using a Frigate+ model, you should not define any of the below model parameters in your config except for path. See the Frigate+ model docs for more information on setting up your model.
After placing the downloaded onnx model in your config folder, use the following configuration:
- Frigate UI
- YAML
Navigate to Settings → System → Detector hardware and select OpenVINO from the detector type dropdown and click Add, then set device to GPU (or NPU). Then navigate to Settings → System → Detection model and configure:
| Field | Value |
|---|---|
| Object Detection Model Type | yolo-generic |
| Object detection model input width | 320 (should match the imgsize set during model export) |
| Object detection model input height | 320 (should match the imgsize set during model export) |
| Model Input Tensor Shape | nchw |
| Model Input D Type | float |
| Custom object detector model path | /config/model_cache/yolo.onnx |
| Label map for custom object detector | /labelmap/coco-80.txt |
detectors:
ov:
type: openvino
device: GPU # or NPU
model:
model_type: yolo-generic
width: 320 # <--- should match the imgsize set during model export
height: 320 # <--- should match the imgsize set during model export
input_tensor: nchw
input_dtype: float
path: /config/model_cache/yolo.onnx
labelmap_path: /labelmap/coco-80.txt
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
RF-DETR
RF-DETR is a DETR-based model. The ONNX exported models are supported, but not included by default. See the models section for more information on downloading the RF-DETR model for use in Frigate.
Due to the size and complexity of the RF-DETR model, it is only recommended to be run on discrete Intel Arc graphics cards.
RF-DETR Setup & Config
After placing the downloaded onnx model in your config/model_cache folder, use the following configuration:
- Frigate UI
- YAML
Navigate to Settings → System → Detector hardware and select OpenVINO from the detector type dropdown and click Add, then set device to GPU. Then navigate to Settings → System → Detection model and configure:
| Field | Value |
|---|---|
| Object Detection Model Type | rfdetr |
| Object detection model input width | 320 |
| Object detection model input height | 320 |
| Model Input Tensor Shape | nchw |
| Model Input D Type | float |
| Custom object detector model path | /config/model_cache/rfdetr.onnx |
detectors:
ov:
type: openvino
device: GPU
model:
model_type: rfdetr
width: 320
height: 320
input_tensor: nchw
input_dtype: float
path: /config/model_cache/rfdetr.onnx
D-FINE / DEIMv2
D-FINE and DEIMv2 are DETR-based models that share the same ONNX input/output format. The ONNX exported models are supported, but not included by default. See the models section for downloading D-FINE or DEIMv2 for use in Frigate.
Currently, D-FINE / DEIMv2 models only run on OpenVINO in CPU mode; GPUs fail to compile the model.
D-FINE Setup & Config
After placing the downloaded onnx model in your config/model_cache folder, use the following configuration:
- Frigate UI
- YAML
Navigate to Settings → System → Detector hardware and select OpenVINO from the detector type dropdown and click Add, then set device to CPU. Then navigate to Settings → System → Detection model and configure the model settings to match your D-FINE / DEIMv2 model.
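Since these models currently run only in CPU mode (see the note above), a hedged YAML sketch following the pattern of the sections above; the model_type value dfine, the 640x640 input size, and the model file name are assumptions that should match your exported model:
detectors:
  ov:
    type: openvino
    device: CPU
model:
  model_type: dfine # <--- assumption
  width: 640 # <--- assumption: should match the imgsize set during model export
  height: 640 # <--- assumption: should match the imgsize set during model export
  input_tensor: nchw
  input_dtype: float
  path: /config/model_cache/dfine.onnx # hypothetical path
  labelmap_path: /labelmap/coco-80.txt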