Documentation for Axxon Next 4.5.0.
To configure the neural tracker, do the following:
- Select the Neurotracker object.
- By default, metadata are recorded into the database. To disable metadata recording, select No (1) from the Record object tracking list.
- If a camera supports multistreaming, select the stream to apply the detection tool to (2).
- To reduce the false alarm rate from a fish-eye camera, position it properly (3). This parameter does not apply to other devices.
- Select a processing resource for decoding video streams (4). When you select a GPU, a discrete graphics card takes priority (decoding with NVIDIA NVDEC chips). If no suitable GPU is available, decoding falls back to the Intel Quick Sync Video technology; otherwise, CPU resources are used.
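The fallback order above can be sketched as a simple selection function. This is an illustrative sketch only, not an Axxon Next API; the capability flags are assumptions for the example.

```python
def pick_decoder(has_nvdec_gpu: bool, has_quick_sync: bool) -> str:
    """Return the decoding backend following the documented priority:
    discrete NVIDIA GPU (NVDEC) -> Intel Quick Sync Video -> CPU."""
    if has_nvdec_gpu:
        return "NVDEC"        # discrete graphics card takes priority
    if has_quick_sync:
        return "QuickSync"    # fall back to Intel Quick Sync Video
    return "CPU"              # last resort: software decoding
```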
- Set the recognition threshold for objects, in percent (5). If the recognition probability falls below the specified value, the data are ignored. The higher the value, the higher the accuracy, at the cost of sensitivity.
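The threshold behaves as a simple confidence cut-off. A minimal sketch, assuming detections arrive as (label, confidence-in-percent) pairs; this is not the actual Axxon Next data model.

```python
def filter_detections(detections, threshold_percent):
    """Keep only detections whose confidence meets the threshold.

    detections: list of (label, confidence_percent) pairs (illustrative).
    """
    return [(label, conf) for label, conf in detections
            if conf >= threshold_percent]
```

Raising `threshold_percent` discards more borderline detections: accuracy improves, sensitivity drops.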
- Specify the Minimum number of detection triggers for the neural tracker to display the object's trajectory (6). The higher the value, the longer the interval between the object's first detection and the display of its trajectory on screen. Low values may lead to false triggering.
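The trade-off can be illustrated with a small gate that shows a track only after enough detections have accumulated. This is a hypothetical sketch, not Axxon Next internals.

```python
class TrackDisplayGate:
    """Show a track only after `min_triggers` detections (illustrative)."""

    def __init__(self, min_triggers: int):
        self.min_triggers = min_triggers
        self.hits = 0

    def on_detection(self) -> bool:
        """Register one detection; return True once the track may be shown."""
        self.hits += 1
        return self.hits >= self.min_triggers
```

With `min_triggers=3`, the trajectory stays hidden for the first two detections, which suppresses one-off false detections at the price of a short display delay.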
- Select the neural network file (7).
A neural network trained for a particular scene performs well when you only need to detect objects of a certain type (e.g., person, cyclist, motorcyclist).
To train your neural network, contact AxxonSoft (see Requirements to data collection for neural network training).
For correct neural network operation under Linux, place the corresponding file in the /opt/AxxonSoft/AxxonNext/ directory.
You can use the neural filter to keep only video recordings featuring selected objects and their trajectories. For example, the neural tracker detects all freight trucks, and the neural filter keeps only recordings that contain trucks with an open cargo door. To set up a neural filter, do the following:
- To use the neural filter, set Yes in the corresponding field (8).
- In the Neurofilter file field, select a neural network file (9).
- In the Neurofilter mode field, select a processor to be used for neural network computations (10).
- Select the processor for the neural network: the CPU or one of the GPUs (11). We recommend using a GPU.
It may take several minutes to launch the algorithm on an NVIDIA GPU after you apply the settings.
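The tracker-plus-filter setup above is a two-stage pipeline: the tracker proposes candidate objects, and the filter keeps only those matching a finer criterion. The sketch below is purely illustrative; `tracker_detect` and `filter_passes` are stand-ins for the two neural networks, not real Axxon Next functions.

```python
def run_pipeline(frames, tracker_detect, filter_passes):
    """Two-stage filtering: detect candidates, then keep a subset.

    tracker_detect(frame) -> iterable of objects (e.g. all freight trucks);
    filter_passes(obj) -> bool (e.g. "cargo door is open").
    """
    kept = []
    for frame in frames:
        for obj in tracker_detect(frame):
            if filter_passes(obj):
                kept.append(obj)
    return kept
```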
- Set the frame rate value for the neural network to process (12). The remaining frames will be interpolated. The higher the value, the more accurate the tracking, but the higher the CPU load.
6 FPS or more is recommended. For fast-moving objects (running individuals, vehicles), set the frame rate to 12 FPS or above.
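Interpolating the skipped frames can be pictured as filling in positions between two processed frames. A minimal linear-interpolation sketch, assuming object positions are (x, y) points; the actual interpolation method used by the product is not specified here.

```python
def interpolate_positions(p0, p1, n_skipped):
    """Linearly interpolate positions for frames the network skips.

    p0, p1: (x, y) positions on two consecutive processed frames;
    n_skipped: number of unprocessed frames between them.
    """
    result = []
    for i in range(1, n_skipped + 1):
        t = i / (n_skipped + 1)  # fraction of the way from p0 to p1
        result.append((p0[0] + (p1[0] - p0[0]) * t,
                       p0[1] + (p1[1] - p0[1]) * t))
    return result
```

A higher processed frame rate leaves fewer frames to interpolate, so trajectories follow fast objects more closely.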
- If you don't need to detect moving objects, select Yes in the Hide moving objects field (13). An object is treated as static if it does not shift by more than 10% of its width or height during its track's lifetime.
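The 10% rule can be expressed as a check over a track's points. An illustrative sketch, assuming track points are (x, y) coordinates and the object's width/height are known; not the product's actual implementation.

```python
def is_static(positions, width, height, tol=0.10):
    """True if the object never shifts more than `tol` (10% by default)
    of its width/height over the track's lifetime.

    positions: list of (x, y) track points; width/height: object size.
    """
    xs = [p[0] for p in positions]
    ys = [p[1] for p in positions]
    return (max(xs) - min(xs) <= tol * width and
            max(ys) - min(ys) <= tol * height)
```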
- If you don't need to detect static objects, select Yes in the Hide stationary objects field (14). This parameter lowers the false alarm rate when detecting moving objects.
- In the Track retention time field, set the time interval in seconds after which an object's track is considered lost (15). This setting helps when objects in the scene temporarily obscure each other; for example, a larger vehicle may completely block a smaller one from view.
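The retention rule amounts to a timeout since the last detection. A hypothetical sketch with timestamps in seconds; the names are assumptions for illustration.

```python
def track_lost(last_seen_time, now, retention_seconds):
    """A track is considered lost once no detection has arrived
    within the retention interval (illustrative)."""
    return now - last_seen_time > retention_seconds
```

A longer retention time keeps a track alive through brief occlusions, at the risk of keeping stale tracks around.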
By default, the entire FoV is a detection zone. If you need to narrow down the area to be analyzed, you can set one or several detection zones.
The procedure of setting zones is identical to the primary tracker's (see Setting General Zones for Scene Analytics). The only difference is that the neural tracker's zones are processed while the primary tracker's are ignored.
- Click Apply.
The next step is to create and configure the necessary detection tools. The configuration procedure is the same as for the primary tracker.
To trigger a Motion in Area detection tool based on the neural tracker, an object must move by at least 25% of its width or height in the FoV.
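The 25% displacement condition can be sketched as a comparison against the object's own size. Illustrative only; coordinates and sizes are assumed to be in the same units.

```python
def motion_triggered(start, end, width, height, min_fraction=0.25):
    """True once the object has moved at least `min_fraction` (25% by
    default) of its width or height from its starting position."""
    dx = abs(end[0] - start[0])
    dy = abs(end[1] - start[1])
    return dx >= min_fraction * width or dy >= min_fraction * height
```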
The abandoned objects detection tool works only with the primary tracker.