Documentation for Axxon Next 4.2.2. Documentation for other versions of Axxon Next is available too.


Some parameters of the Situation Analysis detection tools can be configured in bulk. To configure them, do the following:

  1. Select the Object Tracker object (1).

  2. By default, the video stream's metadata are recorded in the database. To disable recording, select No in the Record object tracking list (2).


    Video decompression and analysis are required to obtain metadata, which places a high load on the Server and limits the number of video cameras it can serve.

  3. If the video camera supports multistreaming, select the stream to use for detection (3). Selecting a lower-quality video stream reduces the load on the Server.

  4. To compensate for camera shake, set Antishaker to Yes (4). This setting is recommended only for cameras that show clear signs of shake-related image degradation.
  5. If you require automatic adjustment of the sensitivity of scene analytic detection tools, in the Auto Sensitivity list, select Yes (5).


    Enabling this option is recommended if the lighting fluctuates significantly during the video camera's operation (for example, outdoors).

  6. By default, the frame is compressed to 1920 pixels on the longer side. To avoid detection errors on streams with a higher resolution, it is recommended that compression be reduced (6).

  7. In the Motion detection sensitivity field (7), set the sensitivity for motion detection tools, on a scale of 1 to 100.

  8. In the Time of Object in DB field (8), enter the time interval, in seconds, during which the object's properties are stored. If the object leaves and re-enters the FoV within the specified time, it is identified as one and the same object (same ID).
  9. If necessary, configure the neural network filter. The neural network filter processes the results of the tracker and filters out false alarms on complex video images (foliage, glare, etc.). 

    1. Enable the filter by selecting Yes (1).

    2. Select the processor for the neural network: the CPU, one of the GPUs, or a Movidius (2).

    3. Select a neural network (3). To obtain a neural network file, contact technical support. If no neural network file is specified, or if the settings are incorrect, no filtering will occur.


    A neural network filter can be used to analyze either moving objects only or abandoned objects only. Two neural networks cannot operate simultaneously.

  10. Click the Apply button.
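The frame compression in step 6 caps the longer side of the frame while preserving the aspect ratio. A minimal Python sketch of that calculation is shown below; the function name, the rounding behavior, and the exact scaling rule are illustrative assumptions, not part of the product:

```python
def downscaled_size(width: int, height: int, max_longer_side: int = 1920) -> tuple[int, int]:
    """Scale a frame so its longer side does not exceed max_longer_side,
    preserving the aspect ratio. Frames already within the limit are unchanged."""
    longer = max(width, height)
    if longer <= max_longer_side:
        return width, height
    scale = max_longer_side / longer
    return round(width * scale), round(height * scale)

# Under the default 1920-pixel limit, a 4K (3840x2160) stream is halved,
# while a 1080p stream passes through untouched. Reducing the compression
# (raising the limit) keeps more detail for high-resolution streams.
print(downscaled_size(3840, 2160))        # (1920, 1080)
print(downscaled_size(1280, 720))         # (1280, 720)
```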

The general parameters of the situation analysis detection tools are now set.
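The Time of Object in DB behavior from step 8 can be illustrated with a small sketch: an object that re-enters the field of view within the interval keeps its old ID, while a later re-entry gets a new one. The class name, the string "signature" used to match objects, and the timing logic are simplifying assumptions for illustration only; the real tracker matches objects by their stored properties:

```python
class TrackIdAssigner:
    """Sketch of the Time of Object in DB setting: an object that leaves and
    re-enters the FoV within ttl_seconds keeps its previous ID; after the
    interval expires it is treated as a new object."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.last_seen = {}   # signature -> (object ID, time last seen)
        self.next_id = 1

    def observe(self, signature: str, timestamp: float) -> int:
        entry = self.last_seen.get(signature)
        if entry is not None and timestamp - entry[1] <= self.ttl:
            obj_id = entry[0]      # re-entry within the window: same ID
        else:
            obj_id = self.next_id  # window expired (or first sighting): new ID
            self.next_id += 1
        self.last_seen[signature] = (obj_id, timestamp)
        return obj_id

tracker = TrackIdAssigner(ttl_seconds=10)
print(tracker.observe("red car", 0))    # 1
print(tracker.observe("red car", 8))    # 1  (re-entered within 10 s)
print(tracker.observe("red car", 25))   # 2  (interval expired: new ID)
```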