To configure the neural counter, do the following:

  1. If a camera supports multistreaming, select the stream to apply the detection tool to (1). 
  2. Select a processing resource for decoding video streams (2). When you select a GPU, a stand-alone graphics card takes priority, and decoding is performed on NVIDIA NVDEC chips. If there is no appropriate GPU, decoding uses the Intel Quick Sync Video technology; otherwise, CPU resources are used (the fallback order is sketched after this procedure).
  3. If you need to outline objects in the preview window, select Yes in the Detected Objects parameter (3).
  4. Set the recognition threshold for objects in percent (4). If the recognition probability falls below the specified value, the data will be ignored. The higher the value, the higher the accuracy, at the cost of sensitivity.
  5. Set the interval between the analyzed frames in seconds (5). The value should be within the range of 0.05 to 30.

  6. Set the minimum number of frames with an excessive number of objects required for the neural counter to trigger (9). The value should be within the range of 2 to 20.

    Note

    The default values (3 frames and 1 second) mean that the neural counter analyzes one frame every second. If the neural counter detects more objects than the specified threshold value on 3 frames, it triggers.

  7. Select the processor for the neural network: the CPU, one of the GPUs, or an Intel NCS (6; see Hardware requirements for neural analytics operation).

    Attention!

    It may take several minutes to launch the algorithm on an NVIDIA GPU after you apply the settings.

    Attention!

    If you specify a processing resource other than the CPU, that device will carry most of the computing load. However, the detection tool will consume CPU resources as well.

  8. Select the neural network file (7).

    Note

    For correct neural network operation under Linux, place the corresponding file in the /opt/AxxonSoft/AxxonNext/ directory.

  9. Set the triggering condition for the neural counter:

    1. In the Number of alarm objects field, set the threshold value for the number of objects in the FoV (8).

    2. In the Trigger upon count field, select whether triggering should occur when the count exceeds the threshold or when it drops below the threshold (10; a simplified model of the complete trigger logic is sketched after this procedure).

  10. In the preview window, you can set the detection zones using anchor points, in the same way as privacy masks in Scene Analytics (see Setting General Zones for Scene Analytics). By default, the entire FoV is a detection zone.
  11. Click Apply.
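
The decoder selection in step 2 follows a fixed priority order. The Python sketch below is a minimal illustration of that fallback order only; the function and parameter names are hypothetical and are not part of the Axxon Next API.

```python
# Hypothetical sketch of the decoder fallback order described in step 2.
# None of these names belong to Axxon Next; they only model the priority cascade.

def pick_decoder(has_discrete_nvidia_gpu: bool, has_quick_sync: bool) -> str:
    """Return the decoding resource used when GPU is selected as the processing resource."""
    if has_discrete_nvidia_gpu:
        return "NVDEC (stand-alone NVIDIA graphics card)"
    if has_quick_sync:
        return "Intel Quick Sync Video"
    return "CPU"

print(pick_decoder(has_discrete_nvidia_gpu=False, has_quick_sync=True))  # Intel Quick Sync Video
```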
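
The settings from steps 4 to 6 and step 9 together define when the counter triggers: one frame is analyzed every interval seconds, detections below the recognition threshold are discarded, the remaining objects are counted, and the counter triggers once the minimum number of frames meets the alarm condition. The Python sketch below is a simplified model of this logic; it assumes the qualifying frames must be consecutive, and all names are hypothetical rather than part of the product.

```python
# Simplified, hypothetical model of the neural counter trigger logic.
# Parameter names mirror the settings in the procedure above; they are illustrative only.

def should_trigger(frames, recognition_threshold, alarm_count, min_frames, trigger_on_exceed=True):
    """frames: one list of per-object confidences (0-100) for each analyzed frame."""
    qualifying = 0
    for detections in frames:
        # Discard detections below the recognition threshold (step 4).
        objects = [c for c in detections if c >= recognition_threshold]
        # Compare the object count with the alarm threshold using the selected polarity (step 9).
        met = len(objects) > alarm_count if trigger_on_exceed else len(objects) < alarm_count
        # Count qualifying frames; assumed consecutive here (step 6).
        qualifying = qualifying + 1 if met else 0
        if qualifying >= min_frames:
            return True
    return False

# With the defaults from the note above (3 frames, one frame analyzed per second),
# three frames in a row with more than 5 confident objects each raise the trigger.
frames = [
    [90, 88, 75, 95, 80, 99],
    [85, 91, 77, 96, 82, 70],
    [93, 89, 81, 97, 76, 88],
]
print(should_trigger(frames, recognition_threshold=60, alarm_count=5, min_frames=3))  # True
```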

After the neural counter is created, the layout can display a sensor icon showing the number of objects within the controlled area. To configure this option, follow the steps below:

  1. Proceed to the Layout Editing mode (see Switching to layout editing mode).
  2. Place the sensor anywhere in the FoV.
  3. Customize the font. To do this, press the button.
  4. Save the layout (see Exiting layout editing mode).
