Documentation for Intellect 4.10.4. Documentation for other versions of Intellect is available too.


The Tracker object is designed to detect moving objects and to save their tracking metadata to the VMDA metadata storage.

To configure the Tracker object, do the following:

  1. Go to the Hardware tab in the System settings dialog box.
  2. Select the Camera object in the objects tree of the Hardware tab.
  3. Create the Tracker object based on the Camera object. The settings panel of the created object appears on the right side of the Hardware tab.
  4. Select the Show objects on image checkbox (1) if detected objects are to be outlined on the preview screen. When an object is selected, its speed in pixels per second is displayed in the top left corner.


    To enable displaying of tracker IDs, set the DrawDetectorNumbers string parameter to 1 in the HKEY_LOCAL_MACHINE\SOFTWARE\ITV\INTELLECT\Video registry key (HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\ITV\INTELLECT\Video on 64-bit systems).
    The frame color is controlled by the DrawDetectorColors parameter in the same key:
    1. If the value is 1, the frame color is the average color of the framed area.
    2. If the value is 0, the frame is white.
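    These registry parameters can also be applied by importing a .reg file instead of editing the registry by hand. A minimal sketch (the value type is assumed to be a string, per the note's wording; verify against the Registry keys reference guide):

```
Windows Registry Editor Version 5.00

; Show tracker IDs on the video
[HKEY_LOCAL_MACHINE\SOFTWARE\ITV\INTELLECT\Video]
"DrawDetectorNumbers"="1"
; 1 = frame color is the average color of the area, 0 = white frames
"DrawDetectorColors"="1"
```

    On 64-bit systems, use the HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\ITV\INTELLECT\Video key in the file instead.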
  5. To allow VMDA detection tools to monitor abandoned objects in the area covered by the camera, select the Abandoned objects detection checkbox (2). For the video requirements that must be met for this feature to operate, see the Video requirements to be met for abandoned object detection tool of the Tracker object operation section.


    The camera must record continuously for the abandoned objects detection tool to operate properly.


    If there is no need to monitor abandoned objects in the area covered by the camera, disable the Abandoned objects detection option to reduce the Server load. Note that disabling this option also disables any detection tools configured to monitor abandoned objects.

    If the Abandoned objects detection feature is enabled, detected abandoned objects are framed in the image when viewing live video or the archive. In addition, when the disappearance of an object is detected, the place where it was located is framed in the image. Highlighting of abandoned and disappeared objects on the video is available for both standard and converted fisheye video.

  6. If the camera is installed on a moving object, select the Camera shake removal checkbox (3) to stabilize the image and reduce detection errors.


     When the Camera shake removal option is enabled, the Server load increases.
  7. Go to the Basic settings tab (4).
  8. In the Metadata sources table, select the checkboxes next to the objects to be used as metadata sources (5):
    1. Internal source. The resources of the Tracker object itself are used as the metadata source.
    2. Embedded detector. Metadata comes from detectors embedded in the camera (see the Embedded detectors section).


      Recording of tracks from embedded detectors must be supported by the device, and this functionality must be integrated into the Intellect software.

  9. Set the Sensitivity parameter by moving the slider to the required position (6). The value corresponds to the minimum averaged brightness of a moving object at which the detector triggers on its motion only, and not on video signal noise (including snow, rain, etc.).


    If the slider is in the leftmost position, the Sensitivity value is selected automatically.
  10. Set the Waiting for loss slider to the position corresponding to the time during which an object that has stopped moving is still considered active and continues to be tracked by the detector (7). If the object remains motionless longer than the set value, it is considered lost.


     If a lost object starts moving again, it is considered a new one.


    More fine-grained configuration of the abandoned objects detection tool is performed using registry keys (see the Registry keys reference guide).
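    The loss and re-acquisition behavior described in step 10 can be illustrated with a generic sketch (not the product's actual implementation; class and parameter names are hypothetical):

```python
class TrackTimeout:
    """Generic illustration of the 'Waiting for loss' behavior:
    an object motionless longer than the timeout is considered lost,
    and any later motion starts a new track ID."""

    def __init__(self, waiting_for_loss_s):
        self.timeout = waiting_for_loss_s
        self.next_id = 1
        self.track_id = None
        self.last_motion = None

    def on_motion(self, now):
        # Motion at start, or after a loss, opens a new track.
        if self.track_id is None or now - self.last_motion > self.timeout:
            self.track_id = self.next_id
            self.next_id += 1
        self.last_motion = now
        return self.track_id

t = TrackTimeout(waiting_for_loss_s=5)
print(t.on_motion(0.0))   # 1 - new track
print(t.on_motion(3.0))   # 1 - idle 3 s < 5 s, still the same object
print(t.on_motion(10.0))  # 2 - idle 7 s > 5 s, treated as a new object
```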

  11. In the Objects in frame, not more than field, specify the maximum number of objects detected in the frame (8). If the number of objects exceeds the specified value, the MD_LIMIT event is generated (see the CAM section of the Programming Guide). If the parameter is not set or is 0, the event is not generated.
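    The triggering condition for this event can be summarized as follows (a generic sketch; the function name is hypothetical, MD_LIMIT is the event name from the Programming Guide):

```python
def should_emit_md_limit(objects_in_frame, limit):
    """MD_LIMIT is generated only when a positive limit is set
    and the number of detected objects exceeds it."""
    return limit > 0 and objects_in_frame > limit

print(should_emit_md_limit(12, 10))  # True
print(should_emit_md_limit(5, 0))    # False - limit unset/0: never generated
```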
  12. To disable analysis in part of the area covered by the camera, click the Set mask button (2) on the Tracker mask tab (1) and specify the area in the preview field. Analysis can be disabled in several areas, i.e. several masks can be set. To set an additional mask, click the Set mask button again and specify another masked area in the video preview field.

    To zoom in on the video preview for more precise selection of the masked area, after clicking the Set mask button, hold the Shift key and right-click the video preview. The video opens in a separate window that can be resized by dragging its borders. After setting the mask in this window, close it using the button in its upper right corner.
  13. To set the minimal and maximal sizes of a detected object, do the following:


    If perspective configuration is enabled, the maximum and minimum object size parameters are ignored; the Width, m and Height, m parameters on the Perspective tab (see Configuring perspective) are used instead.

    1. Go to the Detection parameters tab (1).
    2. To stop playback in the preview, click the Stop button (2).


      To resume playback, click the Resume playback button.

    3. In the Minimal size group, specify the minimal size of a detected object as a percentage of the total image area (3), or click the Configure button and specify the size in the preview area (4).


       The Minimal size parameter ranges from 0 to 30% of the frame size.
    4. In the Maximal size group, specify the maximal size of a detected object as a percentage of the total image area (5), or click the Configure button and specify the size in the preview area (6).


       The maximal size must be bigger than the minimal one and must not exceed 100%. If the maximal size equals the minimal one, nothing is detected.
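    For reference, the percentage-based size limits translate to pixel areas as follows (a minimal sketch, not part of the product; the frame resolution is an assumed example):

```python
def size_limits_px(width_px, height_px, min_pct, max_pct):
    """Return (min_area_px, max_area_px) for the detector size limits,
    given as percentages of the total image area."""
    if not (0 <= min_pct <= 30):
        raise ValueError("Minimal size must be within 0-30% of the frame")
    if not (min_pct < max_pct <= 100):
        raise ValueError("Maximal size must exceed the minimal one "
                         "and must not exceed 100%")
    total = width_px * height_px
    return total * min_pct / 100, total * max_pct / 100

# Example: a 1920x1080 frame with 1% minimal and 10% maximal size
print(size_limits_px(1920, 1080, 1, 10))  # (20736.0, 207360.0)
```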
  14. To save the changes, click the Apply button.

The Tracker object is now created and configured. During video signal recording, information about all objects in the camera's field of view will be saved to the VMDA metadata storage.
