To install the required libraries, first install PyTorch. The following guidelines are written for Windows 10/11; adjust them for your platform as needed.
```bash
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
```
Once everything has been installed, install the remaining dependencies listed in requirements.txt with:
```bash
pip install -r requirements.txt
```
The project has a dedicated `config.py` file where all the default parameters are set, grouped as follows:
- Misc: Contains general settings such as the random seed, device configuration (CPU/GPU), and experimental features like Automatic Mixed Precision (AMP).
- Dataset: Defines the dataset-related configurations, including paths, dataset names, sizes, and sampling strategies. It also handles dataset splitting for combined datasets.
- Model: Configures the ReID model architecture, including backbone choices, pretraining options, and advanced features like GeM pooling, stride adjustments, and normalization layers.
- Color Model: Specifies the color classification model used alongside the ReID model, including its architecture and pretrained weights.
- Augmentation: Controls the data augmentation pipeline, including resizing, cropping, padding, color jitter, and normalization settings.
- Loss: Configures the loss functions used during training, including triplet loss variants, label smoothing, and advanced techniques like Relation Preserving Triplet Mining (RPTM) and Multi-Attribute Loss Weighting (MALW).
- Training: Manages the training process, including epochs, batch size, optimizer settings, learning rate schedules, and checkpoint loading.
- Validation: Sets up the validation process, including batch size, validation intervals, and re-ranking options.
- Test: Configures the testing phase, including embedding normalization, similarity algorithms, and paths to test images or models.
- Tracking: Defines settings for object tracking, including YOLO configurations, filtering thresholds, and output paths for bounding boxes and videos.
- Database: Configures the MongoDB database connection and collections for storing vehicle, camera, trajectory, and bounding box data.
- Metrics: Handles evaluation metrics for tracking and ReID tasks, including MOTA metrics, IoU thresholds, and prediction file management.
- Pipeline: Controls the overall pipeline execution, including video paths, ROI masks, target search, and database unification processes.
If you wish to override some parameters, create a `config.yaml` file.
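As an illustration, a minimal override file might look like the sketch below. The group names mirror the sections listed above, but the individual key names (e.g. SEED, DEVICE, EPOCHS) are assumptions; check `config.py` for the exact parameter names used by the project.
```yaml
# config.yaml -- hypothetical override sketch; the individual key names are
# assumptions and should be matched against the defaults in config.py.
MISC:
  SEED: 42          # random seed
  DEVICE: "cuda"    # CPU/GPU selection
  USE_AMP: true     # Automatic Mixed Precision
TRAINING:
  EPOCHS: 60
  BATCH_SIZE: 32
  LEARNING_RATE: 0.00035
```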
To run the full pipeline, which performs Object Detection + Tracking + ReID, execute:
```bash
python pipeline.py <config_file>.yml
```
The YAML file specified here must be a configuration file that contains the following sections:
- camera_x: (path, roi, info, gt) Contains the paths to the video/frames, the ROI, the camera information and, optionally, the ground truth.
- layout: (layout path)
Probably the most important and MANDATORY field. It contains the geometric information between all pairs of cameras: FPS, scales and offsets, a compatibility matrix (determining whether a car seen by one camera can reappear in another), dtmin (the minimum time for a vehicle to transition between a pair of cameras) and dtmax (the maximum). The pipeline cannot run without this field.
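To make the expected structure concrete, here is a minimal sketch of such a configuration file. Only the field names described above (path, roi, info, gt, layout) come from the project; the camera section names and all file paths are placeholders, so refer to the real files in the `configs` folder for the exact schema.
```yaml
# Pipeline configuration -- illustrative sketch only; paths and camera names
# are placeholders, see the files under configs/ for the real schema.
camera_1:
  path: data/S02/c006/vdo.avi          # video or frame folder
  roi: data/S02/c006/roi.jpg           # region-of-interest mask
  info: data/S02/c006/calibration.txt  # camera information
  gt: data/S02/c006/gt/gt.txt          # optional ground truth
camera_2:
  path: data/S02/c007/vdo.avi
  roi: data/S02/c007/roi.jpg
  info: data/S02/c007/calibration.txt
layout: data/S02/layout.yml            # geometric info between all camera pairs (MANDATORY)
```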
You can find examples of configuration files in the `configs` folder.
For AICity22, it would be:
```bash
python pipeline.py configs/cameras_s02_cityflow.yml
```
Please change the `config.py` file to adjust the MTMC (YOLO, Detector, Tracking & Pipeline configs).
To train the model, run the following command in your terminal:
```bash
python -m reid.main <config_file>.yml
```
N.B. If you do not specify a .yml file, the script will use the pre-defined `config.py` file to set up the Dataset, Model, and Training parameters.
N.B.2 For example, for RPTM training, you can call the existing config file `config_rptm.yml`.
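For orientation only, the block below is not the repository's `config_rptm.yml`; it is a hypothetical sketch of what a training override enabling an RPTM-style triplet loss might look like, and every key name in it is an assumption to be checked against `config.py` and the real file.
```yaml
# Hypothetical RPTM-style training override -- NOT the repository's
# config_rptm.yml; all key names are assumptions.
LOSS:
  TRIPLET_VARIANT: "RPTM"   # Relation Preserving Triplet Mining
  LABEL_SMOOTHING: 0.1
TRAINING:
  EPOCHS: 120
  BATCH_SIZE: 32
```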
To evaluate a trained model, use the following command:
```bash
python -m reid.test <config_test_file>.yml
```
N.B. If you do not specify a .yml file, the script will use the pre-defined `config_test.yml` file to set up the Model and Testing parameters.
N.B. This script will load a pre-trained model specified in the `config_test.yml` file and either:
- Compare two specific images: If `run_reid_metrics` is set to `False` in the config file, the script will compute the similarity or distance between the two images specified in `PATH_IMG_1` and `PATH_IMG_2`. The similarity metric (e.g., Euclidean distance or cosine similarity) is determined by the `SIMILARITY_ALGORITHM` setting in the config file.
- Run re-identification metrics: If `run_reid_metrics` is set to `True`, the script will evaluate the model on the validation set and compute re-identification metrics.
- Run color metrics: If `run_color_metrics` is set to `True`, the script will evaluate the color classification model on the validation set and compute color-related metrics.
Additionally, if `stack_images` is set to `True` and `run_reid_metrics` is `False`, the script will compute a similarity matrix for all images in the test directory and display it as a heatmap. For this reason, change the Test configuration accordingly.
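Putting the flags above together, a test configuration for the two-image comparison mode might look like the sketch below. Only the flag names mentioned above come from the project; the section layout, model path, and image paths are assumptions/placeholders.
```yaml
# config_test.yml -- illustrative sketch; section layout, model path and
# image paths are placeholders, only the flag names come from the text above.
TEST:
  MODEL_PATH: checkpoints/reid_best.pth   # placeholder pre-trained model
  run_reid_metrics: false       # compare two images instead of full ReID metrics
  run_color_metrics: false
  stack_images: false           # true -> similarity heatmap over the test directory
  SIMILARITY_ALGORITHM: "cosine"  # e.g. cosine similarity or euclidean distance
  PATH_IMG_1: data/query/car_001.jpg
  PATH_IMG_2: data/gallery/car_042.jpg
```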

