Uncertainty-Aware Pedestrian Re-Identification in Autonomous Driving Perception Pipelines
DOI:
https://doi.org/10.71465/mrcis211

Keywords:
Pedestrian re-identification, domain adaptation, vision–language models, uncertainty weighting, autonomous driving

Abstract
Pedestrian re-identification models trained in one city often degrade when deployed in another due to changes in camera placement, lighting, and pedestrian appearance patterns. Building on CLIP-based multimodal representations with explicit uncertainty modeling, this work proposes an uncertainty-weighted adaptation strategy that transfers vision–language embeddings across cities while down-weighting unreliable pseudo-labels. The method combines (i) entropy-based sample filtering, (ii) uncertainty-aware class prototype refinement, and (iii) consistency regularization between image and text embeddings. Experiments are conducted on cross-city splits constructed from autonomous driving data, totaling 320,000 images and 38,000 identities across three urban domains. Compared with OSNet, TransReID, and CLIP-derived ReID baselines under standard unsupervised domain adaptation settings, the proposed method improves target-domain mAP by 4.0%–6.3% and rank-1 accuracy by 3.2%–5.1%, with the largest gains observed in nighttime and rain subsets.
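The abstract does not give implementation details, but the first component, entropy-based sample filtering combined with pseudo-label down-weighting, can be illustrated with a minimal sketch. The function name, the threshold value, and the choice of `1 - normalized entropy` as the weight are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def entropy_filter_and_weight(logits, threshold=0.7):
    """Illustrative entropy-based filtering for target-domain pseudo-labels.

    logits: (N, C) classifier outputs on unlabeled target-domain samples.
    Returns a boolean keep-mask and per-sample weights in [0, 1];
    the threshold and weighting scheme here are assumptions for illustration.
    """
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Shannon entropy, normalized to [0, 1] by log(num_classes).
    ent = -(p * np.log(p + 1e-12)).sum(axis=1) / np.log(p.shape[1])
    keep = ent < threshold                     # drop highly uncertain pseudo-labels
    weights = np.where(keep, 1.0 - ent, 0.0)   # down-weight the remainder by entropy
    return keep, weights
```

A confident prediction (low entropy) is kept with a weight near 1, while a near-uniform prediction is filtered out entirely; the retained weights would then scale each sample's contribution to the adaptation loss and prototype updates.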
License
Copyright (c) 2026 Marco Bianchi, Alessandro Rossi, Giulia Conti, Lorenzo De Santis (Author)

This work is licensed under a Creative Commons Attribution 4.0 International License.
All articles published in the Multidisciplinary Research in Computing Information Systems are licensed under an open-access model. Authors retain full copyright and grant the journal the right of first publication. The content can be freely accessed, distributed, and reused for non-commercial purposes, provided proper citation is given to the original work.
