Date of Award

5-2026

Document Type

Restricted Project: Campus only access

Degree Name

Master of Science in Computer Science

Department

School of Computer Science and Engineering

First Reader/Committee Chair

Yan Zhang

Abstract

Object detection in low-light environments remains a significant challenge for modern computer vision systems. This paper presents a systematic comparative study of three state-of-the-art transformer-based object detection architectures, DETR (DEtection TRansformer), YOLOS (You Only Look at One Sequence), and RT-DETR (Real-Time DEtection TRansformer), evaluated under both normal and degraded lighting conditions. We propose a three-phase experimental methodology: (1) pretrained baseline evaluation, (2) fine-tuning with low-light augmented training data, and (3) dual test-set evaluation under normal and simulated low-light conditions. Our experiments on the COCO val2017 dataset demonstrate that low-light augmented fine-tuning yields substantial improvements across all three architectures: YOLOS-Small achieves the largest relative improvement (a 991% increase in mAP), while RT-DETR-R50 maintains the highest absolute performance (mAP = 0.1232 on the normal test set and 0.1227 on the low-light test set). Notably, all models exhibit strong robustness to low-light degradation after augmented training, with minimal performance gaps between normal and low-light evaluation conditions. These findings suggest that data augmentation strategies incorporating synthetic low-light simulation are an effective, architecture-agnostic approach to improving detection robustness under challenging illumination.
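
For readers unfamiliar with synthetic low-light simulation, the following is a minimal illustrative sketch of one common way such an augmentation can be implemented (gamma-based darkening with a gain factor and additive Gaussian noise). The function name, parameter values, and noise model here are illustrative assumptions, not the project's actual pipeline, which is not described in this abstract.

    # Hypothetical sketch of a synthetic low-light augmentation.
    # Parameters (gamma, gain, noise_std) are illustrative assumptions,
    # not the values used in this project.
    import numpy as np
    from PIL import Image

    def simulate_low_light(image, gamma=3.0, gain=0.4, noise_std=0.02):
        """Darken an RGB image with a gamma curve and gain, then add Gaussian noise."""
        x = np.asarray(image).astype(np.float32) / 255.0   # scale to [0, 1]
        x = gain * np.power(x, gamma)                       # non-linear darkening
        x = x + np.random.normal(0.0, noise_std, x.shape)   # sensor-like read noise
        x = np.clip(x, 0.0, 1.0)
        return Image.fromarray((x * 255.0).astype(np.uint8))

    # Example usage on a single training image (path is a placeholder):
    # img = Image.open("path/to/coco_image.jpg").convert("RGB")
    # dark_img = simulate_low_light(img)

Applied to training images during fine-tuning, a transform of this kind exposes the detector to degraded illumination without requiring real low-light annotations, which is the general idea behind the augmented-training phase described above.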
