Computer vision has evolved from a supportive driver-assistance tool into a core technology for intelligent, non-intrusive occupant health monitoring in modern vehicles. This work presents a dual-module Driver Health and Wellness Monitoring System that leverages deep learning, edge optimization, and adaptive image processing to perform fatigue detection and emotional wellbeing assessment simultaneously, using existing in-cabin RGB cameras without additional sensors or intrusive wearables.
The fatigue module employs MediaPipe-based facial and skeletal landmark analysis to track Eye Aspect Ratio (EAR), Mouth Aspect Ratio (MAR), head posture, and gaze dynamics, detecting early drowsiness and postural deviations. Adaptive, driver-specific thresholds combined with CAN-bus data fusion minimize false positives, achieving over 92% detection accuracy across variable lighting conditions and driver demographics. The emotional wellbeing module analyzes micro-expressions and facial action units to estimate stress, calmness, and agitation, contextualizing these states with fatigue indicators for a holistic assessment. All computation occurs on the Jetson Nano edge platform, which pairs a quad-core ARM Cortex-A57 CPU with a 128-core Maxwell GPU, optimized with TensorRT quantization for real-time operation (≤150 ms latency). The architecture ensures on-device privacy, aligning with GDPR and ISO/SAE 21434 cybersecurity principles. Compared with Tier-1 radar-camera solutions, the proposed framework is fully software-driven, cost-efficient, and privacy-preserving. Field validation confirms a strong correlation between model predictions and physiological heart-rate variability (HRV) metrics. Future extensions include extreme fatigue detection and multimodal sensor fusion toward a self-adaptive, wellness-aware vehicle ecosystem.
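As a minimal sketch of the EAR computation underlying the fatigue module, the snippet below applies the standard six-landmark Eye Aspect Ratio formula, EAR = (||p2 − p6|| + ||p3 − p5||) / (2·||p1 − p4||). The landmark coordinates and the 0.21 threshold are illustrative assumptions, not values from this system; in the described framework the threshold would be adapted per driver rather than fixed.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """Eye Aspect Ratio over six eye landmarks ordered p1..p6:
    p1/p4 are the horizontal eye corners, p2/p3 the upper lid,
    p6/p5 the lower lid. Low EAR over consecutive frames
    indicates eye closure (a drowsiness cue)."""
    v1 = np.linalg.norm(eye[1] - eye[5])   # ||p2 - p6||
    v2 = np.linalg.norm(eye[2] - eye[4])   # ||p3 - p5||
    h = np.linalg.norm(eye[0] - eye[3])    # ||p1 - p4||
    return float((v1 + v2) / (2.0 * h))

# Illustrative landmark sets in normalized image coordinates
# (hypothetical values, not actual MediaPipe FaceMesh output).
open_eye = np.array([[0.30, 0.500], [0.35, 0.460], [0.40, 0.460],
                     [0.45, 0.500], [0.40, 0.540], [0.35, 0.540]])
closed_eye = np.array([[0.30, 0.500], [0.35, 0.495], [0.40, 0.495],
                       [0.45, 0.500], [0.40, 0.505], [0.35, 0.505]])

EAR_THRESHOLD = 0.21  # common static starting point; adapted per driver in practice
print(eye_aspect_ratio(open_eye))    # well above the threshold
print(eye_aspect_ratio(closed_eye))  # near zero, i.e. eyes closed
```

In a full pipeline, EAR would be computed per frame from MediaPipe FaceMesh landmarks and flagged only after remaining below the adaptive threshold for a sustained window of frames, which is one way to suppress blink-induced false positives.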