Recently Published
With the continuous development of avionics systems toward greater integration and modularization, traditional aircraft buses such as ARINC 429 and MIL-STD-1553B increasingly struggle to meet the demands of next-generation avionics systems: they cannot provide sufficient bandwidth efficiency, real-time performance, or scalability for modern applications. In response to these limitations, AFDX (Avionics Full-Duplex Switched Ethernet), a deterministic network architecture based on the ARINC 664 standard, has emerged as a critical solution for high-speed data communication in avionics systems. The AFDX architecture offers several advantages, including a dual-redundant network topology, a Virtual Link (VL) isolation mechanism, and well-defined bandwidth allocation strategies, all of which contribute to its robustness and reliability. However, with the increasing complexity of onboard networks and multi-tasking
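As a rough illustration of the bandwidth allocation mechanism mentioned in this abstract, the sketch below computes the worst-case per-Virtual-Link bandwidth from the BAG (Bandwidth Allocation Gap) and Lmax parameters defined in ARINC 664 Part 7 and checks the aggregate against a 100 Mbit/s link. The VL table, class names, and capacity check are illustrative assumptions and are not taken from the article.

```python
from dataclasses import dataclass

# Illustrative sketch of ARINC 664 Part 7 bandwidth accounting: each Virtual
# Link (VL) is bounded by its Bandwidth Allocation Gap (BAG) and maximum frame
# size (Lmax), so its worst-case bandwidth is one Lmax-sized frame per BAG.

LINK_CAPACITY_BPS = 100_000_000  # 100 Mbit/s physical link, typical for AFDX


@dataclass
class VirtualLink:
    vl_id: int
    bag_ms: int      # BAG: a power of two between 1 and 128 ms
    lmax_bytes: int  # maximum Ethernet frame size permitted on this VL

    def max_bandwidth_bps(self) -> float:
        """Worst-case bandwidth this VL can consume: one Lmax frame per BAG."""
        return (self.lmax_bytes * 8) / (self.bag_ms / 1000.0)


def check_allocation(vls: list[VirtualLink]) -> None:
    """Verify that the summed worst-case VL demand fits the physical link."""
    total = sum(vl.max_bandwidth_bps() for vl in vls)
    utilisation = total / LINK_CAPACITY_BPS
    print(f"Aggregate worst-case demand: {total / 1e6:.2f} Mbit/s "
          f"({utilisation:.1%} of link capacity)")
    if total > LINK_CAPACITY_BPS:
        raise ValueError("VL configuration exceeds link capacity")


# Hypothetical VL table, for illustration only.
check_allocation([
    VirtualLink(vl_id=1, bag_ms=2, lmax_bytes=1518),
    VirtualLink(vl_id=2, bag_ms=8, lmax_bytes=512),
    VirtualLink(vl_id=3, bag_ms=32, lmax_bytes=256),
])
```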
With the acceleration of urbanization, freeway congestion is becoming increasingly serious, especially at entrance ramps, where concentrated inflows of traffic raise pressure on the mainline and degrade overall access efficiency. To alleviate ramp congestion, this paper proposes a deep reinforcement learning-based intelligent control method for connected-vehicle entrance ramps, which constructs a reinforcement learning control framework and adopts the Proximal Policy Optimization (PPO) algorithm to optimize ramp vehicle flow and speed control strategies in real time. Simulation experiments are conducted under different traffic density scenarios and compared against the traditional reinforcement learning algorithms DQN and A2C. The experimental results show that the PPO algorithm converges quickly under low, medium, and high traffic densities, significantly improves the cumulative reward value, and
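As a rough sketch of the kind of control loop this abstract describes, the snippet below wires a toy ramp-metering environment to an off-the-shelf PPO implementation (stable-baselines3). The environment's state variables, dynamics, and reward are simplified placeholders; they are not the paper's simulation setup or reward design.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO  # third-party PPO, used here only for illustration


class RampMeteringEnv(gym.Env):
    """Toy stand-in for a ramp-metering simulation.

    Observation: [mainline density, mainline speed, ramp queue length], normalised to [0, 1].
    Action: continuous metering rate in [0, 1] (fraction of ramp demand released).
    """

    def __init__(self):
        super().__init__()
        self.observation_space = spaces.Box(low=0.0, high=1.0, shape=(3,), dtype=np.float32)
        self.action_space = spaces.Box(low=0.0, high=1.0, shape=(1,), dtype=np.float32)
        self.state = np.zeros(3, dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.state = self.np_random.uniform(0.2, 0.8, size=3).astype(np.float32)
        return self.state, {}

    def step(self, action):
        release = float(action[0])
        density, speed, queue = self.state
        # Toy dynamics: releasing more vehicles raises mainline density and shrinks the queue.
        density = np.clip(density + 0.05 * release - 0.02, 0.0, 1.0)
        queue = np.clip(queue - 0.10 * release + 0.03, 0.0, 1.0)
        speed = np.clip(1.0 - density, 0.0, 1.0)
        self.state = np.array([density, speed, queue], dtype=np.float32)
        # Reward trades off mainline speed against ramp queue growth.
        reward = float(speed - 0.5 * queue)
        return self.state, reward, False, False, {}


model = PPO("MlpPolicy", RampMeteringEnv(), verbose=0)
model.learn(total_timesteps=10_000)  # short run, for demonstration only
```

In a realistic setup the toy dynamics would be replaced by a traffic simulator, and DQN or A2C could be swapped in through the same interface for the kind of comparison the abstract reports.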














