Because machine-learning-based methods suffer from a lack of transparency,
rule-based (RB) methods dominate safety-critical systems. Yet RB approaches
cannot compete with learning-based ones in simultaneously addressing multiple
system requirements, for instance, safety, comfort, and efficiency. Hence, this
article proposes a decision-making and control framework that combines the
advantages of both RB and machine-learning-based techniques while compensating
for their disadvantages. The proposed method comprises two controllers
operating in parallel, called Safety and Learned. An RB
switching logic selects one of the actions proposed by the two controllers.
The Safety controller takes priority whenever the Learned one fails to meet the
safety constraint, and it also participates directly in training the Learned
controller. Decision-making and control in autonomous driving are chosen as the
system case study, where an autonomous vehicle (AV) learns a multitask policy to
safely execute an unprotected left turn. Multiple requirements (i.e., safety,
efficiency, and comfort) are imposed on the vehicle motion. A numerical
simulation is performed to validate the proposed framework, demonstrating its
ability to satisfy these requirements and its robustness to changing
environments.
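
To make the parallel-controller arrangement concrete, the sketch below shows one way an RB selector could arbitrate between the two controllers. It is a minimal illustration under assumed interfaces: the names SafetyController, LearnedController, is_safe, and select_action, the action format, and the distance-based safety check are all placeholders, not the authors' implementation.

```python
# Minimal sketch of a rule-based switch between a Safety and a Learned
# controller. All names and thresholds are illustrative assumptions.
import numpy as np


class SafetyController:
    """Rule-based fallback controller (placeholder rule)."""

    def act(self, state: np.ndarray) -> np.ndarray:
        # Illustrative rule: brake toward a safe stop.
        return np.array([-1.0, 0.0])  # [acceleration, steering]


class LearnedController:
    """Learning-based controller (placeholder for a trained policy)."""

    def act(self, state: np.ndarray) -> np.ndarray:
        # Stand-in for a neural-network policy output.
        return np.array([0.5, 0.1])


def is_safe(state: np.ndarray, action: np.ndarray) -> bool:
    """Illustrative safety constraint: only accelerate when far from conflict."""
    distance_to_conflict = state[0]
    return distance_to_conflict > 10.0 or action[0] <= 0.0


def select_action(state, safety_ctrl, learned_ctrl):
    """RB switching logic: use the Learned action only if it passes the check."""
    learned_action = learned_ctrl.act(state)
    if is_safe(state, learned_action):
        return learned_action, "learned"
    # Safety controller is prioritized when the constraint is violated; the
    # override event could also be logged and fed back into policy training.
    return safety_ctrl.act(state), "safety"


if __name__ == "__main__":
    # Illustrative state: [distance to conflict point (m), speed (m/s)].
    state = np.array([5.0, 2.0])
    action, source = select_action(state, SafetyController(), LearnedController())
    print(source, action)
```

In this sketch the switch operates per time step, so the Learned controller can be queried at every step while the Safety controller guarantees a constraint-satisfying fallback action whenever the check fails.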