Automated driving is currently one of the most active areas of research worldwide. While the development of specific algorithms for perception, planning, and control tasks has progressed considerably, testing and validation of the resulting functions remain challenging due to the large number of possible scenarios and the effort required to generate ground truth. Currently, real-world testing and simulation are used in combination to overcome some of these challenges. Real-world testing does not suffer from imperfect sensor models or environments, but it is expensive, slow, and not accurately repeatable, and it is therefore unable to cover all possible scenarios. Simulation models, on the other hand, are not yet sophisticated enough to fully replace real-world testing. In this paper, we propose a workflow capable of augmenting real sensor-level data with simulated sensor data. With this approach, we can generate scenarios that are as realistic as possible while retaining the flexibility to insert arbitrary objects. This sensor-level approach enables testing of the entire algorithm chain for automated driving, including perception, object detection, scene understanding, path planning, decision making, and control.