Compression-ignition engine-based transportation is currently in search of cleaner combustion solutions. Among them, ducted fuel injection (DFI) is emerging as a cutting-edge technology owing to its potential to drastically curtail engine-out soot emissions. Although the capability of DFI to abate soot formation has been demonstrated in both constant-volume and optical engine conditions, further optimization and a deeper understanding are still needed for its exploitation in series-production engines. For this purpose, computational fluid dynamics (CFD) coupled with low-cost turbulence modeling approaches, such as Reynolds-averaged Navier-Stokes (RANS), can be a powerful tool, especially in the industrial context. However, obtaining reliable RANS-based CFD simulations is often challenging, mainly because the accuracy of the various state-of-the-art turbulence models strongly depends on the case study.
In this scenario, the present work proposes a comparative analysis of several RANS turbulence models against high-fidelity large eddy simulation (LES) data for the DFI case study in constant-volume non-reacting conditions. In particular, the mixing process predicted by each RANS model has been assessed in terms of turbulent kinetic energy and fuel-to-air ratio fields, using the resolved portion of the LES turbulent energy spectrum (more than 80% of the turbulent length scales being resolved) as a target. Subsequently, the most reliable models have been selected for the combustion analysis, with the aim of providing additional insights into the soot formation process. According to these results, although the macroscopic spray characteristics and the flow detachment location are consistently predicted by each RANS turbulence model, the mixing intensity and the related fuel-to-air ratio values in the auto-ignition region are strongly model-dependent, drastically altering the predicted DFI performance in terms of soot reduction compared to the free-spray case.
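The 80%-resolved criterion used above as the LES quality target is commonly evaluated through a resolution index comparing the resolved turbulent kinetic energy to the total (resolved plus sub-grid) one. The following is a minimal sketch of such a check, assuming per-cell resolved and sub-grid TKE fields are available from the LES post-processing; the function name and all numerical values are illustrative, not taken from the study.

```python
import numpy as np

def resolved_fraction(k_res, k_sgs):
    """Fraction of turbulent kinetic energy carried by the resolved
    scales in each cell: k_res / (k_res + k_sgs). Values above 0.8
    indicate a well-resolved LES in that region."""
    k_res = np.asarray(k_res, dtype=float)
    k_sgs = np.asarray(k_sgs, dtype=float)
    return k_res / (k_res + k_sgs)

# Hypothetical per-cell TKE values (J/kg): resolved and modeled (sub-grid).
k_res = np.array([12.0, 9.5, 15.2, 8.1])
k_sgs = np.array([1.5, 2.0, 1.1, 1.9])

M = resolved_fraction(k_res, k_sgs)
print(np.all(M > 0.8))  # True: every sampled cell exceeds the 80% threshold
```

In practice such an index is mapped over the whole computational domain, and the regions of interest (here, the duct outlet and the auto-ignition zone) are checked against the threshold before the LES fields are used as a reference for the RANS comparison.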