Lane-keeping is critical for SAE Level 3+ autonomous vehicles, requiring rigorous validation and end-to-end interpretability. All recently U.S.-approved Level 3 vehicles are equipped with lidar, likely to accelerate active safety. Lidar provides direct distance measurements, enabling rule-based algorithms, whereas camera-based methods rely on statistical perception. Furthermore, lidar can support a more comprehensive and detailed approach to studying lane-keeping. This paper proposes a module that perceives oncoming-vehicle behavior, as part of a larger behavior-tree structure for adaptive lane-keeping, using data from a lidar sensor. The complete behavior tree would include road curvature, speed limits, road types (rural, urban, interstate), and the proximity of objects or humans to lane markings. It would also account for the lane-keeping behavior and type of adjacent and opposing vehicles, lane occlusion, and weather conditions. The algorithm was evaluated on one of the behavior tree's most demanding inputs, oncoming-vehicle lane-keeping behavior on two-way, two-lane highways with no physical barriers, using experimental lidar data collected while driving around Georgia Southern's campus. Preliminary results demonstrate one behavior-tree module recognizing an oncoming vehicle's lane-keeping ability, suggesting a promising future for interpretable lidar-based algorithms. Existing and novel methods were combined to acquire behavior metrics: distance to lane marking (DTLM), trajectory prediction error (pE), the relative distance between the ego and target vehicles, predicted dividing-lane crossings, and the number of vehicle points tracked (NoP).
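As an illustrative sketch only (the paper does not specify its implementation), two of the listed metrics, DTLM and predicted dividing-lane crossings, can be computed geometrically once the lane marking is modeled as a line in the ground plane. The function and variable names below are hypothetical, and the lane marking is assumed locally straight:

```python
import math

def distance_to_lane_marking(point, line_a, line_b):
    """Perpendicular distance (DTLM) from a tracked vehicle point to a lane
    marking, modeled as the infinite line through line_a and line_b (2-D, meters)."""
    (x0, y0), (x1, y1), (x2, y2) = point, line_a, line_b
    num = abs((x2 - x1) * (y1 - y0) - (x1 - x0) * (y2 - y1))
    den = math.hypot(x2 - x1, y2 - y1)
    return num / den

def side(point, line_a, line_b):
    """Signed side test: positive on one side of the marking, negative on the
    other, so a sign change between consecutive positions marks a crossing."""
    (x0, y0), (x1, y1), (x2, y2) = point, line_a, line_b
    return (x2 - x1) * (y0 - y1) - (y2 - y1) * (x0 - x1)

def predicted_crossings(trajectory, line_a, line_b):
    """Count predicted dividing-lane crossings along a predicted trajectory."""
    signs = [side(p, line_a, line_b) for p in trajectory]
    return sum(1 for s0, s1 in zip(signs, signs[1:]) if s0 * s1 < 0)

# Example: dividing lane along the line x = 0; a predicted trajectory that
# drifts from the oncoming lane across the marking yields one crossing.
marking_a, marking_b = (0.0, 0.0), (0.0, 10.0)
traj = [(-1.0, 0.0), (-0.5, 5.0), (0.5, 10.0)]
print(distance_to_lane_marking(traj[0], marking_a, marking_b))  # 1.0 m
print(predicted_crossings(traj, marking_a, marking_b))          # 1
```

A curved marking could be handled the same way by applying these tests piecewise over short, locally straight segments of the detected lane line.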