
Autonomous Traffic Control

This project started as a physical simulation of a city intersection: 12 LEDs for traffic lights, 4 LED bar graphs for car density, a keypad for manual overrides, and a Raspberry Pi orchestrating everything. Over time it grew into a full ML-assisted traffic controller.

The goal was simple: stop wasting time at empty red lights. Instead of hard-coded timings, the system reads lane occupancy and adjusts green phases dynamically, while still respecting safety constraints like minimum red times and pedestrian crossings.

What it controls

  • 12 LEDs with current-limiting resistors (red, yellow, green for each of 4 directions)
  • 4 LED bar graphs that visualize lane density
  • Keypad input for manual car counts / overrides
  • LCDs per direction showing phase + timers
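The light hardware above boils down to one piece of state: which color each of the four directions shows at any moment. A minimal sketch of that mapping, in plain Python with illustrative names (not the project's actual code):

```python
DIRECTIONS = ["N", "S", "E", "W"]

def lights_for_phase(green_dirs):
    """Return {direction: color}: green for the granted directions,
    red everywhere else."""
    return {d: ("green" if d in green_dirs else "red") for d in DIRECTIONS}

# North-South green phase: East and West must stay red
state = lights_for_phase({"N", "S"})
```

On the board, each entry in `state` would drive one LED per direction; in the simulation it doubles as the value shown on that direction's LCD.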

Logic in the loop

  • Queue lengths per lane estimated from sensors / camera
  • Adaptive phase selection based on total delay
  • Fail-safe defaults if signals look inconsistent
  • Logging for later replay + analysis
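The fail-safe bullet can be sketched as a consistency check: if two conflicting approaches would ever show green together, the controller falls back to all-red. A hypothetical version (the conflict table and function names are assumptions, not the project's code):

```python
# Approach pairs that must never be green at the same time
CONFLICTS = {("N", "E"), ("N", "W"), ("S", "E"), ("S", "W")}

def safe_or_fallback(light_state):
    """Return the state unchanged if it is consistent;
    otherwise fall back to the all-red default."""
    greens = {d for d, color in light_state.items() if color == "green"}
    for a, b in CONFLICTS:
        if a in greens and b in greens:
            return {d: "red" for d in light_state}  # fail-safe default
    return light_state

ok = safe_or_fallback({"N": "green", "S": "green", "E": "red", "W": "red"})
bad = safe_or_fallback({"N": "green", "E": "green", "S": "red", "W": "red"})
```

Checks like this run last in the loop, so no amount of bad sensor data can command a conflicting signal.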
Traffic control simulation board, with vehicle detection panels for North, South, East, and West

The first version ran purely on discrete hardware; the current one is wired to a computer-vision pipeline that estimates lane density from video frames.

Computer vision & YOLOPv2

The second phase replaces manual car counts with a proper perception stack. Using YOLOPv2 for joint object detection and drivable-area segmentation, the system only counts vehicles inside the road mask instead of relying on hand-drawn ROIs.

That drivable mask feeds into a simple time-series model that predicts near-future queue lengths, which is enough to choose which phase to grant next and how long to hold it.
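One way to implement "only count vehicles inside the road mask" is to test each detection's center point against the drivable-area mask. A sketch assuming boxes come in as (x1, y1, x2, y2) pixel tuples and the mask is a boolean array (both assumptions about the pipeline's data shapes):

```python
import numpy as np

def count_in_mask(boxes, drivable_mask):
    """Count detections whose box center lies inside the boolean
    drivable-area mask (shape H x W)."""
    count = 0
    for x1, y1, x2, y2 in boxes:
        cx, cy = int((x1 + x2) / 2), int((y1 + y2) / 2)
        if drivable_mask[cy, cx]:
            count += 1
    return count

mask = np.zeros((100, 100), dtype=bool)
mask[40:100, :] = True                        # bottom of the frame is road
boxes = [(10, 50, 30, 90), (10, 5, 30, 25)]   # one on the road, one off it
n = count_in_mask(boxes, mask)                # counts only the on-road box
```

Because the mask comes from segmentation rather than a hand-drawn ROI, the same code keeps working when the camera shifts or the scene changes.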

  • YOLOPv2 for lane + drivable area segmentation
  • Vehicle counts per lane from masked detections
  • Rolling window of counts → short-horizon predictions
  • Phase chooser minimizes total predicted waiting time
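The last two bullets can be sketched as a moving average over the recent count history plus a greedy phase chooser, where granting the phase with the most predicted demand is the simple proxy for minimizing total waiting time. All names here are illustrative, not the project's actual model:

```python
from collections import deque

def predict_next_window(history):
    """Per-lane moving average over the recent count history."""
    lanes = history[-1].keys()
    return {l: sum(h[l] for h in history) / len(history) for l in lanes}

def choose_phase(demand, phases):
    """Grant the phase whose lanes carry the most predicted demand."""
    return max(phases, key=lambda p: sum(demand[l] for l in p))

history = deque(maxlen=5)          # rolling window of per-lane counts
history.append({"NS": 4, "EW": 1})
history.append({"NS": 6, "EW": 1})

demand = predict_next_window(list(history))
phase = choose_phase(demand, [("NS",), ("EW",)])
```

A real deployment would swap the moving average for a proper time-series model, but the interface stays the same: counts in, per-lane demand out.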
Loop sketch
frame_id = 0
while True:
    frame = camera.read()
    detections, drivable_mask = yolopv2(frame)

    # count cars per lane inside drivable area
    lane_counts = aggregate_by_lane(detections, drivable_mask)

    # update short history window
    history.append(lane_counts)

    demand = predict_next_window(history)

    next_phase = choose_phase(demand, safety_state)
    schedule_signals(next_phase, hardware_io)

    log_state(frame_id, lane_counts, next_phase)
    frame_id += 1

Takeaways

Tight hardware loops

Tuning timings when LEDs, keypads, and LCDs are all talking to the same Pi taught me to keep the control loop stupidly simple.

Bridging sim ↔ ML

Going from a hand-tuned timing model to a CV-driven one made it clear which pieces belong in math, and which belong in perception.

Explainable behavior

Every phase decision is logged with lane counts and predicted delay, so you can justify why a light changed when it did.
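A minimal sketch of such a decision record, assuming one JSON line per phase change (field names are illustrative, not the project's actual schema):

```python
import json

def log_decision(frame_id, lane_counts, next_phase, predicted_delay):
    """Serialize one phase decision so it can be replayed and justified."""
    return json.dumps({
        "frame": frame_id,
        "lane_counts": lane_counts,
        "phase": next_phase,
        "predicted_delay_s": predicted_delay,
    })

record = log_decision(42, {"NS": 5, "EW": 1}, "NS", 3.5)
```

Replaying these records next to the video makes every green light auditable: the counts, the prediction, and the chosen phase are all on one line.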