The CLEVER Project presents a novel way to embed AI into programmable data planes: zero arithmetic logic, zero stateful memory, and zero loss in performance.
Our latest innovation introduces a lookup-table distillation technique that transforms complex DNNs into cascaded, stateless, P4-compatible flow tables, unlocking true in-network AI at wire speed.
Why it matters:
- Traditional P4 hardware lacks the arithmetic power to run DNNs
- Existing solutions offload to external FPGAs or GPUs, adding latency and power consumption
- Our method enables fully in-network inference for tasks like DDoS detection, traffic analysis, and more
How we did it:
1- Cascaded architecture breaks large models into 2-input neural nets
2- Distilled into efficient LUTs deployable in hardware switches
3- Stateless design = ideal for P4 pipeline acceleration
4- Validated using the UNSW-NB15 cybersecurity dataset
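To make step 2 concrete, here is a minimal sketch of how a 2-input sub-network could be distilled into an 8-bit LUT by exhaustively enumerating its quantized input pairs. The `tiny_net` function and the [-1, 1] input range are illustrative assumptions, not the project's actual model or code:

```python
import numpy as np

def tiny_net(a, b):
    # Toy nonlinearity standing in for a trained 2-input sub-network
    # (an assumption for illustration, not the paper's model).
    return np.tanh(0.5 * a + 0.25 * b)

def distill_to_lut(fn, bits=8):
    """Precompute fn over every quantized (a, b) pair."""
    levels = 2 ** bits
    # Map quantized indices back to an assumed [-1, 1] input range.
    grid = np.linspace(-1.0, 1.0, levels)
    a, b = np.meshgrid(grid, grid, indexing="ij")
    out = fn(a, b)
    # Re-quantize the output to 8 bits so it can feed the next
    # LUT stage in the cascade.
    return np.round((out + 1.0) / 2.0 * (levels - 1)).astype(np.uint8)

lut = distill_to_lut(tiny_net)  # shape (256, 256), one entry per input pair
```

Because each table is only 256x256 entries, the whole cascade fits in match-action table memory with no arithmetic at inference time.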
With just 6 stateless features and 8-bit quantization, our distilled model achieved a 93.75% F1-score, outperforming many heavyweight offloading methods at a fraction of the resource usage. Below is the cascading LUT structure used to map a DNN into a hardware-ready inference engine, deployable in real-time network switches.
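As a rough illustration of how such a cascade could run at inference time, the sketch below pairs six 8-bit features through a tree of 2-input LUTs. The randomly filled tables and the pairing order are placeholder assumptions standing in for the distilled tables:

```python
import numpy as np

rng = np.random.default_rng(0)

def placeholder_lut():
    # Stand-in for a distilled 256x256 table: two 8-bit inputs -> one 8-bit output.
    return rng.integers(0, 256, size=(256, 256), dtype=np.uint8)

stage1 = [placeholder_lut() for _ in range(3)]  # pairs the 6 input features
stage2 = placeholder_lut()                      # combines two stage-1 outputs
stage3 = placeholder_lut()                      # folds in the remaining output

def cascade_infer(features):
    """Stateless inference: every step is a pure table lookup."""
    f = list(features)                          # six 8-bit feature values
    v = [stage1[i][f[2 * i], f[2 * i + 1]] for i in range(3)]
    w = stage2[v[0], v[1]]
    return stage3[w, v[2]]                      # final 8-bit score

score = cascade_infer([10, 200, 33, 47, 128, 255])
```

Each lookup maps directly onto a match-action stage, which is why the design needs no ALUs or registers in the pipeline.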

Learn more about how we're redefining network AI and accelerating 6G-ready infrastructures:
https://www.cleverproject.eu
Read the full paper: https://lnkd.in/gGe9syHB