βš™οΈ Deploying Deep Neural Networks directly into network switches? Now it’s possible! πŸš€Β 

The CLEVER Project presents a novel way to embed AI into programmable data planes β€” with zero arithmetic logic, zero stateful memory, and zero performance loss.

πŸ” Our latest innovation introduces a lookup table distillation technique that transforms complex DNNs into cascaded, stateless, P4-compatible flow tables, unlocking true in-network AI at wirespeed

πŸ’‘ Why it matters: 

  • Traditional P4 hardware lacks the arithmetic power to run DNNs 
  • Existing solutions offload inference to external FPGAs or GPUs β€” adding latency and power overhead 
  • Our method enables fully in-network inference for tasks like DDoS detection, traffic analysis, and more 

🧠 How we did it: 
1. A cascaded architecture breaks large models into 2-input neural nets 
2. Each sub-network is distilled into an efficient LUT deployable on hardware switches 
3. The stateless design is a natural fit for P4 pipeline acceleration 
4. Validated on the UNSW-NB15 cybersecurity dataset 
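In Python, the distillation step can be sketched roughly like this: since each sub-network takes only two quantized inputs, you can enumerate every possible input pair offline and record the quantized output, so the switch only ever does a table lookup. The weights below are made-up placeholders, not the trained CLEVER models:

```python
import math

def tiny_nn(a, b):
    # Hypothetical 2-input sub-network with fixed, illustrative weights.
    h1 = math.tanh(0.8 * a - 0.3 * b + 0.1)
    h2 = math.tanh(-0.5 * a + 0.9 * b - 0.2)
    return math.tanh(0.6 * h1 + 0.4 * h2)

def distill_to_lut(bits=8):
    # Enumerate every quantized (a, b) pair and store the quantized output.
    # The exhaustive table replaces all arithmetic at inference time.
    levels = 1 << bits
    lut = {}
    for qa in range(levels):
        for qb in range(levels):
            a = qa / (levels - 1)                    # dequantize to [0, 1]
            b = qb / (levels - 1)
            y = tiny_nn(a, b)                        # run the net once, offline
            lut[(qa, qb)] = round((y + 1) / 2 * (levels - 1))  # [-1,1] -> [0,255]
    return lut

lut = distill_to_lut()   # 256 x 256 = 65,536 entries, a match-action-table scale
```

With 8-bit inputs the table has 65,536 rows, which is why aggressive quantization is what makes the approach hardware-deployable.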

With just 6 stateless features and 8-bit quantization, our distilled model achieves a 93.75% F1-score β€” outperforming many heavyweight offloading methods at a fraction of the resource usage. Below is the cascading LUT structure used to map a DNN into a hardware-ready inference engine, deployable in real-time network switches. 
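The cascading reduction itself can be illustrated with a toy sketch: pairs of quantized features are merged stage by stage through stateless lookups until a single score remains, much like consecutive match-action stages in a P4 pipeline. The table here is a stand-in average, not a distilled sub-network:

```python
# Hypothetical stand-in table: in the real system each stage's LUT is
# distilled from a trained 2-input sub-network; here, a toy 8-bit average.
LUT = {(a, b): (a + b) // 2 for a in range(256) for b in range(256)}

def cascade_inference(features):
    """Collapse a list of 8-bit features into one 8-bit score via pairwise,
    stateless table lookups, one cascade stage per pipeline pass."""
    vals = list(features)
    while len(vals) > 1:
        nxt = [LUT[(vals[i], vals[i + 1])] for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:
            nxt.append(vals[-1])   # an odd feature passes through unchanged
        vals = nxt
    return vals[0]

print(cascade_inference([12, 200, 45, 90, 7, 255]))   # prints 108
```

Six features collapse in three stages (6 β†’ 3 β†’ 2 β†’ 1), so inference depth grows only logarithmically with the feature count.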

πŸ”— Learn more about how we’re redefining network AI and accelerating 6G-ready infrastructures: 
https://www.cleverproject.eu 

πŸ“„ Read the full paper: https://lnkd.in/gGe9syHB