
In his groundbreaking new research, HTTP/1.1 Must Die: The Desync Endgame, Kettle challenges the security community to completely rethink its approach to request smuggling. He argues that, in practical terms, it's nigh on impossible to consistently and reliably determine the boundaries between HTTP/1.1 requests, especially when implemented across the chains of interconnected systems that comprise modern web architectures. Mistakes such as parsing discrepancies are inevitable, and when using upstream HTTP/1.1, even the tiniest of bugs often have critical security impact, including complete site takeover.

This research demonstrates unequivocally that patching individual implementations will never be enough to eliminate the threat of request smuggling. Using upstream HTTP/2 offers a robust solution.

I just read this article on a marketing blog from PortSwigger, the maker of the penetration testing tool Burp Suite.

Can someone with more insight explain what we're supposed to do? Completely disabling HTTP/1.1 is probably not doable for many organisations.

1 comment
cron@feddit.org 1 month ago

Sort of a self-answer, now that I've read more about this issue. The problem is not on the frontend (browser --> server), but with shared connections in the backend, e.g. when you have a reverse proxy in place. What's relevant is that the connection between the reverse proxy and the backend server should be HTTP/2.
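To picture why those shared backend connections matter, here's the textbook CL.TE case as I understand it (a rough sketch in Go, not something from the article; the host name is made up). The front-end and the back-end disagree about where the request ends, and the leftover bytes sit on the shared connection and get glued onto whatever request is forwarded next, possibly another user's.

```go
// Textbook CL.TE ambiguity: one request, two contradictory framing headers.
// A front-end that honours Content-Length forwards all 13 body bytes; a
// back-end that honours Transfer-Encoding sees an empty chunked body and
// stops early, leaving "SMUGGLED" queued on the shared upstream connection.
package main

import "fmt"

func main() {
	request := "POST / HTTP/1.1\r\n" +
		"Host: vulnerable.example\r\n" + // made-up host
		"Content-Length: 13\r\n" + // framing the front-end believes
		"Transfer-Encoding: chunked\r\n" + // framing the back-end believes
		"\r\n" +
		"0\r\n" + // terminating chunk: the back-end stops reading here
		"\r\n" +
		"SMUGGLED" // leftover bytes that prefix the next request

	fmt.Print(request)
}
```

HTTP/2 doesn't have this failure mode because every message is carried in length-prefixed frames, so there's no header-based guessing about where a request ends.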

Note that disabling HTTP/1 between the browser and the front-end is not required. These connections are rarely shared between different users and, as a result, they're significantly less dangerous. Just ensure they're converted to HTTP/2 upstream.
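For the "convert to HTTP/2 upstream" part, the usual route is flipping the relevant option in your proxy (HAProxy, for example, can speak HTTP/2 to its backends). To make the idea concrete, here's a minimal sketch of a reverse proxy doing exactly that, written in Go with golang.org/x/net/http2. This is my own illustration, not something from the article, and the backend address app.internal:8443 is made up.

```go
// Minimal reverse proxy sketch: accept HTTP/1.1 from clients, but always use
// HTTP/2 on the upstream leg, so the shared proxy->backend connection never
// relies on HTTP/1.1 framing.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"

	"golang.org/x/net/http2"
)

func main() {
	// Made-up backend address; in practice this is your origin server,
	// which must itself accept HTTP/2.
	backend, err := url.Parse("https://app.internal:8443")
	if err != nil {
		log.Fatal(err)
	}

	proxy := httputil.NewSingleHostReverseProxy(backend)

	// HTTP/2-only transport for upstream requests: every message is framed
	// with explicit lengths, so ambiguous Content-Length / Transfer-Encoding
	// parsing can't desync the connection.
	proxy.Transport = &http2.Transport{}

	// The client-facing side stays plain HTTP/1.1, which (per the article)
	// is fine because those connections are rarely shared between users.
	log.Fatal(http.ListenAndServe(":8080", proxy))
}
```

The key point is that the dangerous, long-lived, shared connection is the upstream one, so that's the leg that has to stop being HTTP/1.1.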
