PunchyHamster 23 hours ago [-]
Unless you're hosting an array of common apps (like WordPress), a WAF is a waste of time for everyone involved, and the time would be better spent actually auditing the application you wrote rather than fighting with false positives.
The industry sold the gullible on the idea that a bunch of arbitrary pattern-matching rules can just make any app more secure.
RajT88 22 hours ago [-]
Yes. Pentesting an application on every release, then finding and fixing the vulnerabilities immediately, is what everyone should be doing.
Not everyone can do that because of business realities. Legacy software, vendor software, no budget, no dev bandwidth, etc., etc.
All security is a compromise based on realities - implementing a WAF is one. Tuning a WAF is a further exercise in security compromises. They have value, but aren't a panacea. A good security model should have many layers, and this is one of the layers you can choose which addresses a wide variety of attacks your application may (or may not) be vulnerable to, and which you may (or may not) have the budget or bandwidth to actually fix.
AnonHP 19 hours ago [-]
I disagree that it’s a waste of time or that only gullible people use it. A WAF (enabled to block malicious requests) is a cheap, quick solution to throw in front of an app and still get some benefit.
I’ve seen that even in some large (non-FAANG or whatever) companies, budgets for security are always very tight or not available. Practically, it’s easier to kick the can down the road with a WAF.
For enterprise applications deployed for specific clients, if there are any issues caused by the WAF, they’d quickly bubble up through standard support mechanisms.
mikeweiss 20 hours ago [-]
Uhh ... No. This is absolutely not true for industry-leading WAFs like Akamai, Cloudflare, Imperva, or even AWS WAF, which monitor for new and known critical vulnerabilities in the wild and will issue new rules for them in short order.
Just last year we had React2Shell (CVE-2025-55182) which allowed RCE for many apps using React Server Components. Within 24 hours the big WAF providers rolled out rules capable of blocking requests matching the exploit pattern.
Yes, a patch was available, and patching is always the primary solution for resolving critical vulnerabilities, but a WAF can step in as crucial temporary protection until patching can happen.
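The "virtual patching" described above boils down to matching incoming requests against a signature of the exploit before the app sees them. A minimal sketch of the idea, where the signature pattern is entirely invented for illustration (real vendor rules for a CVE are far more precise and are not public in this form):

```python
import re

# Hypothetical exploit signature -- NOT the real React2Shell pattern;
# invented here purely to illustrate how a signature-based block works.
EXPLOIT_SIGNATURE = re.compile(r"\$\$typeof.*server\.reference", re.IGNORECASE)

def virtual_patch(request_body: str) -> str:
    """Toy WAF rule: block any request whose body matches the signature."""
    if EXPLOIT_SIGNATURE.search(request_body):
        return "BLOCK"
    return "ALLOW"
```

The trade-off the thread is debating is visible even in this toy: the rule buys time before the real patch lands, but any legitimate request that happens to match the pattern becomes a false positive.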
RajT88 22 hours ago [-]
The problem here is a real one: A lot of people in charge of implementing WAF don't understand what it's all about.
That said, this article is describing something that you quickly learn studying the WAF offerings on a cloud provider on day 1. For such a complex topic, this is surprisingly remedial to show up here.
All that said: there's a lot of dumb shit that ends up being configured in the cloud, and articles like this are good reminders for people to check for dumb shit.
jakehansen 23 hours ago [-]
Reading this, I cannot help but think about the LLM writing tropes that made the front page yesterday.
I have a feeling my brain chemistry has been permanently altered and I will forever be distracted by subconsciously rating the “LLM-ness” of everything I read.
AWS forces an explicit default choice—Allow or Block. Azure defaults to passive "Detection," requiring a manual switch to "Prevention." An AWS engineer, used to making this conscious decision, might miss that Azure requires a separate, critical step to actually turn protection on.
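The difference in defaults can be sketched as plain data. The parameter shapes below loosely follow boto3's `wafv2` `create_web_acl` and Azure's WAF policy settings, but treat the exact field names as assumptions rather than a reference:

```python
# AWS WAFv2: DefaultAction is a required argument when creating a web ACL,
# so the engineer must consciously pick Allow or Block.
aws_web_acl_args = {
    "Name": "example-acl",
    "Scope": "REGIONAL",
    "DefaultAction": {"Block": {}},  # explicit choice: {"Allow": {}} or {"Block": {}}
}

# Azure WAF policy: ships in "Detection" mode, which only logs.
azure_policy_settings = {
    "state": "Enabled",
    "mode": "Detection",  # must be flipped to "Prevention" to actually block
}

def is_actively_blocking(settings: dict) -> bool:
    """The easy-to-miss step: an Enabled policy in Detection mode blocks nothing."""
    return settings["state"] == "Enabled" and settings["mode"] == "Prevention"
```

An AWS-trained engineer who sees `state: Enabled` could reasonably assume protection is on; `is_actively_blocking` makes the extra Azure condition explicit.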
https://news.ycombinator.com/item?id=47291513