Debate: should the reject-www-data rule be enabled by default?


#1

Hi everyone,

Whilst discussing an issue with the reject-www-data firewall rule on GitHub, I found myself wondering whether this rule is actually useful.

There are strong arguments both for and against it, and I’m now wondering whether it should be enabled by default on new installations.

The original intention was to prevent the web server from executing code that downloads malicious software onto the machine for further compromise. Imagine a plugin in your CMS which allows such things to happen :scream_cat: – this rule would block that straightforward case, forcing the attacker to work out a privilege-escalation exploit instead, which is substantially harder.
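
For context, an outbound owner-match rule of this sort typically looks roughly like the sketch below (an illustration only – the exact chain, state matching and reject type Symbiosis uses may differ):

```
# Minimal sketch (details assumed, not necessarily what Symbiosis
# ships): reject new outbound connections initiated by the www-data
# user, while leaving replies to inbound web traffic alone.
iptables -A OUTPUT -m owner --uid-owner www-data \
         -m conntrack --ctstate NEW \
         -j REJECT --reject-with icmp-port-unreachable
```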

However, this naive rule also blocks plugin installations, automatic updates, sites fetching RSS news feeds, and so on.

What do you think?


#2

It causes me occasional trouble, as I have to manually add RSS sites to the file before I can subscribe to their feeds (the subscription is done by the web server). Once a feed is added, I can remove the line again, as the actual RSS updates are fetched by another program on a different port.

I’ve also added the WordPress update sites to this file so that WordPress can carry out updates when necessary.
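
For anyone curious, the same effect can be achieved by hand with raw iptables – a rough sketch (the host list is just an example, and iptables matches IP addresses, so hostnames have to be resolved when the rules are loaded):

```
# Rough sketch, for illustration only -- Symbiosis manages its own
# rules from the whitelist file. Resolve the WordPress update hosts
# and insert ACCEPT rules for www-data ahead of the reject rule.
for host in api.wordpress.org downloads.wordpress.org; do
    for ip in $(getent ahostsv4 "$host" | awk '{print $1}' | sort -u); do
        iptables -I OUTPUT -m owner --uid-owner www-data -d "$ip" -j ACCEPT
    done
done
```

The gotcha is that the resolved addresses can change after the rules are loaded, which may be why feed hosts in particular are fiddly to pin down.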

Occasionally I’ve considered just turning it off, but in general it doesn’t cause me too much pain, so I’ve left it enabled.

Andy


#3

It can be a pain. It certainly does not play well with Joomla, WordPress, Drupal, Coppermine etc., as they all need outbound access to some extent. Maybe it would be good to leave it enabled by default, but have extensive documentation on how to add rules for packages such as these, and any other popular ones that I may have missed.


#4

Security-wise it’s a good default. Human-wise, probably not, as it’s likely to be the first big surprise for new users no matter how good the docs are, especially for those new to Linux. Having said that, I think Josh mooted the idea of a Quick Start Guide – that could go a long way towards avoiding the gotcha.

As it stands, I’d reluctantly vote for opt-in (and hope that my neighbours aren’t deploying CMS alpha 1 bad boy).

Either way, it would be a massive improvement if the mechanism alerted on failure. At the moment there’s a real risk that updates and other functionality will fail silently. Worse, we wouldn’t be aware that potentially malicious activity was in play. I’d rather have 10,000 emails to root than not know.

It may be a pain to support but for me it’s warm-fuzzy functionality – even as a silent killer. :wink:


#5

I’d say keep it.

I’ve had a couple of head-scratching moments trying to get something to work, then realising it’s my own fault for not adding the host to the file, but in general it’s never caused any hassle and I agree with the reasoning behind it.


#6

Could there be a log listing the sites that are being blocked by this rule, so that it can be reviewed and the relevant lines copied across? That could be a good compromise to help deal with the human side of this rule being difficult.


#7

The problem here is that the firewall itself is doing the logging based on information about the packet, i.e. which user generated it, rather than information inside the packet, e.g. the GET request. Logging is perfectly possible, but it would be on a per-IP basis rather than per site, which probably isn’t so helpful.
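
To make that concrete, per-IP logging is just a LOG rule placed in front of the REJECT – roughly like this (the prefix is made up, and the real rules may differ):

```
# Log the packet (its destination appears in the DST= field) before
# rejecting it; the prefix just makes the entries easy to grep for.
iptables -A OUTPUT -m owner --uid-owner www-data -m conntrack --ctstate NEW \
         -j LOG --log-prefix "reject-www-data: " --log-level warning
iptables -A OUTPUT -m owner --uid-owner www-data -m conntrack --ctstate NEW \
         -j REJECT
```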

Similarly, it would be simple enough to install logwatch (or some other log-anomaly-to-email system) which could report on these outgoing failures. Again, though, it would report on a per-IP basis rather than per site.
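
As a stopgap, something like this would summarise the blocked destinations from the kernel log and attempt a reverse lookup on each (the log path and prefix are assumptions, matching the sketch above):

```
# Count blocked destination IPs and try a reverse DNS lookup on each;
# the reverse name often won't match the site, hence "per IP rather
# than per site".
grep -oP 'reject-www-data: .*DST=\K[0-9.]+' /var/log/kern.log \
    | sort | uniq -c | sort -rn \
    | while read -r count ip; do
          name=$(getent hosts "$ip" | awk '{print $2}')
          printf '%6s  %-15s  %s\n' "$count" "$ip" "${name:-?}"
      done
```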


#8

Hi @pcherry & thanks for the follow-up.

Googling this topic has turned me into some sort of headless-chicken shuttlecock hybrid. :wink:

Keeping it simple, an unknown-destination alert definitely sounds useful, although at the moment I’m not sure what I’d do to track down the site.

Presumably, identifying wayward PHP would be relatively simple if the httpd logger made site-specific noises about (failed) outbound connections. [cluck]

So we’d have a list of blocked outbound connections by IP address – that sounds useful for identifying unintended blocks.


#9

This has certainly caused me headaches whilst I’ve tried to get everything working. Most of them I resolved. However, I regularly have issues where I end up having to disable the firewall completely, temporarily, because it’s the only way I can get one WordPress plugin to update. I have domains whitelisted for everything else, but for that one plugin it won’t work.

Quite why, I have never established.

Whether it shouldn’t be there, I don’t know. It may be worth the pain. But it certainly causes some.


#10

I vote for keeping it. But it might be worth considering whether the file should ship with some sites included by default, whether commented out or not, as a sort of quick-start guide.
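
For instance, the shipped file might look something like this, with the common CMS update hosts present but commented out (the file format and host list here are assumptions – the point is just that uncommenting a line is easier than discovering the mechanism from scratch):

```
# Hosts the web server may contact. Uncomment the ones your sites need.
#api.wordpress.org        # WordPress version checks and plugin metadata
#downloads.wordpress.org  # WordPress core, plugin and theme downloads
#updates.drupal.org       # Drupal update status checks
#update.joomla.org        # Joomla update server
```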

A quick-start guide really is well worth doing. There is a daunting amount of information for a newcomer to Symbiosis – some of it very detailed, some alarmingly sketchy.

Splitting the documentation into a user manual and a technical reference doesn’t necessarily make things better either, because the information you need is now more fragmented.

Links from the user manual sections straight to the relevant bits of the technical reference would be helpful, and more examples in the technical reference would too.

I often find myself searching this forum for fixes to problems instead of referring to the docs. That isn’t how it should be! When something useful appears here, it should make its way into the docs.