This is an online log of my Slackware experiences. Be aware that I'm also using this blog to cover basic and intermediate security issues that may not pertain to Slackware. This is my way of consolidating blogs (I've several of them).
Friday, November 17, 2017
Some Goodness Not Related To Linux
In my spare time, I game on my gaming PC. Once upon a time, I had a gaming laptop. It's not been used for close to 2 years, as the graphics card went bad.
This particular system is an Alienware M17x R3. I was torn between getting the same graphics card (the GTX 580M, a dedicated mobile graphics card) or upgrading to something like the 680M or 780M.
It's taken me this long to decide. I decided to stick with the 580M, knowing that this card will probably also go bad within 1-2 years (they don't tend to last long). Why the 580M? Because I didn't feel like hacking the laptop to get the 680M or above to work. Those cards aren't plug-and-play and require some effort to get working. I wanted no fuss. Plus, the 580M is now a lot cheaper...I got this one for $125, whereas 2 years ago, they were running twice that.
The card I bought was still in its original packaging (i.e., it was new).
It arrived last night and I spent most of my evening installing it and then struggling with the laptop. There was another problem that I had trouble fixing: the damned battery was drained. It appears that this system throttles down when the battery isn't working. The laptop was very slow, and at first I thought I had grounded something inside the machine while I had it open.
I studied up in my spare time at work tonight, got home, removed the battery, rebooted, and the laptop acted like it was a NEW system! Before then, it would refuse to install the many system patches waiting for it, and auto-updating programs such as Steam and Origin wouldn't update either.
I'm glad I decided to try that first, as I was almost ready to try to re-install the OS.
A new battery is on the way.
I will attempt to game on this system this weekend. I've it tethered to my 27" iMac (using it as an extra monitor, a neat trick that Macs can do).
The card appears to be working well. If it dies within a year, I'll consider upgrading, as some of the newer cards last longer. If it lasts two years, I'll consider buying another 580M.
At some point, I should consider running Linux and using the Steam Linux client.
Labels: Alienware, graphics card, GTX 580M, linux, M17x, Nvidia, R3, Steam, video card
Wednesday, March 29, 2017
More Postfix Success
I've been delving into why I don't see Postfix bans in my logs. I think it has to do with the filters that came with my install of Fail2ban...they don't work for Ubuntu.
I looked at my logs and saw a ton of brute-force attempts against the SMTP service, so I know for a fact that Fail2ban should be blocking these attempts.
I found this page and wanted to test whether its filter's regex would work on my server, so I created the filter based on what was on that page and then edited my jail.local file. I then restarted Fail2ban, but I also wanted to confirm that the filter works, so I ran this:
--------------------------------------------------------
root@linode:/var/log# fail2ban-regex /var/log/mail.log /etc/fail2ban/filter.d/postfix-auth.conf
Running tests
=============
Use failregex filter file : postfix-auth, basedir: /etc/fail2ban
Use log file : /var/log/mail.log
Use encoding : UTF-8
Results
=======
Failregex: 1526 total
|- #) [# of hits] regular expression
| 1) [1526] ^\s*(<[^.]+\.[^.]+>)?\s*(?:\S+ )?(?:kernel: \[ *\d+\.\d+\] )?(?:@vserver_\S+ )?(?:(?:\[\d+\])?:\s+[\[\(]?postfix/smtpd(?:\(\S+\))?[\]\)]?:?|[\[\(]?postfix/smtpd(?:\(\S+\))?[\]\)]?:?(?:\[\d+\])?:?)?\s(?:\[ID \d+ \S+\])?\s*lost connection after .*\[\]$
`-
Ignoreregex: 0 total
Date template hits:
|- [# of hits] date format
| [10563] (?:DAY )?MON Day 24hour:Minute:Second(?:\.Microseconds)?(?: Year)?
`-
Lines: 10563 lines, 0 ignored, 1526 matched, 9037 missed [processed in 0.88 sec]
Missed line(s): too many to print. Use --print-all-missed to print all 9037 lines
--------------------------------------------------------
This time I had matches. The last few days of trying this with other filters, or editing the canned ones, had netted me nothing.
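For reference, here's roughly what the filter I created (/etc/fail2ban/filter.d/postfix-auth.conf) looks like. This is a sketch: the failregex is my reconstruction of the expression from that page, matching the tail shown in the test output above.
[Definition]
failregex = lost connection after .*\[<HOST>\]$
ignoreregex =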
Then I checked my server's fail2ban logs:
--------------------------------------------------------
2017-03-29 21:53:56,987 fail2ban.filter [12346]: INFO [postfix-auth] Found 156.67.106.244
2017-03-29 21:53:57,037 fail2ban.filter [12346]: INFO [postfix-auth] Found 156.67.106.244
2017-03-29 21:53:57,971 fail2ban.actions [12346]: NOTICE [postfix-auth] Ban 156.67.106.244
2017-03-29 22:03:56,413 fail2ban.filter [12346]: INFO [postfix-auth] Found 105.112.3.167
2017-03-29 22:07:03,231 fail2ban.filter [12346]: INFO [postfix-auth] Found 220.178.1.34
2017-03-29 22:12:31,667 fail2ban.filter [12346]: INFO [postfix-auth] Found 66.23.212.157
--------------------------------------------------------
So it is working.
Why do I want to use Fail2ban to block bad traffic going to my SMTP service? Well, when I checked one of those IPs above, just to see how noisy it was in the log files, this is what I saw:
root@linode:/var/log# grep 156.67.106.244 mail.log | grep connect | wc -l
1741
root@linode:/var/log# grep 156.67.106.244 mail.log.1 | grep connect | wc -l
1333
root@linode:/var/log# zgrep 156.67.106.244 mail.log.*.gz | grep connect | wc -l
10931
A grand total of 14,005 connection attempts between the 13th and 29th of March. The jail is configured to ban an IP if more than 2 attempts occur in a 5-minute span, so it should now block most of these. I'll watch to see if those parameters work sufficiently, but at least I've a working filter now!
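For completeness, the corresponding section of my jail.local looks roughly like this. The jail name, filter, and logpath match the logs above; the port list is just the usual Postfix set, and the maxretry/findtime values are my reading of the more-than-2-attempts-in-5-minutes behavior, so the exact numbers may differ:
[postfix-auth]
enabled  = true
port     = smtp,ssmtp,submission
filter   = postfix-auth
logpath  = /var/log/mail.log
maxretry = 2
findtime = 300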
Wednesday, March 22, 2017
Some Fail2ban Success
I've been playing with Fail2ban jail configurations since the last post and I think I've got my setup running close to perfect.
In my last post, I mentioned that I wanted Fail2ban to block non-ssh traffic. This was difficult to get working because there aren't all that many explanations on the inner workings of this tool. The readmes aren't exactly descriptive. With a lot of web searches I got things working.
The jail list shows that I've enabled the following filters:
root@linode:/var/log# fail2ban-client status
Status
|- Number of jail: 15
`- Jail list: apache, apache-multiport, apache-noscript, apache-overflows, courier-auth, courier-smtp, dropbear, mysqld-auth, php-url-fopen, postfix, postfix-sasl, sasl, ssh-ddos, sshd, xinetd-fail
Of them, I've seen traffic blocked by apache-noscript, apache-overflows, ssh-ddos, and sshd.
The rest of the filters have not captured any logs, but that just means conditions haven't been met to block/log. In fact, I've only seen one apache-overflows alert trigger.
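To check whether any one of these jails has caught or banned anything, I can ask fail2ban-client for that specific jail's status (output omitted here):
root@linode:/var/log# fail2ban-client status sshd
That reports the currently failed and total failed counters for the jail, plus the list of banned IPs.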
What I've been doing is trying to correlate the Fail2ban log entries to the service logs (i.e., an alert is generated against the apache-noscript filter and I grep the Apache logs for the IP to see what occurred).
Here's an example Fail2ban alert:
root@linode:/var/log# cat /var/log/fail2ban.log | grep 'script' | grep 'Ban'
2017-03-23 00:00:00,322 fail2ban.actions [26381]: NOTICE [apache-noscript] Ban 195.154.211.207
Here are the Apache log entries for that IP:
root@linode:/var/log# cat apache2/access.log | grep 195.154.211.207
195.154.211.207 - - [22/Mar/2017:18:13:56 +0000] "GET //wp-includes/registration-functions.php HTTP/1.1" 500 185 "-" "Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko"
195.154.211.207 - - [22/Mar/2017:23:59:59 +0000] "GET //wall_login.php?login=cmd HTTP/1.1" 404 510 "-" "Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko"
Here's how the apache-noscript section looks within my jail.local file:
[apache-noscript]
enabled = true
port = http,https
filter = apache-noscript
logpath = /var/log/apache2/error.log
maxretry = 1
findtime = 60
bantime = -1
You see two log entries. In this case, the jail is looking for more than one violation in a 60-second timeframe. Violators are banned indefinitely.
The logs look hokey when compared against the apache-noscript configuration in the jail.local file, but the behavior is correct. At first glance, it looks like the offending IP connected to the Apache server twice within five hours and was banned at midnight on the second attempt. That's not what happened; the logs are deceiving. The attacks (counted by maxretry) must occur within the findtime window. Since maxretry is 1 and findtime is 60, the ban occurred when the offending IP made a consecutive attempt within 60 seconds (at midnight). Apache only logged the first of those attempts (the one at midnight); after the second attempt, the ban was in place before Apache could log it.
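If I want to double-check a ban at the firewall level, or lift one manually, something like this works (a sketch; the iptables chain that Fail2ban creates is typically named fail2ban-<jail> or f2b-<jail>, depending on the version):
iptables -L -n | grep 195.154.211.207
fail2ban-client set apache-noscript unbanip 195.154.211.207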
The ssh-ddos filter discovers distributed attacks relating to brute-forcing of SSH connections. There are also many other filters relating to ssh but they're pretty much redundant in that they block the same activity, so if I have several of them enabled, I end up with redundant alerts in my log file. I've turned off the ones that generate duplicate alerts.
I also need to back up my configuration files so that I don't have to redo all this experimenting and tuning if I lose them later and have to reinstall Fail2ban. That would suck.
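Something as simple as archiving the Fail2ban configuration directory would do (the destination path here is arbitrary):
tar czf ~/fail2ban-config-$(date +%F).tar.gz /etc/fail2ban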
Labels: apache, apache-noscript, DDoS, Fail2ban, findtime, jail.local, maxretry, ssh
Thursday, January 12, 2017
Ubuntu 16.04, Fail2ban and Postfix...Ugh...
So, I've been trying to get Fail2ban working with Postfix.
It has been a bit of a hassle and I'm still not sure if I've got it working properly.
First, when I edit jail.conf to enable the postfix configuration, Fail2ban stops working as soon as I add a ports listing (there's a sketch of the usual format below).
Second, I've got it running without errors, but I can see that Fail2ban isn't blocking incoming brute-force attempts on Postfix. I can see the attacks happening in the mail logs but can't see Fail2ban blocking them. The Postfix jail does show up when I run "fail2ban-client status".
I've a crapload of studying up to do, as I just found the man pages for fail2ban-client.
I need to configure for FTP and HTTP as well. SSH is already done.
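For reference, a ports listing in a jail section is normally just a comma-separated list of service names or port numbers. This is a sketch of the standard format, not my exact (still broken) config, and the service names are assumptions:
[postfix]
enabled = true
port    = smtp,ssmtp
filter  = postfix
logpath = /var/log/mail.log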
UPDATE (1/15/2017) - I now have Fail2ban working with more than just SSH. I'm running it to monitor Apache and Xinetd, as well as MySQL and php-url-fopen attacks. But I'm still struggling with getting it to track Postfix brute-forcing attempts.