The 'S' Files
This is an online log of my Slackware experiences. Be aware that I'm also using this blog to cover basic and intermediate security issues that may not pertain to Slackware. This is my way of consolidating blogs (I have several of them).
Friday, November 17, 2017
Some Goodness Not Related To Linux
In my spare time, I game on my gaming PC. Once upon a time, I had a gaming laptop. It's not been used for close to 2 years, as the graphics card went bad.
This particular system is an Alienware M17x R3. I was torn between getting the same graphics card (the GTX 580M, a dedicated mobile graphics card) or upgrading to something like the 680M or 780M.
It's taken me this long to decide. I went with the 580M, knowing that this card will probably also go bad within 1-2 years (these cards don't tend to last long). Why the 580M? Because I didn't feel like hacking the laptop to get the 680M or above to work. They aren't plug-and-play and require some effort to get working. I wanted no fuss. Plus, the 580M is now a lot cheaper...I got this one for $125, whereas 2 years ago they were running twice that.
The card I bought was still in its original packaging (i.e., it was new).
It arrived last night and I spent most of my evening installing it and then struggling with the laptop. There was another problem I had trouble fixing: the damned battery was drained, and it appears that this system throttles down when the battery isn't working. The machine was very slow, and at first I thought I had grounded something inside the system while I had it open.
I studied up in my spare time at work tonight, got home, removed the battery, rebooted, and it acted like a NEW system! Before that, it refused to install the many system patches waiting for it and wouldn't auto-update programs such as Steam and Origin.
I'm glad I decided to try that first, as I was almost ready to try to re-install the OS.
A new battery is on the way.
I will attempt to game on this system this weekend. I have it tethered to my 27" iMac (using the iMac as an extra monitor, a neat trick that Macs can do).
The card appears to be working well. If it dies within a year, I'll consider upgrading, as some of the newer cards last longer. If it lasts two years, I'll consider buying another 580M.
At some point, I should consider running Linux and using the Steam Linux client.
Labels:
Alienware,
graphics card,
GTX 580M,
linux,
M17x,
Nvidia,
R3,
Steam,
video card
Wednesday, March 29, 2017
More Postfix Success
I've been delving into why I don't see Postfix bans in my logs. I think it has to do with the filters that came with my install of Fail2ban...they don't work for Ubuntu.
I looked at my logs and saw a ton of bruteforce attempts against the SMTP service, so I know for a fact that Fail2ban should be blocking these attempts.
I found this page and wanted to see if its filter's regex would work on my server, so I created the filter based on what was on that page and then edited my jail.local file. I restarted Fail2ban, but I also wanted to confirm the filter works, so I ran this:
--------------------------------------------------------
root@linode:/var/log# fail2ban-regex /var/log/mail.log /etc/fail2ban/filter.d/postfix-auth.conf
Running tests
=============
Use failregex filter file : postfix-auth, basedir: /etc/fail2ban
Use log file : /var/log/mail.log
Use encoding : UTF-8
Results
=======
Failregex: 1526 total
|- #) [# of hits] regular expression
| 1) [1526] ^\s*(<[^.]+\.[^.]+>)?\s*(?:\S+ )?(?:kernel: \[ *\d+\.\d+\] )?(?:@vserver_\S+ )?(?:(?:\[\d+\])?:\s+[\[\(]?postfix/smtpd(?:\(\S+\))?[\]\)]?:?|[\[\(]?postfix/smtpd(?:\(\S+\))?[\]\)]?:?(?:\[\d+\])?:?)?\s(?:\[ID \d+ \S+\])?\s*lost connection after .*\[\]$
`-
Ignoreregex: 0 total
Date template hits:
|- [# of hits] date format
| [10563] (?:DAY )?MON Day 24hour:Minute:Second(?:\.Microseconds)?(?: Year)?
`-
Lines: 10563 lines, 0 ignored, 1526 matched, 9037 missed [processed in 0.88 sec]
Missed line(s): too many to print. Use --print-all-missed to print all 9037 lines
--------------------------------------------------------
This time I had matches. The last few days of trying this with other filters, or editing the canned ones, had netted me nothing.
Then I checked my server's fail2ban logs:
--------------------------------------------------------
2017-03-29 21:53:56,987 fail2ban.filter [12346]: INFO [postfix-auth] Found 156.67.106.244
2017-03-29 21:53:57,037 fail2ban.filter [12346]: INFO [postfix-auth] Found 156.67.106.244
2017-03-29 21:53:57,971 fail2ban.actions [12346]: NOTICE [postfix-auth] Ban 156.67.106.244
2017-03-29 22:03:56,413 fail2ban.filter [12346]: INFO [postfix-auth] Found 105.112.3.167
2017-03-29 22:07:03,231 fail2ban.filter [12346]: INFO [postfix-auth] Found 220.178.1.34
2017-03-29 22:12:31,667 fail2ban.filter [12346]: INFO [postfix-auth] Found 66.23.212.157
--------------------------------------------------------
So it is working.
Why do I want to use Fail2ban to block bad traffic going to my SMTP service? Well, when I checked one of those IPs above, just to see how noisy it was in the log files, this is what I saw:
root@linode:/var/log# grep 156.67.106.244 mail.log | grep connect | wc -l
1741
root@linode:/var/log# grep 156.67.106.244 mail.log.1 | grep connect | wc -l
1333
root@linode:/var/log# zgrep 156.67.106.244 mail.log.*.gz | grep connect | wc -l
10931
A grand total of 14,005 connection attempts between the 13th and 29th of March. The jail is configured to block if more than 2 attempts occur in a 5-minute span, so it should now block most of these. I'll watch to see if those parameters work sufficiently, but at least I have a working filter now!
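For reference, here's a minimal sketch of what the corresponding jail entry in jail.local could look like, assuming the filter file is /etc/fail2ban/filter.d/postfix-auth.conf (the one used in the fail2ban-regex test above). The maxretry/findtime values map to the "2 attempts in 5 minutes" behavior; the port list and bantime are just illustrative:
--------------------------------------------------------
[postfix-auth]
enabled  = true
port     = smtp,submission
filter   = postfix-auth
logpath  = /var/log/mail.log
maxretry = 2
findtime = 300
bantime  = 3600
--------------------------------------------------------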
Wednesday, March 22, 2017
Some Fail2ban Success
I've been playing with Fail2ban jail configurations since the last post and I think I've got my setup running close to perfect.
In my last post, I mentioned that I wanted Fail2ban to block non-SSH traffic. This was difficult to get working because there aren't many good explanations of the inner workings of this tool, and the readmes aren't exactly descriptive. With a lot of web searching, I eventually got things working.
The jail list shows that I've enabled the following filters:
root@linode:/var/log# fail2ban-client status
Status
|- Number of jail: 15
`- Jail list: apache, apache-multiport, apache-noscript, apache-overflows, courier-auth, courier-smtp, dropbear, mysqld-auth, php-url-fopen, postfix, postfix-sasl, sasl, ssh-ddos, sshd, xinetd-fail
Of those, I've seen traffic blocked by apache-noscript, apache-overflows, ssh-ddos, and sshd.
The rest of the filters haven't generated any log entries, but that just means the conditions to block/log haven't been met. In fact, I've only seen one apache-overflows alert trigger.
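A quick way to check an individual jail, rather than grepping the logs, is to ask fail2ban-client for that jail's status; it prints the jail's currently/total failed counts, the log file(s) it's watching, and the currently/total banned IPs. For example (using one of the jail names from the list above):
root@linode:/var/log# fail2ban-client status apache-noscript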
What I've been doing is trying to correlate the Fail2ban log entries with the service logs (i.e., when an alert is generated against the apache-noscript filter, I grep the Apache logs for the IP to see what occurred).
Here's an example Fail2ban alert:
root@linode:/var/log# cat /var/log/fail2ban.log | grep 'script' | grep 'Ban'
2017-03-23 00:00:00,322 fail2ban.actions [26381]: NOTICE [apache-noscript] Ban 195.154.211.207
Here are the Apache log entries for that IP:
root@linode:/var/log# cat apache2/access.log | grep 195.154.211.207
195.154.211.207 - - [22/Mar/2017:18:13:56 +0000] "GET //wp-includes/registration-functions.php HTTP/1.1" 500 185 "-" "Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko"
195.154.211.207 - - [22/Mar/2017:23:59:59 +0000] "GET //wall_login.php?login=cmd HTTP/1.1" 404 510 "-" "Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko"
Here's how the apache-noscript section looks within my jail.local file:
[apache-noscript]
enabled = true
port = http,https
filter = apache-noscript
logpath = /var/log/apache2/error.log
maxretry = 1
findtime = 60
bantime = -1
You see two log entries. In this case, the jail is looking for more than one violation in a 60-second timeframe, and violators are banned indefinitely (bantime = -1).
The logs look hokey when compared against the apache-noscript configuration in jail.local, but they're correct. At first glance it looks like the offending IP connected to the Apache server twice within five hours and was banned at midnight on the second attempt. That's not what happened; the logs are deceiving. The attempts (counted by maxretry) must occur within the findtime window. Since maxretry is 1 and findtime is 60, the ban occurred when the offending IP made a consecutive attempt within 60 seconds (at midnight). Apache only logged the first attempt of that pair (at 23:59:59); by the time the follow-up attempt arrived, the ban was already in place, so Apache never logged it.
The ssh-ddos filter discovers distributed attacks relating to brute-forcing of SSH connections. There are also many other filters relating to ssh but they're pretty much redundant in that they block the same activity, so if I have several of them enabled, I end up with redundant alerts in my log file. I've turned off the ones that generate duplicate alerts.
I also need to back up my configuration files so that I don't have to redo all this experimenting and tuning if I lose them later and have to reinstall Fail2ban. That would suck.
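Something as simple as the following would cover it (a sketch; the destination path is just an example):
root@linode:~# tar czf /root/fail2ban-backup-$(date +%F).tar.gz /etc/fail2ban/jail.local /etc/fail2ban/filter.d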
Labels:
apache,
apache-noscript,
DDoS,
Fail2ban,
findtime,
jail.local,
maxretry,
ssh
Thursday, January 12, 2017
Ubuntu 16.04, Fail2ban and Postfix...Ugh...
So, I've been trying to get Fail2ban working with Postfix.
It has been a bit of a hassle and I'm still not sure if I've got it working properly.
First, when I edit jail.conf to enable the postfix configuration, Fail2ban stops working as soon as I add a ports listing.
Second, I've got it running without errors, but Fail2ban isn't blocking incoming brute-forcing attempts on Postfix. I can see the attacks happening in the mail logs, but I don't see Fail2ban blocking them, even though the Postfix jail shows up when I run "fail2ban-client status".
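For reference, a postfix jail entry with an explicit port listing generally looks something like this in jail.local (a sketch; the port names and logpath here are illustrative, not necessarily what my install ships with):
[postfix]
enabled  = true
port     = smtp,465,submission
filter   = postfix
logpath  = /var/log/mail.log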
I have a crapload of studying to do, as I just found the man pages for fail2ban-client.
I need to configure for FTP and HTTP as well. SSH is already done.
UPDATE (1/15/2017) - I now have Fail2ban working with more than just SSH. I'm running it to monitor Apache and Xinetd, as well as MySQL and php-url-fopen attacks. But I'm still struggling to get it to track Postfix brute-forcing attempts.
Saturday, November 26, 2016
Slackware Box Stopped Working - Upgraded It and Now Slackware Won't Run
A few months ago, my Slackware box died. It would no longer boot up (no error messages...no POST beeps, no BIOS bootup screen). The system fans wouldn't even activate as they normally would.
I thought it was the power supply, so I replaced it. Afterward, the system would begin the bootup process, I could hear the fans, but still couldn't see the BIOS bootup messages or access the setup screen. I then replaced the motherboard.
It was previously running a Pentium D 820 on an Intel-based motherboard. I replaced it with a spare, an Asus M4N98TD EVO running an AMD Phenom II X6 1100T.
After the motherboard swap, I was able to get into the BIOS, but the system would no longer boot its instance of Slackware (v14.1). It would attempt to boot Slackware but would run into a kernel panic:
slackware 14.1 kernel panic - not syncing: VFS: Unable to mount root fs on unknown block (8,3)
Instead of troubleshooting, I figured that I'd just do an install on that same system's spare hard drive. I installed onto that drive and ran into the same message.
I then wiped both drives and focused on installing on just one drive. I did this twice. I ended up with the same error message both times.
I then decided to do some research, as this was something I had never experienced before.
This page describes what I'm experiencing. I'll highlight the relevant details:
In case your kernel does not include the driver for your root filesystem, or a driver for your SATA bus, or other stuff that is only built as modules, your kernel will panic if it boots and can not access the necessary disks, partitions and/or files. Typically, this looks like:
VFS: Cannot open root device "802" or unknown-block (8,2)
Please append a correct "root=" boot option
Kernel Panic-not syncing: VFS: unable to mount root fs on unknown block(8,2)
and this means you will have to build an initrd or "Initial Ram Disk" containing the required modules. The location of the initrd is then added in the appropriate section of /etc/lilo.conf so that the kernel can find it when it boots, and is able to load the drivers it needs to access your disks.
The problem is, I can't follow the described steps because the system can't mount the drive, so I can't use the necessary tools to build an initrd, and can't edit the /etc/lilo.conf file (it's on the partition that won't mount). mkinitrd is not on the install/rescue disk (well, it's not accessible as a command).
I'm to the point where I'm about to ask for assistance at LQ.org, but I will first try running Slackware 14.1 in VMware to see if I can build the files and put them on the partition I can access on the physical Slackware machine. I'd still run into the problem of editing /etc/lilo.conf, though (that file resides on the / partition, which can't be accessed until I fix the issue). Or I can find a Slackware-based live CD that has the proper files and drivers to mount the partitions and let me make the needed fixes.
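Another angle worth noting: the installer's kernel can usually mount the installed root partition even when the installed kernel can't, so it should be possible to chroot into the installation and run its own mkinitrd and lilo from inside the chroot (which sidesteps mkinitrd being unavailable on the rescue disk). A rough sketch, where the root partition, kernel version, and filesystem are only examples:
# from the install/rescue disk's shell
mount /dev/sda3 /mnt
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt /bin/bash
mkinitrd -c -k 3.10.17 -f ext4 -r /dev/sda3
# add an "initrd = /boot/initrd.gz" line to /etc/lilo.conf, then rerun:
lilo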
Hopefully I'll be able to fix this issue without too much hoop-jumping.
UPDATE: It is working now. It's been up the last few days with no load. I'm still trying to find out why it wasn't working, but it certainly has to do with the partition layout I opted to use, which is weird because I was using the same partition scheme I've used for years (a /boot partition, a swap partition, and a / partition). I ended up using just a swap partition and a / partition (no dedicated /boot, since it appears that may have been the issue...I may be able to add it after the fact).
Labels:
docs.slackware.com,
initrd,
kernel panic,
LQ.org,
mkinitrd,
Slackware
Tuesday, November 24, 2015
IPTables and SSH - Resetting Brute Force Traffic, UPDATE
I showed how to determine which IPs were brute-forcing in my last blog entry, and how to block the majority of that traffic (the posts are here and here).
In checking my logs over the last week, I see some activity that's making it through the firewall. To be honest, I doubt I'll be able to filter ALL the brute-forcing using the rules I currently have in place. I did tweak them a bit, changing the 4th and 5th rules' threshold to 60 seconds (vs 30 seconds). It helped some, but then I saw one particular IP blast through the firewall's ruleset as if it was designed to go low and slow.
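In practice, that tweak just means bumping --seconds from 30 to 60 on the rate-check rules from my earlier post; one of them, shown here as an example:
-A INPUT -i eth0 -p tcp -m tcp --dport 22 -m recent --rcheck --seconds 60 --hitcount 4 --rttl --name SSH --rsource -j REJECT --reject-with tcp-reset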
Here are the IPs that have been bruteforcing port 22 this week, as well as their hit counts:
root@linode:~/ssh_brute_force_logs# sed 's/S.*=//' combined1 | uniq -c
2 183.60.48.25
2 162.13.76.111
2 183.60.48.25
2 113.108.21.16
1334 221.231.139.206
2 183.60.48.25
1 217.91.114.240
2 113.108.21.16
2 207.46.140.7
1 218.4.117.26
2 113.108.21.16
8 146.88.41.119
2 113.108.21.16
1 218.60.99.51
2 113.108.21.16
1 187.120.217.121
1 113.108.21.16
1 43.229.53.49
2 113.108.21.16
2 43.229.53.49
6 61.146.150.209
1 94.23.60.117
5 61.146.150.209
2 113.108.21.16
1 179.48.253.43
2 51.254.137.164
1 104.20.64.58
1 51.254.137.164
1 178.187.137.68
2 113.108.21.16
39 140.115.36.207
2 113.108.21.16
3 119.90.18.7
2 113.108.21.16
1 121.43.169.81
1 113.108.21.16
1 203.129.217.5
1 113.108.21.16
1 106.57.172.14
2 14.160.13.174
2 113.108.21.16
2 119.90.18.7
IP 221.231.139.206 was the most prevalent by far, with IP 140.115.36.207 coming in second.
I'm to the point where I should create a script that would find any IP with a hit count of, say, 1000 or more and block it. Or I can just use open-source products like Fail2ban (remember, DenyHosts isn't working because tcpwrapper support isn't compiled in on Ubuntu 14.04 LTS). So I've installed Fail2ban and will watch how it blocks things with the default settings before making changes to the configuration. The cool thing about Fail2ban is that it will watch and block more than just SSH activity.
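If I go the script route, it would amount to aggregating the per-IP hit counts and feeding anything over the threshold to iptables. A minimal sketch, reusing the combined1 extract from above (the threshold and the DROP action are illustrative):
#!/bin/sh
# count hits per source IP, then block any IP seen more than THRESHOLD times
THRESHOLD=1000
sed 's/S.*=//' combined1 | sort | uniq -c | awk -v t="$THRESHOLD" '$1 > t {print $2}' |
while read ip; do
    iptables -A INPUT -s "$ip" -p tcp --dport 22 -j DROP
done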
Am I worrying too much about this? Some would say YES! I'm doing this for learning purposes and also to get a better understanding of how such brute-forcing is evolving, but I don't really want to deal with a box that has been hacked, either. Every admin has the responsibility of doing their utmost to ensure their machines aren't hacked...that means being proactive in placing defenses to deter or slow down attacks.
UPDATE:
Already, I see the following in the fail2ban logs:
2015-11-24 22:33:10,453 fail2ban.actions: WARNING [ssh] Ban 195.154.46.203
2015-11-24 22:33:10,458 fail2ban.actions: WARNING [ssh] Ban 59.47.0.157
2015-11-24 22:33:10,462 fail2ban.actions: WARNING [ssh] Ban 43.229.53.49
2015-11-24 22:33:10,466 fail2ban.actions: WARNING [ssh] Ban 43.229.53.87
2015-11-24 22:43:11,085 fail2ban.actions: WARNING [ssh] Unban 195.154.46.203
2015-11-24 22:43:11,090 fail2ban.actions: WARNING [ssh] Unban 59.47.0.157
2015-11-24 22:43:11,094 fail2ban.actions: WARNING [ssh] Unban 43.229.53.49
2015-11-24 22:43:11,098 fail2ban.actions: WARNING [ssh] Unban 43.229.53.87
Fail2ban banned and then unbanned the IPs 10 minutes later (default settings).
What I need to study is how Fail2ban is implementing the bans. Also, do I want to outright permanently ban these IPs...is that even feasible? We'll see.
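Permanent bans are at least configurable: bantime in jail.local controls how long a ban lasts. The stock default is 600 seconds (the 10-minute ban/unban seen above), and a negative value keeps a ban in place indefinitely. A sketch, with the 7-day value I later settled on:
[DEFAULT]
# default is 600 (10 minutes); a negative value means the ban is never lifted
bantime = 604800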
UPDATE 2 - 12/24/2016:
I checked in on things, looking at today's bans, since I've been out of touch with the server and Linux in general (I've been doing work-related things, but nothing close to this type of stuff). I didn't even remember how long Fail2ban was banning for, but I see I changed the default from 10 minutes to 7 days. I was too lazy to comb through the logs, so I just picked an IP from the ban entries and grepped for it: I saw it banned and then unbanned 7 days later.
The logs go as far back as 11/20. The trend seems to be in the mid-teens for November, although I saw several spikes into the mid-twenties. I'm just checking to see what the norm is, so that as I get back into the swing of things I have a baseline to compare against. It would be nice if I could archive these files somehow.
Labels:
brute force,
Fail2ban,
IPTables,
ssh,
tcpwrappers,
Ubuntu
Tuesday, November 17, 2015
IPTables and SSH - Resetting Brute Force Traffic
In my prior blog post, I described an issue with brute force attempts against my Postfix server. Because of the weird way xinetd was handling services, and because I didn't want to spend an inordinate amount of time tuning xinetd's tcpwrapper functionality, I uninstalled xinetd. I will continue to rely on Postfix to reject the traffic.
That does me no good when it comes to SSH, though. With Slackware, I rely on either tcpwrappers or packages that leverage tcpwrappers. This works well for blocking SSH, but I can't use that method on my Ubuntu server without substantial rebuilding (building from source, which I want to avoid).
My solution? Leverage IPtables.
I wanted to use a different method than maintaining block lists. They work but I wanted to learn something new. I found this.
I added the following rules to the firewall:
-A INPUT -i eth0 -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name SSH --rsource
-A INPUT -i eth0 -p tcp -m tcp --dport 22 -m recent --rcheck --seconds 30 --hitcount 4 --rttl --name SSH --rsource -j REJECT --reject-with tcp-reset
-A INPUT -i eth0 -p tcp -m tcp --dport 22 -m recent --rcheck --seconds 30 --hitcount 3 --rttl --name SSH --rsource -j LOG --log-prefix "SSH brute force "
-A INPUT -i eth0 -p tcp -m tcp --dport 22 -m recent --update --seconds 30 --hitcount 3 --rttl --name SSH --rsource -j REJECT --reject-with tcp-reset
-A INPUT -i eth0 -p tcp -m tcp -s kangaroo.pocket.net --dport 22 -j ACCEPT
The first rule tells the system: TCP packets are going to come in that will attempt to establish an SSH connection. Mark them as SSH. Pay attention to the source of the packet.
The second rule says: If a packet attempting to establish an SSH connection comes, and it's the fourth packet to come from the same source in thirty seconds, just reject it with prejudice and stop thinking about it.
The third and fourth rules mean: If an SSH connection packet comes in, and it's the third attempt from the same guy in thirty seconds, log it to the system log once, then immediately reject it and forget about it.
The 5th rule allows SSH traffic from a certain domain to pass into the environment.
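As a side note, the recent module exposes its tracked source list through procfs, so the SSH list these rules build can be inspected directly; each entry shows the source IP along with its recent hit timestamps (the path may be /proc/net/ipt_recent/SSH on older kernels):
root@linode:~# cat /proc/net/xt_recent/SSH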
It has cut down my syslogs significantly. I'm pretty sure the firewall isn't suffering, because all it's doing is tracking the source IPs that are coming in on destination port 22. I've not noticed the server suffering under any significant load. Some IPs are still making it through, but I'm counting between 10-20 IP entries in a 24 hour period. I can tell they're making it through because the traffic is hitting my clean-up rule. I've had 8 make it through all the rules before being blocked by the clean-up rule.
root@linode:/var/log# grep -i 'dpt=22' syslog | grep -i 'clean-up rule'
Nov 17 07:03:20 linode kernel: Clean-up Rule - BLOCKED: IN=eth0 OUT= MAC=fe:fd:40:3e:e7:dc:84:78:ac:0d:a6:41:08:00 SRC=113.108.21.16 DST=xxx.xxx.xxx.xxx LEN=40 TOS=0x00 PREC=0x00 TTL=52 ID=0 DF PROTO=TCP SPT=32204 DPT=22 WINDOW=0 RES=0x00 RST URGP=0
Nov 17 07:05:13 linode kernel: Clean-up Rule - BLOCKED: IN=eth0 OUT= MAC=fe:fd:40:3e:e7:dc:84:78:ac:0d:79:c1:08:00 SRC=203.129.217.5 DST=xxx.xxx.xxx.xxx LEN=40 TOS=0x00 PREC=0x00 TTL=51 ID=0 DF PROTO=TCP SPT=46595 DPT=22 WINDOW=0 RES=0x00 RST URGP=0
Nov 17 07:10:17 linode kernel: Clean-up Rule - BLOCKED: IN=eth0 OUT= MAC=fe:fd:40:3e:e7:dc:84:78:ac:0d:a6:41:08:00 SRC=113.108.21.16 DST=xxx.xxx.xxx.xxx LEN=40 TOS=0x00 PREC=0x00 TTL=52 ID=0 DF PROTO=TCP SPT=32203 DPT=22 WINDOW=0 RES=0x00 RST URGP=0
Nov 17 11:19:15 linode kernel: Clean-up Rule - BLOCKED: IN=eth0 OUT= MAC=fe:fd:40:3e:e7:dc:84:78:ac:0d:a6:41:08:00 SRC=106.57.172.14 DST=xxx.xxx.xxx.xxx LEN=40 TOS=0x00 PREC=0x00 TTL=52 ID=0 DF PROTO=TCP SPT=49995 DPT=22 WINDOW=0 RES=0x00 RST URGP=0
Nov 17 16:55:46 linode kernel: Clean-up Rule - BLOCKED: IN=eth0 OUT= MAC=fe:fd:40:3e:e7:dc:84:78:ac:0d:a6:41:08:00 SRC=14.160.13.174 DST=xxx.xxx.xxx.xxx LEN=40 TOS=0x00 PREC=0x00 TTL=116 ID=25601 DF PROTO=TCP SPT=16895 DPT=22 WINDOW=16591 RES=0x00 ACK FIN URGP=0
Nov 17 16:55:46 linode kernel: Clean-up Rule - BLOCKED: IN=eth0 OUT= MAC=fe:fd:40:3e:e7:dc:84:78:ac:0d:a6:41:08:00 SRC=14.160.13.174 DST=xxx.xxx.xxx.xxx LEN=140 TOS=0x00 PREC=0x00 TTL=116 ID=25772 DF PROTO=TCP SPT=16895 DPT=22 WINDOW=16591 RES=0x00 ACK PSH FIN URGP=0
Nov 17 19:17:40 linode kernel: Clean-up Rule - BLOCKED: IN=eth0 OUT= MAC=fe:fd:40:3e:e7:dc:84:78:ac:0d:79:c1:08:00 SRC=113.108.21.16 DST=xxx.xxx.xxx.xxx LEN=40 TOS=0x00 PREC=0x00 TTL=52 ID=0 DF PROTO=TCP SPT=32205 DPT=22 WINDOW=0 RES=0x00 RST URGP=0
Nov 17 19:24:37 linode kernel: Clean-up Rule - BLOCKED: IN=eth0 OUT= MAC=fe:fd:40:3e:e7:dc:84:78:ac:0d:79:c1:08:00 SRC=113.108.21.16 DST=xxx.xxx.xxx.xxx LEN=40 TOS=0x00 PREC=0x00 TTL=52 ID=0 DF PROTO=TCP SPT=32202 DPT=22 WINDOW=0 RES=0x00 RST URGP=0
root@linode:/var/log# grep -i 'dpt=22' syslog | grep -i 'clean-up rule' | wc -l
8
root@linode:/var/log#