Tuesday, November 24, 2015

IPTables and SSH - Resetting Brute Force Traffic, UPDATE

In my last blog entry, I showed how to determine which IPs were brute-forcing my server and how to block the majority of that traffic (the posts are here and here).

In checking my logs over the last week, I see some activity that's making it through the firewall.  To be honest, I doubt I'll be able to filter ALL the brute-forcing using the rules I currently have in place.  I did tweak them a bit, changing the 4th and 5th rules' threshold to 60 seconds (vs. 30 seconds).  It helped some, but then I saw one particular IP blast through the firewall's ruleset as if it were designed to go low and slow.

Here are the IPs that have been bruteforcing port 22 this week, as well as their hit counts:
root@linode:~/ssh_brute_force_logs# sed 's/S.*=//' combined1 | uniq -c
      2 183.60.48.25
      2 162.13.76.111
      2 183.60.48.25
      2 113.108.21.16
   1334 221.231.139.206
      2 183.60.48.25
      1 217.91.114.240
      2 113.108.21.16
      2 207.46.140.7
      1 218.4.117.26
      2 113.108.21.16
      8 146.88.41.119
      2 113.108.21.16
      1 218.60.99.51
      2 113.108.21.16
      1 187.120.217.121
      1 113.108.21.16
      1 43.229.53.49
      2 113.108.21.16
      2 43.229.53.49
      6 61.146.150.209
      1 94.23.60.117
      5 61.146.150.209
      2 113.108.21.16
      1 179.48.253.43
      2 51.254.137.164
      1 104.20.64.58
      1 51.254.137.164
      1 178.187.137.68
      2 113.108.21.16
     39 140.115.36.207
      2 113.108.21.16
      3 119.90.18.7
      2 113.108.21.16
      1 121.43.169.81
      1 113.108.21.16
      1 203.129.217.5
      1 113.108.21.16
      1 106.57.172.14
      2 14.160.13.174
      2 113.108.21.16
      2 119.90.18.7
IP 221.231.139.206 was the most prevalent by far, with IP 140.115.36.207 coming in second.

I'm at the point where I should create a script that would find any IP with a hit count above some threshold (for example, 1000) and block it.  Or I can just use open-source products like fail2ban (remember, denyhosts isn't an option because tcpwrapper support isn't compiled in on Ubuntu 14.04 LTS).  So, I've installed fail2ban and will watch how it blocks things with the default settings before making changes to the configuration.  The cool thing about fail2ban is that it will watch and block more than just SSH activity.
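As a sketch of that threshold script (the `offenders` helper name and the threshold are just my examples; the sed expression mirrors the one I used above and assumes the IP follows the last '=' on each line of the log file):

```shell
# Sketch: print source IPs whose hit count meets a threshold, reading
# log lines like those in 'combined1' above (IP after the last '=').
offenders() {  # usage: offenders THRESHOLD < logfile
    sed 's/S.*=//' | sort | uniq -c | awk -v t="$1" '$1 >= t { print $2 }'
}

# As root, the output could then feed iptables, e.g.:
#   offenders 1000 < combined1 | xargs -r -n1 -I{} iptables -A INPUT -s {} -j DROP
```

Note the sort before uniq -c: without it, the same IP shows up in multiple count lines, as in my output above.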

Am I worrying too much about this?  Some would say, YES!  I'm doing this for learning purposes and to get a better understanding of how such brute-forcing is evolving, but I don't want to deal with a box that has been hacked, either.  Every admin has the responsibility of doing their utmost to ensure their machines aren't compromised...that means being proactive in placing defenses to deter or slow down attacks.

UPDATE:

Already, I see the following in the fail2ban logs:

2015-11-24 22:33:10,453 fail2ban.actions: WARNING [ssh] Ban 195.154.46.203
2015-11-24 22:33:10,458 fail2ban.actions: WARNING [ssh] Ban 59.47.0.157
2015-11-24 22:33:10,462 fail2ban.actions: WARNING [ssh] Ban 43.229.53.49
2015-11-24 22:33:10,466 fail2ban.actions: WARNING [ssh] Ban 43.229.53.87
2015-11-24 22:43:11,085 fail2ban.actions: WARNING [ssh] Unban 195.154.46.203
2015-11-24 22:43:11,090 fail2ban.actions: WARNING [ssh] Unban 59.47.0.157
2015-11-24 22:43:11,094 fail2ban.actions: WARNING [ssh] Unban 43.229.53.49
2015-11-24 22:43:11,098 fail2ban.actions: WARNING [ssh] Unban 43.229.53.87

Fail2ban banned and then unbanned the IPs 10 minutes later (default settings).

What I need to study is how Fail2ban is implementing the bans.  Also, do I want to outright permanently ban these IPs...is that even feasible?  We'll see.

UPDATE 2 - 12/24/2016:

I checked today's bans, since I've been out of touch with the server and Linux in general (I've been doing work-related things, but nothing close to this type of stuff).  I didn't even remember how long Fail2ban was blocking for, but I see I changed the default from 10 minutes to 7 days.  Rather than read through all the logs, I picked an IP from the ban entries and grepped for it: I saw it banned, and saw it unbanned 7 days later.

The logs go back as far as 11/20.  The trend seems to be mid-teens of bans per day for November, although I saw several spikes into the mid-twenties.  I'm just checking to see what the norm is so that, as I get back into the swing of things, I have a baseline to compare against.  It would be nice if I could archive these files somehow.
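For the archiving, something as simple as a date-stamped gzip copy would do (the `archive_log` helper and the paths are hypothetical; Ubuntu's logrotate already rotates /var/log/fail2ban.log, so this is just belt-and-suspenders):

```shell
# Sketch: copy a log into an archive directory, date-stamped and gzipped.
archive_log() {  # usage: archive_log /path/to/log /path/to/archive_dir
    mkdir -p "$2" || return 1
    dest="$2/$(basename "$1").$(date +%Y%m%d).gz"
    gzip -c "$1" > "$dest" && echo "$dest"
}

# e.g. archive_log /var/log/fail2ban.log /root/fail2ban_archive
```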

Tuesday, November 17, 2015

IPTables and SSH - Resetting Brute Force Traffic

In my prior blog post, I described an issue with brute-force attempts against my Postfix server.  Due to the weird way xinetd was handling services, and not wanting to spend an inordinate amount of time tuning xinetd's tcpwrapper functionality, I uninstalled xinetd.  I will continue to rely on Postfix to reject the traffic.

That does me no good when it comes to SSH, though.  With Slackware, I rely on either tcpwrappers or packages that leverage tcpwrappers.  This works well for blocking SSH, but I can't use that method on my Ubuntu server without substantial rebuilding (building from source, which I want to avoid).

My solution?  Leverage IPtables.

I wanted to use a different method than maintaining block lists.  They work but I wanted to learn something new.  I found this.

I added the following rules to the firewall:
-A INPUT -i eth0 -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name SSH --rsource
-A INPUT -i eth0 -p tcp -m tcp --dport 22 -m recent --rcheck --seconds 30 --hitcount 4 --rttl --name SSH --rsource -j REJECT --reject-with tcp-reset
-A INPUT -i eth0 -p tcp -m tcp --dport 22 -m recent --rcheck --seconds 30 --hitcount 3 --rttl --name SSH --rsource -j LOG --log-prefix "SSH brute force "
-A INPUT -i eth0 -p tcp -m tcp --dport 22 -m recent --update --seconds 30 --hitcount 3 --rttl --name SSH --rsource -j REJECT --reject-with tcp-reset
-A INPUT -i eth0 -p tcp -m tcp -s kangaroo.pocket.net --dport 22 -j ACCEPT
The first rule tells the system:

TCP packets are going to come in, that will attempt to establish an SSH connection.  Mark them as SSH.  Pay attention to the source of the packet.

The second rule says:

If a packet attempting to establish an SSH connection comes, and it's the fourth packet to come from the same source in thirty seconds, just reject it with prejudice and stop thinking about it.

The third and fourth rules mean:

If an SSH connection packet comes in, and it's the third attempt from the same source in thirty seconds, log it to the system log once, then immediately reject it and forget about it.

The 5th rule allows SSH traffic from one specific host to pass into the environment.

It has cut down my syslog noise significantly.  I'm pretty sure the firewall isn't suffering, because all it's doing is tracking the source IPs coming in on destination port 22; I've not noticed the server under any significant load.  Some IPs are still making it through, but I'm counting between 10 and 20 IP entries in a 24-hour period.  I can tell they're making it through because the traffic is hitting my clean-up rule.  I've had 8 make it through all the rules before being blocked by the clean-up rule.

root@linode:/var/log# grep -i 'dpt=22' syslog | grep -i 'clean-up rule'

Nov 17 07:03:20 linode kernel: Clean-up Rule - BLOCKED: IN=eth0 OUT= MAC=fe:fd:40:3e:e7:dc:84:78:ac:0d:a6:41:08:00 SRC=113.108.21.16 DST=xxx.xxx.xxx.xxx LEN=40 TOS=0x00 PREC=0x00 TTL=52 ID=0 DF PROTO=TCP SPT=32204 DPT=22 WINDOW=0 RES=0x00 RST URGP=0
Nov 17 07:05:13 linode kernel: Clean-up Rule - BLOCKED: IN=eth0 OUT= MAC=fe:fd:40:3e:e7:dc:84:78:ac:0d:79:c1:08:00 SRC=203.129.217.5 DST=xxx.xxx.xxx.xxx LEN=40 TOS=0x00 PREC=0x00 TTL=51 ID=0 DF PROTO=TCP SPT=46595 DPT=22 WINDOW=0 RES=0x00 RST URGP=0
Nov 17 07:10:17 linode kernel: Clean-up Rule - BLOCKED: IN=eth0 OUT= MAC=fe:fd:40:3e:e7:dc:84:78:ac:0d:a6:41:08:00 SRC=113.108.21.16 DST=xxx.xxx.xxx.xxx LEN=40 TOS=0x00 PREC=0x00 TTL=52 ID=0 DF PROTO=TCP SPT=32203 DPT=22 WINDOW=0 RES=0x00 RST URGP=0
Nov 17 11:19:15 linode kernel: Clean-up Rule - BLOCKED: IN=eth0 OUT= MAC=fe:fd:40:3e:e7:dc:84:78:ac:0d:a6:41:08:00 SRC=106.57.172.14 DST=xxx.xxx.xxx.xxx LEN=40 TOS=0x00 PREC=0x00 TTL=52 ID=0 DF PROTO=TCP SPT=49995 DPT=22 WINDOW=0 RES=0x00 RST URGP=0
Nov 17 16:55:46 linode kernel: Clean-up Rule - BLOCKED: IN=eth0 OUT= MAC=fe:fd:40:3e:e7:dc:84:78:ac:0d:a6:41:08:00 SRC=14.160.13.174 DST=xxx.xxx.xxx.xxx LEN=40 TOS=0x00 PREC=0x00 TTL=116 ID=25601 DF PROTO=TCP SPT=16895 DPT=22 WINDOW=16591 RES=0x00 ACK FIN URGP=0
Nov 17 16:55:46 linode kernel: Clean-up Rule - BLOCKED: IN=eth0 OUT= MAC=fe:fd:40:3e:e7:dc:84:78:ac:0d:a6:41:08:00 SRC=14.160.13.174 DST=xxx.xxx.xxx.xxx LEN=140 TOS=0x00 PREC=0x00 TTL=116 ID=25772 DF PROTO=TCP SPT=16895 DPT=22 WINDOW=16591 RES=0x00 ACK PSH FIN URGP=0
Nov 17 19:17:40 linode kernel: Clean-up Rule - BLOCKED: IN=eth0 OUT= MAC=fe:fd:40:3e:e7:dc:84:78:ac:0d:79:c1:08:00 SRC=113.108.21.16 DST=xxx.xxx.xxx.xxx LEN=40 TOS=0x00 PREC=0x00 TTL=52 ID=0 DF PROTO=TCP SPT=32205 DPT=22 WINDOW=0 RES=0x00 RST URGP=0
Nov 17 19:24:37 linode kernel: Clean-up Rule - BLOCKED: IN=eth0 OUT= MAC=fe:fd:40:3e:e7:dc:84:78:ac:0d:79:c1:08:00 SRC=113.108.21.16 DST=xxx.xxx.xxx.xxx LEN=40 TOS=0x00 PREC=0x00 TTL=52 ID=0 DF PROTO=TCP SPT=32202 DPT=22 WINDOW=0 RES=0x00 RST URGP=0

root@linode:/var/log# grep -i 'dpt=22' syslog | grep -i 'clean-up rule' | wc -l
8
root@linode:/var/log#
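To see at a glance which sources are hitting the clean-up rule most, the same log lines can be tallied per source (a sketch; the `tally_cleanup` helper is my own, and it assumes the "SRC=" field format shown above):

```shell
# Sketch: count clean-up rule hits per source IP from kernel log lines.
tally_cleanup() {  # usage: tally_cleanup < syslog
    grep 'Clean-up Rule' | grep -oE 'SRC=[0-9.]+' | sed 's/SRC=//' | \
        sort | uniq -c | sort -rn
}

# e.g. tally_cleanup < /var/log/syslog
```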

Thursday, November 12, 2015

IPs hammering Postfix

So, I've been lax in checking my logs lately, but decided to check them last night on my Linode server, which is running Ubuntu.

I'm running PSAD but for some reason, it stopped reporting scanning IPs to me. I've fixed that but it won't account for the last few months of PSAD not working.

What I'm seeing in the syslogs are connection attempts to my Postfix mail server. The server appears to have automatically dropped the traffic. I see no indication that iptables tried to block the traffic, but that's probably because Postfix has to be open to the internet.

I can either rely on Postfix to continue to kill these connection attempts or I can add them to the firewall. First, I have to compile a listing of unique IPs (there were many duplicates, as well as other connection information that needed to be removed).

The first thing I did was filter out any log lines not relevant to Postfix and redirect that output to a file. Specifically, I grepped the syslogs for 'disconnect from unknown', which shows every instance of Postfix disconnecting an IP from communication:



root@linode:/var/log# grep 'disconnect from unknown' syslog > /root/postfix_drops

Nov 12 05:44:12 linode postfix/smtpd[15130]: disconnect from unknown[198.206.134.152]
Nov 12 05:46:10 linode postfix/smtpd[15176]: disconnect from unknown[203.157.41.130]
Nov 12 05:57:11 linode postfix/smtpd[15366]: disconnect from unknown[91.205.167.98]
Nov 12 06:03:48 linode postfix/smtpd[15424]: disconnect from unknown[198.206.134.152]
Nov 12 06:05:49 linode postfix/smtpd[15467]: disconnect from unknown[155.208.242.236]
Nov 12 06:17:03 linode postfix/smtpd[15677]: disconnect from unknown[203.122.41.195]
Nov 12 06:17:20 linode postfix/smtpd[15677]: disconnect from unknown[112.175.103.29]
Nov 12 06:21:59 linode postfix/smtpd[15861]: disconnect from unknown[122.154.46.172]
Nov 12 06:22:20 linode postfix/smtpd[15861]: disconnect from unknown[155.208.242.236]
Nov 12 06:23:41 linode postfix/smtpd[15861]: disconnect from unknown[198.206.134.152]
Nov 12 06:35:21 linode postfix/smtpd[16335]: disconnect from unknown[122.154.46.172]
Nov 12 06:48:00 linode postfix/smtpd[16516]: disconnect from unknown[203.157.41.130]


The small snippet above shows that there are repeat offenders.  That'll need to be sorted out, but we'll do that later.  Right now, all I need are the IPs; I have to remove everything else.

I then use the following to remove a good bit of the logs:



root@linode:/var/log# nawk '{print $8}' /root/postfix_drops > /root/postfix_drops_2

The result is:


unknown[91.205.167.98]
unknown[198.206.134.152]
unknown[155.208.242.236]
unknown[203.122.41.195]
unknown[112.175.103.29]
unknown[122.154.46.172]
unknown[155.208.242.236]
unknown[198.206.134.152]
unknown[122.154.46.172]
unknown[203.157.41.130]

Note that every time I make a change to the output, I'm also redirecting the output to a new file.

I then remove all instances of 'unknown' from the file (I had to add spaces between the pointed brackets and the word "unknown"; otherwise neither will render in this post...remove the spaces for the code to work properly):

root@linode:/var/log# sed 's/\< unknown \>//g' /root/postfix_drops_2 > /root/postfix_drops_3

[155.208.242.236]
[203.122.41.195]
[112.175.103.29]
[122.154.46.172]
[155.208.242.236]
[198.206.134.152]
[122.154.46.172]
[203.157.41.130]

All that's left is to remove the brackets:

root@linode:/var/log# sed 's/\[//g;s/\]//g' /root/postfix_drops_3 > /root/postfix_drops_4
203.122.41.195
112.175.103.29
122.154.46.172
155.208.242.236
198.206.134.152
122.154.46.172
203.157.41.130



I then sort the file, keeping only unique IPs:


root@linode:/var/log# sort -u /root/postfix_drops_4
94.156.37.145
94.156.37.146
94.156.37.147
94.156.37.148
94.156.37.149
94.156.37.150

I started out with 394 lines and ended up with 94 unique IPs, across two syslog files.  I still have to do the archived syslog files.  I also saw that many of the 94 IPs are in sequence, which means that the culprits are trying to distribute the attacks over a range of IPs (I've posted about similar attacks when analyzing SSH logs).

The resulting log file is here.

So, now, I can either be happy with how Postfix is handling the traffic or add the IPs to my firewall ruleset.  This was more of an exercise in seeing how quickly I could assess the nature of the traffic in question, and I'm happy with the result (it was simple enough, although I'm out of practice at doing such things).
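The whole multi-file dance above can also be collapsed into one pipeline (the `unique_ips` helper is my own name; same filters as the steps above, just chained with no intermediate files):

```shell
# Sketch: extract unique offending IPs from Postfix 'disconnect' lines
# in one pass -- equivalent to the grep/awk/sed/sort steps above.
unique_ips() {  # usage: unique_ips < syslog
    grep 'disconnect from unknown' | awk '{print $8}' | \
        sed 's/unknown//; s/\[//g; s/\]//g' | sort -u
}

# e.g. unique_ips < /var/log/syslog > /root/postfix_drops_unique
```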

UPDATE:

1. I scrubbed the archived syslog files and the total, including those IPs I already scrubbed, is 959, and when sorted uniquely, there are 301 IPs.

2.  I added the first batch of IPs to /etc/hosts.deny for tcpwrappers to block, but found that this wouldn't work since Postfix wasn't compiled with tcpwrapper support, so I installed xinetd (which can wrap services inside itself with some configuration).  I'll monitor to see if the IPs are blocked; if that doesn't work, I'll add the IPs to the firewall's ruleset.

Saturday, May 16, 2015

Linux - User Account Management, Part II

In my post titled, "Linux - User Account Management, Part I," I talked about how to check a user's account for expiration.  I set a user account to expire on 5/15/2015.  It is now 5/16/2015.  The user account has expired.  When I attempt to log into this account, it shows as expired:
ron@slackbox:~$ su nor
Password:
Your login has expired.  Contact the system administrator.
To re-enable the account, I will use either of the following commands:
usermod -e yyyy-mm-dd username
chage -E yyyy-mm-dd username 
When running the chage command, there's no message or prompt after execution. I'm able to log back in without issue.

Next, we'll learn how to set the number of days until a password change is required.  We'll set the password to expire in 30 days:
ron@slackbox:~$ sudo chage -M 30 nor
ron@slackbox:~$
ron@slackbox:~$ sudo chage -l nor
Last password change                                 : May 10, 2015
Password expires                                     : Jun 09, 2015
Password inactive                                    : never
Account expires                                      : Dec 31, 2015
Minimum number of days between password change       : 0
Maximum number of days between password change       : 30
Number of days of warning before password expires    :  7  
ron@slackbox:~$
We'll check this account again after 30 days and use the 'passwd -u nor' command to re-enable the expired password at that time.



Thursday, May 14, 2015

VENOM Vulnerability (CVE-2015-3456)

For those of you who have Linodes and are curious or concerned about the recent VENOM (CVE-2015-3456) vulnerability, this Linode blog entry describes the vulnerability and why Linode VPSs aren't affected.

Even if you've no Linode, it would behoove you to understand the vulnerability so that you can ask questions of your VPS provider and ensure you're not affected.  Note that this vulnerability affects QEMU, an open-source emulator.  Because it's free (as in source and beer), it's likely to be in widespread use.  Many IT security products use virtualization, and if they're using QEMU, that's a problem; this could be as widespread an issue as Heartbleed was.

I'll be sure to post as I discover more information about this vulnerability.

Tuesday, May 12, 2015

How to configure a user to run privileged commands, and how to switch user accounts...all via the CLI.

How to switch users in Linux?  How to run privileged commands without having to run the command as root (or switch to the root account, which can be dangerous)?

'su' can be used to switch from a regular user to the root user.  You can check to see which user you are by using the 'whoami' command.  Or, you can tell what type of user account you're using by looking at the prompt.  The # prompt is the root account and the $ prompt is a regular user account.

You can also use sudo to run privileged commands if the system admin allows your account to run privileged commands.

One quick way to configure sudo to allow a regular user to run privileged commands is to:

Type visudo, which opens the /etc/sudoers file in vi.  visudo lets you edit the file in a safe fashion: it locks the sudoers file against multiple simultaneous edits and checks for parse errors before saving.  You should find the line "# %wheel ALL=(ALL) ALL".  A # in front of a line means it's commented out: the line is treated as a note and ignored, so the rule is disabled.  To enable it, remove the "#".  Save and quit the edit session by typing ":wq", which means write to file and quit.

Next, type 'vigr', which opens up the /etc/group file in a safe fashion for you to edit.  Find the wheel group within the file and add the account name to the group.  For example, my current wheel group within /etc/group shows the following:

wheel:x:10:root,ron,nor

I added the 'nor' user.  Type ':wq' again, which saves the edit and quits the session.

Now, when you need to run root-level commands, you don't have to use 'su -c' and enter the root password.  You'll still be prompted for a password, but you'll type in the password associated with your regular account.  This keeps the admin from having to share out the system's root password.
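To double-check that the group edit took, the members field of the wheel line can be pulled apart (a sketch; `group_members` is a hypothetical helper, and 'nor' is the account from my example above):

```shell
# Sketch: list the members of an /etc/group-style line (the 4th colon field).
group_members() {  # usage: group_members 'wheel:x:10:root,ron,nor'
    echo "$1" | cut -d: -f4 | tr ',' '\n'
}

# e.g. group_members "$(getent group wheel)" | grep -x nor
```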

Again, these are pretty basic and simple steps unless you're totally not familiar with *nix (these steps can apply to the BSDs or other Unix clones, as well).

Monday, May 11, 2015

Linux - User Account Management, Part I

I've never administered user accounts in Linux.  I know Linux but I'd be lying if I said I knew every facet of it.  I've created accounts and actually gave a fellow Linux user access to my machine once (checking the logs from time to time just to ensure he wasn't doing things he wasn't supposed to do), but I'd never made an account that had an expiration date (nor a password that had an expiration date).  So, yesterday, I created a test account that had an account expiration of May 15, 2015.  I want to be able to unexpire the account once it has expired.

I referenced some commands that would allow me to monitor a user account's status.  'chage' is one of those commands:
ron@slackbox:~$ sudo chage -l nor
Password:
Last password change                                    : May 10, 2015
Password expires                                        : never
Password inactive                                       : never
Account expires                                         : May 15, 2015
Minimum number of days between password change          : 0
Maximum number of days between password change          : 99999
Number of days of warning before password expires       : 7
I'd use the 'usermod' or 'chage' commands to change the expiration date (using sudo):
usermod -e yyyy-mm-dd username
chage -E yyyy-mm-dd username 
I'll use the above commands in a few days just to test.

To check if the password has expired, use the following:
grep 'username' /etc/shadow
The following command would re-enable an expired password:
passwd -u username
I encourage you to read the chage and passwd manual pages for further insight on how to use these two commands.
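When grepping /etc/shadow, the 8th colon-separated field is the account-expiry date, stored as days since the epoch.  Here's a hypothetical helper (`shadow_expiry` is my own name, and it assumes GNU date) to turn that field into a readable date:

```shell
# Sketch: print the account-expiry field of an /etc/shadow-style line
# as YYYY-MM-DD, or "never" if the field is empty.
shadow_expiry() {  # usage: shadow_expiry 'nor:*:16535:0:99999:7::16570:'
    days=$(echo "$1" | cut -d: -f8)
    if [ -z "$days" ]; then
        echo never
    else
        # Convert days-since-epoch to a date (GNU date's @epoch syntax).
        date -u -d "@$((days * 86400))" +%Y-%m-%d
    fi
}

# e.g. shadow_expiry "$(sudo grep '^nor:' /etc/shadow)"
```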

I know these are simple commands and the process itself is simple, but again, I've never done this before and wanted to share what I've learned...someone out there will learn from my experience.

Saturday, May 09, 2015

Installing Slackware 14.1

A while back, I bought a bunch of 500GB SATA drives so that I could experiment with Linux and RAID.  The plan was to use the drives as one gigantic drive.  The setup was somewhat easy to do, but I could never finish the configuration because I kept getting errors when trying to use lilo and grub, so I stopped.

Today, I picked it up again, giving up on RAID and just using a conventional setup.  I'm installing Slackware 14.1, but at first the machine wouldn't boot after installation.  I created a USB boot drive, and for some weird reason it's not readable, although I got no errors when creating it.  I also opted to boot from the MBR and initially had issues getting the system booted, until I looked in the BIOS settings and saw that the system was trying to boot from one of the other, unused disks.  Once I fixed that, it booted up without issue.  I may still try to recreate the USB boot disk, but I'll do that later.

So far I've done the standard things:  created a regular user (immediately), added the regular user to the wheel group, and edited the sudoers file to accept root commands from the wheel group.  The system also has two NICs, and the Slackware install used the motherboard's NIC rather than the extra NIC I installed (not a huge issue, but it had me wondering why I couldn't get an internet connection).

I'm currently using XFCE as a desktop environment.  I wanted to use KDE but for some reason, it's not working.  It's something to look into later.

I've been without Slackware for quite a while (the last 4-5 years).  I'm also not so Slackware-savvy anymore, but I'm familiar enough with installing Slack that I had no issues getting it up and running.  I know that Slack has many new and enhanced features; the challenge is to get familiar again.  I'll do this without relying on IRC (the days of IRCing are over for me).

I did apply some of the tips listed here, though.

Wednesday, April 29, 2015

Wordpress - Zero Day Vulnerability Discovered


http://hackingdude.com/2015/04/29/wordpress-zero-day-vulnerability/

Most of the time, we have reported about WordPress vulnerabilities involving vulnerable plugins, but this time a Finnish security researcher has discovered a critical zero-day vulnerability in the core engine of the WordPress content management system.
I thought I'd post about this since the vulnerability is a bit unusual.  I also thought it was unusual that Wordpress reportedly ignored a previous vulnerability that the researcher reported to them.  Wordpress has a responsibility to its users, and purposefully ignoring such a discovery is wrong, in my opinion.

So, if you administer any Wordpress CMSs, I'd advise you to upgrade to v4.2.1 (I did a few days ago).


Wednesday, January 28, 2015

CVE-2015-0235: Linux and glibc "Ghost" Vulnerability

Wondering about the Linux Ghost vulnerability?

Here's what I received from the US CERT:
The Linux GNU C Library (glibc) versions prior to 2.18 are vulnerable to remote code execution via a vulnerability in the gethostbyname function. Exploitation of this vulnerability may allow a remote attacker to take control of an affected system. Linux distributions employing glibc-2.18 and later are not affected.
US-CERT recommends users and administrators refer to their respective Linux or Unix-based OS vendor(s) for an appropriate patch if affected. Patches are available from Ubuntu and Red Hat. The GNU C Library versions 2.18 and later are also available for experienced users and administrators to implement.
How to determine if your distribution is affected: run "ldd --version".  If a version prior to 2.18 shows in the results, you're vulnerable.
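A sketch of that check, scripted (`version_lt` is a hypothetical helper; it leans on GNU sort -V and assumes ldd's first output line ends with the glibc version, as it does on Ubuntu):

```shell
# Sketch: flag a system as GHOST-vulnerable if its glibc is older than 2.18.
version_lt() {  # true if $1 sorts strictly before $2 in version order
    [ "$1" != "$2" ] && \
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# ldd reports the glibc version on its first line (last word on Ubuntu).
ver=$(ldd --version 2>/dev/null | head -n1 | awk '{print $NF}')
if version_lt "$ver" 2.18; then
    echo "glibc $ver: vulnerable to CVE-2015-0235"
else
    echo "glibc $ver: not affected"
fi
```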

I used this link to check my Ubuntu servers.  Although that resource doesn't cater to Slackware, the checks can still be applied.  I'm not using Slackware as a flavor at the moment, but I'm sure Pat V. is working on a patched version of glibc.  In any case, it would be wise to check your distribution's website for further news about this vulnerability and how to patch your particular flavor of Linux.