
Wednesday, March 22, 2017

Some Fail2ban Success

I've been playing with Fail2ban jail configurations since the last post and I think I've got my setup running close to perfect.

In my last post, I mentioned that I wanted Fail2ban to block non-ssh traffic.  This was difficult to get working because there aren't many explanations of the inner workings of this tool, and the readmes aren't exactly descriptive, but with a lot of web searching I got things working.

The jail list shows that I've enabled the following filters: 

root@linode:/var/log# fail2ban-client status
Status
|- Number of jail:      15
`- Jail list:   apache, apache-multiport, apache-noscript, apache-overflows, courier-auth, courier-smtp, dropbear, mysqld-auth, php-url-fopen, postfix, postfix-sasl, sasl, ssh-ddos, sshd, xinetd-fail

Of those, I've seen traffic blocked by apache-noscript, apache-overflows, ssh-ddos, and sshd.

The rest of the filters have not captured any logs, but that just means conditions haven't been met to block/log.  In fact, I've only seen one apache-overflows alert trigger.

What I've been doing is trying to correlate the Fail2ban log entries with the service logs (i.e., when an alert is generated against the apache-noscript filter, I grep the Apache logs for the offending IP to see what occurred).

Here's an example Fail2ban alert:

root@linode:/var/log# cat /var/log/fail2ban.log | grep 'script' | grep 'Ban'
2017-03-23 00:00:00,322 fail2ban.actions        [26381]: NOTICE  [apache-noscript] Ban 195.154.211.207

Here are the Apache log entries for that IP:

root@linode:/var/log# cat apache2/access.log | grep 195.154.211.207
195.154.211.207 - - [22/Mar/2017:18:13:56 +0000] "GET //wp-includes/registration-functions.php HTTP/1.1" 500 185 "-" "Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko"
195.154.211.207 - - [22/Mar/2017:23:59:59 +0000] "GET //wall_login.php?login=cmd HTTP/1.1" 404 510 "-" "Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko"

Here's how the apache-noscript section looks within my jail.local file:

[apache-noscript]

enabled  = true
port     = http,https
filter   = apache-noscript
logpath  = /var/log/apache2/error.log
maxretry = 1
findtime = 60
bantime  = -1

You see two log entries.  With this configuration, the filter is looking for more than one violation within a 60-second timeframe, and violators are banned indefinitely (bantime = -1).

The logs look hokey when compared against the apache-noscript configuration within the jail.local file, but they're correct.  They make it look like the offending IP connected to the Apache server twice within five hours and was banned at midnight on the second attempt.  That's not what happened; the logs are deceiving.  The attempts (counted by maxretry) must occur within the findtime window.  Since maxretry is 1 and findtime is 60, the ban occurred when the offending IP tried a consecutive attack within 60 seconds (at midnight).  Apache only logged the first of those attempts; the ban was set before Apache could log the second one.
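
If you want to sanity-check which log lines a filter actually matches, fail2ban ships with a fail2ban-regex tool.  A minimal check of the apache-noscript filter against the error log (paths are the stock Debian/Ubuntu locations) looks like this:

root@linode:~# fail2ban-regex /var/log/apache2/error.log /etc/fail2ban/filter.d/apache-noscript.conf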

The ssh-ddos filter catches distributed attacks relating to brute-forcing of SSH connections.  There are also several other ssh-related filters, but they're largely redundant in that they block the same activity, so having several of them enabled left me with duplicate alerts in my log file.  I've turned off the ones that generate duplicates.

I also need to back up my configuration files so that I don't have to re-experiment with and re-tune the setup if I happen to lose them later and have to reinstall Fail2ban.  That would suck.
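
The backup can be as simple as tarring up the config directory (a minimal sketch; /etc/fail2ban is the stock location, and the destination path is just an example):

root@linode:~# tar czf /root/fail2ban-config-$(date +%F).tar.gz /etc/fail2ban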

Tuesday, November 24, 2015

IPTables and SSH - Resetting Brute Force Traffic, UPDATE

I showed how to identify the IPs that were bruteforcing, and how to block the majority of the traffic, in my last two blog entries (the posts are here and here).

In checking my logs over the last week, I see some activity that's making it through the firewall.  To be honest, I doubt I'll be able to filter ALL the bruteforcing using the rules I currently have in place.  I did tweak them a bit, changing the 4th and 5th rules' threshold to 60 seconds (vs. 30 seconds).  It helped some, but then I saw one particular IP blast through the firewall's ruleset as if it were designed to go low and slow.

Here are the IPs that have been bruteforcing port 22 this week, as well as their hit counts:
root@linode:~/ssh_brute_force_logs# sed 's/S.*=//' combined1 | uniq -c
      2 183.60.48.25
      2 162.13.76.111
      2 183.60.48.25
      2 113.108.21.16
   1334 221.231.139.206
      2 183.60.48.25
      1 217.91.114.240
      2 113.108.21.16
      2 207.46.140.7
      1 218.4.117.26
      2 113.108.21.16
      8 146.88.41.119
      2 113.108.21.16
      1 218.60.99.51
      2 113.108.21.16
      1 187.120.217.121
      1 113.108.21.16
      1 43.229.53.49
      2 113.108.21.16
      2 43.229.53.49
      6 61.146.150.209
      1 94.23.60.117
      5 61.146.150.209
      2 113.108.21.16
      1 179.48.253.43
      2 51.254.137.164
      1 104.20.64.58
      1 51.254.137.164
      1 178.187.137.68
      2 113.108.21.16
     39 140.115.36.207
      2 113.108.21.16
      3 119.90.18.7
      2 113.108.21.16
      1 121.43.169.81
      1 113.108.21.16
      1 203.129.217.5
      1 113.108.21.16
      1 106.57.172.14
      2 14.160.13.174
      2 113.108.21.16
      2 119.90.18.7
IP 221.231.139.206 was the most prevalent by far, with IP 140.115.36.207 coming in second.
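
Note that uniq -c only collapses adjacent duplicate lines, which is why IPs like 113.108.21.16 appear multiple times in the output above.  Sorting first gives true per-IP totals:

root@linode:~/ssh_brute_force_logs# sed 's/S.*=//' combined1 | sort | uniq -c | sort -rn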

I'm to the point where I should create a script that would find any IP with a hit count above some threshold, say 1000, and block it.  Or I can just use opensource products like fail2ban (remember, denyhosts isn't an option because tcpwrappers support isn't compiled into the stock daemons on Ubuntu 14.04 LTS).  So, I've installed fail2ban and will watch to see how it blocks things with the default settings before making changes to the configuration.  The cool thing about fail2ban is that it will watch and block more than just SSH activity.
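
The script version would look something like this (an untested sketch against the same combined1 file; the threshold is arbitrary):

#!/bin/bash
# drop any IP seen more than THRESHOLD times in the parsed logs
THRESHOLD=1000
sed 's/S.*=//' combined1 | sort | uniq -c | sort -rn | \
while read count ip; do
    if [ "$count" -gt "$THRESHOLD" ]; then
        iptables -I INPUT -s "$ip" -p tcp --dport 22 -j DROP
    fi
done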

Am I worrying too much about this?  Some would say, YES!  I'm doing this for learning purposes and also to get a better understanding of how such bruteforcing is evolving, but I don't really want to have to deal with a box that has been hacked, either.  Every admin has the responsibility of doing their utmost to ensure their machines aren't hacked...that means the admin needs to be proactive in placing defenses that deter or slow down attacks.

UPDATE:

Already, I see the following in the fail2ban logs:

2015-11-24 22:33:10,453 fail2ban.actions: WARNING [ssh] Ban 195.154.46.203
2015-11-24 22:33:10,458 fail2ban.actions: WARNING [ssh] Ban 59.47.0.157
2015-11-24 22:33:10,462 fail2ban.actions: WARNING [ssh] Ban 43.229.53.49
2015-11-24 22:33:10,466 fail2ban.actions: WARNING [ssh] Ban 43.229.53.87
2015-11-24 22:43:11,085 fail2ban.actions: WARNING [ssh] Unban 195.154.46.203
2015-11-24 22:43:11,090 fail2ban.actions: WARNING [ssh] Unban 59.47.0.157
2015-11-24 22:43:11,094 fail2ban.actions: WARNING [ssh] Unban 43.229.53.49
2015-11-24 22:43:11,098 fail2ban.actions: WARNING [ssh] Unban 43.229.53.87

Fail2ban banned and then unbanned the IPs 10 minutes later (default settings).

What I need to study is how Fail2ban is implementing the bans.  Also, do I want to outright permanently ban these IPs...is that even feasible?  We'll see.

UPDATE 2 - 12/24/2016:

I checked on today's bans since I've been out of touch with the server and Linux in general (been doing work-related things, but nothing close to this type of stuff).  I didn't even remember how long Fail2ban was blocking for, but I see I changed the default from 10 minutes to 7 days.  I was too lazy to read through the logs, so I picked an IP from the ban entries and grepped for it: it was banned, then unbanned 7 days later.
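
For reference, a 7-day ban is set in jail.local like this (a sketch; fail2ban of this vintage wants the bantime in seconds, and 604800 seconds = 7 days):

[DEFAULT]
bantime = 604800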

The logs go as far back as 11/20.  The trend seems to be mid-teens of bans per day for November, although I saw several spikes into the mid-twenties.  I'm just checking to see what the norm is, so that as I begin to get back into the swing of things, I have a baseline to compare against.  It would be nice if I could archive these files somehow.
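
Logrotate could handle the archiving (a sketch of an /etc/logrotate.d/fail2ban entry; the Debian/Ubuntu package usually ships one already, so this may just need the rotate count bumped):

/var/log/fail2ban.log {
    weekly
    rotate 52
    compress
    missingok
    notifempty
    postrotate
        fail2ban-client flushlogs >/dev/null || true
    endscript
}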

Tuesday, November 17, 2015

IPTables and SSH - Resetting Brute Force Traffic

In my prior blog post, I described an issue with brute force attempts against my Postfix server.  Due to the weird way xinetd was handling services, and my not wanting to spend an inordinate amount of time tuning xinetd's tcpwrapper functionality, I uninstalled xinetd.  I will continue to rely on Postfix to reject the traffic.

That does me no good when it comes to SSH, though.  With Slackware, I rely on either tcpwrappers or packages that leverage tcpwrappers.  This works well when it comes to blocking SSH, but I can't use that method on my Ubuntu server without substantial rebuilding (building from source, which I want to avoid).

My solution?  Leverage IPtables.

I wanted to use a different method than maintaining block lists.  They work but I wanted to learn something new.  I found this.

I added the following rules to the firewall:
-A INPUT -i eth0 -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name SSH --rsource
-A INPUT -i eth0 -p tcp -m tcp --dport 22 -m recent --rcheck --seconds 30 --hitcount 4 --rttl --name SSH --rsource -j REJECT --reject-with tcp-reset
-A INPUT -i eth0 -p tcp -m tcp --dport 22 -m recent --rcheck --seconds 30 --hitcount 3 --rttl --name SSH --rsource -j LOG --log-prefix "SSH brute force "
-A INPUT -i eth0 -p tcp -m tcp --dport 22 -m recent --update --seconds 30 --hitcount 3 --rttl --name SSH --rsource -j REJECT --reject-with tcp-reset
-A INPUT -i eth0 -p tcp -m tcp -s kangaroo.pocket.net --dport 22 -j ACCEPT
The first rule tells the system:

TCP packets are going to come in that attempt to establish an SSH connection.  Mark them as SSH and pay attention to the source of the packet.

The second rule says:

If a packet attempting to establish an SSH connection comes in, and it's the fourth packet from the same source in thirty seconds, just reject it with prejudice and stop thinking about it.

The third and fourth rules mean:

If an SSH connection packet comes in, and it's the third attempt from the same source in thirty seconds, log it to the system log once, then immediately reject it and forget about it.

The fifth rule allows SSH traffic from a certain domain to pass into the environment.
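
You can watch the recent module's tracking table directly; it lives in /proc under the name given by --name in the rules above (on older kernels the directory is ipt_recent rather than xt_recent):

root@linode:~# cat /proc/net/xt_recent/SSH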

It has cut down my syslog noise significantly.  I'm pretty sure the firewall isn't suffering, because all it's doing is tracking the source IPs that are coming in on destination port 22, and I've not noticed the server under any significant load.  Some IPs are still making it through, but I'm counting between 10 and 20 IP entries in a 24-hour period.  I can tell they're making it through because the traffic is hitting my clean-up rule; 8 have made it through all the rules before being blocked by the clean-up rule.

root@linode:/var/log# grep -i 'dpt=22' syslog | grep -i 'clean-up rule'

Nov 17 07:03:20 linode kernel: Clean-up Rule - BLOCKED: IN=eth0 OUT= MAC=fe:fd:40:3e:e7:dc:84:78:ac:0d:a6:41:08:00 SRC=113.108.21.16 DST=xxx.xxx.xxx.xxx LEN=40 TOS=0x00 PREC=0x00 TTL=52 ID=0 DF PROTO=TCP SPT=32204 DPT=22 WINDOW=0 RES=0x00 RST URGP=0
Nov 17 07:05:13 linode kernel: Clean-up Rule - BLOCKED: IN=eth0 OUT= MAC=fe:fd:40:3e:e7:dc:84:78:ac:0d:79:c1:08:00 SRC=203.129.217.5 DST=xxx.xxx.xxx.xxx LEN=40 TOS=0x00 PREC=0x00 TTL=51 ID=0 DF PROTO=TCP SPT=46595 DPT=22 WINDOW=0 RES=0x00 RST URGP=0
Nov 17 07:10:17 linode kernel: Clean-up Rule - BLOCKED: IN=eth0 OUT= MAC=fe:fd:40:3e:e7:dc:84:78:ac:0d:a6:41:08:00 SRC=113.108.21.16 DST=xxx.xxx.xxx.xxx LEN=40 TOS=0x00 PREC=0x00 TTL=52 ID=0 DF PROTO=TCP SPT=32203 DPT=22 WINDOW=0 RES=0x00 RST URGP=0
Nov 17 11:19:15 linode kernel: Clean-up Rule - BLOCKED: IN=eth0 OUT= MAC=fe:fd:40:3e:e7:dc:84:78:ac:0d:a6:41:08:00 SRC=106.57.172.14 DST=xxx.xxx.xxx.xxx LEN=40 TOS=0x00 PREC=0x00 TTL=52 ID=0 DF PROTO=TCP SPT=49995 DPT=22 WINDOW=0 RES=0x00 RST URGP=0
Nov 17 16:55:46 linode kernel: Clean-up Rule - BLOCKED: IN=eth0 OUT= MAC=fe:fd:40:3e:e7:dc:84:78:ac:0d:a6:41:08:00 SRC=14.160.13.174 DST=xxx.xxx.xxx.xxx LEN=40 TOS=0x00 PREC=0x00 TTL=116 ID=25601 DF PROTO=TCP SPT=16895 DPT=22 WINDOW=16591 RES=0x00 ACK FIN URGP=0
Nov 17 16:55:46 linode kernel: Clean-up Rule - BLOCKED: IN=eth0 OUT= MAC=fe:fd:40:3e:e7:dc:84:78:ac:0d:a6:41:08:00 SRC=14.160.13.174 DST=xxx.xxx.xxx.xxx LEN=140 TOS=0x00 PREC=0x00 TTL=116 ID=25772 DF PROTO=TCP SPT=16895 DPT=22 WINDOW=16591 RES=0x00 ACK PSH FIN URGP=0
Nov 17 19:17:40 linode kernel: Clean-up Rule - BLOCKED: IN=eth0 OUT= MAC=fe:fd:40:3e:e7:dc:84:78:ac:0d:79:c1:08:00 SRC=113.108.21.16 DST=xxx.xxx.xxx.xxx LEN=40 TOS=0x00 PREC=0x00 TTL=52 ID=0 DF PROTO=TCP SPT=32205 DPT=22 WINDOW=0 RES=0x00 RST URGP=0
Nov 17 19:24:37 linode kernel: Clean-up Rule - BLOCKED: IN=eth0 OUT= MAC=fe:fd:40:3e:e7:dc:84:78:ac:0d:79:c1:08:00 SRC=113.108.21.16 DST=xxx.xxx.xxx.xxx LEN=40 TOS=0x00 PREC=0x00 TTL=52 ID=0 DF PROTO=TCP SPT=32202 DPT=22 WINDOW=0 RES=0x00 RST URGP=0

root@linode:/var/log# grep -i 'dpt=22' syslog | grep -i 'clean-up rule' | wc -l
8
root@linode:/var/log#

Wednesday, April 03, 2013

My Book Live - Connection Issues and Troubleshooting

I've been noticing issues with my NAS solution, which is a Western Digital My Book Live Personal Cloud Edition.

I keep losing connectivity after 5 or so minutes of connecting to the NAS via the web-based console or accessing it as a mapped drive.  I'd get the message:

30001 - Your last operation timed out. Make sure there are no network connectivity issues and try again.
I used Google to attempt to find a solution, but all I found was shared pain.

I did find a way to log into the device's command line.  Here's what I did:

  • I put "http://[ip of your MBL NAS]/UI/ssh" into my browser's address bar.
  • I clicked the "enable" button.
  • I shelled into the NAS using PuTTY and "root/welc0me" as the username/password.

Once I logged in, I immediately ran 'top' because I knew I'd lose the session after 5 or so minutes and wouldn't be able to log in again unless I power-cycled the NAS.  I noticed that Twonky appeared to be hogging CPU cycles, so I went to the web GUI and disabled it.  Then I watched top again.  The load averages were a bit high before I disabled Twonky (in the 7.xx range for the first number).  I watched them drop to the mid-4s, then they started rising again.  Top wasn't telling me anything, though.

I watched the load average rise to 22.xx before the terminal session degraded to the point that it stopped taking input.

login as: root
root@xxx.xxx.xxx.xxx's password:
Linux MyBookLive 2.6.32.11-svn70860 #1 Thu May 17 13:32:51 PDT 2012 ppc
Disclaimer: SSH provides access to the network device and all its
content, only users with advanced computer networking and Linux experience
should enable it. Failure to understand the Linux command line interface
can result in rendering your network device inoperable, as well as allowing
unauthorized users access to your network. If you enable SSH, do not share
the root password with anyone you do not want to have direct access to all
the content on your network device.

MyBookLive:~# w
 22:37:58 up 2 min,  1 user,  load average: 5.03, 1.54, 0.54
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    ron-alien.home   22:37    1.00s  0.05s  0.03s w
MyBookLive:~# w
 22:38:10 up 2 min,  1 user,  load average: 5.85, 1.89, 0.67
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    ron-alien.home   22:37    0.00s  0.04s  0.02s w
MyBookLive:~# w
 22:38:18 up 2 min,  1 user,  load average: 6.11, 2.07, 0.74
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    ron-alien.home   22:37    0.00s  0.04s  0.02s w
MyBookLive:~#
MyBookLive:~#
MyBookLive:~#
MyBookLive:~# top
top - 22:39:10 up 3 min,  1 user,  load average: 7.44, 3.06, 1.14
Tasks:  97 total,   1 running,  96 sleeping,   0 stopped,   0 zombie
Cpu(s): 31.9%us, 17.4%sy, 41.8%ni,  0.0%id,  6.6%wa,  0.3%hi,  2.0%si,  0.0%st
Mem:    253632k total,   242432k used,    11200k free,    41280k buffers
Swap:   500608k total,    42560k used,   458048k free,    52736k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 4429 root      21   1 21504 8192 3456 S 43.6  3.2   0:45.39 twonkymediaserv
 3936 www-data   4 -16 72704  30m  20m S 11.6 12.4   0:01.09 apache2
 3327 www-data   4 -16 76160  31m  19m S  5.6 12.6   0:02.37 apache2
 3809 www-data   4 -16 72704  33m  23m S  5.6 13.6   0:03.08 apache2
 3326 www-data   4 -16 74944  26m  16m S  1.7 10.7   0:03.34 apache2
 3829 www-data   4 -16 66624  23m  16m S  1.3  9.7   0:01.50 apache2
 4156 www-data   4 -16 69248  25m  17m S  1.3 10.3   0:00.30 apache2
 5071 root       4 -16  5056 3136 2304 D  1.0  1.2   0:00.03 getServiceStart
 4639 root      39  19  5120 3264 1920 D  0.7  1.3   0:03.12 ls
 4641 root      39  19  3776 1792 1344 S  0.7  0.7   0:00.77 tally
 4821 root      20   0  5056 3008 1920 R  0.7  1.2   0:00.34 top
 5067 root       4 -16  5056 3136 2304 D  0.7  1.2   0:00.02 getServiceStart
 2230 root      20   0 31424 3264 2048 S  0.3  1.3   0:00.19 rsyslogd
 2385 root      20   0     0    0    0 D  0.3  0.0   0:00.28 jbd2/sda4-8
 4405 root      20   0 57280 7552 2816 S  0.3  3.0   0:00.94 forked-daapd
 4640 root      39  19  4480 1856 1344 S  0.3  0.7   0:00.48 awk
    1 root      20   0  4352 1984 1600 S  0.0  0.8   0:00.82 init
MyBookLive:~#
MyBookLive:~#
MyBookLive:~#
MyBookLive:~# w
 22:39:15 up 3 min,  1 user,  load average: 7.24, 3.09, 1.16
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    ron-alien.home   22:37    1.00s  0.04s  0.02s w
MyBookLive:~# w
 22:39:16 up 3 min,  1 user,  load average: 7.24, 3.09, 1.16
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    ron-alien.home   22:37    0.00s  0.03s  0.01s w
MyBookLive:~# w
 22:39:19 up 3 min,  1 user,  load average: 7.22, 3.16, 1.20
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    ron-alien.home   22:37    1.00s  0.04s  0.02s w
MyBookLive:~# w
 22:39:20 up 3 min,  1 user,  load average: 7.22, 3.16, 1.20
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    ron-alien.home   22:37    0.00s  0.03s  0.01s w
MyBookLive:~# w
 22:39:25 up 3 min,  1 user,  load average: 7.36, 3.25, 1.24
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    ron-alien.home   22:37    2.00s  0.04s  0.02s w
MyBookLive:~# w
 22:39:32 up 3 min,  1 user,  load average: 7.09, 3.26, 1.25
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    ron-alien.home   22:37    0.00s  0.05s  0.02s w
MyBookLive:~# w
 22:39:39 up 3 min,  1 user,  load average: 6.62, 3.29, 1.28
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    ron-alien.home   22:37    0.00s  0.04s  0.01s w
MyBookLive:~# w
 22:40:17 up 4 min,  1 user,  load average: 5.75, 3.43, 1.40
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    ron-alien.home   22:37    1.00s  0.05s  0.02s w
MyBookLive:~# w
 22:40:24 up 4 min,  1 user,  load average: 5.79, 3.52, 1.45
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    ron-alien.home   22:37    0.00s  0.05s  0.02s w
MyBookLive:~# w
 22:40:35 up 4 min,  1 user,  load average: 6.11, 3.66, 1.52
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    ron-alien.home   22:37    1.00s  0.05s  0.01s w
MyBookLive:~# w
 22:40:46 up 4 min,  1 user,  load average: 5.85, 3.69, 1.55
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    ron-alien.home   22:37    0.00s  0.05s  0.01s w
MyBookLive:~# w
 22:41:00 up 5 min,  1 user,  load average: 5.44, 3.70, 1.59
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    ron-alien.home   22:37    0.00s  0.05s  0.01s w
MyBookLive:~# w
 22:41:54 up 5 min,  2 users,  load average: 4.65, 3.75, 1.73
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    ron-alien.home   22:37    0.00s  0.06s  0.02s w
root     pts/1    ron-alien.home   22:41   21.00s  0.17s  0.15s top
MyBookLive:~# w
 22:42:48 up 6 min,  2 users,  load average: 4.90, 3.93, 1.89
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    ron-alien.home   22:37    2.00s  0.09s  0.04s w
root     pts/1    ron-alien.home   22:41    1:15   0.50s  0.48s top
MyBookLive:~#
MyBookLive:~#
MyBookLive:~# w
 22:43:11 up 7 min,  2 users,  load average: 5.26, 4.09, 1.99
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    ron-alien.home   22:37    2.00s  0.07s  0.02s w
root     pts/1    ron-alien.home   22:41    1:39   0.66s  0.64s top
MyBookLive:~# w

Something isn't quite right with this NAS, but it's going to take a while to figure out what's going on.  It responds well to pings, even when the SSH session is dead and won't recover.  And I still have to back it up.  I think I have 378GB of data on it (and that data is crucial...once-in-a-lifetime types of pictures).

I don't think the drives are bad, but it may be too early to say that.  I've never seen bad drives ramp up load averages like that.

The drive is out of warranty and I'm a bit upset that what's touted as a top-notch home NAS is having such issues, especially considering that it's a WD product.

I'll update this post when/if I've more findings on this issue.

EDIT:  I just checked again after posting and, while the shells aren't dead, they are very slideshow-like.  I checked the load average and it has dropped to 12.94.

EDIT 2:  I got tired of waiting for "apachectl stop" to finish (I think it was actually hung), so I did a "killall -9 apache2", which immediately brought the load down.  The load is currently at 1.09 and has been around that for the last 20 minutes.  So, it's Apache that's killing the NAS.  Note that I tested whether I could reach the NAS shares in a conventional manner (i.e., without the shell or Apache) and was able to reach them without issue.  I may keep Apache off for the duration (unless I need to access the control panel).

Friday, October 19, 2012

Engineering Stories

On the way to work today, I remembered an occasion where a team member who'd left the company had been stockpiling 1U rackmount servers in storage.  He'd reimaged each server with a common image (each had different passwords, though).  I had a listing of passwords for each server, but the listed password for one particular server wasn't working and we needed to get access to that machine.  I couldn't just reimage the machine since, even though it shared a common image, it was prepped for deployment to a certain location and was configured for that specific site.  While I had a copy of the site-specific information, I just did not have the time to reimage the machine and reconfigure it...I saved that as a "last resort" option.

After a bit of research, I was able to log in successfully.

I knew the BIOS wasn't locked down, so I went into the BIOS and enabled booting from CDROM.  I had a copy of a Linux CD which I put into the CDROM tray.  I then power-cycled the system.  I was able to use the live-CD to boot up the box.  I mounted the drive within the system and removed the encrypted password within /etc/passwd using 'vipw'.  I then shut the box down, removed the live-CD, then started the system.  I was immediately given a shell.  I then reset the password to what was on the passwords list for that particular system then finished the pre-deployment steps.
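
In outline, the recovery looked something like this (a rough sketch from a live-CD shell; the device name is a placeholder, and on systems where the hash lives in /etc/shadow you'd use 'vipw -s' instead):

mount /dev/sda1 /mnt      # mount the installed system's root partition
chroot /mnt vipw          # blank out root's encrypted password field
umount /mnt
reboot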

This is why I love Linux.  There's always an option.  I could NOT do this with one of the backup Windows servers we had.  That case was similar: the system was a cold backup and was racked but powered down...it was a new system with a new image, customized for a specific role, that had yet to be used.  The password we had for the device was apparently incorrect.  I even tried to crack the SAM file...that didn't work, and I eventually had to reinstall (not reimage) Windows Server (I forget which version) onto the system.  What made this much worse was that there wasn't an original cloning image to use, and the previous engineer hadn't maintained directions on how he configured the device, so I had to use trial and error.  I eventually configured the OS properly and installed and configured the proper software (it was a CA eTrust AV server).  The whole time, the lead client was pestering, badgering, and being overly hostile.

In another case, another contractor had left the company.  He'd been administering a Nessus server that he'd installed on top of OpenBSD.  That contractor chose OpenBSD and was comfortable working within a terminal session (as was I).  And really, the box didn't have an abundance of resources anyway, so it was probably more robust without a GUI enabled.  I understood something of OpenBSD and knew how to conduct scans and how to view and store the scan results.  I even had a cron job running that would conduct the scans during maintenance windows.  Everything was working fine.  The same client lead couldn't operate the system because his *nix skills were seriously lacking.  Instead of asking for help or guidance, he directed another contractor to wipe the machine and install Red Hat with the GUI enabled so that he could operate it.  Data was not backed up.  The scanning data, as well as the configuration man-hours, were wasted.

Another time, I was working a deployment issue where client remote hands were my remote hands/eyes.  They'd received our Snort sensor, which we'd imaged, customized, and configured, and had just finished racking and powering it up.  The remote hands didn't know anything about operating within a terminal session, so I walked him through the process, spelling out the commands he needed to type.  The problem?  While testing the machine before we shipped it, we had logged into it via SSH.  Once the machine was at the remote location, I could not establish an SSH session because the host key had changed.  In order for me to regain access, the remote hands had to remove the existing host key entry tied to the IP of my work machine; that entry resided on the Snort sensor I was trying to log into.  What made me feel good was that one of the clients was on the bridge call, listening.  After the call, she praised me for guiding the remote hands through the whole process without ever being able to view what was on his screen, and she commented on how I guided him in what to type.  In this case, I couldn't care less how much they were paying me (which wasn't really all that much)...I was happy that I was able to be of assistance and value.  That was payment enough.  That was one of the few bright days in working with that particular organization.  I soon took a dignified stance and left that contract.  To this day, I will not recommend any person I know to work at that particular location without giving them ample warning.
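
(For reference, the usual way to clear a stale entry from a known_hosts file is ssh-keygen's -R flag, run on the connecting machine; the address here is a placeholder.  We had to do the equivalent by hand, with me dictating the commands.)

$ ssh-keygen -R 192.0.2.50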

But the main reason for this post is to share that I love *nix (and why)!

Wednesday, August 25, 2010

Protect your privates!


http://isc.sans.edu/diary.html?storyid=9367


In view of all the brute force attacks still being attempted against Secure Shell (SSH), we have long since been extolling the virtues of forgoing passwords and moving to RSA/DSA keys instead.
While key based login indeed nicely addresses the problem of password guessing attacks, it looks like many a Unix admin has been less than diligent in the implementation. In pretty much every Unix security audit recently, we've come across unprotected or badly protected SSH private keys (id_dsa, id_rsa). Some reside plain flat out in the open, in /tmp and such. Others are found in world-readable tar "backup" archives of user and administrator home directories. Some are even built into home-grown Linux RPM and Solaris PKG packages, ready to be plucked off an install server.

Friday, June 18, 2010

Distributed SSH Brute Force Attempts on the rise again -- SANS ISC

Reported by SANS ISC:

Distributed SSH Brute Force Attempts on the rise again

SSH brute force attempts seem to be on the rise again, at the SANS Internet Storm Center we have received a number of reports that a number of networks are seeing them. The source IP addresses vary with each new attempted username in the wordlist, which would indicate that the attempts are distributed through botnet(s). It only takes a single user with a weak password for a breach to occur, then with that foothold escalation and further attacks are likely next. This is certainly not a new phenomenon, however I think it is a good time to raise awareness about it once again.

Monday, December 29, 2008

Distributed SSH Brute Force Attempts, part 3

I wanted to be able to harvest the log data that the brute force attempts are generating, so I've decided to not move the SSH listening port. I'm also logging each event within my firewall logs. The particular machine I'm seeing the attacks on is a FreeBSD box (I should've mentioned that earlier) and I'm using PF as the firewall of choice.

I'd like to show you my listing of blocked IPs. I've been actively gathering them since approximately 11/17/2008. I was unhappy that I hadn't noticed the distributed attempts until November and wanted more trending data, so I reached into my SSH logs and parsed the files present with a very dirty script that added each unique IP to a PF table that is designed to block such activity. The script is below:



[root@delly ~]# cat IPscript
#!/bin/bash

cd /var/log
bunzip2 pflog.*

# pull the sshd 'invalid user' entries from the current and rotated auth logs
cat /var/log/auth.log | grep sshd | grep -i 'invalid user' > /tmp/auth_IP_list_1
bzcat /var/log/auth.log.[01234567].bz2 | grep sshd | grep -i 'invalid user' >> /tmp/auth_IP_list_1
# drop the duplicate 'Failed keyboard-interactive' lines, then keep just the IP field
sed '/Failed keyboard-interactive/d' /tmp/auth_IP_list_1 > /tmp/auth_IP_list_2
awk '{print $10}' /tmp/auth_IP_list_2 > /tmp/auth_IP_list_3
cat /tmp/auth_IP_list_3 > /tmp/auth_IP_list_4

# pull the source addresses from the PF logs (tcpdump prints them as IP.port)
tcpdump -nettttr /var/log/pflog > /tmp/fw_IP_list_1
tcpdump -nettttr /var/log/pflog.0 >> /tmp/fw_IP_list_1
awk '{print $9}' /tmp/fw_IP_list_1 > /tmp/fw_IP_list_2
# strip the trailing port: split on dots, keep the four octets, re-join with dots
nawk -F. '{print $1, $2, $3, $4}' /tmp/fw_IP_list_2 > /tmp/fw_IP_list_3
sed 's/ /./g' /tmp/fw_IP_list_3 > /tmp/fw_IP_list_4

# combine both sources, de-dupe, and drop my own hosts and LAN
cat /tmp/fw_IP_list_4 >> /tmp/auth_IP_list_4
cat /tmp/auth_IP_list_4 | sort -rn | uniq > /tmp/auth_IP_list_5
cat /tmp/auth_IP_list_5 | grep -v '64.62.231.220' > /tmp/auth_IP_list_6
cat /tmp/auth_IP_list_6 | grep -v '66.160.141.30' > /tmp/auth_IP_list_7
cat /tmp/auth_IP_list_7 | grep -v '10.150.1' > /tmp/auth_IP_list_8
cat /tmp/auth_IP_list_8
cat /tmp/auth_IP_list_8 | wc -l

# feed the final list into the PF 'bruteforce' block table
pfctl -t bruteforce -T add -f /tmp/auth_IP_list_8

rm -rf /tmp/fw_IP_list_* /tmp/auth_IP_list_*



As I said, it was an extremely dirty hack that required me to learn a tad of sed and awk. The script outputs the following file: http://wigglit.ath.cx/txt/bruteforce_IPtable

My logs don't actually go back that far. The FW logs go back to 14 Dec. The SSH service logs go back to 26 Nov. I probably have captured IPs reaching back to the beginning of November, though.

My FW policy prevents anything from entering the network unless specifically allowed (default deny policy). The reason I wanted to track the IPs was that my Denyhosts configuration wasn't catching most of these, even though it had been working fine before this new trend occurred. I'm a security consultant and a researcher at heart, so I thought that tracking this would be cool. My home router has ports 22, 443, and 3306 exposed to the wild for the FreeBSD box. The FW only allows certain IPs in on those ports, though, so nothing will get in. The activity that doesn't get immediately blocked is blocked by the bruteforce_IPtable script that I run daily. The script has worked flawlessly, and I will begin having it run hourly via a cron job. I'll also have it send an updated list to my website daily.
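
The hourly run is a one-line root crontab entry (a sketch; the path is wherever IPscript actually lives):

0 * * * * /root/IPscript > /dev/null 2>&1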

The IPs within my block table number 565. The script parsed 115 from the logs tonight, but only added 1 IP. The norm is usually 2-3 daily. The rest of the IPs are from logs over the last month and a half.

Oh yeah, I've another script that does a daily copy of the IPs that are added to the table, so I can at least quickly determine (using 'diff') what was added on a certain day. I can create a script that will show me what was added daily for a given timespan (day/week/month).
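
The snapshot-and-diff piece is only a couple of lines (a sketch; the snapshot directory is a placeholder, and 'date -v-1d' is the FreeBSD way to get yesterday's date):

#!/bin/sh
# snapshot today's PF table and diff against yesterday's snapshot
pfctl -t bruteforce -T show > /var/db/bruteforce.$(date +%Y%m%d)
diff /var/db/bruteforce.$(date -v-1d +%Y%m%d) /var/db/bruteforce.$(date +%Y%m%d)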

Monday, November 24, 2008

Distributed SSH Brute Force Attempts, part 2

OK, so I'm still curious about the distributed SSH brute force attempts I've been seeing.

I wanted to check all my logs and not just the most recent logfile. Looking at my /var/log directory:

-su-2.05b# ls | grep auth
auth.log
auth.log.0.bz2
auth.log.1.bz2
auth.log.2.bz2
auth.log.3.bz2
auth.log.4.bz2
auth.log.5.bz2
auth.log.6.bz2
auth.log.7.bz2


I tailed the auth.log file and grabbed a random IP:

Nov 24 21:53:59 delly sshd[75490]: Invalid user bryan from 170.56.255.20
Nov 24 21:54:00 delly sshd[75490]: error: PAM: authentication error for illegal user bryan from 170.56.255.20
Nov 24 21:54:00 delly sshd[75490]: Failed keyboard-interactive/pam for invalid user bryan from 170.56.255.20 port 43229 ssh2


I then wanted to check all the auth.log.* files, but was curious as to how I could check compressed files. I found that there's a command called bzgrep that allows one to grep compressed files, so I used the following command and came up with quite a few hits for the referenced IP over seven (7) log files:

-su-2.05b# bzgrep '170.56.255.20' auth.log.*


The results show 2-3 instances of log entries per login attempt, so I wanted to isolate each instance without having to resort to arcane sed and sort commands. I used the following:


-su-2.05b# bzgrep '170.56.255.20' auth.log.* | grep 'Invalid user' | wc -l
19


So, this particular IP generated 19 log entries between 22 and 24 Nov.

That's not particularly good. I'd love to create a script that would break down all of these IPs' unique login attempts (and possibly block them). Sounds like a project, no? :)
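
As a first cut, the per-IP breakdown is a one-liner (bzgrep -h suppresses the filename prefix, and the IP is the last field of each 'Invalid user' line):

-su-2.05b# bzgrep -h 'Invalid user' auth.log.* | awk '{print $NF}' | sort | uniq -c | sort -rn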

I've a bit of time, so I did the following (non-scripted). I cat'd the auth.log file and collected a screen's worth of data:

-su-2.05b# cat auth.log | less
Nov 24 15:00:00 delly newsyslog[74001]: logfile turned over due to size>100K
Nov 24 15:00:49 delly sshd[74014]: Invalid user brand from 218.80.215.198
Nov 24 15:00:50 delly sshd[74014]: error: PAM: authentication error for illegal user brand from 218.80.215.198
Nov 24 15:00:50 delly sshd[74014]: Failed keyboard-interactive/pam for invalid user brand from 218.80.215.198 port 19051 ssh2
Nov 24 15:02:25 delly sshd[74017]: Invalid user brandee from 83.19.224.11
Nov 24 15:02:26 delly sshd[74017]: error: PAM: authentication error for illegal user brandee from dum11.internetdsl.tpnet.pl
Nov 24 15:02:26 delly sshd[74017]: Failed keyboard-interactive/pam for invalid user brandee from 83.19.224.11 port 50163 ssh2
Nov 24 15:03:48 delly sshd[74020]: Invalid user brandee from 194.224.118.61
Nov 24 15:03:48 delly sshd[74020]: error: PAM: authentication error for illegal user brandee from 194.224.118.61
Nov 24 15:03:48 delly sshd[74020]: Failed keyboard-interactive/pam for invalid user brandee from 194.224.118.61 port 6345 ssh2
Nov 24 15:05:21 delly sshd[74026]: Invalid user brandee from 90.176.233.222
Nov 24 15:05:22 delly sshd[74026]: error: PAM: authentication error for illegal user brandee from 222.233.broadband9.iol.cz
Nov 24 15:05:22 delly sshd[74026]: Failed keyboard-interactive/pam for invalid user brandee from 90.176.233.222 port 46108 ssh2
Nov 24 15:06:42 delly sshd[74029]: Invalid user branden from 125.77.106.246
Nov 24 15:06:42 delly sshd[74029]: error: PAM: authentication error for illegal user branden from 125.77.106.246
Nov 24 15:06:42 delly sshd[74029]: Failed keyboard-interactive/pam for invalid user branden from 125.77.106.246 port 46495 ssh2
Nov 24 15:10:59 delly sshd[74035]: Invalid user brandi from 122.224.128.222
Nov 24 15:10:59 delly sshd[74035]: error: PAM: authentication error for illegal user brandi from 122.224.128.222
Nov 24 15:10:59 delly sshd[74035]: Failed keyboard-interactive/pam for invalid user brandi from 122.224.128.222 port 42253 ssh2
Nov 24 15:12:27 delly sshd[74051]: Invalid user brandi from 59.125.200.51
Nov 24 15:12:28 delly sshd[74051]: error: PAM: authentication error for illegal user brandi from 3w.upcc.com.tw
Nov 24 15:12:28 delly sshd[74051]: Failed keyboard-interactive/pam for invalid user brandi from 59.125.200.51 port 14046 ssh2
Nov 24 15:15:23 delly sshd[74057]: Invalid user brandice from 62.112.222.88
Nov 24 15:15:24 delly sshd[74057]: error: PAM: authentication error for illegal user brandice from 3e70de58.adsl.enternet.hu
Nov 24 15:15:24 delly sshd[74057]: Failed keyboard-interactive/pam for invalid user brandice from 62.112.222.88 port 42127 ssh2
Nov 24 15:16:49 delly sshd[74060]: Invalid user brandice from 218.80.215.198
Nov 24 15:16:50 delly sshd[74060]: error: PAM: authentication error for illegal user brandice from 218.80.215.198
Nov 24 15:16:50 delly sshd[74060]: Failed keyboard-interactive/pam for invalid user brandice from 218.80.215.198 port 57929 ssh2
Nov 24 15:18:11 delly sshd[74063]: Invalid user brandice from 65.203.231.41
Nov 24 15:18:11 delly sshd[74063]: error: PAM: authentication error for illegal user brandice from 65.203.231.41
Nov 24 15:18:11 delly sshd[74063]: Failed keyboard-interactive/pam for invalid user brandice from 65.203.231.41 port 38395 ssh2
Nov 24 15:19:43 delly sshd[74066]: Invalid user brandie from 123.14.10.64
Nov 24 15:19:44 delly sshd[74066]: error: PAM: authentication error for illegal user brandie from 123.14.10.64
Nov 24 15:19:44 delly sshd[74066]: Failed keyboard-interactive/pam for invalid user brandie from 123.14.10.64 port 4925 ssh2
Nov 24 15:21:07 delly sshd[74072]: Invalid user brandie from 200.170.141.134
Nov 24 15:21:07 delly sshd[74072]: error: PAM: authentication error for illegal user brandie from 200-170-141-134.static.ctbctelecom.com.br
Nov 24 15:21:07 delly sshd[74072]: Failed keyboard-interactive/pam for invalid user brandie from 200.170.141.134 port 39979 ssh2
Nov 24 15:22:44 delly sshd[74088]: Invalid user brandie from 80.51.31.84
Nov 24 15:22:44 delly sshd[74088]: error: PAM: authentication error for illegal user brandie from 80.51.31.84
Nov 24 15:22:44 delly sshd[74088]: Failed keyboard-interactive/pam for invalid user brandie from 80.51.31.84 port 39453 ssh2
Nov 24 15:24:02 delly sshd[74091]: Invalid user brandon from 200.157.176.13
Nov 24 15:24:03 delly sshd[74091]: error: PAM: authentication error for illegal user brandon from 200.157.176.13
Nov 24 15:24:03 delly sshd[74091]: Failed keyboard-interactive/pam for invalid user brandon from 200.157.176.13 port 54638 ssh2

I then checked every IP for unique log entries within all of my ssh logs:

-su-2.05b# bzgrep '218.80.215.198' auth.log.* | grep 'Invalid user' | wc -l
6

-su-2.05b# bzgrep '218.80.215.198' auth.log.* | grep 'Invalid user'
auth.log.0.bz2:Nov 24 10:55:25 delly sshd[73084]: Invalid user bjorn from 218.80.215.198
auth.log.1.bz2:Nov 23 23:22:28 delly sshd[70023]: Invalid user bahari from 218.80.215.198
auth.log.3.bz2:Nov 23 11:18:54 delly sshd[66908]: Invalid user archibald from 218.80.215.198
auth.log.4.bz2:Nov 23 04:11:52 delly sshd[65051]: Invalid user amy from 218.80.215.198
auth.log.5.bz2:Nov 22 23:20:53 delly sshd[63465]: Invalid user alize from 218.80.215.198
auth.log.7.bz2:Nov 22 01:07:10 delly sshd[57652]: Invalid user claire from 218.80.215.198

-su-2.05b# bzgrep '83.19.224.11' auth.log.* | grep 'Invalid user' | wc -l
8
-su-2.05b# bzgrep '194.224.118.61' auth.log.* | grep 'Invalid user' | wc -l
11
-su-2.05b# bzgrep '90.176.233.222' auth.log.* | grep 'Invalid user' | wc -l
0
-su-2.05b# bzgrep '125.77.106.246' auth.log.* | grep 'Invalid user' | wc -l
9
-su-2.05b# bzgrep '122.224.128.222' auth.log.* | grep 'Invalid user' | wc -l
8
-su-2.05b# bzgrep '59.125.200.51' auth.log.* | grep 'Invalid user' | wc -l
5
-su-2.05b# bzgrep '62.112.222.88' auth.log.* | grep 'Invalid user' | wc -l
10
-su-2.05b# bzgrep '218.80.215.198' auth.log.* | grep 'Invalid user' | wc -l
6
-su-2.05b# bzgrep '65.203.231.41' auth.log.* | grep 'Invalid user' | wc -l
15

-su-2.05b# bzgrep '65.203.231.41' auth.log.* | grep 'Invalid user'
auth.log.0.bz2:Nov 24 14:56:28 delly sshd[73982]: Invalid user bran from 65.203.231.41
auth.log.1.bz2:Nov 23 22:32:01 delly sshd[69793]: Invalid user azra from 65.203.231.41
auth.log.1.bz2:Nov 24 02:04:43 delly sshd[70680]: Invalid user bartholemew from 65.203.231.41
auth.log.1.bz2:Nov 24 04:07:45 delly sshd[71474]: Invalid user beck from 65.203.231.41
auth.log.2.bz2:Nov 23 19:02:03 delly sshd[68866]: Invalid user aurora from 65.203.231.41
auth.log.2.bz2:Nov 23 20:16:18 delly sshd[69213]: Invalid user avi from 65.203.231.41
auth.log.3.bz2:Nov 23 10:08:33 delly sshd[66592]: Invalid user april from 65.203.231.41
auth.log.3.bz2:Nov 23 10:24:43 delly sshd[66657]: Invalid user aquila from 65.203.231.41
auth.log.3.bz2:Nov 23 11:22:12 delly sshd[66933]: Invalid user archie from 65.203.231.41
auth.log.4.bz2:Nov 23 04:22:52 delly sshd[65094]: Invalid user anahid from 65.203.231.41
auth.log.4.bz2:Nov 23 05:32:46 delly sshd[65407]: Invalid user andra from 65.203.231.41
auth.log.5.bz2:Nov 22 23:00:09 delly sshd[63393]: Invalid user alisha from 65.203.231.41
auth.log.6.bz2:Nov 22 12:41:17 delly sshd[60534]: Invalid user abraham from 65.203.231.41
auth.log.6.bz2:Nov 22 16:14:07 delly sshd[61564]: Invalid user africa from 65.203.231.41
auth.log.7.bz2:Nov 22 11:48:33 delly sshd[60289]: Invalid user aaralyn from 65.203.231.41

-su-2.05b# bzgrep '123.14.10.64' auth.log.* | grep 'Invalid user' | wc -l
19
-su-2.05b# bzgrep '200.170.141.134' auth.log.* | grep 'Invalid user' | wc -l
6
-su-2.05b# bzgrep '80.51.31.84' auth.log.* | grep 'Invalid user' | wc -l
3
-su-2.05b# bzgrep '200.157.176.13' auth.log.* | grep 'Invalid user' | wc -l
4


So, someone appears to have a pool of compromised machines and is using each one in a scaled SSH brute force attack, judging by the user accounts being bruteforced in sequence. I'm seeing more of this than standard, blatant SSH BF attempts. I'll be checking Denyhosts' website to see if they have a resolution on how to track and ban such activity.

Sunday, November 23, 2008

Distributed SSH Brute Force Attempts?

I'd read not long ago on the ISC Diary that someone has noticed a newly discovered way to avoid automated tools such as Denyhosts and Fail2ban. It appears that the attacks are now distributed across an IP pool of compromised machines. Maybe botnet masters are leveraging their botnets to bruteforce logins without risking the attacking hosts.

I think I'm seeing this in my home firewall logs:

Nov 23 15:23:01 delly sshd[67946]: error: PAM: authentication error for illegal user artois from 1-1-4-27a.vhe.sth.bostream.se
Nov 23 15:23:01 delly sshd[67946]: Failed keyboard-interactive/pam for invalid user artois from 82.182.188.187 port 35763 ssh2
Nov 23 15:24:18 delly sshd[67949]: Invalid user arty from 58.26.48.162
Nov 23 15:24:18 delly sshd[67949]: error: PAM: authentication error for illegal user arty from 58.26.48.162
Nov 23 15:24:18 delly sshd[67949]: Failed keyboard-interactive/pam for invalid user arty from 58.26.48.162 port 5785 ssh2
Nov 23 15:25:17 delly sshd[67955]: Invalid user arty from 200.170.141.134
Nov 23 15:25:17 delly sshd[67955]: error: PAM: authentication error for illegal user arty from 200-170-141-134.static.ctbctelecom.com.br
Nov 23 15:25:17 delly sshd[67955]: Failed keyboard-interactive/pam for invalid user arty from 200.170.141.134 port 57360 ssh2
Nov 23 15:26:31 delly sshd[67958]: Invalid user arty from 219.76.222.27
Nov 23 15:26:31 delly sshd[67958]: error: PAM: authentication error for illegal user arty from n219076222027.netvigator.com
Nov 23 15:26:31 delly sshd[67958]: Failed keyboard-interactive/pam for invalid user arty from 219.76.222.27 port 47176 ssh2
Nov 23 15:28:48 delly sshd[67963]: Invalid user arva from 58.196.4.2
Nov 23 15:28:49 delly sshd[67963]: error: PAM: authentication error for illegal user arva from 58.196.4.2
Nov 23 15:28:49 delly sshd[67963]: Failed keyboard-interactive/pam for invalid user arva from 58.196.4.2 port 50637 ssh2
Nov 23 15:33:27 delly sshd[67982]: Invalid user arvid from 125.77.106.246
Nov 23 15:33:27 delly sshd[67982]: error: PAM: authentication error for illegal user arvid from 125.77.106.246
Nov 23 15:33:27 delly sshd[67982]: Failed keyboard-interactive/pam for invalid user arvid from 125.77.106.246 port 51673 ssh2
Nov 23 15:34:40 delly sshd[67985]: Invalid user arvin from 85.39.252.226
Nov 23 15:34:40 delly sshd[67985]: error: PAM: authentication error for illegal user arvin from host226-252-static.39-85-b.business.telecomitalia.it
Nov 23 15:34:40 delly sshd[67985]: Failed keyboard-interactive/pam for invalid user arvin from 85.39.252.226 port 43706 ssh2
Nov 23 15:35:54 delly sshd[67991]: Invalid user arvin from 217.126.90.161
Nov 23 15:35:55 delly sshd[67991]: error: PAM: authentication error for illegal user arvin from 161.red-217-126-90.staticip.rima-tde.net
Nov 23 15:35:55 delly sshd[67991]: Failed keyboard-interactive/pam for invalid user arvin from 217.126.90.161 port 36755 ssh2
Nov 23 15:37:11 delly sshd[67994]: Invalid user arvin from 200.232.181.40
Nov 23 15:37:11 delly sshd[67994]: error: PAM: authentication error for illegal user arvin from 200-232-181-40.dsl.telesp.net.br
Nov 23 15:37:11 delly sshd[67994]: Failed keyboard-interactive/pam for invalid user arvin from 200.232.181.40 port 56318 ssh2
Nov 23 15:39:16 delly sshd[67997]: Invalid user arwan from 200.248.82.130
Nov 23 15:39:17 delly sshd[67997]: error: PAM: authentication error for illegal user arwan from 200.248.82.130
Nov 23 15:39:17 delly sshd[67997]: Failed keyboard-interactive/pam for invalid user arwan from 200.248.82.130 port 53388 ssh2
Nov 23 15:40:24 delly sshd[68003]: Invalid user arwan from 217.126.90.161
Nov 23 15:40:25 delly sshd[68003]: error: PAM: authentication error for illegal user arwan from 161.red-217-126-90.staticip.rima-tde.net
Nov 23 15:40:25 delly sshd[68003]: Failed keyboard-interactive/pam for invalid user arwan from 217.126.90.161 port 43871 ssh2
Nov 23 15:41:33 delly sshd[68006]: Invalid user arwen from 200.209.6.130
Nov 23 15:41:34 delly sshd[68006]: error: PAM: authentication error for illegal user arwen from 200.209.6.130
Nov 23 15:41:34 delly sshd[68006]: Failed keyboard-interactive/pam for invalid user arwen from 200.209.6.130 port 14808 ssh2
Nov 23 15:42:48 delly sshd[68016]: Invalid user arwen from 123.14.10.64
Nov 23 15:42:49 delly sshd[68016]: error: PAM: authentication error for illegal user arwen from 123.14.10.64
Nov 23 15:42:49 delly sshd[68016]: Failed keyboard-interactive/pam for invalid user arwen from 123.14.10.64 port 7600 ssh2
Nov 23 15:43:50 delly sshd[68020]: reverse mapping checking getaddrinfo for techregister.worcesteracademy.org [68.112.227.30] failed - POSSIBLE BREAK-IN ATTEMPT!
Nov 23 15:43:50 delly sshd[68020]: Invalid user arwen from 68.112.227.30
Nov 23 15:43:50 delly sshd[68020]: error: PAM: authentication error for illegal user arwen from 68.112.227.30
Nov 23 15:43:50 delly sshd[68020]: Failed keyboard-interactive/pam for invalid user arwen from 68.112.227.30 port 38273 ssh2
Nov 23 15:45:03 delly sshd[68039]: Invalid user arya from 196.28.50.162
Nov 23 15:45:03 delly sshd[68039]: error: PAM: authentication error for illegal user arya from www.cfse.gov.pr
Nov 23 15:45:03 delly sshd[68039]: Failed keyboard-interactive/pam for invalid user arya from 196.28.50.162 port 55647 ssh2
Nov 23 15:46:10 delly sshd[68042]: Invalid user arya from 81.12.221.74
Nov 23 15:46:10 delly sshd[68042]: error: PAM: authentication error for illegal user arya from em.asiban.ro
Nov 23 15:46:10 delly sshd[68042]: Failed keyboard-interactive/pam for invalid user arya from 81.12.221.74 port 16653 ssh2
Nov 23 15:47:23 delly sshd[68050]: Invalid user arya from 190.34.148.178
Nov 23 15:47:23 delly sshd[68050]: error: PAM: authentication error for illegal user arya from 190.34.148.178
Nov 23 15:47:23 delly sshd[68050]: Failed keyboard-interactive/pam for invalid user arya from 190.34.148.178 port 58738 ssh2
Nov 23 15:48:29 delly sshd[68053]: reverse mapping checking getaddrinfo for britannic-iss-medidean-working.e1-4-0-0-57.0.ar2.lon3.gblx.net [64.213.54.106] failed - POSSIBLE BREAK-IN ATTEMPT!
Nov 23 15:48:29 delly sshd[68053]: Invalid user asa from 64.213.54.106
Nov 23 15:48:29 delly sshd[68053]: error: PAM: authentication error for illegal user asa from 64.213.54.106
Nov 23 15:48:29 delly sshd[68053]: Failed keyboard-interactive/pam for invalid user asa from 64.213.54.106 port 42991 ssh2
Nov 23 15:49:46 delly sshd[68056]: Invalid user asa from 91.135.200.86
Nov 23 15:49:47 delly sshd[68056]: error: PAM: authentication error for illegal user asa from 91.135.200.86
Nov 23 15:49:47 delly sshd[68056]: Failed keyboard-interactive/pam for invalid user asa from 91.135.200.86 port 10262 ssh2
Nov 23 15:50:52 delly sshd[68062]: Invalid user asa from 200.20.187.222
Nov 23 15:50:53 delly sshd[68062]: error: PAM: authentication error for illegal user asa from 200.20.187.222
Nov 23 15:50:53 delly sshd[68062]: Failed keyboard-interactive/pam for invalid user asa from 200.20.187.222 port 52959 ssh2

Don't focus on the attacking IPs; look at the referenced users. There are now tools that appear to scale attacks across a listing of common logins (or maybe even dictionary attacks) so that there's less risk of detection. Current tools look for attacks in a thresholded manner (example: 4 attempts in 5 seconds warrants a block of the attacking IP). This new method of attack will not trigger those thresholded blocks.

More than ever, SSH key-based authentication should be used. This will prevent a successful login even when under attack via brute forcing methods.
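
Enforcing key-only logins is a small sshd_config change (standard OpenSSH options; note that the attempts in my logs come in via keyboard-interactive/PAM, which is why that's disabled too):

# /etc/ssh/sshd_config
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes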

I can already see attack detection tools being adjusted to track the user accounts being bruteforced and to ban all IPs that try those accounts within a time window (example: 4 attempts on account 'asa' in 5 seconds would warrant banning all subsequent IPs for the next day or so...unless an IP is listed within a whitelist).

Your thoughts?