Today marks one month that my “work time” is 100% devoted to Red Hat; last month I “retired” from the IT/web work for my church that I’d been doing for the last 12 or so years. It’s been an interesting month being able to spend time on things I actually want to spend time on outside of regular Red Hat work hours. =) Some of that rediscovered time has gone into fiddling around the house with things I hadn’t gotten to in quite a while, and some into computer-related things I’ve wanted to do but never had time for. One of the items on this long to-do list was a “security review” of passwords and related things (GPG keys, SSH keys, etc.).

This review became a bit more pressing for two reasons: macOS Sierra ships with a version of OpenSSH that disables DSA keys by default, and a former client of mine phoned me two weeks ago because he had been phished for $150k and was looking for help. The latter reminded me that it’s good to review the landscape every once in a while and keep up to date with changes; the former was the impetus to actually do something about it.

Because of the DSA key change, and because (to my chagrin) I was still using such a key for a few hosts, I decided to do some cleanup of my SSH keys first. To maintain compatibility with some older hosts that may not support newer key types like ECDSA, I opted to create a new 2048-bit RSA key for use:

# ssh-keygen -t rsa -b 2048 -C "user@host"
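
Once the new key exists, something like ssh-copy-id is the quickest way to get it onto the hosts that still hold the old one (the user and host below are just placeholders, not a real destination):

# ssh-copy-id -i ~/.ssh/id_rsa.pub user@oldhost.example.com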

Then the real fun began: remembering where the old keys were being used and updating them. I had a half-dozen SSH keys of various types (DSA, 1024-bit RSA, 2048-bit RSA) and decided to consolidate things and clean up my ~/.ssh/known_hosts and ~/.ssh/config files as well. I started with a clean ~/.ssh directory and backed up the old one so I could still refer to it for any hosts I’m likely to have missed.
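
As a rough sketch of the kind of entry that ends up in the fresh ~/.ssh/config (the host name here is just a placeholder), pinning IdentityFile per host, plus IdentitiesOnly so only that key gets offered, makes it obvious which key each host expects:

Host legacy.example.com
    IdentityFile ~/.ssh/id_rsa
    IdentitiesOnly yes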

As a side note, when you upgrade to macOS Sierra it overwrites /etc/ssh/sshd_config and /etc/ssh/ssh_config, so I had to re-make my changes there with respect to Kerberos authentication. The options are the same as before; basically you want to set GSSAPIAuthentication yes in both config files.
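
For reference, the relevant line is the same stock OpenSSH option in both files:

# in both /etc/ssh/ssh_config and /etc/ssh/sshd_config
GSSAPIAuthentication yes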

I also double-checked my GPG key: it’s a 3072-bit RSA key, so I’m doing alright there.
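
Checking is quick; something along these lines lists the key type and size (the exact output format depends on the GnuPG version):

# gpg --list-secret-keys --keyid-format long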

The next step in the coming days is to update passwords on sites where they haven’t been changed in a long time. I use 1Password (you could use any password manager you like) so I have distinct and unique passwords for each site, but some of them are a bit weaker than I would like. One nice feature of 1Password is the “Security Audit” section that lists passwords that are weak, duplicated, or haven’t been changed in a while (broken down into 3+ years old, 1-3 years old, and 6-12 months old). Thankfully the weak password count is fairly low, but the 3+ year old list is a bit longer.

Some people recommend changing passwords often, but I disagree. Whenever you change your password you introduce some risk (is the site currently compromised? are you? is someone listening on the wire as you make the change?), so while there are instances where it makes sense, if you’re rotating passwords every three months or so I have to wonder what the point is (and point out that you’re actually increasing risk when you do). So use a good password or passphrase, and make sure it’s different for each site, so a compromise of one doesn’t compromise you on another. Having said that, it is good to change passwords every once in a while. The security practices of the site may have changed: maybe when you first signed up they kept passwords in plaintext for some insane reason and a later software update started storing them properly, or maybe they used to store passwords as weak MD5 hashes and have since moved to something stronger like SHA-256. The point is, even though your password may be strong, you don’t know how it’s stored on the remote side, and in a lot of these cases you’re at their mercy even if you use a good password.

Of course, you can almost always tell if the remote end has bad security policies by doing the “forgotten password” request… if they send you your password, you might want to move on from that site entirely (a server should not be able to send you your password in plaintext!).

Finally, if the sites you use offer two-factor authentication (sometimes called two-step authentication) I do recommend using it. Sometimes it’s a pain, but (as in the case of this former client) using two-factor authentication would have kept someone from accessing his email account and convincing his bank to wire a lot of money out of the country. The idiocy of the bank aside, this is a pretty costly reminder that some really smart people can do some really creative things to steal a buck, and some other (equally smart or not nearly as smart, you decide) people can get suckered. How that bank handled that situation was beyond ridiculous, which sort of drives home the point that ultimately it’s up to you to be your own advocate and first line of defence. Good passwords, good security “know how”, and just plain old being smart (don’t click random links in email, people!) are really the best things you can do to save yourself a lot of grief.

And while there is a balance between security and convenience, with some things (like 1Password) you can get a measure of both. And education… education is important. We as humans are really good at learning about the things that matter to us: healthy eating, responsible money management, and so on. We need to drive home the message that security is important and that it starts with the individual and their computing habits. As security professionals, we almost need these horror stories to shock people into action, but it seems like when the horror stories come in rapid succession, we tend to filter them out. These things becoming “normal” is not OK. Humans tend to minimize how awful something is unless it happens to them. And sometimes we need bad things to happen more than once to really drive it home.

As an illustration, a neighbour had a drive die, and the last time this happened, a few years ago, I helped him get sorted out with a backup system and made it quite clear how important it was by sharing my own experiences. Over time, though, the backups happened less and less, and now those pictures, financial information, and other things are… poof… gone. He’s an example of someone being bitten, perhaps not hard enough the first time, but hopefully hard enough this time. I’ve been bitten once (hard!) and I would gladly pay the cost of multiple redundant backups rather than deal with that kind of data loss again. I suspect this time he will too.

While that isn’t really security-related, per se, I think it’s something people can appreciate through their own experiences; when it comes to the damage that can result from security breaches or information theft, I don’t want people to have those experiences! So while we can’t fully control what information we provide to a site, or whether it ends up exposed, we can certainly try to reduce the damage, and we should. I’ve spoken to homeschool groups about exactly this, and I was probably more shocked overall than they were! They were shocked at all the naughty stuff that can happen, and I was shocked that they didn’t know even the half of it.

I could keep going (maybe I should; maybe a multitude of voices shouting from the rooftops that this stuff is actually serious will make a difference), but I’ll stop here. Suffice it to say that while I think there is a great, and growing, responsibility on site owners to protect their customers and users, the sad reality is that these things are built on software written by humans, and humans make mistakes. I’ve been doing security response work for over 15 years now, and while certain things have gotten better, other things are horribly, horribly broken (don’t even get me started on IoT!) and we have a lot to do to fix them and to educate the coders of the future on how not to write these things.

And since there will always be users, we need to educate them so that they know the risks and how best to minimize them. We can try, and often succeed, in keeping users safe, but it only takes one crack in the armour, and then it becomes a matter of minimizing damage. The user who has prepared for this (because it will happen) will be much better off than the one who didn’t or, even worse, didn’t even realize it was possible.
