Old 07-20-2011, 05:56 PM   #16
sneakyimp
Senior Member
 
Registered: Dec 2004
Posts: 1,056

Original Poster
Rep: Reputation: 78

Thanks for the input. Do you have any thoughts on the list of processes? Remember that 10.04 has some 30 extra processes running versus 11.04.

As Salasi recommended, I'm assembling a detailed document for my own purposes that includes URLs, credentials, and everything, but it's not for public consumption. I've decided to go with 10.04 LTS and so far I have:
* Log in to my Amazon EC2 account and create a new key pair
* Create a security group that permits inbound traffic for port 22 from IP range 76.173.0.0/16 ONLY. AFAIK, this should be the only inbound connection permitted for this virtual machine.
* Instantiate a Large EC2 Compute Instance using this AMI, which was linked from this page at Ubuntu. The resulting Ubuntu system is configured to allow ssh login only via certificate, as user ubuntu. While sshd_config had PermitRootLogin set to 'yes', the authorized_keys file for the root user had a command directive instructing anyone logging in as root to log in as 'ubuntu' instead. PasswordAuthentication and PermitEmptyPasswords are set to no by default. RSAAuthentication and PubkeyAuthentication are set to yes by default.
* List the trusted keys using sudo apt-key finger. Verify their key fingerprints visually against data located at http://keyserver.ubuntu.com as detailed above. Also use gpg commands to import and attempt to verify these keys (and the one subkey) on a separate machine, my Ubuntu desktop, using techniques described above. I'm still not clear on why it's OK to trust these keys, because they have no chain-of-trust link to me. Not that it's surprising, but the keyserver doesn't deliver key information via HTTPS, and nothing would stop someone from creating some random key plus 40 fake email addresses and signing it all themselves. Per unspawn, I'm giving the key verification a rest in favor of progress.
* Ask Unspawn (and community at large) to inspect installed package list, running processes list, sources.list, and sshd_config.
* Create a new account for myself with no password.
* Add the public key from my personal key pair to the ~/.ssh/authorized_keys file for this new user. Test login using this new account.
* Using the ubuntu account, add this new user to sudoers with ALL=(ALL) NOPASSWD:ALL, which gives the new user sudo without requiring any password (see the sketch after this list).
* Comment out the public key in /home/ubuntu/.ssh/authorized_keys, thereby disabling login entirely for user ubuntu.
* Alter command directive in /root/.ssh/authorized_keys so that it tells users to use their own login rather than ubuntu login.
* Test that new user has sudo capability and ubuntu login is disabled.
* Edit /etc/ssh/sshd_config so that PermitRootLogin is no, PermitEmptyPasswords is no, PasswordAuthentication is no, and AllowUsers contains only the name of my newly added user.
* Restart sshd: sudo /etc/init.d/ssh restart
* Test effectiveness of AllowUsers by re-enabling login (but not sudoer) for ubuntu while excluding it from AllowUsers.
* Confirm that both root and ubuntu logins are no longer permitted, whether using the key or otherwise.
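As a reference sketch of the sudoers and sshd_config changes from this list (the username "newuser" is a placeholder for my real account name):
Code:
# /etc/sudoers entry, added via 'sudo visudo', granting passwordless sudo:
newuser ALL=(ALL) NOPASSWD:ALL

# /etc/ssh/sshd_config settings after the edits above:
PermitRootLogin no
PermitEmptyPasswords no
PasswordAuthentication no
AllowUsers newuser
Then restart sshd with sudo /etc/init.d/ssh restart as noted above.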

At the moment, I'm trying to figure out fail2ban and Tiger (which is brand new to me) and thinking that I'll be removing the universe repositories from my sources.list to see how far I get with package installs.

I'm still wondering if apt-get install will fail if it encounters a) some unsigned package or dependency, or b) a package signed by a key other than the two in my apt keyring.
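One way I figure I might probe this myself (untested; "somepackage" is a placeholder):
Code:
# show apt's authentication-related settings currently in effect:
apt-config dump | grep -i auth

# APT::Get::AllowUnauthenticated defaults to "false", so an unauthenticated
# package should at least trigger a warning/prompt on a simulated install:
sudo apt-get install --dry-run somepackage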
 
Old 07-20-2011, 06:07 PM   #17
sneakyimp
Senior Member
 
Registered: Dec 2004
Posts: 1,056

Original Poster
Rep: Reputation: 78
By the way, I just noticed that the 10.04 image has a totally different sources.list:
Code:
deb http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ lucid main universe
deb-src http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ lucid main universe
deb http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ lucid-updates main universe
deb-src http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ lucid-updates main universe
deb http://security.ubuntu.com/ubuntu lucid-security main universe
deb-src http://security.ubuntu.com/ubuntu lucid-security main universe
It lacks comments. I like the fact that they are all Ubuntu URIs, but I don't like that universe includes software maintained by the "Ubuntu community". Should I remove the universe bit and leave main only? Oh wait... I see that suhosin is in universe according to your post. Is universe safe?

Last edited by sneakyimp; 07-20-2011 at 06:09 PM.
 
Old 07-20-2011, 07:33 PM   #18
unSpawn
Moderator
 
Registered: May 2001
Posts: 29,415
Blog Entries: 55

Rep: Reputation: 3600
Quote:
Originally Posted by sneakyimp View Post
Do you have any thoughts on the list of processes? Remember that 10.04 has some 30 extra processes running versus 11.04.
Ninety-nine percent of the processes listed are kernel threads, distinguishable by a PPID that equals the PID of kthreadd, and about the userland ones we can be brief (BTW '/bin/ps axc' or '/bin/ps axfwww -eo ppid,pid,uid,args --sort=ppid' makes for more readable output): the system needs rsyslogd, "upstart-udev-br", udevd, dbus-daemon, "console-kit-dae", irqbalance and dhclient3. The number of getty instances could be reduced later on (you don't need many) and sshd you're dealing with already (I wonder though if the AppArmor profile for OpenSSH is available in /etc/apparmor.d/). atd might be optional but cron is required (reduce access by echoing allowed usernames into /etc/{cron,at}.allow). Note that while your process listing resolves user names (which I usually avoid) it shows dbus-daemon running with a UID of "102" (absence of a name in the GECOS field of /etc/passwd?).
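A quick sketch of that {cron,at}.allow tip (the username is an example):
Code:
# restrict cron and at to explicitly allowed users only:
echo myadmin | sudo tee /etc/cron.allow
echo myadmin | sudo tee /etc/at.allow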


Quote:
Originally Posted by sneakyimp View Post
As Salasi recommended, I'm assembling a detailed document for my own purposes
I forgot to mention it, but keeping an admin log would have been one of my remarks as well. I'll add: use versioning for storing configuration diffs. There's nothing like being able to restore a configuration to working order...
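A minimal sketch of versioning configuration files with plain git (assuming git is installed; the etckeeper package automates this):
Code:
cd /etc
sudo git init
sudo git add apt/sources.list ssh/sshd_config
sudo git commit -m 'baseline configuration'  # git may ask you to set user.name/user.email first
# later, see exactly what a change touched:
sudo git diff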


Quote:
Originally Posted by sneakyimp View Post
At the moment, I'm trying to figure out fail2ban and Tiger
fail2ban requires minimal configuration to run. The only difference I made is combining fail2ban with "-m recent" and a bitbucket earlier on, in the -t raw PREROUTING chain. GNU/Tiger should require a minimal amount of configuration to run as well: just read the comments in the tiger.conf file.


Quote:
Originally Posted by sneakyimp View Post
I'm still wondering if apt-get install will fail if it encounters a) some unsigned package or dependency or b) a package signed by a key other than the two in my apt-keys.
I don't know, I haven't encountered nor tested that, but my guess would be that it wouldn't fail if some packages are not signed.


Quote:
Originally Posted by sneakyimp View Post
By the way, I just noticed that the 10.04 image has a totally different sources.list (..) Is universe safe?
"The universe component is a snapshot of the free, open-source, and Linux world. It houses almost every piece of open-source software, all built from a range of public sources. Canonical does not provide a guarantee of regular security updates for software in the universe component, but will provide these where they are made available by the community. Users should understand the risk inherent in using these packages."*. In short you're at the mercy of volunteers (which basically accounts for ninety nine percent of OSS, right?) but if that's an acceptable risk to you only you can assess. Being able to find out about upstream updates independently and update SW yourself if security requires it would help. I guess that's one more reason for having a staging area as I suggested before.

Last edited by unSpawn; 07-20-2011 at 07:34 PM. Reason: //Close quote tag
 
Old 07-20-2011, 11:15 PM   #19
sneakyimp
Senior Member
 
Registered: Dec 2004
Posts: 1,056

Original Poster
Rep: Reputation: 78
Unspawn, thanks yet again for the priceless input. I wait anxiously for every morsel of info you are kind enough to give.

Before I install anything, before I run apt-get update or apt-get upgrade, I feel like I need to resolve the universe vs. main question for repositories. Given that universe is in the default sources.list, I'm guessing there may already be installed packages that come from universe. My inclination is to trust it, but following my nose is precisely what got me into trouble before.
* Can anyone recommend any command(s) or process wherein I might determine (quickly) whether my installed packages (or other packages I plan to install) are in main or universe? The only way I can imagine doing so -- and I don't much like this idea -- is to restrict my sources.list to main and try updating/installing things and see what fails. I'm on #ubuntu IRC now trying to get answers.
UPDATE: I believe this command lists my installed packages and calls apt-cache policy on all of them:
dpkg --get-selections | grep -oE '^[+\.a-z0-9\-]+\s' | xargs apt-cache policy
some additional filtering yields the main/universe/multiverse bit:
dpkg --get-selections | grep -oE '^[+\.a-z0-9\-]+\s' | xargs apt-cache policy | grep -E ' (lucid.*/[a-z]+)'
and a final pipe says to me that not one of these packages is universe:
dpkg --get-selections | grep -oE '^[+\.a-z0-9\-]+\s' | xargs apt-cache policy | grep -E ' (lucid.*/[a-z]+)' | grep universe
Does this mean that if I disable the universe repositories now, I can be sure to exclude all but main from my system?

* Can anyone recommend a way to test whether unsigned or untrusted packages cause apt-get failure and/or noisy notification?

Quote:
Originally Posted by unSpawn
(I wonder though if the AppArmor profile for OpenSSH is available in /etc/apparmor.d/).
This went over my head. I've read some of your links on apparmor but am not really that far along yet.

Quote:
Originally Posted by unSpawn
(reduce access by echoing allowed usernames into /etc/{cron,at}.allow).
Thanks for that tip.

Quote:
Originally Posted by unSpawn
Note while your process listing resolves user names (which I usually avoid) it shows dbus-daemon running with an UID of "102" (absence of name GECOS field in /etc/passwd?).
DOH. Is it a security risk that I've posted this listing? If so, perhaps a mod can remove it. Also, what might be the reason for the missing username for 102? Does this present a problem? I am unsure what this means exactly.

Quote:
Originally Posted by unSpawn
The only difference I made is combining fail2ban with "-m recent" and a bitbucket earlier on, in the -t raw PREROUTING chain.
I don't know where these options belong.

Quote:
Originally Posted by unSpawn
I don't know, haven't encountered nor tested that, but my guess would be it wouldn't if some packages are not signed.
I've been on the #ubuntu IRC channel and was more or less griefed by some more experienced Ubuntu guys who'd rather tell me that I'm wasting my time with this level of detail than help me out. I may return later.

Quote:
Originally Posted by unSpawn
In short you're at the mercy of volunteers (which basically accounts for ninety nine percent of OSS, right?) but if that's an acceptable risk to you only you can assess. Being able to find out about upstream updates independently and update SW yourself if security requires it would help. I guess that's one more reason for having a staging area as I suggested before.
I've always wondered about the anonymous nature of FOSS and, although I know a lot more about it now, this lingering question remains. Can you trust FOSS? Given that suhosin is universe and recommended by you, I'm inclined to trust it. Given that I'm using EC2 to manage my virtual machine, it should be trivial to create a staging area -- just make a private AMI from my main AMI and it should basically be an exact copy. On the other hand, I don't really know what kind of testing I should be doing in a staging area. I suppose we'll get there eventually.

Also, if my dpkg/grep/apt-cache/grep commands above do what I think they do, we can assume that there are no universe packages used in this basic machine image. Does that sound like a reasonable conclusion?
 
Old 07-21-2011, 12:40 AM   #20
sneakyimp
Senior Member
 
Registered: Dec 2004
Posts: 1,056

Original Poster
Rep: Reputation: 78
I figure I might as well state a few goals here to see if it looks good for me to proceed.

Tomorrow, I hope to:
* run apt-get update and apt-get upgrade to bring the machine up to date
* install Tiger, fail2ban, and other security and diagnostic tools
* set up iptables or other rules to lock the machine down properly.
* start setting up the web stack: PHP 5.x, MySQL 5.x.x, and any required modules (curl, suhosin, possibly others)
* determine DNS situation. Please recall that we are using a LOT of subdomains. Hopefully we won't have to use BIND but this is a big question mark.
* Create cert signing request for a new security certificate for www.mydomain.com

To give an idea of the PHP stuff I might need, I ran this command on the old server:
Code:
[adminuser@nameserver ~]$ php -me
[PHP Modules]
bz2
calendar
ChartDirector PHP API
ctype
curl
date
dom
exif
ftp
gd
gettext
gmp
hash
iconv
imap
json
ldap
libxml
mbstring
mcrypt
mime_magic
mysql
mysqli
odbc
openssl
pcntl
pcre
PDO
pdo_mysql
PDO_ODBC
pdo_sqlite
posix
pspell
Reflection
session
shmop
SimpleXML
sockets
SPL
standard
sysvmsg
sysvsem
sysvshm
tokenizer
wddx
xml
xmlreader
xmlrpc
xmlwriter
xsl
zlib

[Zend Modules]
I suspect we'll need curl, ChartDirector, gd, mysql, and openssl but that's probably about it. Any commentary on these and their relative safety/risk would be much appreciated.
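If I understand Ubuntu's packaging correctly, most of those modules come from separate php5-* packages, so I'd probably run something like this (the package names are my guesses from apt-cache search, not yet verified; ChartDirector is commercial and not in the repos):
Code:
sudo apt-get install php5-curl php5-gd php5-mysql
# verify a module actually loaded afterwards:
php -m | grep -i curl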
 
Old 07-21-2011, 01:54 PM   #21
sneakyimp
Senior Member
 
Registered: Dec 2004
Posts: 1,056

Original Poster
Rep: Reputation: 78
Good day.

So I bit the bullet and decided to proceed with apt-get update and apt-get upgrade. Only a couple of things were updated:
Code:
sneakyimp@machine:~$ sudo apt-get upgrade
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages will be upgraded:
  libparted0debian1 logrotate parted
3 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 534kB of archives.
After this operation, 0B of additional disk space will be used.
Do you want to continue [Y/n]? y
I seriously doubt I'll need parted or any other partition management tool in a cloud environment, but a logrotate update sounds nice. That there are so few updates needed makes me think that Ubuntu's AMIs are probably updated frequently.

As for the question of whether to include universe packages, it seems quite likely that I will need to. I did an apt-cache search for tiger:
Code:
apt-cache search tiger
which, along with a lot of other junk, yielded these:
Code:
aide - Advanced Intrusion Detection Environment - static binary
aide-common - Advanced Intrusion Detection Environment - Common files
aide-dynamic - Advanced Intrusion Detection Environment - dynamic binary
aide-xen - Advanced Intrusion Detection Environment - static binary for XEN
tiger - Report system security vulnerabilities
tiger-otheros - Scripts to run Tiger in other operating systems
Does that plain old "tiger" package look like what I want?

apt-cache policy tells me this is a universe package:
Code:
sneakyimp@machine:~$ apt-cache policy tiger
tiger:
  Installed: (none)
  Candidate: 1:3.2.2-11ubuntu1
  Version table:
     1:3.2.2-11ubuntu1 0
        500 http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ lucid/universe Packages
fail2ban is also universe:
Code:
jason@ip-10-100-237-252:~$ apt-cache policy fail2ban
fail2ban:
  Installed: (none)
  Candidate: 0.8.4-1ubuntu1
  Version table:
     0.8.4-1ubuntu1 0
        500 http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ lucid/universe Packages
 
Old 07-21-2011, 04:22 PM   #22
unSpawn
Moderator
 
Registered: May 2001
Posts: 29,415
Blog Entries: 55

Rep: Reputation: 3600
Quote:
Originally Posted by sneakyimp View Post
Can anyone recommend a way to test whether unsigned or untrusted packages cause apt-get failure and/or noisy notification?
As I said I'm no Ubuntu guru but /etc/dpkg/dpkg.cfg contains this explanation:
Code:
# Do not enable debsig-verify by default; since the distribution is not using
# embedded signatures, debsig-verify would reject all packages.
no-debsig

Quote:
Originally Posted by sneakyimp View Post
DOH. Is it a security risk that I've posted this listing? If so, perhaps a mod can remove it. Also, what might be the reason for the missing username for 102? Does this present a problem? I am unsure what this means exactly.
No security risk, just an empty fifth field: compare 'getent passwd 102' with 'getent passwd 0'.


Quote:
Originally Posted by sneakyimp View Post
I don't know where these options belong.
If you go that route (later on): firewall configuration.


Quote:
Originally Posted by sneakyimp View Post
I've been on the #ubuntu IRC channel and was more or less griefed by some more experienced Ubuntu guys who'd rather tell me that I'm wasting my time with this level of detail than help me out. I may return later.
Sensible choice.


Quote:
Originally Posted by sneakyimp View Post
I've always wondered about the anonymous nature of FOSS and, although I know a lot more about it now, this lingering question remains. Can you trust FOSS? Given that Suhosin is universe and recommended by you, I'm inclined to trust it.
"The anonymous nature of FOSS" was not what I'm talking about: wrt Universe I was talking about reliance on the expertise and time of volunteers. You could compare Ubuntu Universe somewhat to the situation of Centos, RPMForge and EPEL. (With the exceptions that Dag has been around for ages, Centos as a whole moves at a different pace, with a different package selection and with wholly different package and repo verification options). And even Ubuntu documentation itself tells you to just enable Universe and Multiverse...


Quote:
Originally Posted by sneakyimp View Post
(..) it should be trivial to create a staging area (..) I don't really know what kind of testing I should be doing in a staging area.
Basically any (re)configuration or software deployment that may hold increased risks: installations leaving vulnerable files around, firewall errors that leave services vulnerable, service disruptions caused by faulty configuration, testing service fail-over, etc, etc. Your choice.


Quote:
Originally Posted by sneakyimp View Post
Also, if my dpkg/grep/apt-cache/grep commands above do what I think they do, we can assume that there are no universe packages used in this basic machine image. Does that sound like a reasonable conclusion?
Given the above I'd just live with Universe.


Quote:
Originally Posted by sneakyimp View Post
* run apt-get update and apt-get upgrade to bring the machine up to date
* install Tiger, fail2ban, and other security and diagnostic tools
* set up iptables or other rules to lock the machine down properly.
While you can definitely install your web stack, please ensure OS configuration, hardening, and testing are sufficiently complete before commencing.


Quote:
Originally Posted by sneakyimp View Post
* determine DNS situation. Please recall that we are using a LOT of subdomains. Hopefully we won't have to use BIND but this is a big question mark.
Oh, I read a post recently about registrars, maybe that holds something for you. From another angle, if you decide to go for your own DNS anyway see http://code.google.com/speed/public-.../security.html plus other BIND9 references like http://www.bind9.net/.



Quote:
Originally Posted by sneakyimp View Post
Any commentary on these and their relative safety/risk would be much appreciated.
You can query the USN, the Ubuntu CVE list and the MITRE CVE. I just picked two from the list (you check the rest): shmop and sysvshm. shmop yields CVE-2007-1376 and CVE-2011-1092, not in the Ubuntu CVE list AFAIK, and sysvshm yields CVE-2010-1861, which AFAIK isn't either. The fix for CVE-2010-1861 is listed in the resource itself; an exploit for CVE-2011-1092 is available and the fix for it is PHP 5.3.6. Note these are system-level vulnerabilities, and while fixing them is mandatory, they are no comparison to the true cornucopia of errors like products that don't clean up after themselves, don't adhere to coding standards, or outright tell you that you must disable any protections for them to work.


Quote:
Originally Posted by sneakyimp View Post
I seriously doubt I'll need parted or any other partition management tool in a cloud environment, but a logrotate update sounds nice. That there are so few updates needed makes me think that Ubuntu's AMIs are probably updated frequently.
Logrotation is mandatory when you'll be dealing with large log volumes. I don't know about AMI update frequencies. Interesting question.


Quote:
Originally Posted by sneakyimp View Post
Is that plain old "tiger" package looks like what I want?
I would like to focus as much on security aspects as possible. Please make it a habit to query package information yourself and present it with your question, as that's way more efficient. ("tiger-otheros" contains files for non-Linux systems, BTW.) Do realize that auditing tools like Logwatch, SEC, OSSEC, Chkrootkit, Rootkit Hunter, Aide, Samhain, hell, even tripwire, are all after-the-fact tools. That does not mean they're bad or something; what I mean is the emphasis should be on prevention: hardening. As for Aide vs Samhain: Aide is passive, as you must drive it manually (or through an at or cron job), while Samhain runs as a daemon. Added value is that Samhain can operate in a client-server fashion, meaning you can have a trusted, impenetrable host holding databases and configuration files and serving them to clients.

HTH

Last edited by unSpawn; 05-30-2012 at 05:43 PM. Reason: //Close URI tag
 
Old 07-21-2011, 05:24 PM   #23
sneakyimp
Senior Member
 
Registered: Dec 2004
Posts: 1,056

Original Poster
Rep: Reputation: 78
Quote:
Originally Posted by unSpawn View Post
As I said I'm no Ubuntu guru but /etc/dpkg/dpkg.cfg contains this explanation:
Code:
# Do not enable debsig-verify by default; since the distribution is not using
# embedded signatures, debsig-verify would reject all packages.
no-debsig
WHAT?! Where's the my-head-is-exploding emoticon? For a few days now, I've been living this pleasant lie thinking that all of my packages were signed with two semi-trustworthy keys (but to which I have no chain-of-trust relation), and now I learn that not a single one gets its signature verified? Good grief. I *must* know more about this. I'm guessing that other distros have much more serious package-signing practices? Surely there's a comparison out there on this matter.
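After some reading, it seems the signing happens at the repository level (SecureApt) rather than per .deb: apt verifies a signed Release file whose checksums cover the Packages indexes, which in turn carry a checksum for every individual package. A sketch of checking that chain by hand (file names assume my lucid mirror, and that the archive key is in apt's keyring):
Code:
cd /var/lib/apt/lists
# Release and its detached signature are fetched by apt-get update:
gpg --no-default-keyring --keyring /etc/apt/trusted.gpg \
    --verify us-east-1.ec2.archive.ubuntu.com_ubuntu_dists_lucid_Release.gpg \
             us-east-1.ec2.archive.ubuntu.com_ubuntu_dists_lucid_Release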

Quote:
Originally Posted by unSpawn View Post
No security risk, just an empty fifth field: compare 'getent passwd 102' with 'getent passwd 0'.
Should I manually edit the passwd file and give the messagebus a name?
Code:
messagebus:x:102:107::/var/run/dbus:/bin/false
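If I do give it a name, I presume usermod is safer than editing /etc/passwd by hand (the comment text is just my own choice):
Code:
sudo usermod -c 'D-Bus message daemon' messagebus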
Quote:
Originally Posted by unSpawn View Post
And even Ubuntu documentation itself tells you to just enable Universe and Multiverse...
I haven't seen that directive in the documentation (which seems extensive, largely redundant, and largely ad-hoc IMHO). A reminder here: I'm not using any of the GUI features here and the sources.list file on this server looks pretty clean to me, including only the main and universe packages:
Code:
deb http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ lucid main universe
deb-src http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ lucid main universe
deb http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ lucid-updates main universe
deb-src http://us-east-1.ec2.archive.ubuntu.com/ubuntu/ lucid-updates main universe
deb http://security.ubuntu.com/ubuntu lucid-security main universe
deb-src http://security.ubuntu.com/ubuntu lucid-security main universe
I believe the multiverse stuff is probably all the buggy GUI-related stuff for the desktop version.


Quote:
Originally Posted by unSpawn View Post
Given the above I'd just live with Universe.

While you can definitely install your web stack please ensure OS configuration, hardening and testing is sufficiently complete before commencing.

...plus other BIND9 references like http://www.bind9.net/.

...You can query the USN...
I will definitely be living with universe. I hope to finish hardening and testing ASAP. Hope you'll take a peek at my to-do list below. DNS I will probably handle with Amazon Route 53. Thanks for the tips on USN/CVE.


Quote:
Originally Posted by unSpawn View Post
I would like to focus as much on security aspects as possible. Please make it a habit to query package information yourself and present it with your question as that's way more efficient. ("tiger-otheros" contains files for non-Linux systems BTW.) Do realize auditing tools like Logwatch, SEC, OSSEC, Chkrootkit, Rootkit Hunter, Aide, Samhain, hell even tripwire are all after-the-fact tools. That does not mean they're bad or something, what I mean is the emphasis should be on prevention: hardening. As for Aide vs Samhain: Aide is passive as you must drive it manually (or through an at or cron job) while Samhain runs as daemon. Added value is that Samhain can operate in a client-server fashion, meaning you can have a trusted, impenetrable host holding databases and configuration files serving them to clients.

HTH
YES YES YES, it helps, and thank you so very much. I hope you might take a moment to help me prioritize the to-do items here. I expect to perform them roughly in this order:

* establish IP table rules
* install fail2ban, configure it
* install tiger, configure it
* install Apache and PHP and harden as directed by the Securing Debian Manual
* set up Amazon RDS to host MySQL database. Limit DB access to either my security group or this machine specifically.
* Use Amazon Route 53 to handle DNS
* Set up Amazon SES to handle outgoing mail
* Incoming mail?? Google apps? ??? Need to migrate existing email and accounts to a new system.
* Antivirus? ClamAV? Email and image upload are the only ways that files are introduced via users.
* Set up automated apt-get update/upgrade as described here (see the sketch after this list). I'm not really sure what the tradeoffs are. I understand that unattended updates can introduce security issues. On the other hand, no updates also introduce security problems. What would Bruce Schneier do?
* Create AMI from the hardened, configured machine for backup and for the purpose of creating a staging area as needed.
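For the automated-update item above, this is my sketch of what the configuration would look like on lucid (file names and syntax based on the unattended-upgrades package; not yet tested by me):
Code:
# /etc/apt/apt.conf.d/10periodic
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

# /etc/apt/apt.conf.d/50unattended-upgrades -- security pocket only:
Unattended-Upgrade::Allowed-Origins {
        "Ubuntu lucid-security";
};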
 
Old 07-21-2011, 06:43 PM   #24
sneakyimp
Senior Member
 
Registered: Dec 2004
Posts: 1,056

Original Poster
Rep: Reputation: 78
Per unSpawn's email previously, I would like to:
Quote:
- allow ALL traffic in and outbound between your management IP
range (return traffic plus logins) and the host AND
- only allow ESTABLISHED,RELATED inbound traffic from other sources
(only return traffic).
The basic idea is that, at this point, I want to allow SSH traffic only between my management IP (which, sadly, comes via DHCP through Time Warner Cable and may change) and my server. I've been reading this article, which describes some iptables commands but doesn't seem to show an example that denies SSH traffic from every IP address except a block of IP addresses. I don't want to lock myself out -- that would be very bad.

I've also tried reading this ponderous document and reading the man pages. I could certainly use some tips here. In particular, these are my goals in order:
* not to exclude myself from SSH access to the server, even if my IP address changes, which it will
* block unwelcome visitors (meaning pretty much the entire world) from ever speaking to my machine via ssh
* allow incoming requests for web traffic on port 80 and 443
* allow this machine to make mysql queries to another machine
* allow this machine to make secure (and possibly non-secure) curl requests to another web server
* allow this machine to send mail through another machine
* close every other port down

Any thoughts or input would be most appreciated.
 
Old 07-21-2011, 07:12 PM   #25
unSpawn
Moderator
 
Registered: May 2001
Posts: 29,415
Blog Entries: 55

Rep: Reputation: 3600
Quote:
Originally Posted by sneakyimp View Post
I'm guessing that other distros have much more serious package-signing practices?
I'm sorry to conclude that only one of the three oldest surviving distributions got that aspect right.


Quote:
Originally Posted by sneakyimp View Post
Should I manually edit the passwd file and give the messagebus a name?
You could but it's no necessity.


Quote:
Originally Posted by sneakyimp View Post
DNS I will probably handle with Amazon Route 53.
One worry less...


Quote:
Originally Posted by sneakyimp View Post
* install tiger, configure it
+ Configure tigerrc and set Tiger_Check_SENDMAIL=N, Tiger_Check_ROOTKIT=N (run Chkrootkit separately), Tiger_Collect_CRACK=N, Tiger_SSH_Protocol='2', and populate Tiger_Running_Procs= with full paths, not plain names (use something like 'sudo lsof -Pwln | awk '/REG.*bin/ {print $NF}' | sort -u' for that). Run GNU/Tiger, eyeball the report, address issues, rinse, repeat. Be sure to run it with "-e" or "-E" and read and research the explanations. (Posted before in this forum.)
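As a tigerrc excerpt (the Tiger_Running_Procs paths are just examples; use your own lsof output):
Code:
# /etc/tiger/tigerrc settings matching the advice above:
Tiger_Check_SENDMAIL=N
Tiger_Check_ROOTKIT=N
Tiger_Collect_CRACK=N
Tiger_SSH_Protocol='2'
Tiger_Running_Procs='/usr/sbin/sshd /usr/sbin/rsyslogd /usr/sbin/cron'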


Quote:
Originally Posted by sneakyimp View Post
* establish IP table rules
+ Ubuntu by default comes with 'ufw', so if you want to remain compatible with all Ubuntu documentation (and support) you should probably use it. Setting rules isn't that hard; for instance, 'sudo ufw allow proto tcp from 76.173.0.0/16 to any port 22' would result in only your 76.173.0.0/16 range (with a default DROP policy) being able to SSH in. (My problem is I read 'sudo iptables-save > /tmp/iptables-save.log' better, and ufw throws in a gazillion chains not in use.)


Quote:
Originally Posted by sneakyimp View Post
* install fail2ban, configure it
+ In fail2ban.conf, check that loglevel = 3 and that logtarget is set. In jail.conf, set ignoreip to your IP range, and set bantime and findtime. I keep backend on auto. Set ssh-iptables to enabled = true, verify the log file exists and is used by OpenSSH or rsyslogd, and check that maxretry is the default 3 or less. If you run SSHd on a separate port as well (I keep a dormant, restricted Xinetd entry) you just copy the ssh-iptables section and change the name and port= line.
+ Wrt iptables "-m recent": what I do is move inbound SSH traffic from the filter table INPUT chain to a separate chain (default INPUT chain DROP policy) in which fail2ban creates its fail2ban-[name] rules; then I allow certain ranges and finally trap offenders with "-m recent --name SSH --set" and drop the traffic. In the raw table PREROUTING chain there's a "-m tcp -p tcp --dport 22 -m recent --name SSH --update --seconds n --hitcount n -j DROP". The added value is that now that /proc/net/ipt_recent/SSH exists you can use it to manage blocks (and this goes for every service you have a bucket for) and remove or add IP addresses without having to muck with iptables rules (in contrast with tools that dump just about everything in the filter table INPUT chain, which, given the way chains are traversed, is not good for performance, let alone easy to manage...).
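Roughly, as rules (run as root; the chain name, seconds and hitcount are examples, and your fail2ban rules would live in the same chain):
Code:
# dedicated chain for inbound SSH: allow known ranges, trap the rest:
iptables -N ssh-in
iptables -A INPUT -p tcp --dport 22 -j ssh-in
iptables -A ssh-in -s 76.173.0.0/16 -j ACCEPT
iptables -A ssh-in -m recent --name SSH --set -j DROP
# raw table: drop repeat offenders early:
iptables -t raw -A PREROUTING -p tcp -m tcp --dport 22 \
  -m recent --name SSH --update --seconds 300 --hitcount 4 -j DROP
# manage the block list without touching the rules:
cat /proc/net/ipt_recent/SSH
echo clear > /proc/net/ipt_recent/SSH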


Quote:
Originally Posted by sneakyimp View Post
* install Apache and PHP and harden as directed by the Securing Debian Manual
* set up Amazon RDS to host MySQL database. Limit DB access to either my security group or this machine specifically.
+ Also please see previous OWASP links.
/* Note to self: add MySQL security best practices. */


Quote:
Originally Posted by sneakyimp View Post
* Incoming mail?? Google apps? ??? Need to migrate existing email and accounts to a new system.
! No advice here except remember you had a lot of spam going on.


Quote:
Originally Posted by sneakyimp View Post
* Antivirus? ClamAV? Email and image upload are the only ways that files are introduced via users.
Probably good, yes.
+ AFAIK you did use FTP as well, right?


Quote:
Originally Posted by sneakyimp View Post
* Set up automated apt-get update/upgrade as described here. I'm not really sure what tradeoffs there are. I understand that unattended updates can introduce security issues. On the other hand, no updates also introduce security problems. What would Bruce Schneier do?
! Has off-site backups and uses a staging machine so he doesn't have to worry?
 
Old 07-21-2011, 07:53 PM   #26
sneakyimp
Senior Member
 
Registered: Dec 2004
Posts: 1,056

Original Poster
Rep: Reputation: 78
Amazing info, thank you so much. I'm kind of choking on the information overload at the moment. Both ufw and iptables have pretty epic man pages. I think I'll go with iptables directly because it seems more precise. I understand a few things (and please do not hesitate to correct or add detail):
* if any rule results in ACCEPT, this overrules any DROP, regardless of rule sequence (do I have that right?)
* be careful not to DROP loopback; i.e., make sure your first rule is to permit loopback/localhost access
* For this hardening project, I'm really just interested in the INPUT stage and not so much FORWARD or OUTPUT (or does ESTABLISHED,RELATED somehow affect OUTPUT?)
* I could very well lock myself out permanently from this server and, because it's an Amazon cloud instance, nobody can just walk up and plug a keyboard in to correct my mistake. I'm really sweating the subnet thing and expect I should build in access for some other IP addresses or subnets just in case. Suggestions welcome here for avoiding lockout.

Quote:
+ Wrt to iptables "-m recent" what I do is move inbound SSH traffic from the filter table INPUT to a separate chain (default INPUT chain DROP policy) in which it creates its fail2ban-[name] rules, then I allow certain ranges and finally trap offenders with "-m recent --name SSH --set" and then drop traffic. In the raw table PREROUTING chain there's a "-m tcp -p tcp ---dport 22 -m recent --name SSH --update --seconds n --hitcount n -j DROP". The added value is that now /proc/net/ipt_recent/SSH exists you can use that to manage blocks (and this goes for every service you have a bucket for) and remove or add IP addresses to it without having to muck with iptables rules (in contrast with tools that dump just about everything in the filter table INPUT chain which, given the way filters are traversed is not good for performance let alone be easy to manage...).
This may make more sense when I've managed to get my head around iptables more, but right now it's Greek to me. The -m options and the idea of 'tables' still haven't sunk in. I'm reading as fast as I can.
 
Old 07-21-2011, 08:33 PM   #27
sneakyimp
Senior Member
 
Registered: Dec 2004
Posts: 1,056

Original Poster
Rep: Reputation: 78
OK, my belief that an ACCEPT rule can occur anywhere no longer seems correct. I think I misread the tutorial here, which uses the -j flag for each rule. ORDER IS IMPORTANT.

Based on that tutorial, I have concocted these iptables rules:
Code:
# allow established sessions to receive traffic
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# use this one if the previous one does not work
#iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# allow SSH access from my current specific subnet, 76.173.*.*
# DANGEROUS if ip changes or Time Warner switches to IPV6 -- could result in lockout
# NOTE: could also change this to a non-standard port.
iptables -A INPUT -s 76.173.0.0/16 -p tcp --dport 22 -j ACCEPT

# allow all incoming web traffic on port 80 (http)
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
# and on port 443 (https)
iptables -A INPUT -p tcp --dport 443 -j ACCEPT

# FIRST RULE - note that this is an insert rather than append
# it keeps loopback enabled for every type of traffic:
iptables -I INPUT 1 -i lo -j ACCEPT


# DROP EVERYTHING ELSE
iptables -A INPUT -j DROP
Don't forget iptables-save and/or iptables-restore
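Something like this, I believe (the file path is my own choice):
Code:
# persist the running rules:
sudo sh -c 'iptables-save > /etc/iptables.rules'
# reload them later (e.g. from /etc/rc.local or an if-up script):
sudo sh -c 'iptables-restore < /etc/iptables.rules'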

Comments welcome. I really don't want to lock myself out of my server. I'll be working through the connection to fail2ban next.

Last edited by sneakyimp; 07-21-2011 at 08:39 PM. Reason: forgot save/restore commands.
 
Old 07-21-2011, 08:34 PM   #28
unSpawn
Moderator
 
Registered: May 2001
Posts: 29,415
Blog Entries: 55

Rep: Reputation: 3600
Quote:
Originally Posted by sneakyimp View Post
I think I'll go with iptables directly because it seems more precise.
Your choice.


Quote:
Originally Posted by sneakyimp View Post
* if any rule results in ACCEPT, this overrules any DROP, regardless of rule sequence (do I have that right?)
In each table the main chains like INPUT, OUTPUT, FORWARD have a default policy. If the filter table INPUT chain has a policy of DROP then you need to explicitly allow inbound traffic. Rules in chains are parsed on a first match basis.


Quote:
Originally Posted by sneakyimp View Post
* be careful not to DROP loopback; i.e., make sure your first rule is to permit loopback/localhost access
Yes.


Quote:
Originally Posted by sneakyimp View Post
* For this hardening project, I'm really just interested in the INPUT stage and not so much FORWARD or OUTPUT (or does ESTABLISHED,RELATED somehow affect OUTPUT) ?
Default policy for output AFAIK is ACCEPT so best leave it at that.


Quote:
Originally Posted by sneakyimp View Post
Suggestions welcome here for avoiding lockout.
Last time I'll mention it: off-site backups, staging server (OK, and an 'at' job that restores a previous rule set in say five minutes ;-p).
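A sketch of that safety net (the delay and file path are examples):
Code:
# snapshot known-good rules, schedule an automatic rollback, then experiment:
sudo sh -c 'iptables-save > /root/iptables.known-good'
echo '/sbin/iptables-restore < /root/iptables.known-good' | sudo at now + 5 minutes
# apply the new rule set; if you can still log in, cancel the pending restore:
sudo atq          # note the job number
sudo atrm 1       # replace 1 with that job number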
 
Old 07-21-2011, 08:42 PM   #29
sneakyimp
Senior Member
 
Registered: Dec 2004
Posts: 1,056

Original Poster
Rep: Reputation: 78
Quote:
Originally Posted by unSpawn View Post
Last time I'll mention it: off-site backups, staging server (OK, and an 'at' job that restores a previous rule set in say five minutes ;-p).
I'm looking into how I might save the current config as an AMI so I can just instantiate a new one should I lock myself out. I'm accustomed to creating off-site backups of database files and source code, but, aside from cloud situations, I am not at all familiar with a way to store a dehydrated/static archive of a running machine so that it might spring whole into life.

*AT*...brilliant.
 
Old 07-21-2011, 08:52 PM   #30
unSpawn
Moderator
 
Registered: May 2001
Posts: 29,415
Blog Entries: 55

Rep: Reputation: 3600
Quote:
Originally Posted by sneakyimp View Post
Comments welcome.
Best start from a basic rule set and modify that according to your needs. Please do not expose services other than SSH before you have finished the hardening stage. You can load a rule set locally, test it, and then post or attach "/tmp/iptables.rules" (the output of 'sudo iptables-save > /tmp/iptables.rules'), as your current rule set is incomplete. Something like this
Code:
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Loopback
-A INPUT -i lo -j ACCEPT
# Established connections and related to FTP:
-A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT 
# Any ICMP only from 76.173.0.0/16 (should restrict this to certain types):
-A INPUT -i eth0 -p icmp -m icmp -s 76.173.0.0/16 --icmp-type any -j ACCEPT 
# Inbound SSH connections only from 76.173.0.0/16:
-A INPUT -i eth0 -p tcp -m tcp -s 76.173.0.0/16 --dport 22 -m state --state NEW -j ACCEPT 
# FORWARD doesn't need rules
# OUTPUT doesn't need rules
COMMIT
can be loaded locally for testing with 'iptables-restore < /path/to/savedfile' after you have cleaned out the current rule set.
 