NFS HOWTO: Security
Security and NFS
This list of security tips and explanations will not make your site completely secure. NOTHING will make your site completely secure. Reading this section may help you get an idea of the security problems with NFS. This is not a comprehensive guide and it will always be undergoing changes. If you have any tips or hints to give us, please send them to the HOWTO maintainer.
If you are on a network with no access to the outside world (not even a modem) and you trust all the internal machines and all your users, then this section will be of no use to you. However, it is our belief that relatively few networks are in this situation, so we would suggest that anyone setting up NFS read this section thoroughly.
With NFS, there are two steps required for a client to gain access to a file contained in a remote directory on the server. The first step is mount access. Mount access is achieved by the client machine attempting to attach to the server. The security for this is provided by the /etc/exports file. This file lists the names or IP addresses of machines that are allowed to access a share point. If the client's IP address matches one of the entries in the access list then it will be allowed to mount. This is not terribly secure. If someone is capable of spoofing or taking over a trusted address then they can access your mount points. To give a real-world example of this type of "authentication": it is equivalent to someone introducing themselves to you and you believing they are who they claim to be because they are wearing a sticker that says "Hello, My Name is ...." Once the machine has mounted a volume, its operating system will have access to all files on the volume (with the possible exception of those owned by root; see below), and write access to those files as well if the volume was exported with the rw option.
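As a minimal sketch of such an access list (the host names and export paths here are hypothetical), /etc/exports might contain:

/home   trusted1.example.com(rw) trusted2.example.com(ro)
/data   192.168.0.0/255.255.255.0(ro)

Any machine not matching an entry is refused at mount time; any machine that can pose as trusted1.example.com gets read-write access to /home.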
The second step is file access. This is a function of normal file system access controls on the client and not a specialized function of NFS. Once the drive is mounted the user and group permissions on the files determine access control.
An example: bob on the server maps to the user ID 9999. Bob makes a file on the server that is accessible only by that user (the equivalent of typing chmod 600 filename). A client is allowed to mount the drive where the file is stored. On the client, mary maps to user ID 9999. This means that the client user mary can access bob's file, which is marked as accessible only by him. It gets worse: if someone has become superuser on the client machine, they can su - username and become any user, and NFS will be none the wiser.
It's not all terrible. There are a few measures you can take on the server to offset the danger posed by the clients. We will cover those shortly.
If you don't think the security measures apply to you, you're probably wrong. Section 6.1 covers securing the portmapper, and Sections 6.2 and 6.3 cover server and client security respectively. Finally, Section 6.4 briefly discusses proper firewalling for your NFS server.
Finally, it is critical that all of your NFS daemons and client programs are current. If you think that a flaw was announced too recently to be a problem for you, then you've probably already been compromised.
A good way to keep up to date on security alerts is to subscribe to the bugtraq mailing list. You can read up on how to subscribe, and find various other information about bugtraq, here: http://www.securityfocus.com/forums/bugtraq/faq.html
Additionally, searching for NFS with securityfocus.com's search engine will show you all security reports pertaining to NFS.
You should also regularly check CERT advisories. See the CERT web page at www.cert.org.
The Portmapper
The portmapper keeps a list of which services are running on which ports. Connecting machines consult this list to find out which port to contact in order to reach a given service.
The portmapper is not in as bad a shape as it was a few years ago, but it is still a point of worry for many system administrators. The portmapper, like NFS and NIS, should not really accept connections from outside a trusted local area network. If you have to expose them to the outside world, be careful and keep up diligent monitoring of those systems.
Not all Linux distributions were created equal. Some seemingly up-to-date distributions do not include a securable portmapper. The easy way to check whether your portmapper is good or not is to run strings(1) and see if it reads the relevant files, /etc/hosts.deny and /etc/hosts.allow. Assuming your portmapper is /sbin/portmap, you can check it with the following command; on a securable machine the output would look something like this:
strings /sbin/portmap | grep hosts

/etc/hosts.allow
/etc/hosts.deny
@(#) hosts_ctl.c 1.4 94/12/28 17:42:27
@(#) hosts_access.c 1.21 97/02/12 02:13:22
First we edit /etc/hosts.deny. It should contain the line
portmap: ALL
which will deny access to everyone. While it is closed, run:
rpcinfo -p
just to check that your portmapper really reads and obeys this file. Rpcinfo should give no output, or possibly an error message. The files /etc/hosts.allow and /etc/hosts.deny take effect immediately after you save them. No daemon needs to be restarted.
Closing the portmapper for everyone is a bit drastic, so we open it again by editing /etc/hosts.allow. But first we need to figure out what to put in it. It should basically list all machines that should have access to your portmapper. On a run-of-the-mill Linux system there are very few machines that need any access for any reason. The portmapper administers nfsd, mountd, ypbind/ypserv, rquotad, lockd (which shows up as nlockmgr), statd (which shows up as status) and 'r' services like ruptime and rusers. Of these, only nfsd, mountd, ypbind/ypserv and perhaps rquotad, lockd and statd are of any consequence. All machines that need to access services on your machine should be allowed to do so. Let's say that your machine's address is 192.168.0.254, that it lives on the subnet 192.168.0.0, and that all machines on the subnet should have access to it (for an overview of those terms see the Networking-Overview-HOWTO). Then we write:
portmap: 192.168.0.0/255.255.255.0
in /etc/hosts.allow. If you are not sure what your network or netmask is, you can use the ifconfig command to determine the netmask and the netstat command to determine the network. For example, for the device eth0 on the above machine, ifconfig should show:
...
eth0      Link encap:Ethernet  HWaddr 00:60:8C:96:D5:56
          inet addr:192.168.0.254  Bcast:192.168.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:360315 errors:0 dropped:0 overruns:0
          TX packets:179274 errors:0 dropped:0 overruns:0
          Interrupt:10 Base address:0x320
...
and netstat -rn should show:
Kernel routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
...
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0   174412 eth0
...
(The network address is in the first column).
The /etc/hosts.deny and /etc/hosts.allow files are described in the manual pages of the same names.
IMPORTANT: Do not put anything but IP NUMBERS in the portmap lines of these files. Host name lookups can indirectly cause portmap activity which will trigger host name lookups which can indirectly cause portmap activity which will trigger... Versions 0.2.0 and higher of the nfs-utils package also use the hosts.allow and hosts.deny files, so you should put in entries for lockd, statd, mountd, and rquotad in these files too. For a complete example, see Section 3.2.2.
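For instance, with the subnet used above, those additional entries in /etc/hosts.allow might look like this (a sketch patterned on the portmap line; adjust the network and netmask to your own):

lockd: 192.168.0.0/255.255.255.0
rquotad: 192.168.0.0/255.255.255.0
mountd: 192.168.0.0/255.255.255.0
statd: 192.168.0.0/255.255.255.0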
The above measures should make your server tighter. The only remaining problem is if someone gains administrative access to one of your trusted client machines and is able to send bogus NFS requests. The next section deals with safeguards against this problem.
Server security: nfsd and mountd
On the server we can decide that we don't want to trust any requests made as root on the client. We can do that by using the root_squash option in /etc/exports:
# cat /etc/exports
/home  slave1(rw,root_squash)
This is, in fact, the default. It should always be turned on unless you have a very good reason to turn it off. To turn it off use the no_root_squash option.
Now, if a user with UID 0 (i.e., root's user ID number) on the client attempts to access (read, write, delete) the file system, the server substitutes the UID of the server's 'nobody' account. This means that the root user on the client can't access or change files that only root on the server can access or change. That's good, and you should probably use root_squash on all the file systems you export. "But the root user on the client can still use su to become any other user and access and change that user's files!" say you. To which the answer is: yes, and that's the way it is, and has to be, with Unix and NFS. This has one important implication: all important binaries and files should be owned by root, and not by bin or some other non-root account, since the only account the client's root user cannot access is the server's root account. The exports(5) man page lists several other squash options, so that you can decide to mistrust whomever you (don't) like on the clients.
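As a sketch of those options: all_squash maps every client user, not just root, to the anonymous account, and anonuid/anongid choose which account that is (the ID values below are placeholders, not a recommendation):

/home  slave1(rw,all_squash,anonuid=65534,anongid=65534)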
Ports below 1024 are reserved for root's use (and are therefore sometimes referred to as "secure ports"); a non-root user cannot bind to them. Adding the secure option to an /etc/exports entry means that the server will only listen to requests coming from these ports on the client, so that a malicious non-root user on the client cannot come along and open up a spoofed NFS dialogue on a non-reserved port. This option is set by default.
Client Security
The nosuid mount option
On the client we can decide that we don't want to trust the server too much, in a couple of ways, with options to mount. For example, we can forbid suid programs to work off the NFS file system with the nosuid option. Some Unix programs, such as passwd, are called "suid" programs: they set the ID of the person running them to whomever is the owner of the file. If a file is owned by root and is suid, then the program will execute as root, so that it can perform operations (such as writing to the password file) that only root is allowed to do. Using the nosuid option is a good idea and you should consider using it with all NFS mounted disks. It means that the server's root user cannot make a suid-root program on the file system, log in to the client as a normal user and then use the suid-root program to become root on the client too. One could also forbid execution of files on the mounted file system altogether with the noexec option. But this is more likely to be impractical than nosuid, since a file system is likely to contain at least some scripts or programs that need to be executed.
The broken_suid mount option
Some older programs (xterm being one of them) used to rely on the idea that root can write everywhere. This will break under new kernels on NFS mounts. The security implication is that programs doing this type of suid action can potentially be used to change your apparent uid on NFS servers doing uid mapping. So the default has been to disable this broken suid behavior in the Linux kernel.
The long and short of it is this: if you're using an old Linux distribution, some sort of old suid program, or an older Unix of some type, you might have to mount from your clients with the broken_suid option to mount. However, most recent Unixes and Linux distros ship xterm and similar programs as normal executables with no suid status; they call helper programs to do their setuid work.
You enter the above options in the options column of /etc/fstab, along with rsize and wsize, separated by commas.
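For example, an /etc/fstab entry mounting a share with nosuid might look like this (the server name and transfer sizes here are illustrative):

server.example.com:/home  /mnt/home  nfs  rw,nosuid,rsize=8192,wsize=8192  0 0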
Securing portmapper, rpc.statd, and rpc.lockd on the client
In the current (2.2.18+) implementation of NFS, full file locking is supported. This means that rpc.statd and rpc.lockd must be running on the client in order for locks to function correctly. These services require the portmapper to be running. So most of the problems you will find with NFS on the server may also plague you on the client. Read through the portmapper section above for information on securing the portmapper.
NFS and firewalls (ipchains and netfilter)
IPchains (under the 2.2.X kernels) and netfilter (under the 2.4.x kernels) allow a good level of security - instead of relying on the daemon (or perhaps its TCP wrapper) to determine which machines can connect, the connection attempt is allowed or disallowed at a lower level. In this case, you can stop the connection much earlier and more globally, which can protect you from all sorts of attacks.
Describing how to set up a Linux firewall is well beyond the scope of this document. Interested readers may wish to read the Firewall-HOWTO or the IPCHAINS-HOWTO. Users of kernel 2.4 and above might want to visit the netfilter webpage at: http://netfilter.filewatcher.org. If you are already familiar with the workings of ipchains or netfilter, this section will give you a few tips on how to better set up your NFS daemons so as to more easily firewall and protect them.
A good rule to follow for your firewall configuration is to deny all, and allow only some - this helps to keep you from accidentally allowing more than you intended.
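With ipchains, for example, one way to express that deny-all stance is to set the default policy before adding your ACCEPT rules (a sketch; the complete rule sets for our example server appear below):

ipchains -P input DENY

or, with netfilter:

iptables -P INPUT DROP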
In order to understand how to firewall the NFS daemons, it will help to briefly review how they bind to ports.
When a daemon starts up, it requests a free port from the portmapper. The portmapper assigns the port and keeps track of the port currently used by that daemon. When other hosts or processes need to communicate with the daemon, they request the port number from the portmapper in order to find the daemon. So the ports perpetually float: different ports may be free at different times, and the portmapper will allocate them differently each time. This is a pain for setting up a firewall. If you never know where the daemons are going to be, you don't know precisely which ports to allow access to. This might not be a big deal for people running on a protected or isolated LAN. For those on a public network, though, it is horrible.
In kernels 2.4.13 and later with nfs-utils 0.3.3 or later you no longer have to worry about the floating of ports in the portmapper. Now all of the daemons pertaining to nfs can be "pinned" to a port. Most of them nicely take a -p option when they are started; those daemons that are started by the kernel take some kernel arguments or module options. They are described below.
Some of the daemons involved in sharing data via NFS are already bound to a port. portmap is always on port 111 TCP and UDP. nfsd is always on port 2049 TCP and UDP (however, as of kernel 2.4.17, NFS over TCP is considered experimental and is not for use on production machines).
The other daemons, statd, mountd, lockd, and rquotad, will normally move around to the first available port they are informed of by the portmapper.
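You can see where the daemons have landed at any given moment by querying the portmapper with rpcinfo -p; the output will look something like the following (the port numbers for status, mountd and nlockmgr here are only illustrative and will differ on your machine):

# rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  32768  status
    100005    1   udp  32772  mountd
    100003    2   udp   2049  nfs
    100021    1   udp  32770  nlockmgr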
To force statd to bind to a particular port, use the -p portnum option. To force statd to send its outgoing requests from a particular port, additionally use the -o portnum option when starting it.
To force mountd to bind to a particular port use the -p portnum option.
For example, to have statd accept requests on port 32765 and answer from port 32766, and mountd listen on port 32767, you would type:
# statd -p 32765 -o 32766
# mountd -p 32767
lockd is started by the kernel when it is needed. Therefore you need to pass module options (if you have it built as a module) or kernel options to force lockd to listen and respond only on certain ports.
If you are using loadable modules and you would like to specify these options in your /etc/modules.conf file, add a line like this to the file:
options lockd nlm_udpport=32768 nlm_tcpport=32768
The above line would specify the udp and tcp port for lockd to be 32768.
If you are not using loadable modules or if you have compiled lockd into the kernel instead of building it as a module then you will need to pass it an option on the kernel boot line.
It should look something like this:
vmlinuz 3 root=/dev/hda1 lockd.udpport=32768 lockd.tcpport=32768
The port numbers do not have to match but it would simply add unnecessary confusion if they didn't.
If you are using quotas and using rpc.rquotad to make them viewable over NFS, you will also need to take it into account when setting up your firewall. There are two rpc.rquotad source trees. One is maintained in the nfs-utils tree, the other in the quota-tools tree. They do not operate identically. The one provided with nfs-utils supports binding the daemon to a port with the -p directive; the one in quota-tools does not. Consult your distribution's documentation to determine whether yours does.
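If yours is the nfs-utils version, pinning it works like the other daemons (the port number here is just an example):

# rpc.rquotad -p 32769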
For the sake of this discussion, let's describe a network and set up a firewall to protect our NFS server. Our NFS server is 192.168.0.42 and our only client is 192.168.0.45. As in the example above, statd has been started so that it binds only to port 32765 for incoming requests and answers on port 32766. mountd is forced to bind to port 32767. lockd's module parameters have been set to bind it to 32768. nfsd is, of course, on port 2049 and the portmapper is on port 111.
We are not using quotas.
Using IPCHAINS, a simple firewall might look something like this:
ipchains -A input -f -j ACCEPT -s 192.168.0.45
ipchains -A input -s 192.168.0.45 -d 0/0 32765:32768 -p 6 -j ACCEPT
ipchains -A input -s 192.168.0.45 -d 0/0 32765:32768 -p 17 -j ACCEPT
ipchains -A input -s 192.168.0.45 -d 0/0 2049 -p 17 -j ACCEPT
ipchains -A input -s 192.168.0.45 -d 0/0 2049 -p 6 -j ACCEPT
ipchains -A input -s 192.168.0.45 -d 0/0 111 -p 6 -j ACCEPT
ipchains -A input -s 192.168.0.45 -d 0/0 111 -p 17 -j ACCEPT
ipchains -A input -s 0/0 -d 0/0 -p 6 -j DENY -y -l
ipchains -A input -s 0/0 -d 0/0 -p 17 -j DENY -l
The equivalent set of commands in netfilter follows; note that netfilter has no DENY target (we use DROP instead), and logging is done with a separate LOG rule:
iptables -A INPUT -f -s 192.168.0.45 -j ACCEPT
iptables -A INPUT -s 192.168.0.45 -p tcp --dport 32765:32768 -j ACCEPT
iptables -A INPUT -s 192.168.0.45 -p udp --dport 32765:32768 -j ACCEPT
iptables -A INPUT -s 192.168.0.45 -p udp --dport 2049 -j ACCEPT
iptables -A INPUT -s 192.168.0.45 -p tcp --dport 2049 -j ACCEPT
iptables -A INPUT -s 192.168.0.45 -p tcp --dport 111 -j ACCEPT
iptables -A INPUT -s 192.168.0.45 -p udp --dport 111 -j ACCEPT
iptables -A INPUT -p tcp --syn -j LOG --log-level 5
iptables -A INPUT -p tcp --syn -j DROP
iptables -A INPUT -p udp -j LOG --log-level 5
iptables -A INPUT -p udp -j DROP
The first line says to accept all packet fragments (except the first packet fragment which will be treated as a normal packet). In theory no packet will pass through until it is reassembled, and it won't be reassembled unless the first packet fragment is passed. Of course there are attacks that can be generated by overloading a machine with packet fragments. But NFS won't work correctly unless you let fragments through. See Section 7.8 for details.
The other lines allow specific connections from any port on our client host to the specific ports we have made available on our server. This means that if, say, 192.168.0.46 attempts to contact the NFS server it will not be able to mount or see what mounts are available.
With the new port pinning capabilities it is obviously much easier to control what hosts are allowed to mount your NFS shares. It is worth mentioning that NFS is not an encrypted protocol and anyone on the same physical network could sniff the traffic and reassemble the information being passed back and forth.
Tunneling NFS through SSH
One method of encrypting NFS traffic over a network is to use the port-forwarding capabilities of ssh. However, as we shall see, doing so has a serious drawback if you do not utterly and completely trust the local users on your server.
The first step will be to export files to the localhost. For example, to export the /home partition, enter the following into /etc/exports:
/home 127.0.0.1(rw)
The next step is to use ssh to forward ports. For example, ssh can tell the server to forward to any port on any machine from a port on the client. Let us assume, as in the previous section, that our server is 192.168.0.42, and that we have pinned mountd to port 32767 using the argument -p 32767. Then, on the client, we'll type:
# ssh root@192.168.0.42 -L 250:localhost:2049 -f sleep 60m
# ssh root@192.168.0.42 -L 251:localhost:32767 -f sleep 60m
The first command causes ssh on the client to take any request directed at the client's port 250 and forward it, first through sshd on the server, and then on to the server's port 2049. The second command sets up a similar forwarding between requests to port 251 on the client and port 32767 on the server. The localhost is relative to the server; that is, the forwarding will be done to the server itself. The port could otherwise have been made to forward to any other machine, and the requests would look to the outside world as if they were coming from the server. Thus, the requests will appear to NFSD on the server as if they are coming from the server itself. Note that in order to bind to a port below 1024 on the client, we have to run this command as root on the client. Doing this will be necessary if we have exported our filesystem with the default secure option.
Finally, we are pulling a little trick with the last option, -f sleep 60m. Normally, when we use ssh, even with the -L option, we will open up a shell on the remote machine. But instead, we just want the port forwarding to execute in the background so that we get our shell on the client back. So, we tell ssh to execute a command in the background on the server to sleep for 60 minutes. This will cause the port to be forwarded for 60 minutes until it gets a connection; at that point, the port will continue to be forwarded until the connection dies or until the 60 minutes are up, whichever happens later. The above command could be put in our startup scripts on the client, right after the network is started.
Next, we have to mount the filesystem on the client. To do this, we tell the client to mount a filesystem on the localhost, but at a different port from the usual 2049. Specifically, an entry in /etc/fstab would look like:
localhost:/home /mnt/home nfs rw,hard,intr,port=250,mountport=251 0 0
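Equivalently, you can test the forwarded mount by hand before adding it to /etc/fstab:

# mount -t nfs -o rw,hard,intr,port=250,mountport=251 localhost:/home /mnt/home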
Having done this, we can see why the above will be incredibly insecure if we have any ordinary users who are able to log in to the server locally. If they can, there is nothing preventing them from doing what we did and using ssh to forward a privileged port on their own client machine (where they are legitimately root) to ports 2049 and 32767 on the server. Thus, any ordinary user on the server can mount our filesystems with the same rights as root on our client.
If you are using an NFS server that does not have a way for ordinary users to log in, and you wish to use this method, there are two additional caveats: First, the connection travels from the client to the server via sshd; therefore you will have to leave port 22 (where sshd listens) open to your client on the firewall. However you do not need to leave the other ports, such as 2049 and 32767, open anymore. Second, file locking will no longer work. It is not possible to ask statd or the locking manager to make requests to a particular port for a particular mount; therefore, any locking requests will cause statd to connect to statd on localhost, i.e., itself, and it will fail with an error. Any attempt to correct this would require a major rewrite of NFS.
It may also be possible to use IPSec to encrypt network traffic between your client and your server, without compromising any local security on the server; this will not be taken up here. See the FreeS/WAN home page for details on using IPSec under Linux.
Summary
If you use the hosts.allow, hosts.deny, root_squash, nosuid and privileged port features in the portmapper/NFS software, you avoid many of the presently known bugs in NFS and can almost feel secure about that at least. But still, after all that: when an intruder has access to your network, s/he can make strange commands appear in your .forward or read your mail when /home or /var/mail is exported via NFS. For the same reason, you should never access your PGP private key over NFS. Or at least you should know the risk involved. And now you know a bit of it.
NFS and the portmapper make up a complex subsystem, and therefore it's not totally unlikely that new bugs will be discovered, either in the basic design or in the implementation we use. There might even be holes known now, which someone is abusing. But that's life.