
I have three machines in production:

machineA    10.66.136.129
machineB    10.66.138.181
machineC    10.66.138.183

All of them have Ubuntu 12.04 installed, and I have root access to all three machines.

Now I am supposed to do the following on the machines above:

Create the mount point /opt/exhibitor/conf.
Mount the directory on all servers:
 sudo mount <NFS-SERVER>:/opt/exhibitor/conf /opt/exhibitor/conf/

I have already created the /opt/exhibitor/conf directory on all three machines, as mentioned above.

Now I am trying to set up the mount, so I followed the process below.

Install the NFS support files and NFS kernel server on all three machines:

$ sudo apt-get install nfs-common nfs-kernel-server

Create the shared directory on all three machines:

$ mkdir /opt/exhibitor/conf/

Edited /etc/exports and added entries like this on all three machines:

# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
/opt/exhibitor/conf/     10.66.136.129(rw)
/opt/exhibitor/conf/     10.66.138.181(rw)
/opt/exhibitor/conf/     10.66.138.183(rw)

I have tried mounting machineA's export from machineB and machineC as shown below, and it gives me this error:

root@machineB:/# sudo mount -t nfs 10.66.136.129:/opt/exhibitor/conf /opt/exhibitor/conf/
mount.nfs: access denied by server while mounting 10.66.136.129:/opt/exhibitor/conf

root@machineC:/# sudo mount -t nfs 10.66.136.129:/opt/exhibitor/conf /opt/exhibitor/conf/
mount.nfs: access denied by server while mounting 10.66.136.129:/opt/exhibitor/conf

Does my /etc/exports file look good? I am pretty sure I have messed up my exports file, as I have the same content in the exports file on all three machines.

Any idea what I am doing wrong here? And what would the correct /etc/exports file be?

asked by arsenal; edited by Rui F Ribeiro
  • FYI, double-check permissions on the host/client. If the NFS host has permissions 0750 or 0700, then the client trying to mount is very likely to fail with this same error message. I changed the host from 0750 to 0755 and then the error went away and all was well. – Trevor Boyd Smith Feb 28 '17 at 00:28
  • @TrevorBoydSmith what are "host permissions", how to set them? – Dims Jul 23 '21 at 12:48
  • Don't forget to check server logs, for example /var/log/daemon.log – golimar Jan 09 '23 at 11:39

15 Answers

97

exportfs

When you create a /etc/exports file on a server you need to make sure that you export it. Typically you'll want to run this command:

$ exportfs -a

This will export all the entries in the exports file.
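A minimal sketch of the usual edit-and-verify loop on the server (run as root; flags are from exportfs(8)):

```shell
# Run on the NFS server after every edit to /etc/exports:
sudo exportfs -ra    # re-export all entries, syncing the kernel's export table
sudo exportfs -v     # verbose list of what is currently exported, with options
```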

showmount

The other thing I'll often do, from other machines, is check what an NFS server is exporting to the network, using the showmount command.

$ showmount -e <NFS server name>

Example

Say for example I'm logged into scully.

$ showmount -e mulder
Export list for mulder:
/export/raid1/isos     192.168.1.0/24
/export/raid1/proj     192.168.1.0/24
/export/raid1/data     192.168.1.0/24
/export/raid1/home     192.168.1.0/24
/export/raid1/packages 192.168.1.0/24

fstab

To mount these at boot, you'd add a line like this to /etc/fstab on the client machines that want to consume the NFS mounts.

server:/shared/dir /opt/mounted/dir nfs rsize=8192,wsize=8192,timeo=14,intr
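To sanity-check such an fstab entry without rebooting, something like this is typical on the client (a sketch, assuming the entry above has been added):

```shell
sudo mount -a    # mount everything in /etc/fstab that isn't mounted yet
mount -t nfs     # list currently mounted NFS filesystems to confirm it took
```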

automounting

If you're going to be rebooting these servers then I highly suggest you look into setting up automounting (autofs) instead of adding these entries to /etc/fstab. It's a bit more work but is well worth the effort.

Doing so will allow you to reboot the servers more independently of one another, and the NFS mount will only be created when it's actually needed or being used. When it goes idle, it will be unmounted.
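The autofs setup described above can be sketched like this on a client. The map location, parent directory, and timeout are illustrative; the server IP and export path are the ones from the question:

```shell
# Install autofs on the client:
sudo apt-get install autofs

# /etc/auto.master -- add a line mapping a parent dir to an indirect map file:
#   /opt/auto   /etc/auto.nfs   --timeout=60

# /etc/auto.nfs -- one entry per on-demand mount:
#   conf   -fstype=nfs,rw   10.66.136.129:/opt/exhibitor/conf

sudo service autofs restart
# Accessing /opt/auto/conf now triggers the mount; it unmounts after 60s idle.
```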


slm
  • Thanks for the suggestion. I just did that and now it works fine. Instead of running exportfs -a, I ran exportfs -rv. Is there any difference between those? And in my case, I will run showmount -e 10.66.136.129 from machineB and machineC, right? – arsenal Dec 21 '13 at 08:05
  • @TechGeeky - not really. exportfs -rv just does a re-export and is verbose. The -a will export everything. As to showmount -e, yes, you can run it from those machines or from the one serving the shares. – slm Dec 21 '13 at 08:08
  • ok, thanks, makes sense now. One last thing: I believe there is one more piece to this mount point, the fstab file, correct? Which machine's fstab file am I supposed to modify, and what content should I add there? Any idea? – arsenal Dec 21 '13 at 08:13
  • @TechGeeky see updates. You add entries to the clients that want to consume the NFS shares. – slm Dec 21 '13 at 08:15
  • +1, this answers the original problem that the OP had; my answer only appeared to be correct because of misleading wording in the question. – Chris Down Dec 21 '13 at 09:02
  • On Ubuntu, you must first install nfs-kernel-server for exportfs to be available. Source: http://manpages.ubuntu.com/manpages/trusty/man8/exportfs.8.html – flickerfly Jun 05 '15 at 18:36
  • It looks like I have to run exportfs -a on every restart of my server. Is that normal? What's the usual way to make this happen automatically? – Gauthier Feb 20 '17 at 14:37
  • @Gauthier - that shouldn't be required on every restart. It should be done when the NFS service(s) are started on your box. This can vary from distro to distro, so your question is probably better asked on the main site than via a comment on a pre-existing question/answer. – slm Feb 20 '17 at 15:50
49

I saw the same error (mount.nfs: access denied by server while mounting...), and the issue was fixed by the -o v3 option, as follows:

$ sudo mount -o v3 a-nfs-server:/path/to/export /path/to/mount
  • Server is Ubuntu 14.04 64bit LTS.
  • Client is CentOS 6.5 64bit.
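One way to see why forcing v3 helps: ask the server which NFS versions it actually registers (the server name below is the placeholder from the command above):

```shell
# List the NFS program versions the server's portmapper knows about:
rpcinfo -p a-nfs-server | grep -w nfs
# If only version 3 appears, the client's NFSv4 attempt can fail with
# "access denied"; -o v3 (or -o nfsvers=3) forces the version that works.
```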
HalosGhost
10

In my case it works using NFSv4:

$ sudo mount -t nfs4 server-name:/ /path/to/mount

In the /etc/exports file on the server:

/Path/to/export 192.168.1.0/24(rw,sync,fsid=0,no_root_squash,crossmnt,no_subtree_check,no_acl)

fsid=0 makes /Path/to/export the root directory when you mount the share.

crossmnt, because I have some other drives in the exported file system that I want to access as well.

no_root_squash, because I want to access the share as the root user (su) from the client side. I'm pretty sure I'm the only one who can do that on my local network.

Server and clients are Ubuntu 14.04 64bit.

If you want to use NFSv3, the answer from @fumisky-wells works for me as well.

victe
  • You earned yourself an upvote sir; I have a NAS, so modding the /etc/export file isn't an option, but specifying the complete path did the trick. well done. – MDMoore313 Dec 18 '14 at 01:19
8

I was getting the same error message, and my issue turned out to be that the client machine had two network interfaces connected to the same LAN. The server had been configured to expect a specific IP address, but traffic was going out on the second interface, which had a DHCP-assigned address. So I configured the second interface with a static IP address and also added that second static IP to the server configuration.
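A quick way to check which interface (and thus which source address) the client actually uses, assuming the server IP from the question:

```shell
# Ask the routing table which source address will be used to reach the server:
ip route get 10.66.136.129
# The "src" address in the output must match an entry in the server's /etc/exports.
```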

5

After battling with this same error message for hours, my problem turned out to be nothing more complicated than good old-fashioned Linux file permissions on the NFS host.

The folder I was trying to share (/home/foo/app/share) had the correct permissions, but because the user's home directory (/home/foo) had 0750 mode on it, NFS wasn't able to traverse into it to access the shared dir.

As soon as I set the user's home directory to mode 0751, the NFS service was able to traverse into it and I was able to mount the share from my client machine.
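A quick way to spot this kind of problem is to print the mode of every component of the exported path on the server; the paths below are the ones from this answer:

```shell
# Show the permissions of each directory leading to the export:
namei -m /home/foo/app/share
# Any component lacking the execute (traversal) bit for "other" blocks NFS.
# Fix just the traversal bit on the home directory, as described above:
sudo chmod 0751 /home/foo
```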

3

/etc/exports needs to be edited on the NFS server machine only, not on the clients as you say you did, since it is checked by the NFS server when a client requests access to a share.

If you put the following in /etc/exports on the NFS server, it should work:

/opt/exhibitor/conf 10.66.136.129(rw)
/opt/exhibitor/conf 10.66.138.181(rw)
/opt/exhibitor/conf 10.66.138.183(rw)
Chris Down
  • I already have this in my exports file on machineA, and then I mount it from machineB and machineC and somehow it doesn't work. Is it possible that having the same information in the exports file on all three machines is a problem? Should I be adding it only on machineA? – arsenal Dec 21 '13 at 07:56
  • @TechGeeky Did you reload the NFS exports after doing that, using exportfs -a? – Chris Down Dec 21 '13 at 07:58
  • I just did that and now it works fine. I am trying to understand this whole thing better, so my first question is: machineA is the NFS server and machineB and machineC are clients, correct? Second question: if machineA is my NFS server, then I add the three lines above only in machineA's /etc/exports, and don't touch the exports file on machineB and machineC, correct? – arsenal Dec 21 '13 at 08:04
  • @TechGeeky As long as you are mounting a share on machineA, that's correct in both cases. – Chris Down Dec 21 '13 at 08:04
  • Thanks. Now I understand this much better. The reason I asked is that I have a similar setup in my staging environment: there I added the same three lines to /etc/exports on all three machines instead of only machineA, and it still works fine. Now I understand the whole concept more clearly. Thanks for the help. – arsenal Dec 21 '13 at 08:08
  • One last thing: I believe there is one more piece to this mount point, the fstab file, correct? Which machine's fstab file am I supposed to modify, and what content should I add there? – arsenal Dec 21 '13 at 08:09
  • @TechGeeky You don't need it in the fstab if you are going to mount it manually to a mountpoint, but if you want it mounted automatically (or want the mountpoint determined automatically based on the value in fstab), take a look at https://www.centos.org/docs/5/html/5.1/Deployment_Guide/s2-nfs-fstab.html – Chris Down Dec 21 '13 at 08:21
3

Evidently this error can be triggered by many causes. In my case, the fix was to add the insecure option in /etc/exports on the server:

/path/to/be/exported authorized_client(rw,root_squash,sync,no_subtree_check,insecure)

This is because some NFS clients don't respect the established convention that mount requests must originate from a privileged TCP port (a port number in the 0-1023 range).

The reason for the rule is that low TCP ports can only be used by privileged users (i.e. root) on UNIX systems, and mounting an NFS share is meant to be such a privileged operation.
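For what it's worth, Linux has a client-side counterpart: the noresvport mount option in nfs(5) explicitly uses a non-privileged source port, which in turn requires insecure on the server's export. Server IP and path below are from the question:

```shell
# Only works if the server export carries the "insecure" option:
sudo mount -t nfs -o noresvport 10.66.136.129:/opt/exhibitor/conf /opt/exhibitor/conf
```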

2

If the NFS client is trying to mount an exported share inside a Linux container, then the container should run in privileged mode.

In the case of Docker:

$ docker run -it --rm --privileged ubuntu:14.04

efesaid
2

For me the problem was that I had put the server's IP address in /etc/exports instead of the client's.

The point is: on the server, /etc/exports should list all the client IPs you want to grant access to.

Vanuan
0

For me the problem was that my router had changed the client's IP address, so the entry in /etc/exports on the server machine only allowed access for an IP address that was no longer in use.

Alex
0

The same thing can happen if you try to mount an NFS share on a VirtualBox instance whose network adapter is configured as NAT.

Choosing Bridged Adapter in the virtual machine's network settings fixes this issue.

0

I know this is an old thread, but my problem had to do with LXC and AppArmor.

Killing AppArmor, or adding an exception profile, fixed it.
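A hedged sketch of how to investigate the AppArmor side; the profile path below is purely illustrative (yours will differ), and aa-complain comes from the apparmor-utils package:

```shell
sudo aa-status                                    # list loaded profiles and their modes
sudo aa-complain /etc/apparmor.d/lxc-containers   # hypothetical profile: log instead of block
```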

Josh (edited by lgeorget)
0

This error can also be caused by trying to mount an encrypted path (for example, in your home directory, if you chose to encrypt it).

0

The only solution that worked for me was to export filesystems starting with /srv. It looks like this is a limitation (or at least a default) of NFSv4.

Since I was trying to export a USB drive that automounts under /media, I needed a way to get it 'mounted' under /srv. To accomplish that:

sudo mkdir /srv/videos
sudo mount --bind /media/jim/wdportable/videos /srv/videos

And in /etc/exports:

/srv/videos 192.168.0.200(ro)

When I exported /media/jim/wdportable/videos directly, attempting to mount on the client always resulted in mount.nfs: access denied by server.

The -o v3 solution worked, but I didn't want to force v3.
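To make the bind mount survive reboots, it can also be recorded in /etc/fstab on the server (paths are the ones from this answer):

```shell
# /etc/fstab entry for the bind mount:
# /media/jim/wdportable/videos   /srv/videos   none   bind   0   0
sudo mount -a    # apply without rebooting
```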

  • I can almost guarantee this would have been due to permissions on the /media/jim folder. If the directory you're trying to share is (or is inside of) a dir with only 700 or 750 mode, NFS won't be able to traverse into it. If you changed /media/jim to 751, it would probably work. – Dale C. Anderson Apr 03 '19 at 05:54
  • @DaleAnderson is right. After a successful sudo mount -o v3 192.168.0.200:"/media/pi/mydrive" /mnt/nfs-share (raspbian on Raspi 3 B+), I also tried to sudo chmod 751 /media/pi. Afterwards, I didn't need -o v3 anymore: sudo mount 192.168.0.200:"/media/pi/mydrive" /mnt/nfs-share did the job (after unmounting). Many thanks to @DaleAnderson . – Thomas Praxl Apr 15 '19 at 18:51
  • This is probably the problem. I guess I'm used to the ancient days when the NFS server just ran as root and blindly exported what it was told. I'll test this out. – Jim Stewart Apr 15 '19 at 19:50
0

It should be noted that a linked page that led me here had my correct answer, which is that you canNOT use a * wildcard inside an IP address in an export. * is either used alone (all IPs) or as a wildcard in domain names, e.g. *.domain.com.

Eg: this is correct

/Path/to/export 192.168.1.0/24(flags)

This won't work (or is incorrect, at least), though it worked for me for years until I tried mounting the export from a Fedora VM.

/Path/to/export 192.168.1.*(flags)
  • https://unix.stackexchange.com/questions/106122/mount-nfs-access-denied-by-server-while-mounting-on-ubuntu-machines?noredirect=1&lq=1 – FreeSoftwareServers Feb 07 '18 at 06:24
  • I think the reason it failed is possibly NFSv4 because I know Fedora is bleeding edge new stuff and my old VM's worked fine but probably used older NFS version. Just a guess. – FreeSoftwareServers Feb 07 '18 at 06:25