The Network File System (NFS) is a distributed file system protocol
originally developed by Sun Microsystems in 1984. An NFS server
exports one or more of its file systems, known as shares.
An NFS client can mount these exported shares on its own file system.
You can perform file actions on this mounted remote file system as
if the file system were local.
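For instance, a client can mount an exported share manually; the server address and paths below are hypothetical:

```
# mount -t nfs 192.168.1.200:/storage /mnt/storage
```

After mounting, files under /mnt/storage are read and written over the network as if they were local.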
The NFS driver, and other drivers based on it, work quite differently from a traditional block storage driver.
The NFS driver does not actually allow an instance to access a storage
device at the block level. Instead, files are created on an NFS share
and mapped to instances, which emulates a block device.
This works in a similar way to QEMU, which stores instances in the /var/lib/nova/instances directory.
Creating an NFS server is outside the scope of this document.
This example assumes access to the following NFS server and mount point:

- 192.168.1.200
- /storage
This example demonstrates the usage of this driver with one NFS server.
Set the nas_host option to the IP address or host name of your NFS server, and the nas_share_path option to the NFS export path:
nas_host = 192.168.1.200
nas_share_path = /storage
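Putting this together, a minimal back-end section in cinder.conf might look like the following sketch. The section name nfsbackend is arbitrary, and you should verify the option names against your cinder release:

```
[DEFAULT]
enabled_backends = nfsbackend

[nfsbackend]
volume_backend_name = nfsbackend
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nas_host = 192.168.1.200
nas_share_path = /storage
```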
You can use multiple NFS servers with the cinder multiple back ends feature. Configure the enabled_backends option with multiple values, and set the nas_host and nas_share_path options for each back end as described above.
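As a sketch, a multi back end configuration with two NFS servers might look like this. The section names and addresses are hypothetical:

```
[DEFAULT]
enabled_backends = nfs1,nfs2

[nfs1]
volume_backend_name = nfs1
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nas_host = 192.168.1.200
nas_share_path = /storage

[nfs2]
volume_backend_name = nfs2
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nas_host = 192.168.1.201
nas_share_path = /storage
```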
The following example demonstrates another method of using this driver with multiple NFS servers. Multiple servers are not required; one is usually enough.
This example assumes access to the following NFS servers and mount points:

- 192.168.1.200:/storage
- 192.168.1.201:/storage
- 192.168.1.202:/storage
Add your list of NFS servers to the file you specified with the nfs_shares_config option. For example, if the value of this option was set to /etc/cinder/shares.txt, then:
# cat /etc/cinder/shares.txt
192.168.1.200:/storage
192.168.1.201:/storage
192.168.1.202:/storage
Comments are allowed in this file. They begin with a # character.
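For instance, a shares file with a comment might look like this (hypothetical entries):

```
# primary NFS server
192.168.1.200:/storage
```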
Configure the nfs_mount_point_base option. This is the directory where cinder-volume mounts all NFS shares stored in the shares file. For this example, /var/lib/cinder/nfs is used. You can, of course, use the default value of $state_path/mnt.

Start the cinder-volume service. The /var/lib/cinder/nfs directory should now contain a directory for each NFS share specified in the shares file. The name of each directory is a hashed name:
# ls /var/lib/cinder/nfs/
...
46c5db75dc3a3a50a10bfd1a456a9f3f
...
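In the RemoteFS-based drivers, the directory name is, to the best of our knowledge, an MD5 digest of the `<host>:<export>` share string; verify this against your cinder version. A quick sketch with standard tools:

```shell
# Compute the mount-directory name cinder would likely use for a share.
# "192.168.1.200:/storage" is the hypothetical share from this example.
printf '%s' "192.168.1.200:/storage" | md5sum | cut -d ' ' -f 1
```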
You can now create volumes as you normally would:
$ openstack volume create --size 5 MYVOLUME
# ls /var/lib/cinder/nfs/46c5db75dc3a3a50a10bfd1a456a9f3f
volume-a8862558-e6d6-4648-b5df-bb84f31c8935
This volume can also be attached and deleted just like other volumes. However, snapshotting is not supported.
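For example, attaching the volume to an instance and later deleting it might look like this, where MYINSTANCE is a hypothetical server name:

```
$ openstack server add volume MYINSTANCE MYVOLUME
$ openstack volume delete MYVOLUME
```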
The cinder-volume service manages the mounting of the NFS shares as well as volume creation on the shares. Keep this in mind when planning your OpenStack architecture. If you have one master NFS server, it might make sense to have only one cinder-volume service to handle all requests to that NFS server. However, if that single server is unable to handle all requests, more than one cinder-volume service is needed, as well as potentially more than one NFS server.
Regular I/O flushing and syncing still applies.