Setting up CephFS

#What is CephFS?

  • CephFS is a POSIX-compliant file system that can be accessed by multiple clients
  • Mounted directly via the Linux kernel or via FUSE and libcephfs (see the mount example below)
  • Metadata (file names, directory contents, permissions, ...) and data are kept in separate pools
  • Metadata is handled by an MDS (Metadata Server)
  • Clients connect directly to OSDs to store and retrieve data
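
As a rough sketch of both mount options (the monitor address mon-node, the client name admin, the secret, and the mount point /mnt/cephfs are placeholders, not values from this setup):

Kernel client:

mount -t ceph mon-node:6789:/ /mnt/cephfs -o name=admin,secret=<client-key>

FUSE client:

ceph-fuse --id admin -m mon-node:6789 /mnt/cephfs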

#Metadata Servers (MDS)

At least one MDS is required to get started with CephFS.
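
If you are not sure whether your cluster already runs any MDS daemons, a quick check on the command line (assuming shell access to a node with the Ceph CLI and an admin keyring) is:

ceph mds stat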

#How many MDS?

  • Each file system can have any number of active MDS
  • Each active MDS needs a separate standby MDS that can take over in case of failure (see the example after this list)
  • For most clusters, one active MDS and one standby MDS are sufficient
  • Multiple active MDS are typically used once you have > 100 million files...
  • ...and a larger number of metadata operations or open files
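
As a sketch of the standby rule above: Ceph raises a health warning when a file system has fewer standby daemons than it expects. Assuming the default file system name cephfs, the expected number of standbys can be adjusted with:

ceph fs set cephfs standby_count_wanted 1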

#Setting up MDS

  • Navigate to http://mgmt-node:8080/services
  • Press + MDS in the action bar at the bottom of your window
  • Select at least two servers to use (1 active + 1 standby)
Create CephFS Metadata Server
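
Once the daemons have started, you can verify that one MDS is active and the other is on standby. Assuming shell access to a node with the Ceph CLI:

ceph fs status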

#More than one active MDS

By default, there is only one active MDS. If you need more, you have to raise the limit manually:

ceph fs set cephfs max_mds 2
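
To check that the change took effect (assuming the file system is named cephfs, as in the command above):

ceph fs get cephfs | grep max_mds

Keep in mind that each additional active MDS should still be backed by its own standby.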

#CephFS Explorer

Now you can test your CephFS with our built-in CephFS Explorer: http://mgmt-node:8080/cephfs. You can move, copy, and delete files as in any other file explorer, using the usual keyboard shortcuts.

CephFS explorer

#NFS access

Learn how to set up NFS access here.