NEW SERVER IMAGE
This release comes with a new boot image for the servers, which is downloaded and set as default automatically after upgrading croit. Some of the new features depend on this image, so a rolling reboot of all servers is required to complete the upgrade. We’ve added a new task to our UI to make this easier: go to the Image view in the Server tab and select “Rolling reboot”. This reboots all servers not running the default image, making sure that the cluster recovers fully before proceeding to the next server. Rolling reboots do not impact the availability of your cluster.
NEW AND IMPROVED NFS SHARES
We have completely redesigned our NFS gateway for this release. croit now uses NFS Ganesha as its default NFS server. We have been deploying this new NFS server manually for select customers with larger setups and found that it improves compatibility, performance, and stability.
SIMPLIFIED HIGH AVAILABILITY SETUP
Maintaining the same list of shares across several NFS gateway services for high availability has always been cumbersome. You can now centrally configure the servers on which an NFS gateway will be deployed in the service edit dialog.
It is important to use the exact same NFS gateway service on all servers within a high-availability group to ensure that failover works properly. Do not use multiple gateway instances, even if they have the exact same configuration.
NOTES FOR ESXI USERS
Our stress tests currently show a few stability issues with NFS version 4 and ESXi. Please use NFS version 3 to attach croit storage to VMware ESXi.
We have already uncovered the root cause of these stability issues and we are working on a fix to use NFS 4 with ESXi in the future.
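If you prefer the ESXi shell over vSphere, attaching a v3 datastore looks roughly like this; the gateway IP, export path, and datastore name below are placeholders for your setup:

# Attach the croit NFS export as an NFS v3 datastore:
esxcli storage nfs add --host 10.0.0.10 --share /cephfs --volume-name croit-nfs
# Verify that the datastore is mounted:
esxcli storage nfs list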
MIGRATING TO THE NEW NFS GATEWAY
Upgrading to croit v1805 does not automatically re-configure existing NFS shares, to avoid possible client incompatibilities or even service disruptions. All currently configured NFS gateways continue to work as before; switching to the new NFS gateway requires the following steps:
IF YOUR CLIENTS SUPPORT A CHANGE IN THE NFS FILESYSTEM ID
- Make sure that you are using a high-availability (HA) setup to avoid disruptions
- Delete one of the passive NFS services that is not actively being used
- Create a new NFS service and note the changed configuration:
- A single gateway can now export multiple paths
- Each exported CephFS path can now be explicitly mapped to a path on the NFS share
- To emulate the behavior of the old gateway, export one CephFS path to the folder /cephfs
- Test your clients with the new share (see the sketch below this list)
- Delete the active old NFS service, forcing a failover to the new share
- Add the old active server to the newly configured NFS service
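For the testing step above, a minimal sketch from a Linux client could look like this, assuming the gateway exports /cephfs as in the emulation example and is reachable at a placeholder IP:

# List the exports offered by the new gateway:
showmount -e 10.0.0.10
# Mount the new share on a test client and verify read/write access:
mount -t nfs 10.0.0.10:/cephfs /mnt/nfs-test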
IF YOUR CLIENTS DO NOT SUPPORT A CHANGING NFS FILESYSTEM ID
Some clients cannot cope with the necessary change in the filesystem id; most versions of ESXi are among them. In this case, configure a new NFS share and migrate the clients manually.
ACCESSING S3 BUCKETS VIA NFS
Another new feature of the new NFS gateway is that it can also make S3 buckets available via NFS. Keep in mind that S3 is not a file system but an object store, so exporting a bucket via NFS does not remove the restrictions imposed by S3. In particular, random write access to files is not possible with S3; mount the NFS share with the sync option (-o sync on Linux and macOS) to avoid write re-ordering. Some client applications require random write access and cannot be used with this share. The intended use case of this gateway is a simple file-based interface to S3 for legacy applications; it is not a general-purpose NFS share like the CephFS share.
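As a minimal sketch, mounting such a share on Linux with the recommended sync option could look like this; the gateway address, bucket export path, and mount point are placeholders:

# Mount the S3-backed share with synchronous writes to avoid re-ordering:
mount -t nfs -o sync 10.0.0.10:/my-bucket /mnt/s3-bucket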
LDAP FOR USER ACCOUNTS
We now fully support user authentication via LDAP and Active Directory, including permissions based on group membership. An example configuration explaining the available options can be found in the example config file.
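Before entering the directory settings in the config file, it can help to verify the bind DN, search base, and group filter with a plain ldapsearch; every name below is a placeholder for your directory:

# Check that the bind user can find members of the intended admin group:
ldapsearch -H ldap://ldap.example.com -W \
  -D "cn=croit,ou=services,dc=example,dc=com" \
  -b "ou=users,dc=example,dc=com" \
  "(memberOf=cn=croit-admins,ou=groups,dc=example,dc=com)"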
CLUSTER IMPORT
We have always supported importing existing clusters, but doing so required some manual intervention on the shell during setup. v1805 lets you import your existing Ceph cluster from our beautiful web interface. It’s really simple: we just need the IP addresses of your existing monitor servers and the admin keyring, and you are good to go.
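Both pieces of information can be gathered on the existing cluster with something like the following; the keyring path may differ on your deployment:

# List the monitor addresses of the existing cluster:
ceph mon dump
# Print the admin keyring to paste into the import dialog:
cat /etc/ceph/ceph.client.admin.keyring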
You can reboot servers with our croit OS image after importing them; we automatically detect OSDs. Monitor services should be manually deleted from the existing cluster and re-created using croit due to the different disk layout. Most cluster statistics will only be available after creating the first monitor service with croit.
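As a rough sketch of that monitor swap on a systemd-based deployment (the monitor ID "a" is a placeholder), stop the old daemon and remove it before re-creating the service in croit:

# On the old monitor host, stop the daemon:
systemctl stop ceph-mon@a
# Remove the monitor from the cluster map:
ceph mon remove a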
HARDWARE SURVEY
Selecting the right hardware for a Ceph cluster can be hard. (Looking for help with hardware selection? Contact us!) How many CPUs do you need? How much RAM? Which SSDs perform well? We have gained a lot of experience with these choices, having built Ceph clusters since 2013. But we do not know how clusters that were not designed by us are built and how well they perform, and we would like to learn from these clusters as well. croit v1805 contains a hardware survey module that reports your hardware configuration (such as server types and disk models) and the configured services to us.
We don’t collect any sensitive information: for example, we will never transmit any of your stored data, your hardware’s serial numbers, or any user-defined names such as names of pools, buckets, files, or servers. You can preview the data that would be sent to us in the license view, where you can also disable the survey. Furthermore, the survey can always be disabled by setting the environment variable CROIT_DISABLE_SURVEY.
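For example, the variable can be passed when starting the management container; the value below is arbitrary, as setting the variable is what matters:

docker run --net=host --restart=always --volumes-from croit-data \
  -e CROIT_DISABLE_SURVEY=1 --name croit -d croit/croit:1805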
LAST BUT NOT LEAST
As always, our new version comes with a lot of fixes and small new features. A few particularly noteworthy ones are:
- LLDP: we now make LLDP information available in the frontend so you can quickly find the switch ports where a server is attached. Make sure that LLDP is enabled on your switch.
- CephFS quota: set per-directory quotas for CephFS directly from our integrated file browser (see the sketch after this list).
- UEFI boot: some NICs do not ship with PXE firmware by default or require annoying configuration in the server’s BIOS. croit can now perform a full native UEFI PXE boot via the UEFI IP stack.
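For the quota feature referenced above: under the hood, CephFS quotas are extended attributes, so on a directly mounted CephFS the equivalent of the file-browser action would be something like this (path and limit are placeholders):

# Limit a directory to roughly 100 GB:
setfattr -n ceph.quota.max_bytes -v 100000000000 /mnt/cephfs/some-dir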
UPDATE YOUR CROIT SOFTWARE
If you are already using croit, we strongly recommend upgrading to croit v1805 to get the best possible experience! Just enter the following two commands on the management node:
docker rm -f croit
docker run --net=host --restart=always --volumes-from croit-data --name croit -d croit/croit:1805
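Afterwards, you can check that the new version came up cleanly:

# Confirm that the container is running and watch its startup log:
docker ps --filter name=croit
docker logs -f croit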
Disclaimer: please make sure that you have a working backup before entering these commands.