This is our fourth release for Ceph Nautilus and our first for Ceph Octopus, packed with new features and user experience enhancements.
This release is our biggest yet: our developers made a total of 580 changes for you.

Please read the upgrade notes at the end of this post carefully before upgrading.

#New server image

There are two new server images with the latest Ceph 14.2.12 (Nautilus) and 15.2.5 (Octopus) releases available now.

#Automatic updates

Croit now features an auto-updater that can run on a schedule of your choosing, or be triggered manually.
The auto-updater supports our transition to a high-frequency release model over the coming weeks and months.

Don't worry, it's disabled by default, giving you a chance to vet updates beforehand.

To update itself, Croit needs access to the host's Docker socket, so the new Croit container must be launched with the -v /var/run/docker.sock:/var/run/docker.sock option.

You can choose your upgrade path: staying on latest is strongly recommended, but you can switch to nightly if you're curious about what we're working on.

If you're curious about what the updater does: it simply starts an auxiliary container that replaces the current Croit container with the updated one and then quits.

# To enable auto updates, recreate the croit docker container like this:
docker rm -f croit
docker pull croit/croit:latest
docker run --cap-add=SYS_TIME -v /var/run/docker.sock:/var/run/docker.sock --net=host --restart=always --volumes-from croit-data --name croit -d croit/croit:latest

#RBD Mirroring

Octopus only! RBD mirroring automatically syncs your RBDs between clusters.

It can run in journal mode or in snapshot mode.

[Screenshot: RBD Mirror]

#Journal mode

In journal mode, a journal of all changes to an RBD is kept, and those changes are replayed on the other cluster, keeping the two copies in sync.

Journal mode more than halves your write performance, so we do not recommend it.

#Snapshot mode

In snapshot mode, only point-in-time snapshots of an RBD are synced to the other cluster. These snapshots can be created manually or automatically on a fixed schedule.

Note that these aren't normal RBD snapshots; they are used for mirroring purposes only.
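Under the hood, Ceph's own tooling manages these mirror snapshots; croit drives this for you through the UI, but the equivalent CLI commands (with example pool and image names) look like this:

```shell
# Take a mirror snapshot of one image by hand
rbd mirror image snapshot mypool/myimage

# Or let Ceph take one automatically every hour
rbd mirror snapshot schedule add --pool mypool --image myimage 1h

# List the configured schedules
rbd mirror snapshot schedule ls --recursive
```

Schedules can also be set per pool or globally by omitting the --image (or both) options.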


To start mirroring RBD images from a remote cluster to a local cluster, you have to run the new RBD mirror service on some of your servers in the local cluster.
Then, for each pool to be mirrored, copy one cluster's RBD Mirror Token to the other cluster, and choose how the pool and images should be mirrored.
You can decide between mirroring an entire pool of RBDs (pool mode) or mirroring some select RBDs from a pool (image mode).
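The two modes map onto Ceph's native mirroring commands roughly as follows (pool and image names are examples; croit exposes this choice in the UI):

```shell
# Pool mode: mirror all eligible images in the pool
rbd mirror pool enable mypool pool

# Image mode: enable mirroring only for selected images
rbd mirror pool enable mypool image
rbd mirror image enable mypool/myimage snapshot
```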

You can also add an RBD mirror service to the remote cluster to sync images in both directions.

See the Ceph docs for more information on RBD mirroring.

#Ceph Octopus

This is our first release supporting Ceph Octopus.

We don't consider Ceph Octopus stable enough for production use, but we want to allow our users to set up Octopus clusters for testing.

Newly created clusters will still be running Ceph Nautilus, but you can upgrade to Ceph Octopus manually.

#Automatic Snapshots for RBD and CephFS

You can now set up scheduled snapshots for RBDs and CephFS directories.

Our software will automatically create snapshots at intervals of your choosing, and remove old snapshots (you can choose how many to keep).

Keep in mind that some CephFS clients may struggle with large numbers of snapshots.
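For reference, CephFS exposes snapshots through a hidden .snap directory inside each snapshotted directory, so clients can browse them directly (the mount path below is an example):

```shell
# Create a snapshot of a CephFS directory by hand
mkdir /mnt/cephfs/mydir/.snap/before-upgrade

# List existing snapshots of that directory
ls /mnt/cephfs/mydir/.snap

# Remove the snapshot again
rmdir /mnt/cephfs/mydir/.snap/before-upgrade
```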

There is also a new hook point, OnCephFsSnapshotComplete, which runs your event script on a server that already has the snapshot mounted (the mountpoint is passed to the script). This lets you automatically sync your snapshots, e.g. via rsync.
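A minimal event script for this hook might look like the following sketch. We assume here that the snapshot mountpoint arrives as the first argument, and backup-host is a placeholder for your own backup target:

```shell
#!/bin/sh
# Hypothetical OnCephFsSnapshotComplete handler: sync the freshly
# mounted snapshot to an external backup host via rsync.
SNAPSHOT_MOUNT="$1"   # assumption: the mountpoint is passed as the first argument
rsync -a --delete "$SNAPSHOT_MOUNT/" backup-host:/backups/cephfs-latest/
```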

[Screenshot: Snapshot Manager]

#IO Limits for RBDs

You can now limit RBD traffic per pool or even per RBD.
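These limits correspond to Ceph's RBD QoS settings, which croit now configures for you; on the CLI the equivalent would look like this (pool and image names are examples):

```shell
# Limit a single RBD to 500 IOPS
rbd config image set mypool/myimage rbd_qos_iops_limit 500

# Or apply a default limit to every image in the pool
rbd config pool set mypool rbd_qos_iops_limit 500
```

Similar settings exist for bandwidth, e.g. rbd_qos_bps_limit.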

#Smaller changes

  • non-physical network interfaces can now be hidden
  • you can now mute health warnings (Octopus only)
  • added password reset scripts (root/reset_password.sh and root/restore_admin.sh)
  • new OnPersistentAvailable hook (triggers once /persistent is mounted on a server)
  • rolling service restart now works on unhealthy clusters (you will be asked for confirmation)
  • new helpful Task Advisor hints
  • lots of bug fixes
[Screenshot: IO limits]

#Upgrade notes

As always: ensure that you have a working backup (like our encrypted cloud backup) before upgrading the container.

The new updater will only work if Croit has access to the host's Docker socket.

When upgrading from v1901 or earlier, or if you are not yet running Ceph Nautilus, please refer to the upgrade instructions in the release notes of our v1910 release.

# To upgrade to v2010, recreate the croit container:
docker rm -f croit
docker run --cap-add=SYS_TIME --net=host -v /var/run/docker.sock:/var/run/docker.sock --restart=always --volumes-from croit-data --name croit -d croit/croit:v2010

#API Changes

We made two incompatible changes in our API:

The RGW certificate is now configured via the /api/config/rgw-ssl-cert GET and PUT endpoints. Attempting to configure the certificate via /api/config now returns HTTP 301.

RBD snapshot endpoints have moved from /pools/<pool>/rbds/<rbd>/<snap> to /pools/<pool>/rbds/<rbd>/snapshots/<snap>.
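For example, fetching a snapshot now goes through the new path; the host, pool, image, snapshot names, and auth token below are all placeholders:

```shell
# Old (no longer works): /pools/mypool/rbds/myimage/mysnap
# New:
curl -H "Authorization: Bearer $TOKEN" \
  "https://croit.example.com/pools/mypool/rbds/myimage/snapshots/mysnap"
```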

See our API docs for details.

An OpenAPI specification is also available from your deployment at /api/swagger.json.