With great thanks and respect to the developers of Ceph, we are pleased to announce that our new operating system images are now based on Ceph Luminous v12.2.1. With this bugfix release, nothing stands in the way of an imminent release of our v1710.
Of course, we will offer a simple and convenient solution to upgrade from Ceph v11.2 Kraken to v12.2 Luminous.
What is the v12.2.1 release?
This is the first bugfix release of the Luminous v12.2 long-term stable release series. It contains a number of bug fixes and some new features for CephFS, RBD, and RGW. We recommend that all users of Luminous 12.2 update to 12.2.1.
Dynamic resharding is now enabled by default for RGW. RGW now automatically reshards the bucket index as soon as the number of objects in a bucket grows beyond a configurable threshold.
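For reference, the behavior can be tuned (or switched off) in ceph.conf. A sketch, assuming the Luminous option names `rgw_dynamic_resharding` and `rgw_max_objs_per_shard` and a generic `[client.rgw]` section (your instance name may differ):

```ini
[client.rgw]
; Enabled by default in Luminous; set to false to opt out.
rgw_dynamic_resharding = true
; Resharding triggers when a bucket index shard exceeds this
; object count (default: 100000).
rgw_max_objs_per_shard = 100000
```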
Limiting the MDS cache by memory usage is now supported via the new config option `mds_cache_memory_limit` (1 GB by default). A cache reservation can also be specified as a percentage of the limit via `mds_cache_reservation` (5% by default). Limits by inode count are still supported via `mds_cache_size`; if `mds_cache_size` is set to 0 (the default), the inode limit is disabled.
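Put together, a minimal ceph.conf sketch of the new cache settings (the values here are illustrative, not recommendations):

```ini
[mds]
; Limit the MDS cache by memory usage (default: 1 GB).
mds_cache_memory_limit = 2147483648   ; 2 GB, in bytes
; Headroom reserved within the limit, as a fraction (default: .05 = 5%).
mds_cache_reservation = .05
; Inode-count limit; 0 (the default) disables it.
mds_cache_size = 0
```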
The maximum number of PGs per OSD before the monitor issues a warning has been reduced from 300 to 200. 200 is still twice the generally recommended target of 100 PGs per OSD. This limit can be set via the `mon_max_pg_per_osd` option on the MON servers. The older option `mon_pg_warn_max_per_osd` has been removed.
Creating pools or adjusting `pg_num` will now fail if the change would push the number of PGs per OSD beyond the configured `mon_max_pg_per_osd` limit. The option can be raised if it is really necessary to create a pool with more PGs.
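As a back-of-the-envelope check of when this limit bites, a small sketch (the formula is the usual PG-count rule of thumb; the function name is ours, not part of Ceph):

```python
def pgs_per_osd(pg_num, pool_size, num_osds):
    """Rough PGs-per-OSD estimate for a single pool.

    Each of the pg_num PGs stores pool_size replicas, and each
    replica lands on a distinct OSD, so the cluster carries
    pg_num * pool_size placements spread over num_osds OSDs.
    """
    return pg_num * pool_size / num_osds

# Example: 4096 PGs, 3x replication, 48 OSDs -> 256 PGs per OSD,
# which exceeds the default mon_max_pg_per_osd of 200, so the
# monitor would refuse to create such a pool.
print(pgs_per_osd(4096, 3, 48))  # → 256.0
```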
There was a bug in the PG-mapping behavior of the new upmap feature. If you have used this feature (e.g. via the `ceph osd pg-upmap-items` command), we recommend removing all such mappings (via the `ceph osd rm-pg-upmap-items` command) before upgrading to this release.
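One way to generate the cleanup commands is to read the existing mappings out of `ceph osd dump`. A hypothetical sketch, assuming upmap entries appear there as lines like `pg_upmap_items 2.7 [0,3]` (here we use inline sample data instead of a live cluster; review the generated commands before running them):

```shell
# On a real cluster: ceph osd dump | awk ...
sample_dump='pg_upmap_items 2.7 [0,3]
pg_upmap_items 2.a [1,4]'

echo "$sample_dump" | awk '/^pg_upmap_items/ {print "ceph osd rm-pg-upmap-items " $2}'
# Prints:
#   ceph osd rm-pg-upmap-items 2.7
#   ceph osd rm-pg-upmap-items 2.a
```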
A stall in BlueStore IO submission that was affecting many users has been resolved.
Further details can be found in the release notes at ceph.com.