Four-Day Ceph In-Depth Training

Dates:

Sorry, no public training is scheduled at the moment. Please contact us at info@croit.io.

#Ceph - Open Source scale-out storage

Below we would like to give you an insight into our training agenda. Please do not hesitate to contact us if you have any questions. We are happy to include additional topics or tailor the course to your individual needs, and we also offer in-house training.

Ceph is a high-performance open-source storage solution. Thanks to its massive yet simple scalability, Ceph is suitable for almost every application scenario, including virtual servers, cloud platforms, backup, and much more.

#Trainer

We pay special attention to direct practical relevance in all courses, so all our trainers have extensive experience with Ceph, Linux, networking, and software development. As developers of our innovative Ceph storage management software, we can draw on experience from a multitude of projects and thus offer the best possible range of services. We strongly believe this is unique in the market.

#Materials provided

  • Free WiFi access
  • Training Handouts
  • Digital documentation
  • Virtual training center
  • Lunch & Coffee breaks

#Target group

Our course is aimed at IT administrators who need to ensure 24/7 operation of a storage environment. Typically, they are in regular contact with developers and users, or run their own storage applications. Basic Linux knowledge is expected as a prerequisite.

#Agenda

#Introduction:

  • Introduction of the trainer and croit
  • Demonstration of the training process
  • Setting up access to the test environment

#General Ceph Basics:

  • History of Ceph
  • Horizontal vs. vertical scalability
  • Ceph vs. X
  • Typical Ceph use cases
  • Overview of Ceph management systems
  • Introduction: Low-Level Object Store (RADOS)
  • Introduction: RADOS Block Store (RBD)
  • Introduction: POSIX Filesystem (CephFS)
  • Introduction: High-Level Object Store (S3/Swift)

#Components of a Ceph Cluster:

  • MONs (monitor daemons)
  • MGRs (manager daemons)
  • OSDs (object storage daemons)
  • CRUSH Map
  • MDSs (metadata servers)
  • RGWs (RADOS Gateways)
  • Replication
  • Erasure Code (EC)
  • Example

#Initial Ceph cluster setup:

  • Obtaining Ceph
  • Deployment automation
  • Setting up the first MON service
  • Adding MON services
  • MGR services
  • Summary
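
To give you a feel for the hands-on part of this module, here is a minimal bootstrap sketch using ceph-deploy; the hostnames mon1 through mon3 are placeholders for your own nodes.

```
# Write the initial ceph.conf and monmap for three monitor nodes
ceph-deploy new mon1 mon2 mon3
# Deploy the MONs and gather the cluster keys
ceph-deploy mon create-initial
# Distribute the admin keyring so the nodes can run ceph commands
ceph-deploy admin mon1 mon2 mon3
# Add a manager daemon
ceph-deploy mgr create mon1
# Verify: MONs in quorum, MGR active
ceph -s
```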

#Deploying OSDs:

  • BlueStore device setup variants
  • Using ceph-deploy
  • ceph-volume
  • Summary
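
A typical exercise from this module, sketched with ceph-volume; the device paths are examples and will differ on your hardware.

```
# Create a BlueStore OSD on a whole device
ceph-volume lvm create --bluestore --data /dev/sdb
# Variant: place the RocksDB/WAL on a faster device
ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p1
# Confirm the new OSDs are up and in
ceph osd tree
```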

#Distributing data into pools with CRUSH:

  • Ceph pools and Placement Groups
  • CRUSH rules
  • Adding CRUSH rules
  • Erasure-coded pools
  • Summary
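
As a taste of the exercises, here is a sketch of pool and rule creation; the pool, rule, and profile names are examples.

```
# Replicated pool with 128 placement groups
ceph osd pool create rbd-pool 128 128 replicated
# CRUSH rule that spreads replicas across hosts, then assign it
ceph osd crush rule create-replicated rep-host default host
ceph osd pool set rbd-pool crush_rule rep-host
# Erasure-coded pool with k=4 data and m=2 coding chunks (~67% usable capacity)
ceph osd erasure-code-profile set ec42 k=4 m=2
ceph osd pool create ec-pool 64 64 erasure ec42
```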

#RBD details:

  • Mounting RBD devices
  • Using RBD devices
  • Details of erasure-coded pools
  • Best practices
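
A condensed sketch of the RBD workflow practiced here, reusing the example pools from the previous module; image names and sizes are placeholders.

```
# Create and map an RBD image, then use it like any block device
rbd create rbd-pool/vm-disk1 --size 10G
rbd map rbd-pool/vm-disk1        # returns a device such as /dev/rbd0
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /mnt/vm-disk1
# Erasure-coded data pool: image metadata stays in a replicated pool
ceph osd pool set ec-pool allow_ec_overwrites true
rbd create rbd-pool/ec-disk --size 10G --data-pool ec-pool
```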

#CephFS details:

  • Creating and mounting CephFS
  • CephFS attributes
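
A minimal CephFS sketch, assuming at least one running MDS; the pool names, monitor address, and mount path are examples.

```
# A filesystem needs a metadata pool and a data pool
ceph osd pool create cephfs_meta 32 32
ceph osd pool create cephfs_data 128 128
ceph fs new cephfs cephfs_meta cephfs_data
# Kernel-client mount
mkdir -p /mnt/cephfs
mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
```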

#Other connectors:

  • Setting up and using Samba (SMB/CIFS)
  • Setting up and using NFS
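
One simple approach sketched here is re-exporting a kernel-mounted CephFS through the standard Linux NFS server; NFS-Ganesha with its Ceph backend is a common alternative. The client subnet is an example.

```
# Export the CephFS mount from the previous module over NFS
echo '/mnt/cephfs 192.168.0.0/24(rw,no_root_squash)' >> /etc/exports
exportfs -ra
systemctl restart nfs-server
```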

#RadosGW details:

  • Deploying RadosGW
  • Access via S3 API
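
A sketch of the S3 exercise: create a gateway user, then talk to the gateway with s3cmd. The user ID, the endpoint rgw1:7480, and the keys shown are placeholders; real keys are printed by the user-create command.

```
# Create an S3 user; the JSON output contains access_key and secret_key
radosgw-admin user create --uid=demo --display-name="Demo User"
# Use the keys with any S3 client, here s3cmd against the RGW default port 7480
s3cmd --access_key=EXAMPLEKEY --secret_key=EXAMPLESECRET \
      --host=rgw1:7480 --host-bucket=rgw1:7480 --no-ssl \
      mb s3://demo-bucket
```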

#Cluster planning:

  • Cluster planning for 1-, 3-, and 5-year horizons
  • Clarifying requirements
  • How much performance can you expect from hard drives?
  • Hardware sizing done right
  • Failure domains
  • Exercise
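
A back-of-the-envelope calculation of the kind practiced in this exercise; the drive count and per-disk IOPS figure are illustrative assumptions, not vendor specifications.

```
# A 7200 rpm HDD delivers very roughly 100-200 random IOPS; with 3x
# replication, every client write turns into three backend writes.
OSDS=24; IOPS_PER_HDD=150; REPLICAS=3
echo "approx. client write IOPS: $(( OSDS * IOPS_PER_HDD / REPLICAS ))"
# => approx. client write IOPS: 1200
```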

#Network planning:

  • Simple network design
  • Complex network design
  • Typical network related problems
  • Examples of possible network setups
  • Dos and Don’ts of cluster networks
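
A minimal example of the public/cluster network split discussed in this module; the subnets are placeholders for your own addressing plan.

```
# /etc/ceph/ceph.conf: keep client traffic and OSD replication traffic apart
[global]
# client and MON traffic
public network = 10.0.0.0/24
# OSD replication and recovery traffic
cluster network = 10.0.1.0/24
```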

#Authorization management:

  • Ceph keys
  • Permissions for RBD
  • Permissions for CephFS
  • Firewall configuration
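
A least-privilege sketch with cephx; the client names, pool, and path are examples.

```
# Key that may only use RBD images in one pool
ceph auth get-or-create client.rbd-user mon 'profile rbd' osd 'profile rbd pool=rbd-pool'
# CephFS client restricted to a subdirectory, read-write
ceph fs authorize cephfs client.fs-user /home rw
```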

#Operations: Error handling, upgrades, and all that:

  • Scenario: a disk died, what now?
  • PG states
  • Debugging crashes with log files
  • Using the ceph-objectstore-tool
  • Handling a cluster that is running full
  • Controlling recovery speed
  • Upgrading Ceph
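
A condensed sketch of the disk-replacement scenario and of recovery throttling; osd.7 and /dev/sdb are placeholders.

```
# The disk behind osd.7 died: drain it, then remove it completely
ceph osd out osd.7
ceph osd purge osd.7 --yes-i-really-mean-it
# Provision the replacement disk as a new OSD
ceph-volume lvm create --bluestore --data /dev/sdb
# Slow recovery down so client I/O keeps priority
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
```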

#Case studies:

  • RGW for Big Data
  • RGW for video streaming
  • RBD for virtualization
  • CephFS with NFS and SMB for backups
  • CephFS for a large number of small files

#Monitoring and performance:

  • Alerting when something goes wrong
  • Tuning performance
  • Monitoring performance
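
A few of the built-in views covered in this module; port 9283 is the Ceph default for the manager's Prometheus module.

```
# Explain every current warning and error
ceph health detail
# Follow the cluster log live
ceph -w
# Per-OSD commit/apply latency
ceph osd perf
# Export metrics for Prometheus/Grafana dashboards (listens on :9283)
ceph mgr module enable prometheus
```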