
4-day Ceph online training

2021-08-02 @ 9:00 AM - 2021-08-05 @ 5:00 PM CEST


CEPH – OPEN SOURCE SCALE-OUT STORAGE

Below we would like to give you an insight into our training agenda. Please do not hesitate to contact us if you have any questions. We are happy to include additional topics or tailor the course to your individual needs, and we also offer in-house training.

Ceph is a high-performance, open-source storage solution. Thanks to its simple, massive scalability, Ceph is suitable for almost all application scenarios, including virtual servers, cloud platforms, backup, and much more.

TRAINER

We pay special attention to direct practical relevance in all courses. All our trainers therefore have extensive experience with Ceph, Linux, networking, and software development. As the developers of an innovative Ceph storage management solution, we can draw on experience from a multitude of projects and thus offer the best possible range of services. We strongly believe that this is unique in the market.

MATERIALS PROVIDED

  • Digital documentation
  • Virtual training center

TARGET GROUP

Our course is aimed at IT administrators who need to ensure 24/7 operation of their storage environment. They are typically in regular contact with developers and users, or run their own storage applications. Basic Linux knowledge is expected as a prerequisite.

AGENDA

INTRODUCTION:

  • Introduction of the trainer and croit
  • Demonstration of the training process
  • Setting up access to the test environment

GENERAL CEPH BASICS:

  • History of Ceph
  • Horizontal vs. vertical scalability
  • Ceph vs. X
  • Typical Ceph use cases
  • Overview of Ceph management systems
  • Introduction: Low-Level Object Store (RADOS)
  • Introduction: RADOS Block Store (RBD)
  • Introduction: POSIX Filesystem (CephFS)
  • Introduction: High-Level Object Store (S3/Swift)

COMPONENTS OF A CEPH CLUSTER:

  • MONs (Monitor Daemons)
  • MGRs (Manager Daemons)
  • OSDs (Object Storage Daemons)
  • CRUSH Map
  • MDSs (Metadata Servers)
  • RGWs (RADOS Gateways)
  • Replication
  • Erasure Code (EC)
  • Example
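
For orientation during the course, the following commands (a minimal sketch, assuming an admin keyring is available on the node) show how the individual daemon types of a running cluster can be inspected:

    ceph -s                  # overall status: MONs, MGRs, OSDs, PGs, capacity
    ceph mon stat            # monitor daemons and the current quorum
    ceph osd tree            # OSDs arranged in the CRUSH hierarchy (hosts, racks, ...)
    ceph mds stat            # metadata servers (only relevant once CephFS is used)
    ceph osd pool ls detail  # pools with their replication or erasure-code settings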

INITIAL CEPH CLUSTER SETUP:

  • Obtaining Ceph
  • Deployment automation
  • Setting up the first MON service
  • Adding MON services
  • MGR services
  • Summary
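
To illustrate these bootstrap steps, here is a minimal sketch using the upstream cephadm orchestrator; the course environment may use a different deployment method, and the host names and IP address are placeholders:

    # Bootstrap the first MON and MGR on this host
    cephadm bootstrap --mon-ip 192.168.10.11

    # Add further hosts and let the orchestrator place three MONs
    ceph orch host add node2
    ceph orch host add node3
    ceph orch apply mon 3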

DEPLOYING OSDS:

  • BlueStore device setup variants
  • Using ceph-deploy
  • ceph-volume
  • Summary
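
As a small preview of the ceph-volume part, a minimal sketch for creating BlueStore OSDs; the device names are examples and will differ in the training environment:

    # Simple case: data, WAL and RocksDB all on one device
    ceph-volume lvm create --bluestore --data /dev/sdb

    # Variant: data on an HDD, RocksDB/WAL on a faster NVMe partition
    ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p1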

DISTRIBUTING DATA INTO POOLS WITH CRUSH:

  • Ceph pools and Placement Groups
  • CRUSH rules
  • Adding CRUSH rules
  • Erasure coded pools
  • Summary
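
A short sketch of the pool and rule commands covered in this block; pool names, PG counts and the 4+2 profile are example values:

    # Replicated pool with 128 placement groups
    ceph osd pool create rbd_pool 128 128 replicated

    # CRUSH rule that places replicas on distinct hosts, restricted to SSDs
    ceph osd crush rule create-replicated ssd_rule default host ssd
    ceph osd pool set rbd_pool crush_rule ssd_rule

    # Erasure-coded pool with a 4+2 profile and failure domain "host"
    ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
    ceph osd pool create ec_pool 128 128 erasure ec42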

RBD DETAILS:

  • Mounting RBD devices
  • Using RBD devices
  • Details of erasure coded pools
  • Best practices
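
A minimal sketch of creating, mapping and using an RBD image, including the variant with data in an erasure-coded pool; pool and image names follow the examples above:

    # Create and map a 10 GiB image in a replicated pool
    rbd create --size 10G rbd_pool/vol1
    rbd map rbd_pool/vol1          # returns a block device such as /dev/rbd0
    mkfs.xfs /dev/rbd0
    mount /dev/rbd0 /mnt/vol1

    # Variant: image metadata in the replicated pool, data in the EC pool
    ceph osd pool set ec_pool allow_ec_overwrites true
    rbd create --size 10G --data-pool ec_pool rbd_pool/vol2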

CEPHFS DETAILS:

  • Creating and mounting CephFS
  • CephFS attributes
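
A minimal sketch of creating and kernel-mounting a CephFS and of one of the directory attributes discussed in this block; the file system name, monitor address and pool are examples, and the client keyring is assumed to be present under /etc/ceph:

    # Create the file system (pools and MDS are handled by the volumes module)
    ceph fs volume create cephfs

    # Kernel mount; mount.ceph reads the secret from the local keyring
    mount -t ceph 192.168.10.11:6789:/ /mnt/cephfs -o name=admin

    # Directory attribute: store new files below this directory in another data pool
    ceph fs add_data_pool cephfs ec_pool
    setfattr -n ceph.dir.layout.pool -v ec_pool /mnt/cephfs/archive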

OTHER CONNECTORS:

  • Setting up and using Samba (SMB/CIFS)
  • Setting up and using NFS
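
One possible variant of the NFS part, as a sketch: re-exporting a kernel-mounted CephFS with the standard Linux NFS server (the course may instead use NFS-Ganesha; the path and network are examples):

    # /etc/exports on a gateway host that has CephFS mounted at /mnt/cephfs
    /mnt/cephfs/shares  192.168.10.0/24(rw,no_subtree_check)

    exportfs -ra    # reload the export table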

RADOSGW DETAILS:

  • Deploying RadosGW
  • Access via S3 API
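
A minimal sketch of the S3 access part: creating a user with radosgw-admin and talking to the gateway with the AWS CLI; the endpoint, user name and keys are placeholders:

    # Create an S3 user; the command prints an access key and a secret key
    radosgw-admin user create --uid=demo --display-name="Demo User"

    # Use the printed keys with any S3 client, for example the AWS CLI
    export AWS_ACCESS_KEY_ID=<access key>
    export AWS_SECRET_ACCESS_KEY=<secret key>
    aws --endpoint-url http://rgw.example.com:8080 s3 mb s3://training-bucket
    aws --endpoint-url http://rgw.example.com:8080 s3 cp ./file.txt s3://training-bucket/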

CLUSTER PLANNING:

  • Cluster planning for 1, 3, and 5 years
  • Clarifying requirements
  • How much performance can you expect from hard drives?
  • Hardware sizing done right
  • Failure domains
  • Exercise
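
As a rough illustration of the hard-drive question above (rule-of-thumb figures, not a guarantee): a single 7,200 rpm HDD typically delivers on the order of 100–200 random IOPS and roughly 150–250 MB/s of sequential throughput. With 3× replication, every client write becomes three backend writes, so a cluster with 100 such disks can sustain very roughly 100 × 150 / 3 ≈ 5,000 random write IOPS before network, CPU and metadata overhead are taken into account. The exercise works through estimates like this in more detail.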

NETWORK PLANNING:

  • Simple network design
  • Complex network design
  • Typical network related problems
  • Examples of possible network setups
  • Dos and Don’ts of cluster networks
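
As a small illustration of the simple design, the ceph.conf options that separate client traffic from replication traffic (the subnets are examples; a single shared network is equally valid for small clusters):

    [global]
    public_network  = 192.168.10.0/24   # client, MON and MGR traffic
    cluster_network = 192.168.20.0/24   # OSD replication and recovery traffic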

AUTHORIZATION MANAGEMENT:

  • Ceph keys
  • Permissions for RBD
  • Permissions for CephFS
  • Firewall configuration
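
A minimal sketch of cephx keys for the two access paths covered in this block; the client names, pool and path are examples:

    # Key that may only use RBD images in a single pool
    ceph auth get-or-create client.vmhost mon 'profile rbd' osd 'profile rbd pool=rbd_pool'

    # Key restricted to one subtree of a CephFS file system
    ceph fs authorize cephfs client.backup /backups rw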

OPERATIONS: ERROR HANDLING, UPGRADES AND ALL THAT:

  • Scenario: a disk died, what now?
  • PG states
  • Debugging crashes with log files
  • Using the ceph-objectstore-tool
  • Cluster is running full
  • Controlling recovery speed
  • Upgrading Ceph
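
As a preview of the failed-disk scenario, a minimal sketch of removing a dead OSD and throttling recovery; the OSD id and values are examples, and the full procedure is discussed in the course:

    # Take the dead OSD out so its data is re-replicated elsewhere
    ceph osd out osd.7
    # Once the cluster is healthy again, remove it completely
    ceph osd purge osd.7 --yes-i-really-mean-it

    # Slow down (or speed up) recovery relative to client I/O
    ceph config set osd osd_max_backfills 1
    ceph config set osd osd_recovery_max_active 1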

MANAGING A CLUSTER WITH CROIT:

  • Import existing cluster to croit
  • Configure Ceph with croit
  • Setting up NFS & SMB with croit
  • Run an RGW HA group
  • Update croit (cluster & container)

CASE STUDIES:

  • RGW for Big Data
  • RGW for video streaming
  • RBD for virtualization
  • CephFS with NFS and SMB for backups
  • CephFS for a large number of small files

MONITORING AND PERFORMANCE:

  • Alerting when something goes wrong
  • Tuning performance
  • Monitoring performance
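
A few of the commands used in this block as a starting point (a minimal sketch; a complete setup typically adds Prometheus and Grafana on top):

    ceph -w                            # follow cluster events and warnings live
    ceph osd perf                      # commit/apply latency per OSD
    ceph mgr module enable prometheus  # expose metrics for Prometheus to scrape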

Details

Start:
2021-08-02 @ 9:00 AM CEST
End:
2021-08-05 @ 5:00 PM CEST