Four-day in-depth Ceph training

Everything you always wanted to know about Ceph

Ceph - Open Source scale-out storage

In the following, we would like to give you an insight into our training agenda. Please do not hesitate to contact us if you have any questions. We are also happy to include additional topics or tailor the course to your individual needs, and we can offer in-house training as well.

Ceph is a high-performance open source storage solution backed by Red Hat. Thanks to its simple, massive scalability, Ceph is suitable for almost all application scenarios, including virtual servers, cloud platforms, backup, and much more.

Trainers

We pay special attention to direct practical relevance in all our courses. All our trainers therefore have extensive experience with Ceph, Linux, networking, and software development. As the developers of the croit Storage Manager, we can draw on experience from a multitude of projects and thus offer the best possible range of services. We strongly believe that this is unique in the market.

Materials provided

  • Free Wi-Fi access
  • Training handouts
  • Digital documentation
  • Virtual training center
  • Lunch & coffee breaks

Target group:

Our course is aimed at IT administrators who need to ensure 24/7 operation of their storage environment. Typically, they are in regular contact with developers and users, or run their own storage applications. Basic Linux knowledge is expected as a prerequisite.

Agenda

Introduction:

  • Introduction of the trainer
  • Demonstration of the training process
  • Setting up access to the test environment

General Ceph Basics:

  • A brief history of Ceph
  • Horizontal vs. vertical scalability
  • Ceph vs ZFS vs DRBD vs ...
  • Typical Ceph Use Cases
  • Overview of possible management systems:
      • Red Hat Storage Console 3
      • SUSE Enterprise Storage 5
      • openATTIC dashboard
      • croit Storage Manager v1804
  • Introduction to the low-level object store (RADOS); see the sketch after this list
  • Introduction to the block store (RBD)
  • Introduction to the POSIX file system (CephFS)
  • Introduction to the high-level object store (S3/Swift)
  • Current features since Ceph Luminous
  • Future development with Ceph Mimic
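
As a small preview of the hands-on exercises: objects can be written to the low-level object store directly from Python. A minimal sketch using the official python3-rados binding; the pool name 'training' is a hypothetical placeholder and must already exist:

    import rados

    # Connect using the standard config file and the admin keyring.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # Open an I/O context on a pool and store/read a single object.
    ioctx = cluster.open_ioctx('training')  # hypothetical pool name
    ioctx.write_full('hello-object', b'Hello, RADOS!')
    print(ioctx.read('hello-object'))

    ioctx.close()
    cluster.shutdown()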

Components of a Ceph Cluster:

  • MONs (Monitor Daemons)
  • OSDs (Object Storage Daemons)
  • MDSs (Metadata Servers)
  • MGRs (Manager Daemons)
  • RGWs (RADOS Gateways)
  • CRUSH Map
  • PGs (Placement Groups)
  • What is FileStore?
  • What is BlueStore?
  • Advantages and disadvantages of journal/DB disks

Hardware requirements and planning basics:

  • Cluster planning for 1, 3, and 5 years (see the capacity sketch after this list)
  • Scaling options
  • System and component selection
  • Failure Domains
  • Performance testing with test hardware
  • Typical sources of error
  • Dos and Don'ts
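
A taste of the planning exercises: a back-of-the-envelope estimate of usable capacity. With 3x replication, only a third of the raw capacity is usable, and the default nearfull warning threshold of 0.85 should be kept free as headroom. A minimal sketch with illustrative numbers:

    # Rough usable-capacity estimate for a replicated cluster.
    raw_tb = 24 * 8          # e.g. 24 OSDs with 8 TB disks = 192 TB raw
    replicas = 3             # default replication factor
    nearfull_ratio = 0.85    # default nearfull warning threshold

    usable_tb = raw_tb / replicas * nearfull_ratio
    print(f"~{usable_tb:.0f} TB usable of {raw_tb} TB raw")  # ~54 TB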

Network basics and planning:

  • Example of a simple network design
  • Example of a separate front and backend network
  • Stumbling blocks of the network in operation
  • Dos and Don'ts

Installation of a Ceph Cluster:

  • Setup of all daemons (MON, OSD,...)
  • Practice with ceph-deploy
  • Other options: ceph-docker, ceph-ansible, DeepSea (Salt)
  • Provision of client access

Configuration of the CRUSH Map:

  • Introduction to the CRUSH Map
  • Setting up CRUSH rules (see the sketch after this list)
  • Setting up the CRUSH Map (simple)
  • Setting up the CRUSH Map (multi bucket)
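
Since Luminous, a replicated rule pinned to a device class can be created with a single monitor command. A minimal sketch issuing it through the Python rados binding; the rule name and failure domain are hypothetical examples:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # Equivalent to: ceph osd crush rule create-replicated hdd-by-host default host hdd
    cmd = json.dumps({
        "prefix": "osd crush rule create-replicated",
        "name": "hdd-by-host",   # hypothetical rule name
        "root": "default",       # CRUSH root bucket
        "type": "host",          # failure domain
        "class": "hdd",          # device class (Luminous and later)
    })
    ret, out, err = cluster.mon_command(cmd, b'')
    print(ret, err)
    cluster.shutdown()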

Dealing with Ceph pools:

  • Setting the application (use case) of a pool
  • Creating a replicated pool
  • Creating an erasure-coded pool
  • Understanding erasure coding
  • Trade-offs with erasure coding
  • Calculating the ideal PG count (see the sketch after this list)
  • Knowing and understanding quotas
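
Two of the calculations above in a minimal sketch: the classic rule of thumb of roughly 100 PGs per OSD, divided by the pool size and rounded up to a power of two, plus the raw-space overhead of erasure coding versus replication (all numbers are illustrative):

    # PG rule of thumb: (OSDs * 100) / pool size, rounded up to a power of two.
    def ideal_pg_count(num_osds, pool_size, target_pgs_per_osd=100):
        raw = num_osds * target_pgs_per_osd / pool_size
        pgs = 1
        while pgs < raw:
            pgs *= 2
        return pgs

    print(ideal_pg_count(num_osds=20, pool_size=3))  # 1024

    # Raw-space overhead: replica 3 stores 3.0x the payload, while
    # erasure coding with k=4, m=2 stores (4+2)/4 = 1.5x, at the cost
    # of higher CPU load and slower recovery.
    print((4 + 2) / 4)  # 1.5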

Security and authorization management:

  • Basics and available options
  • Setting up Ceph accounts (see the sketch after this list)
  • Setting up RGW S3/Swift accounts
  • Setting up CephFS exports
  • Necessary firewall considerations
  • Important security notes
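
Accounts (cephx keys) can be created programmatically as well as with the ceph CLI. A minimal sketch using mon_command, equivalent to 'ceph auth get-or-create'; the client name and pool are hypothetical:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # Equivalent to: ceph auth get-or-create client.backup \
    #     mon 'profile rbd' osd 'profile rbd pool=backup'
    cmd = json.dumps({
        "prefix": "auth get-or-create",
        "entity": "client.backup",  # hypothetical client name
        "caps": ["mon", "profile rbd", "osd", "profile rbd pool=backup"],
    })
    ret, out, err = cluster.mon_command(cmd, b'')
    print(out.decode())  # keyring entry including the generated secret
    cluster.shutdown()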

RBD (RADOS Block Device) in detail:

  • Network flow for client access
  • Typical sources of error
  • Access using KRBD
  • Access using KVM (LibRBD); see the sketch after this list
  • Information about logging and statistics
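
A minimal LibRBD sketch using the official Python binding (python3-rbd); the pool and image names are hypothetical placeholders:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')  # assumed pool

    # Create a 1 GiB image and write to it through librbd.
    rbd.RBD().create(ioctx, 'demo-image', 1024 ** 3)
    with rbd.Image(ioctx, 'demo-image') as image:
        image.write(b'hello block device', 0)
        print(image.size())  # 1073741824

    ioctx.close()
    cluster.shutdown()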

RGW (RADOS Gateways) in detail:

  • Network flow for client access
  • Access using s3cmd (see the Python alternative after this list)
  • Instructions for use with Hadoop
  • How to optimize your own applications
  • Information about logging and statistics
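
Besides s3cmd, any standard S3 SDK can talk to the RADOS Gateway. A minimal sketch using boto3; the endpoint and credentials are hypothetical placeholders (7480 is the default civetweb port):

    import boto3

    s3 = boto3.client(
        's3',
        endpoint_url='http://rgw.example.com:7480',  # hypothetical RGW endpoint
        aws_access_key_id='ACCESS_KEY',              # from 'radosgw-admin user create'
        aws_secret_access_key='SECRET_KEY',
    )

    s3.create_bucket(Bucket='training-demo')
    s3.put_object(Bucket='training-demo', Key='hello.txt', Body=b'Hello, RGW!')
    print(s3.get_object(Bucket='training-demo', Key='hello.txt')['Body'].read())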

CephFS (Ceph POSIX File System) in detail:

  • MDS (Meta Data Server) in detail
  • Master redundancy and scaling
  • Network flow for client access
  • Integration via the kernel client
  • Integration via the FUSE client (see the libcephfs sketch after this list)
  • Authentication and authorization
  • Assignment of CRUSH rules to files/folders
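
The FUSE client builds on libcephfs, which also ships a Python binding (python3-cephfs). A minimal sketch, assuming a running MDS and a keyring with CephFS capabilities:

    import cephfs

    fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
    fs.mount()  # mount the file system root

    # Create a file and write to it through libcephfs.
    fd = fs.open(b'/training.txt', 'w', 0o644)
    fs.write(fd, b'Hello, CephFS!', 0)
    fs.close(fd)

    fs.unmount()
    fs.shutdown()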

Create further access options:

  • NFS kernel server vs. NFS Ganesha
  • NFS connection using NFS Ganesha
  • Deploying CIFS/SMB using Samba

Monitoring of Ceph clusters:

  • Which components must be monitored?
  • How to monitor correctly
  • Creating and understanding metrics (see the sketch after this list):
      • Bandwidth
      • IOPS
      • Latency
  • Creating valuable KPIs
  • Understanding trade-offs
  • Making problems identifiable before they occur
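
Many of these metrics can be pulled straight from the cluster. A minimal sketch that polls cluster health and client I/O rates via the Python rados binding (the pgmap rate fields are only present while there is client activity):

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # Same data as: ceph status --format json
    ret, out, err = cluster.mon_command(
        json.dumps({"prefix": "status", "format": "json"}), b'')
    status = json.loads(out)

    pgmap = status['pgmap']
    print("health:    ", status['health']['status'])
    print("read IOPS: ", pgmap.get('read_op_per_sec', 0))
    print("write IOPS:", pgmap.get('write_op_per_sec', 0))
    print("read B/s:  ", pgmap.get('read_bytes_sec', 0))
    print("write B/s: ", pgmap.get('write_bytes_sec', 0))
    cluster.shutdown()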

Performing performance analysis:

  • How to test correctly (see the sketch after this list)
  • CPU, RAM, network analysis
  • Identify bottlenecks correctly
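
As a starting point for the exercises, a crude latency micro-benchmark: it writes small objects synchronously and reports average and worst-case latency. This is a sketch only; the pool name is hypothetical, and dedicated tools such as rados bench and fio are covered in the course:

    import time
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('training')  # hypothetical pool

    payload = b'x' * 4096  # 4 KiB objects
    latencies = []
    for i in range(100):
        start = time.monotonic()
        ioctx.write_full(f'bench-{i}', payload)  # synchronous write
        latencies.append(time.monotonic() - start)

    print(f"avg: {sum(latencies) / len(latencies) * 1000:.2f} ms, "
          f"max: {max(latencies) * 1000:.2f} ms")

    ioctx.close()
    cluster.shutdown()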

Performance Optimization:

  • Practical tips for networking
  • Practical tips for the operating system
  • Practical tips for MONs
  • Practical tips for FileStore OSDs
  • Practical tips for BlueStore OSDs
  • Practical tips for RadosGW
  • Practical tips for MDS and CephFS

Real-life case studies:

(hardware, configuration and challenges)

  • RGW Cluster for Big Data
  • RGW for video data
  • RBD for Virtualization
  • CephFS + NFS + SMB for backups
  • CephFS for many small files

Troubleshooting:

  • Practical exercise: service failures
  • Practical exercise: hard disk and data errors
  • Practical exercise: system overload
  • Practical exercise: recovery scenarios
  • Recovery I/O vs. client I/O: prioritization
  • Log analysis / debug levels
  • Determining the data needed for support tickets
  • Avoiding and preventing service crashes

In-depth knowledge, compact:

  • Knowing and understanding ceph-objectstore-tool
  • Stumbling blocks in version migrations
  • Online migration of data
  • Understanding and using caching

Info

Dates

Date: - 2018

Location: Munich

Price: €2,975 (incl. 19% VAT)

Language: English