2-day Ceph training

Get a good overview of Software Defined Storage with Ceph

Ceph - Open Source scale-out storage

In the following, we would like to give you an insight into our training agenda. Please do not hesitate to contact us if you have any questions. We are also happy to include additional topics, tailor the course to your individual needs, or deliver it as in-house training.

Ceph is a high-performance open source storage solution backed by Red Hat. Thanks to its simple, massive scalability, Ceph is suitable for almost every application scenario, including virtual servers, cloud platforms, backup, and much more.

Trainer

We place special emphasis on direct practical relevance in all of our courses, so all of our trainers have extensive experience with Ceph, Linux, networking, and software development. As the developers of an innovative Ceph storage management solution, we can draw on knowledge from a multitude of projects and thus offer the best possible range of services. We strongly believe that this is unique in the market.

Materials provided

  • Free WiFi access
  • Training Handouts
  • Digital documentation
  • Virtual training center
  • Lunch & Coffee breaks

Target group:

Our course is aimed at IT administrators who need to ensure 24/7 operation of their storage environment. Typically, they are in regular contact with developers and users, or run their own storage applications. Basic Linux knowledge is expected as a prerequisite.

Agenda

Introduction:

  • Introduction of the trainer
  • Overview of the training schedule
  • Setting up access to the test environment

General Ceph Basics:

  • A brief history of Ceph
  • Horizontal vs. vertical scalability
  • Ceph vs ZFS vs DRBD vs ...
  • Typical Ceph Use Cases
  • Overview of possible management systems
      • Red Hat Storage Console 3
      • SUSE Enterprise Storage 5
      • OpenAttic Dashboard
      • croit Storage Manager v1804
  • Introduction to the low-level object store (RADOS); see the sketch after this list
  • Introduction to the block store (RBD)
  • Introduction to the POSIX file system (CephFS)
  • Introduction to the high-level object store (S3/Swift)
  • Current features since Ceph Luminous
  • Future development with Ceph Mimic
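
To give a first feel for what the low-level object store looks like from an application's perspective, here is a minimal sketch using the python-rados bindings; the pool name is a placeholder, and it assumes a reachable cluster and a readable /etc/ceph/ceph.conf with an admin keyring:

    import rados

    # Connect using the local ceph.conf and the default admin keyring
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # 'training-pool' is a placeholder; the pool must already exist
        ioctx = cluster.open_ioctx('training-pool')
        try:
            # Write a small object into RADOS and read it back
            ioctx.write_full('greeting', b'hello ceph')
            print(ioctx.read('greeting'))
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()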

Components of a Ceph Cluster:

  • MONs (monitor daemons)
  • OSDs (Object Storage Daemons)
  • MDSs (Metadata Servers)
  • MGRs (Manager Daemons)
  • RGWs (RADOS Gateways)
  • CRUSH Map
  • PGs (Placement Groups)
  • What is BlueStore?
  • Advantages and disadvantages of journal/DB disks

Hardware requirements and planning basics:

  • Cluster planning for 1, 3, and 5 years
  • Scaling options
  • System and component selection
  • Failure Domains
  • Performance testing with test hardware
  • Typical sources of error
  • Dos and Don'ts

Installation of a Ceph Cluster:

  • Setup of all daemons (MON, OSD,...)
  • Practice with ceph-deploy
  • Provision of client access

Configuration of the CRUSH Map:

  • Introduction to the CRUSH Map
  • Setting up CRUSH rules
  • Setting up the CRUSH Map (simple)
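
To make the rule structure tangible, the sketch below dumps the currently configured CRUSH rules through the monitor command interface of python-rados; the exact JSON fields can differ slightly between Ceph releases:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # Ask the monitors for all CRUSH rules as JSON
        cmd = json.dumps({'prefix': 'osd crush rule dump', 'format': 'json'})
        ret, outbuf, errs = cluster.mon_command(cmd, b'')
        if ret != 0:
            raise RuntimeError(errs)
        for rule in json.loads(outbuf):
            # Each rule carries its name and the CRUSH steps it executes
            print(rule['rule_name'], [step['op'] for step in rule['steps']])
    finally:
        cluster.shutdown()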

Dealing with Ceph pools:

  • Determine the use (application) of the pool
  • Setting up a replicated pool
  • Setting up an erasure-coded pool
  • Understanding Erasure Coding
  • Trade-offs with Erasure Coding
  • Calculation of the ideal PG number (see the sketch after this list)
  • Know and understand quotas
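
The commonly cited rule of thumb for the ideal PG number is roughly 100 PGs per OSD, divided by the pool's replica count (or k+m for erasure coding) and rounded to a power of two; the helper below is only a sketch of that back-of-the-envelope calculation, not a replacement for the official PG calculator:

    def suggested_pg_num(num_osds, pool_size, target_pgs_per_osd=100):
        """Rule-of-thumb PG count: (OSDs * target) / pool size,
        rounded up to the next power of two."""
        raw = num_osds * target_pgs_per_osd / pool_size
        pg_num = 1
        while pg_num < raw:
            pg_num *= 2
        return pg_num

    # Example: 40 OSDs with 3-way replication suggests 2048 PGs
    print(suggested_pg_num(40, 3))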

Security and authorization management:

  • Basics and possibilities
  • Setup of Ceph Accounts
  • Setup of RGW S3/Swift accounts
  • Setting up CephFS Exports
  • Necessary firewall hints
  • Important security notes

RBD (RADOS Block Device) in detail:

  • Network flow of client access
  • Typical sources of error
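
As an illustration of the client side, this sketch creates and writes to an RBD image with the python-rbd bindings; the pool and image names are placeholders, and production clients would typically use the kernel module or librbd via QEMU/libvirt instead:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')  # placeholder pool name
        try:
            rbd.RBD().create(ioctx, 'demo-image', 1024 ** 3)  # 1 GiB image
            with rbd.Image(ioctx, 'demo-image') as image:
                image.write(b'hello rbd', 0)  # write at offset 0
                print(image.read(0, 9))
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()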

RGW (RADOS Gateways) in detail:

  • Network flow of client access
  • Access using s3cmd
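
The hands-on exercise uses s3cmd; the same access pattern can also be scripted against the RGW S3 endpoint with the boto3 Python SDK, as in this sketch, where the endpoint URL and credentials are placeholders for a user created on the gateway:

    import boto3

    # Placeholder endpoint and credentials for a RADOS Gateway user
    s3 = boto3.client(
        's3',
        endpoint_url='http://rgw.example.com:7480',
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
    )

    s3.create_bucket(Bucket='training-bucket')
    s3.put_object(Bucket='training-bucket', Key='hello.txt', Body=b'hello rgw')
    print(s3.get_object(Bucket='training-bucket', Key='hello.txt')['Body'].read())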

CephFS (Ceph POSIX File System) in detail:

  • MDS (Metadata Server) in detail
  • Network flow of client access
  • Integration via the kernel client
  • Integration via the FUSE client (see the sketch after this list)
  • Authentication and authorization
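
Besides the kernel and FUSE clients, CephFS can also be reached directly through the libcephfs Python bindings; the sketch below is one possible minimal example, assuming an admin keyring and a running MDS (binding signatures can vary slightly between releases):

    import cephfs

    fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
    fs.mount()  # mount the default filesystem at '/'
    try:
        fd = fs.open('/hello.txt', 'w', 0o644)  # create a file in CephFS
        fs.write(fd, b'hello cephfs', 0)
        fs.close(fd)
        fd = fs.open('/hello.txt', 'r', 0o644)
        print(fs.read(fd, 0, 12))  # read the 12 bytes back
        fs.close(fd)
    finally:
        fs.unmount()
        fs.shutdown()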

Monitoring of Ceph clusters:

  • Which components must be monitored?
  • How do I monitor correctly?
  • Creating and understanding metrics (see the sketch after this list)
      • Bandwidth
      • IOPS
      • Latency
  • Creating valuable KPIs
  • Understanding trade-offs
  • Detecting errors before they occur
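
Many of the metrics discussed here (health, bandwidth, IOPS) can be scraped directly from the monitors; the sketch below pulls the cluster status as JSON via python-rados, with the caveat that the pgmap field names shown are examples and vary between Ceph releases:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ret, outbuf, errs = cluster.mon_command(
            json.dumps({'prefix': 'status', 'format': 'json'}), b'')
        if ret != 0:
            raise RuntimeError(errs)
        status = json.loads(outbuf)
        # Overall health plus throughput counters from the PG map;
        # the counters may be absent on an idle cluster, hence the defaults
        print(status['health']['status'])
        pgmap = status['pgmap']
        print(pgmap.get('read_bytes_sec', 0), pgmap.get('write_bytes_sec', 0))
    finally:
        cluster.shutdown()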

Performing performance analysis:

  • How to test correctly
  • CPU, RAM, network analysis
  • Identify bottlenecks correctly

Real-life case studies:

(hardware, configuration and challenges)

  • RGW Cluster for Big Data
  • RGW for video data
  • RBD for Virtualization
  • CephFS + NFS + SMB for backups
  • CephFS for many small files


Dates

Date: - 2019

Location: Munich

Price: 1,487 € (incl. 19% VAT)

Language: English / German

Date: - 2019

Location: Munich

Price: 1,487 € (incl. 19% VAT)

Language: English / German

Date: - 2019

Location: Barcelona

Price: 1,250 € (incl. 19% VAT)

Language: English

Date: - 2019

Location: Barcelona

Price: 1,250 € (incl. 19% VAT)

Language: English

Date: - 2019

Location: Munich

Price: 1,487 € (incl. 19% VAT)

Language: German