In the evolving landscape of digital storage, Ceph and iSCSI stand as two critical technologies that are shaping the future of data management. In this article, we will demystify these technologies, exploring their unique attributes and the synergies derived from their intersection.
A Glimpse at Ceph
Ceph is an open-source software-defined storage platform known for its reliability, scalability, and flexibility. Its core strength lies in its object storage approach, whereby data is bundled with metadata and a unique identifier. This paradigm permits Ceph to distribute data across a network without a single point of failure, delivering high availability and redundancy.
Highlights of Ceph:
- Scalability: Ceph can comfortably handle petabytes of data, growing seamlessly by simply adding more hardware as the data demands grow.
- Fault Tolerance: With its decentralized design, Ceph is capable of replicating data and recovering from hardware failures automatically, ensuring data safety.
- Compatibility: Ceph provides a multitude of interfaces, including object, block, and file-level storage, catering to various storage requirements.
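To make the three interfaces concrete, here is a minimal command-line sketch of each; the pool name (mypool), image name (myimage), and monitor host (mon1) are placeholders, and a running cluster with admin credentials is assumed:

```shell
# Object interface: store and retrieve an object directly via RADOS
echo "hello" > /tmp/hello.txt
rados -p mypool put greeting /tmp/hello.txt
rados -p mypool get greeting /tmp/hello-copy.txt

# Block interface: create a 10 GiB RBD image
rbd create mypool/myimage --size 10G

# File interface: mount CephFS through the kernel client
mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin
```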
Insight into iSCSI
iSCSI (Internet Small Computer Systems Interface) is a standards-based network protocol that transports SCSI commands over IP networks. This allows data transfers across LANs, WANs, or the Internet, enabling storage area networks (SANs) without requiring directly attached storage devices.
iSCSI employs several components that work together to facilitate communication between servers and storage devices over IP networks. These components include:
- iSCSI Initiator: The iSCSI initiator is a client-side component that sends SCSI commands to the iSCSI target. This could be a dedicated hardware device or software installed on a server.
- iSCSI Target: The iSCSI target is the storage device that receives SCSI commands from the initiator. It provides disk storage to the initiator as if it were a locally attached SCSI disk.
- iSCSI Qualified Name (IQN): Each iSCSI initiator and target has a unique identifier called the IQN. This is used for authentication and to establish connections between initiators and targets.
- Logical Unit Number (LUN): Within an iSCSI target, data is organized into logical units identified by LUNs. Each LUN represents a unique storage volume that can be accessed by the iSCSI initiator.
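On a Linux client, these components surface directly in the open-iscsi tooling. A hedged sketch, assuming open-iscsi is installed and a target portal at a hypothetical address 192.168.0.10:

```shell
# Discover targets exported by the portal; each line printed is a target IQN
iscsiadm -m discovery -t sendtargets -p 192.168.0.10

# Log in to one target: this establishes the initiator<->target session
# (the IQN below is a placeholder)
iscsiadm -m node -T iqn.2003-01.org.example:storage.disk1 -p 192.168.0.10 --login

# Every LUN exported by the target now appears as a local SCSI disk
lsblk --scsi
```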
The Power Duo: Ceph and iSCSI
The combination of Ceph and iSCSI brings forth a potent, highly available, and flexible storage solution.
Ceph’s native block device, the RADOS Block Device (RBD), offers valuable block-storage features such as snapshots and replication. However, client compatibility can be a limitation: non-Linux systems such as VMware and Windows cannot use the RBD kernel module to connect to a Ceph cluster directly.
This is where iSCSI comes in: by exporting RBD images as iSCSI targets, they become accessible to clients and applications that do not natively support Ceph.
The iSCSI Gateway presents a Highly Available iSCSI target that exports RADOS Block Device (RBD) images as SCSI disks. The iSCSI protocol allows clients (initiators) to send SCSI commands to storage devices (targets) over a TCP/IP network, enabling clients without native Ceph client support to access Ceph block storage.
Each iSCSI gateway relies on the Linux IO target kernel subsystem (LIO) to provide iSCSI protocol support. LIO uses the tcmu-runner (Target Core Module in Userspace) daemon to bridge the kernel SCSI target infrastructure and a userspace application, exposing RBD images to iSCSI clients. The kernel module involved is called target_core_user and can be viewed as a virtual HBA.
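This plumbing can be observed on a gateway node; a quick sketch, with paths and unit names as shipped by most distributions:

```shell
# target_core_user is the kernel module that backs userspace-handled LUNs
lsmod | grep target_core_user

# tcmu-runner must be active to service RBD-backed backstores
systemctl status tcmu-runner

# LIO's live configuration is exposed through configfs
ls /sys/kernel/config/target/
```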
With Ceph’s iSCSI gateway you can provision a fully integrated block-storage infrastructure with all the features and benefits of a conventional Storage Area Network (SAN).
Figure 1 – Ceph + iSCSI infrastructure
The interaction between all the Ceph iSCSI daemons is key to delivering a well-functioning iSCSI gateway that links iSCSI initiators to the Ceph storage backend. Here’s an overview of how these services interact:
- tcmu-runner: This daemon interacts directly with the Linux kernel’s LIO subsystem and librados, the Ceph client library. When an iSCSI initiator sends a SCSI command, the LIO subsystem passes the command to tcmu-runner, which then translates it into librados calls and sends it to the Ceph storage cluster.
- rbd-target-api and rbd-target-gw: These two daemons work in concert to manage the LIO configuration. rbd-target-api handles REST API calls for managing iSCSI targets and persists the LIO configuration to Ceph RADOS storage. rbd-target-gw ensures that the in-kernel LIO configuration matches the configuration stored in RADOS. When a change is made to the iSCSI configuration via rbd-target-api, the change is saved to RADOS. The rbd-target-gw daemon on each gateway node then receives a notification of the change, retrieves the updated configuration from RADOS, and applies it to the LIO subsystem in the kernel.
- Ceph daemons: tcmu-runner interacts with the Ceph storage cluster, which is managed by various Ceph daemons such as ceph-osd. These daemons manage the different parts of the cluster, including handling client connections, storing and retrieving data, and maintaining cluster state. tcmu-runner communicates with the ceph-osd daemons via librados to store and retrieve data. Changes to the LIO configuration, applied by rbd-target-gw, are stored in the Ceph cluster, and notifications of these changes are delivered through the cluster’s watch/notify mechanism.
In terms of resource usage, as mentioned earlier, tcmu-runner can be CPU-intensive because it handles the actual I/O operations. On the other hand, rbd-target-api and rbd-target-gw primarily consume resources when the iSCSI configuration changes, which is typically infrequent once the initial setup is complete. The Ceph daemons’ resource usage depends on the size and activity of the Ceph cluster.
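The shared configuration flow described above can be inspected directly. ceph-iscsi keeps its state as a JSON object named gateway.conf in RADOS; the pool name used here (rbd) depends on your deployment, so treat it as an assumption:

```shell
# Dump the gateway configuration object (JSON) stored in RADOS
rados -p rbd get gateway.conf - | python3 -m json.tool

# List the gateway daemons currently watching the object; these are the
# processes that receive change notifications via RADOS watch/notify
rados -p rbd listwatchers gateway.conf
```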
Setting Up Basic iSCSI Configuration with Ceph
Here’s a simplified step-by-step guide on setting up a basic iSCSI configuration with Ceph:
Prerequisites: You should have a working Ceph cluster, a couple of machines to act as the iSCSI gateways, and a pool for the RBD images containing at least one image (e.g., rbd/disk_1).
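If the pool and image do not exist yet, they can be created up front; a sketch using the names from this article, with the PG count (64) chosen arbitrarily for a small cluster:

```shell
# Create the pool, initialize it for RBD use, and create one image
ceph osd pool create rbd 64
rbd pool init rbd
rbd create rbd/disk_1 --size 50G

# Confirm the image exists
rbd info rbd/disk_1
```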
1. Install Ceph iSCSI gateway: Install the necessary software packages on the iSCSI gateway machines. On a Debian-based system, you would use commands such as:
# apt update
# apt install ceph-iscsi tcmu-runner
2. Configure the iSCSI Gateways: As root, create and edit a file named iscsi-gateway.cfg in the /etc/ceph directory:
# /etc/ceph/iscsi-gateway.cfg
LOCAL_IFACE="bond0"
[config]
cluster_name = ceph
gateway_keyring = ceph.client.admin.keyring
# API settings.
# To support the API, the bare minimum settings are:
api_secure = false
# Additional API configuration options are as follows, defaults shown.
# api_user = admin
# api_password = admin
# api_port = 5001
# trusted_ip_list = 192.168.0.10,192.168.0.11
The iscsi-gateway.cfg file must be identical on all iSCSI gateway nodes.
trusted_ip_list is a list of IPv4 or IPv6 addresses that are allowed to access the API. By default, only the iSCSI gateway nodes have access.
3. Start the API: As root, on all iSCSI gateway nodes, enable and start the API service:
# systemctl daemon-reload
# systemctl enable --now rbd-target-gw
# systemctl enable --now rbd-target-api
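Before moving on, it is worth confirming that the daemons actually came up on every gateway node; a quick check:

```shell
# All three services should report "active"
systemctl is-active tcmu-runner rbd-target-gw rbd-target-api

# The REST API should be listening on its configured port (5001 by default)
ss -tln | grep 5001
```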
4. Create an iSCSI Target: As root, on an iSCSI gateway node, create an iSCSI target using the gwcli command-line interface:
# gwcli
> /> cd /iscsi-targets
> /iscsi-targets> create iqn.2023-08.com.mydomain:iscsi-igw
The IQN takes the form iqn.yyyy-mm.naming-authority:unique-name, where yyyy-mm is the year and month when the naming authority was established, and naming-authority is the reversed Internet domain name of the naming authority.
5. Create the iSCSI gateways: Navigate to the gateways section and create two gateways, using the IP addresses of the nodes used so far:
> /iscsi-targets> cd iqn.2023-08.com.mydomain:iscsi-igw/gateways
> /iscsi-target...-igw/gateways> create ceph-gw-1 10.172.19.21
> /iscsi-target...-igw/gateways> create ceph-gw-2 10.172.19.22
6. Create an iSCSI client: Create an IQN and CHAP credentials for each client (initiator) that will mount the images, so it can authenticate against the target:
> /disks> cd /iscsi-targets/iqn.2023-08.com.mydomain:iscsi-igw/hosts
> /iscsi-target...eph-igw/hosts> create iqn.2023-08.com.mydomain:rh7-client
> /iscsi-target...eph-igw/hosts> cd iqn.2023-08.com.mydomain:rh7-client
> /iscsi-target...in:rh7-client> auth username=myiscsiusername password=myiscsipassword
7. Add the disk to the client:
> /iscsi-target...in:rh7-client> disk add rbd/disk_1
8. Configure the iSCSI Initiator: On the client machine, install the iSCSI initiator software (if not already installed) and configure it to connect to the iSCSI target.
9. Authenticate and Connect: Finally, the iSCSI initiator authenticates and connects to the iSCSI target, after which you can start mounting the disks over the iSCSI protocol.
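For a Linux client, the initiator-side steps above typically look like the following open-iscsi sketch; the IQNs, credentials, and gateway IP are the ones created earlier, and file paths may differ by distribution:

```shell
# Set the client IQN that was registered on the gateway
echo "InitiatorName=iqn.2023-08.com.mydomain:rh7-client" > /etc/iscsi/initiatorname.iscsi

# Discover the target through one of the gateways
iscsiadm -m discovery -t sendtargets -p 10.172.19.21

# Enable CHAP and set the credentials created in the hosts section
iscsiadm -m node -T iqn.2023-08.com.mydomain:iscsi-igw \
  -o update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T iqn.2023-08.com.mydomain:iscsi-igw \
  -o update -n node.session.auth.username -v myiscsiusername
iscsiadm -m node -T iqn.2023-08.com.mydomain:iscsi-igw \
  -o update -n node.session.auth.password -v myiscsipassword

# Log in; the RBD-backed LUN then appears as a local SCSI disk (see lsblk)
iscsiadm -m node -T iqn.2023-08.com.mydomain:iscsi-igw --login
```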
Ceph iSCSI represents a pragmatic combination of two well-known technologies: Ceph’s distributed storage and the iSCSI protocol. By merging them, Ceph iSCSI offers a solution that makes accessing storage over the network straightforward, without heralding a radical change in the storage landscape.
Utilitarian By Design: Ceph iSCSI is essentially a tool for those who are familiar with Ceph and want to leverage existing network protocols to access their storage. It doesn’t reinvent the wheel but makes the existing infrastructure work more cohesively.
Open Source Benefits: The open-source nature of Ceph iSCSI ensures that it’s accessible and can be modified as per needs, but it also comes with the usual caveats of open-source solutions, like being dependent on community support.
Interoperability as a Standard Feature: One of the primary motivations behind Ceph iSCSI is to ensure compatibility across varied IT environments. It’s more about ensuring continuity than pioneering innovation.
In summary, Ceph iSCSI is a functional, no-nonsense approach to storage. It’s a blend of established technologies designed to fulfill specific needs rather than revolutionizing the storage domain.
Want to maximize your storage potential? Contact us today to discover how croit’s storage platform can help.