How I Removed Ceph from My Proxmox Cluster

Ceph belongs to that top league of solutions that amaze us by being both sophisticated and accessible to everyone; it's just a matter of putting in the effort to learn and use them. The league I'm referring to includes things like GNU/Linux, Kubernetes, and Proxmox. Ceph is a powerful distributed storage system with smooth integration into Proxmox. The problem comes when you don't have the proper hardware. I tried my best to set it up in my homelab even though I knew I needed a dedicated 10 Gbit network. To me it was one of those things where, even knowing it won't work, it's worth experiencing the effects for myself. So I went ahead with just a 1 Gbit network on my 3-node cluster. I always like the feeling of making controlled, harmless mistakes, and the only "side effect" I got in the process was that my blog went down for a few minutes.

That being said, I decided to remove it from my homeserver/homelab. That simplifies my setup, frees up resources, and doesn't force me to spend more money on the proper hardware. Below I document the steps I followed to completely remove Ceph from my Proxmox cluster. Perhaps there is a better way to achieve the same outcome, but this is what worked in my case. Also keep in mind the versions below and how old this post is.

Proxmox version: 8.4.1

Ceph version: Squid (19.2.1)

Step 1: Identify and Unmount Ceph Mounts

First, check if any Ceph-related filesystems are mounted.

mount | grep ceph

If you find any mounted Ceph paths (such as OSDs), unmount them forcefully:

umount -f <path>   # e.g., /var/lib/ceph/osd/ceph-0

This ensures no Ceph process is actively using the disks.
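
If several OSD paths turn up mounted, a small convenience loop saves some typing. This is just a sketch built on the same mount | grep ceph output above:

# unmount every mounted path whose entry mentions "ceph" (adjust the pattern if needed)
for m in $(mount | awk '/ceph/ {print $3}'); do
    umount -f "$m"
done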

Step 2: Stop and Disable Ceph Services

Stop the Ceph services, then disable them so they can't come back:

systemctl stop ceph.target ceph-mon.target ceph-mgr.target ceph-mds.target ceph-osd.target ceph-crash
systemctl disable --now ceph.target ceph-mon.target ceph-mgr.target ceph-mds.target ceph-osd.target ceph-crash

This step guarantees that Ceph won’t automatically restart on boot.

Note: some units still appear even after the previous commands (run systemctl list-units | grep ceph to list all Ceph-related units), something like system-ceph\x2dvolume.slice, but don't worry: those will be gone once the system is rebooted (step 5).
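
If you want to double-check that nothing Ceph-related is still running before moving on, a quick process check is enough; these are standard tools, nothing Ceph-specific:

# both should print nothing once all Ceph daemons are stopped
pgrep -a ceph
ps aux | grep -i '[c]eph'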

Step 3: Purge Ceph from Proxmox

Proxmox provides a helper command to remove Ceph integration:

pveceph purge

This removes Ceph configuration from the Proxmox cluster level.
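
To verify the purge worked, you can check that the cluster-level config and any Ceph-backed storage definitions are gone. The paths below are the standard Proxmox locations; the grep pattern simply assumes the storages were of type rbd/cephfs:

# should fail with "No such file or directory"
cat /etc/pve/ceph.conf
# should print nothing if no RBD/CephFS storage definitions remain
grep -E 'rbd|cephfs' /etc/pve/storage.cfg

If the grep still shows entries, they can be removed from Datacenter → Storage in the GUI or by editing /etc/pve/storage.cfg directly.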

Step 4: Clean Up Ceph Files and Configurations

To ensure no leftover files remain, delete Ceph’s systemd units, configuration files, and data directories:

rm -rf /etc/systemd/system/ceph* \
       /var/lib/ceph/mon/ \
       /var/lib/ceph/mgr/ \
       /var/lib/ceph/mds/ \
       /etc/pve/priv/ceph.* \
       /etc/ceph/* \
       /etc/pve/ceph.conf

And remove the directories entirely:

rm -rf /etc/pve/ceph
rm -rf /etc/ceph
rm -rf /var/lib/ceph
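
Since the commands above delete systemd unit files by hand, it is worth telling systemd to reload its view of the units; this is a generic systemd step, nothing Ceph-specific:

# make systemd forget the deleted unit files and clear any failed-unit state
systemctl daemon-reload
systemctl reset-failed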

Step 5: Reboot the Node

Reboot to clear out any cached services or stale mounts:

reboot

Step 6: Wipe the Disks Used by Ceph

Once the node comes back, check your available disks:

lsblk

Then wipe all Ceph metadata and partitions from the disk(s) previously used by Ceph. Replace /dev/nvme0n1 with your actual device:

sgdisk --zap-all /dev/nvme0n1
wipefs -a /dev/nvme0n1
dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=10
  • sgdisk --zap-all: wipes the GPT and MBR partition tables.
  • wipefs -a: clears filesystem signatures.
  • dd ...: zeroes the beginning of the disk to remove any leftover Ceph headers.
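
One caveat: if the OSD was created with ceph-volume (the default for BlueStore), the disk is usually wrapped in an LVM volume group, and wiping the raw device may complain that it is still in use. In that case, remove the LVM leftovers first and then verify the wipe. The ceph-<uuid> naming is just what ceph-volume typically generates, so check what vgs actually reports on your node:

# list volume groups; ceph-volume normally names them ceph-<uuid>
vgs
# remove the leftover volume group (replace <vg-name> with the name vgs reported)
vgremove -f <vg-name>
# wipefs without -a only lists signatures still present (should print nothing)
wipefs /dev/nvme0n1
lsblk -f /dev/nvme0n1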

All commands

mount | grep ceph
umount -f <path> # e.g., /var/lib/ceph/osd/ceph-1

systemctl list-units | grep ceph

systemctl stop ceph.target ceph-mon.target ceph-mgr.target ceph-mds.target ceph-osd.target ceph-crash <etc>
systemctl disable --now ceph.target ceph-mon.target ceph-mgr.target ceph-mds.target ceph-osd.target ceph-crash

pveceph purge

rm -rf /etc/systemd/system/ceph* /var/lib/ceph/mon/ /var/lib/ceph/mgr/ /var/lib/ceph/mds/ /etc/pve/priv/ceph.* /etc/ceph/* /etc/pve/ceph.conf
rm -rf /etc/pve/ceph
rm -rf /etc/ceph
rm -rf /var/lib/ceph

reboot

lsblk
sgdisk --zap-all /dev/nvme0n1
wipefs -a /dev/nvme0n1
dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=10

Final Notes

After these steps, Ceph is fully removed from the node. At this point, the disk(s) are ready to be reused for other storage backends like ZFS, LVM, or plain directories. In the future I will invest in a 10 Gbit network and set Ceph up again to get HA in my homelab/homeserver, but for now a dedicated NVMe for my VMs/LXCs is enough for what I want to do.