In a Ceph distributed storage cluster, which commonly serves as the storage backend in OpenStack environments, the OSD (Object Storage Daemon) is the component that stores object data on disk and services read and write requests. In some situations, for example when scaling out a large Ceph deployment, you may need to add a new OSD by hand. This article walks through creating an OSD manually.
2. Preparation
Before starting, make sure the following prerequisites are in place:
1. A Ceph cluster is installed, with at least one running monitor (MON) node. (An MDS node is only required if you use CephFS; it is not needed to add an OSD.)
2. One or more spare disks or partitions are available to serve as the OSD's data store.
3. The network is configured so that the host that will run the new OSD can reach the MON nodes.
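As a preflight sketch, the cluster identity the new OSD will join can be read out of the cluster configuration file. The snippet below parses a minimal, made-up ceph.conf written to a temporary path; the file contents, path, and values are assumptions for illustration only (a real deployment already has /etc/ceph/ceph.conf):

```bash
#!/bin/sh
# Write a minimal, hypothetical ceph.conf for this sketch.
conf=/tmp/ceph.conf
cat > "$conf" <<'EOF'
[global]
fsid = 00000000-0000-0000-0000-000000000001
mon_host = 10.0.0.1
EOF

# Pull the cluster fsid and monitor address out of the file.
fsid=$(awk -F' = ' '$1 == "fsid" {print $2}' "$conf")
mon=$(awk -F' = ' '$1 == "mon_host" {print $2}' "$conf")
echo "fsid=$fsid mon=$mon"
```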
3. Create the OSD data directory
The new partition needs a filesystem and a mount point where the OSD will keep its data. Note that you cannot create a directory inside a device file such as `/dev/sdb1`; instead, format the partition and mount it at a directory:

```bash
sudo mkfs.xfs /dev/sdb1
sudo mkdir -p /var/lib/ceph/osd/osd-data
sudo mount /dev/sdb1 /var/lib/ceph/osd/osd-data
```

Here `/dev/sdb1` is the device file of the partition and `/var/lib/ceph/osd/osd-data` is the mount point serving as the data directory.
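Ceph's tooling expects each OSD's data directory at a predictable path, `/var/lib/ceph/osd/{cluster}-{id}`. A small sketch of that naming convention, using a scratch root so it can run unprivileged (the cluster name and id are placeholders):

```bash
#!/bin/sh
CLUSTER=mycluster   # placeholder cluster name
ID=0                # placeholder OSD id

# Use a temporary root so this runs without root privileges;
# a real deployment uses the path directly under /.
ROOT=$(mktemp -d)
OSD_DIR="$ROOT/var/lib/ceph/osd/${CLUSTER}-${ID}"
mkdir -p "$OSD_DIR"
echo "$OSD_DIR"
```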
4. Initialize the OSD with ceph-disk
On releases that still ship the (now deprecated) ceph-disk tool, the partition can be prepared in one step; ceph-disk partitions and formats the device itself, so the manual mkfs and mount above are not needed in that case:

```bash
sudo ceph-disk -v prepare --cluster mycluster /dev/sdb1
```

Here `mycluster` is the name of your Ceph cluster and `/dev/sdb1` is the device file of the partition.
5. Register the OSD and create its key
The new OSD must be allocated an ID, have its data store initialized, and be given a keyring so the cluster can authenticate it:

```bash
# Allocate a new OSD id (printed to stdout)
ID=$(sudo ceph osd create)

# Initialize the OSD's data directory and generate its key
sudo mkdir -p /var/lib/ceph/osd/ceph-$ID
sudo ceph-osd -i $ID --mkfs --mkkey

# Register the key with the cluster
sudo ceph auth add osd.$ID osd 'allow *' mon 'allow rwx' \
    -i /var/lib/ceph/osd/ceph-$ID/keyring

# Place the OSD in the CRUSH map under this host
sudo ceph osd crush add osd.$ID 1.0 host=mynode
```

Here `mynode` is the hostname of the node running the new OSD, and the default data directory is `/var/lib/ceph/osd/{cluster}-{id}` (`ceph-$ID` for a cluster named `ceph`).
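Since the allocated id gets interpolated into paths and unit names, a defensive script should check it first. A sketch, using a stand-in value so it runs without a cluster (in reality the id comes from `ceph osd create`):

```bash
#!/bin/sh
# Stand-in for: ID=$(sudo ceph osd create)
ID=3

# Refuse to build paths from anything that is not a plain integer.
case "$ID" in
    ''|*[!0-9]*) echo "bad OSD id: $ID" >&2; exit 1 ;;
esac

OSD_DIR="/var/lib/ceph/osd/ceph-$ID"
echo "$OSD_DIR"
```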
6. Start the OSD process
Start (and enable at boot) the systemd unit for the new OSD:

```bash
sudo systemctl start ceph-osd@$ID
sudo systemctl enable ceph-osd@$ID
```

Here `$ID` is the OSD id allocated when the OSD was registered; systemd instantiates one `ceph-osd@` unit per OSD.
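The daemon can take a moment to come up, so scripts often poll rather than check once. A generic polling helper, sketched here against a stand-in condition instead of systemctl so it runs anywhere; a real check might be `wait_for "systemctl is-active --quiet ceph-osd@$ID"`:

```bash
#!/bin/sh
# Poll a command until it succeeds or attempts run out.
wait_for() {
    cmd=$1; tries=${2:-5}
    i=0
    while [ "$i" -lt "$tries" ]; do
        if eval "$cmd"; then return 0; fi
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# Stand-in check: a marker file plays the role of the unit going active.
touch /tmp/osd-up.marker
wait_for "[ -f /tmp/osd-up.marker ]" && echo "service is up"
```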
7. Verify the OSD status
Check that the new OSD has joined the cluster:

```bash
sudo ceph osd tree
sudo ceph -s
```

If the new OSD appears in the tree with status `up` and is `in` the cluster, it is running correctly; otherwise, check its log under `/var/log/ceph/` to troubleshoot.
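The up/down check can also be scripted by parsing the tree output. The sample line below mimics the `ceph osd tree` column layout (ID, CLASS, WEIGHT, NAME, STATUS, ...); it is a fabricated example, and in practice you would pipe in `sudo ceph osd tree | grep 'osd\.0 '` instead:

```bash
#!/bin/sh
# Fabricated sample line in the `ceph osd tree` column layout.
sample=" 0   hdd  0.90970      osd.0      up   1.00000  1.00000"

# Column 5 is the STATUS field (up/down).
state=$(echo "$sample" | awk '{print $5}')
if [ "$state" = "up" ]; then
    echo "OSD is running"
else
    echo "OSD is not running"
fi
```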
Original article by K-seo. If you repost it, please credit the source: https://www.kdun.cn/ask/5189.html