
Overview of the object storage gateway component (RGW)
1. Object storage gateway overview
The Ceph Object Gateway stores its data in the same Ceph storage cluster that also holds data from CephFS clients and Ceph RBD clients.
An object is the basic unit of storage in an object storage system. Each object combines data with a set of data attributes, and those attributes can be set according to application requirements, such as data placement and quality of service.
Each object maintains its own attributes, which simplifies storage-system management. Objects can vary in size and may even contain entire data structures such as files or database table entries. For file uploads and downloads there is a default maximum chunk size of 15 MB (seen later with the s3cmd multipart uploads).
Ceph object storage uses the Ceph Object Gateway daemon (RADOS Gateway, RGW for short), an HTTP server for interacting with the Ceph storage cluster.
Ceph RGW is built on librados and provides applications with a RESTful object storage interface; historically it used Civetweb as its default web service (newer releases default to the Beast frontend).
In the N (Nautilus) release RGW listened on port 7480 by default, while in the R release (Reef, 18.2.4) it is deployed on port 80; to use a custom port you need to change the RGW configuration (see the sketch after the notes below).
- Since version 0.80 (Firefly, 2014-05-01 ~ 2016-04-01), Ceph no longer uses Apache and FastCGI to serve radosgw;
- Instead, Civetweb is embedded in the ceph-radosgw process by default. This implementation is lighter and simpler, but Civetweb did not support SSL until Ceph 11.0.1 Kraken (2017-01-01 ~ 2017-08-01).
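If a custom port is needed for a cephadm-managed RGW, one way is to describe it in a service specification and let the orchestrator redeploy the daemons. A minimal sketch, assuming a service id of "lax" and port 8080 (rgw_frontend_port is the spec field that sets the listening port):
cat > rgw-lax.yaml <<'EOF'
service_type: rgw
service_id: lax
placement:
  count: 2
spec:
  rgw_frontend_port: 8080
EOF
ceph orch apply -i rgw-lax.yaml    # cephadm (re)deploys the rgw.lax daemons listening on 8080
For non-cephadm deployments, the classic approach is to set rgw_frontends (for example "beast port=8080") in the rgw client section of the Ceph configuration.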
Recommended reading:
https://docs.ceph.com/en/squid/radosgw/
https://docs.ceph.com/en/nautilus/radosgw/
https://docs.ceph.com/en/nautilus/radosgw/bucketpolicy/
https://docs.aws.amazon.com/zh_cn/AmazonS3/latest/userguide/bucketnamingrules.html
https://www.s3express.com/help/help.html
2. Core resources of an object storage system
Although storage solutions differ in design and implementation, most object storage systems expose broadly similar core resource types.
In general, the core resources of an object storage system include users (User), buckets (Bucket), and objects (Object). Their relationships are:
- 1. A user stores objects into buckets on the storage system;
- 2. A bucket belongs to a user and can hold objects; one bucket stores many objects;
- 3. The same user can own multiple buckets; whether different users may reuse the same bucket name depends on the system (see the RadosGW notes below);
3. Interfaces supported by Ceph RGW
RGW requires its own daemon to function. RGW is not a mandatory component; deploy RGW instances only when the S3- and Swift-compatible RESTful interfaces are needed. When RGW is created, it automatically initializes its own storage pools.
Because RGW provides interfaces compatible with OpenStack Swift and Amazon S3, the Ceph Object Gateway maintains its own user management.
- Amazon S3:
Compatible with the Amazon S3 RESTful API (exercised in these notes mainly through command-line tools).
Provides user, bucket, and object, representing users, buckets, and objects respectively; buckets belong to users.
In this model the user name can therefore act as a namespace for buckets, allowing different users to use the same bucket name (note that Amazon S3 itself requires bucket names to be unique within a partition; see the naming rules later in this document).
- OpenStack Swift:
Compatible with the OpenStack Swift API (exercised in these notes mainly from application code).
Provides user, container, and object, corresponding to users, buckets, and objects, and additionally offers a parent-level component above the user, the account, which represents a project or tenant.
An account can therefore contain one or more users, which share the same set of containers, and the account provides the namespace for those containers.
- RadosGW:
Provides user, subuser, bucket, and object. Here user corresponds to the S3 user, while subuser corresponds to the Swift user. Neither user nor subuser provides a namespace for buckets, so buckets belonging to different users cannot share the same name.
However, since the Jewel release (10.2.11, 2016-04-01 ~ 2018-07-01), RadosGW has offered an optional tenant concept that provides a namespace for users and buckets (see the sketch below).
Before Jewel, all radosgw users shared a single namespace: every user ID had to be unique, and even buckets owned by different users could not use the same bucket ID.
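A minimal sketch of the tenant feature (the tenant and user names below are made up for illustration): radosgw-admin accepts a --tenant option, which places the user, and therefore its buckets, into its own namespace, so the same uid can exist under different tenants:
radosgw-admin user create --tenant tenant1 --uid jack --display-name "jack in tenant1"
radosgw-admin user create --tenant tenant2 --uid jack --display-name "jack in tenant2"
radosgw-admin user info --tenant tenant1 --uid jack
When accessing such buckets over S3, the bucket is normally addressed as tenant:bucket.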
Troubleshooting case: a pool has many more objects per PG than average
- 1. Symptom
[root@ceph141 ~]# ceph -s
cluster:
id: 11e66474-0e02-11f0-82d6-4dcae3d59070
health: HEALTH_WARN
1 pools have many more objects per pg than average
services:
mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 19h)
mgr: ceph141.mbakds(active, since 2d), standbys: ceph142.qgifwo
mds: 1/1 daemons up, 1 standby
osd: 9 osds: 9 up (since 19h), 9 in (since 2d)
data:
volumes: 1/1 healthy
pools: 5 pools, 313 pgs
objects: 10.04k objects, 26 GiB
usage: 89 GiB used, 5.3 TiB / 5.3 TiB avail
pgs: 313 active+clean
io:
client: 165 KiB/s wr, 0 op/s rd, 5 op/s wr
[root@ceph141 ~]#
- 2. Analysis
2.1 Identify which pool is triggering the warning
[root@ceph141 ~]# ceph health detail
HEALTH_WARN 1 pools have many more objects per pg than average
[WRN] MANY_OBJECTS_PER_PG: 1 pools have many more objects per pg than average
pool oldboyedu objects per pg (837) is more than 26.1562 times cluster average (32)
[root@ceph141 ~]#
[root@ceph141 ~]# echo 837/32 | bc
26
[root@ceph141 ~]#
Analysis:
The message is explicit: the oldboyedu pool holds far more objects per PG than the cluster average of 32; each of its PGs currently holds 837 objects, which is 26.1562 times the average.
2.2 Check which OSDs/nodes the data is stored on
[root@ceph141 ~]# ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 5.3 TiB 5.3 TiB 89 GiB 89 GiB 1.62
TOTAL 5.3 TiB 5.3 TiB 89 GiB 89 GiB 1.62
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
.mgr 1 1 449 KiB 2 1.3 MiB 0 1.6 TiB
oldboyedu 8 8 25 GiB 6.70k 76 GiB 1.51 1.6 TiB
linux96 11 16 1000 B 8 61 KiB 0 1.6 TiB
cephfs_data 12 256 76 MiB 3.31k 252 MiB 0 1.6 TiB
cephfs_metadata 13 32 20 MiB 28 61 MiB 0 1.6 TiB
[root@ceph141 ~]#
[root@ceph141 ~]# rbd ls violet
child-xixi-001
docker
harbor
mysql80
node-exporter
prometheus
prometheus-server
ubuntu-2204
wordpress-db
[root@ceph141 ~]#
[root@ceph141 ~]# ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS
0 hdd 0.29300 1.00000 300 GiB 4.3 GiB 3.1 GiB 4 KiB 1.2 GiB 296 GiB 1.43 0.88 57 up
1 hdd 0.48830 1.00000 500 GiB 21 GiB 19 GiB 14 KiB 1.8 GiB 479 GiB 4.19 2.58 90 up
2 hdd 1.00000 1.00000 1024 GiB 4.6 GiB 3.2 GiB 4 KiB 1.4 GiB 1019 GiB 0.45 0.28 166 up
3 hdd 0.29300 1.00000 300 GiB 7.3 GiB 6.3 GiB 10 KiB 1.1 GiB 293 GiB 2.44 1.51 53 up
4 hdd 0.48830 1.00000 500 GiB 11 GiB 9.6 GiB 54 KiB 1.1 GiB 489 GiB 2.14 1.32 87 up
5 hdd 1.00000 1.00000 1024 GiB 11 GiB 9.7 GiB 14 KiB 1.3 GiB 1013 GiB 1.07 0.66 173 up
6 hdd 0.29300 1.00000 300 GiB 4.5 GiB 3.2 GiB 4 KiB 1.3 GiB 295 GiB 1.50 0.93 49 up
7 hdd 0.48830 1.00000 500 GiB 4.5 GiB 3.3 GiB 4 KiB 1.2 GiB 495 GiB 0.91 0.56 89 up
8 hdd 1.00000 1.00000 1024 GiB 21 GiB 19 GiB 15 KiB 1.8 GiB 1003 GiB 2.03 1.25 175 up
TOTAL 5.3 TiB 89 GiB 77 GiB 127 KiB 12 GiB 5.3 TiB 1.62
MIN/MAX VAR: 0.28/2.58 STDDEV: 1.05
[root@ceph141 ~]#
[root@ceph141 ~]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 5.34389 root default
-3 1.78130 host ceph141
0 hdd 0.29300 osd.0 up 1.00000 1.00000
1 hdd 0.48830 osd.1 up 1.00000 1.00000
2 hdd 1.00000 osd.2 up 1.00000 1.00000
-5 1.78130 host ceph142
3 hdd 0.29300 osd.3 up 1.00000 1.00000
4 hdd 0.48830 osd.4 up 1.00000 1.00000
5 hdd 1.00000 osd.5 up 1.00000 1.00000
-7 1.78130 host ceph143
6 hdd 0.29300 osd.6 up 1.00000 1.00000
7 hdd 0.48830 osd.7 up 1.00000 1.00000
8 hdd 1.00000 osd.8 up 1.00000 1.00000
[root@ceph141 ~]#
[root@ceph141 ~]# ceph osd pool ls detail | grep violet
pool 8 'violet' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode off last_change 358 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd read_balance_score 3.38
[root@ceph141 ~]#
Conclusion:
The oldboyedu pool (pool id 8, listed as 'violet' in the pool detail output above) has only 8 PGs, which is why each PG carries so many objects.
In other words, to bring the per-PG object count down to the cluster average, the pool would need roughly 209 PGs:
[root@ceph141 ~]# echo "837 * 8"/32 | bc
209
[root@ceph141 ~]#
- Three possible solutions:
- Solution 1: increase the pg_num of the existing pool [recommended]
Drawback: this involves I/O as objects from the existing PGs are migrated to the new PGs (an autoscaler alternative is sketched after the output below).
[root@ceph141 ~]# ceph osd pool set violet pg_num 256
set pool 8 pg_num to 256
[root@ceph141 ~]#
[root@ceph141 ~]# ceph osd pool set violet pgp_num 256
set pool 8 pgp_num to 256
[root@ceph141 ~]#
[root@ceph141 ~]# ceph -s
cluster:
id: 11e66474-0e02-11f0-82d6-4dcae3d59070
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 20h)
mgr: ceph141.mbakds(active, since 2d), standbys: ceph142.qgifwo
mds: 1/1 daemons up, 1 standby
osd: 9 osds: 9 up (since 20h), 9 in (since 2d)
data:
volumes: 1/1 healthy
pools: 5 pools, 441 pgs
objects: 8.88k objects, 21 GiB
usage: 87 GiB used, 5.3 TiB / 5.3 TiB avail
pgs: 21.769% pgs unknown
8.617% pgs not active
307 active+clean
96 unknown
38 peering
[root@ceph141 ~]#
[root@ceph141 ~]# ceph osd pool ls detail | grep violet
pool 8 'violet' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 142 pgp_num 14 pg_num_target 256 pgp_num_target 256 autoscale_mode off last_change 478 lfor 0/0/478 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd read_balance_score 3.99
[root@ceph141 ~]#
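As an alternative to picking pg_num by hand, the PG autoscaler could be enabled for this pool (the detail output above shows autoscale_mode off). A sketch, not used in this case:
ceph osd pool set violet pg_autoscale_mode on    # let the mgr adjust pg_num for the pool automatically
ceph osd pool autoscale-status                   # review current and target PG counts per pool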
- Solution 2: remove unused block-device data to reduce the object count.
Drawback: this also involves I/O, although it is cheaper than Solution 1 since it only deletes data. It is limited, though: data that is still in use cannot simply be removed.
- Solution 3: suppress the warning via mon_pg_warn_max_object_skew [needs further verification; after waiting about 3 minutes it had not taken effect here]
https://docs.ceph.com/en/squid/rados/configuration/pool-pg-config-ref/#confval-mon_pg_warn_max_object_skew
1. Query example
[root@ceph141 ~]# ceph config get osd osd_pool_default_pg_num # get the default pg count for new pools
32
[root@ceph141 ~]#
[root@ceph141 ~]# ceph config get osd mon_pg_warn_max_object_skew
10.000000
[root@ceph141 ~]#
2. Modify example
[root@ceph141 ~]# ceph config set osd mon_pg_warn_max_object_skew 0
[root@ceph141 ~]#
[root@ceph141 ~]# ceph config get osd mon_pg_warn_max_object_skew
0.000000
[root@ceph141 ~]#
3. After about 2 minutes the change still had no visible effect (a possible explanation and a mgr-level variant are noted after the output below)
[root@ceph141 ~]# ceph -s
cluster:
id: 11e66474-0e02-11f0-82d6-4dcae3d59070
health: HEALTH_WARN
1 pools have many more objects per pg than average
services:
mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 20h)
mgr: ceph141.mbakds(active, since 2d), standbys: ceph142.qgifwo
mds: 1/1 daemons up, 1 standby
osd: 9 osds: 9 up (since 20h), 9 in (since 2d)
data:
volumes: 1/1 healthy
pools: 5 pools, 313 pgs
objects: 10.04k objects, 26 GiB
usage: 87 GiB used, 5.3 TiB / 5.3 TiB avail
pgs: 313 active+clean
io:
client: 117 KiB/s wr, 0 op/s rd, 3 op/s wr
[root@ceph141 ~]#
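A likely reason the change had no visible effect: in recent Ceph releases the MANY_OBJECTS_PER_PG health check is evaluated by ceph-mgr rather than by the OSDs, so the option probably needs to be set at the mgr (or global) level. A hedged sketch worth testing:
ceph config set mgr mon_pg_warn_max_object_skew 0    # 0 (or a very large value) is meant to silence the skew warning
ceph config get mgr mon_pg_warn_max_object_skew
ceph health detail                                   # re-check once the mgr has picked up the new value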
Hands-on case: deploying radosgw
- 1 Check the cluster status before deployment
[root@ceph141 ~]# ceph -s
cluster:
id: 11e66474-0e02-11f0-82d6-4dcae3d59070
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 20h)
mgr: ceph141.mbakds(active, since 2d), standbys: ceph142.qgifwo
mds: 1/1 daemons up, 1 standby
osd: 9 osds: 9 up (since 20h), 9 in (since 2d); 23 remapped pgs
data:
volumes: 1/1 healthy
pools: 5 pools, 451 pgs
objects: 9.91k objects, 25 GiB
usage: 88 GiB used, 5.3 TiB / 5.3 TiB avail
pgs: 0.222% pgs not active
1373/29736 objects misplaced (4.617%)
427 active+clean
22 active+remapped+backfill_wait
1 peering
1 active+remapped+backfilling
io:
client: 167 KiB/s wr, 0 op/s rd, 6 op/s wr
recovery: 58 MiB/s, 14 objects/s
[root@ceph141 ~]#
- 2 Create the RGW service (an explicit-placement variant is sketched after the command)
[root@ceph141 ~]# ceph orch apply rgw lax
Scheduled rgw.lax update...
[root@ceph141 ~]#
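ceph orch apply declares the desired service; the placement can also be stated explicitly at apply time rather than adding daemons manually as in the next step. A sketch using this cluster's host names:
ceph orch apply rgw lax --placement="2 ceph141 ceph142"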
- 3 Add an RGW daemon
[root@ceph141 ~]# ceph orch daemon add rgw lax ceph142
Deployed rgw.lax.ceph142.tmtpzs on host 'ceph142'
[root@ceph141 ~]#
- 4 Verify that the RGW component was deployed successfully
[root@ceph141 ~]# ceph -s
cluster:
id: 11e66474-0e02-11f0-82d6-4dcae3d59070
health: HEALTH_WARN
Reduced data availability: 1 pg peering
services:
mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 20h)
mgr: ceph141.mbakds(active, since 2d), standbys: ceph142.qgifwo
mds: 1/1 daemons up, 1 standby
osd: 9 osds: 9 up (since 20h), 9 in (since 2d); 39 remapped pgs
rgw: 2 daemons active (2 hosts, 1 zones) # check that this line is present
data:
volumes: 1/1 healthy
pools: 9 pools, 599 pgs
objects: 10.22k objects, 25 GiB
usage: 89 GiB used, 5.3 TiB / 5.3 TiB avail
pgs: 0.167% pgs unknown
0.668% pgs not active
1583/30660 objects misplaced (5.163%)
553 active+clean
37 active+remapped+backfill_wait
4 active+remapped+backfilling
3 peering
1 unknown
1 remapped+peering
io:
client: 3.6 MiB/s rd, 166 MiB/s wr, 195 op/s rd, 2.38k op/s wr
recovery: 59 MiB/s, 20 objects/s
progress:
Global Recovery Event (105s)
[=========================...] (remaining: 8s)
[root@ceph141 ~]#
- 5 View the storage pools that RGW creates by default
[root@ceph141 ~]# ceph osd pool ls
...
.rgw.root
default.rgw.log
default.rgw.control
default.rgw.meta
[root@ceph141 ~]#
- 6 View the RGW service information and the hosts it runs on
[root@ceph141 ~]# ceph orch ls rgw rgw.lax --export
service_type: rgw
service_id: lax
service_name: rgw.lax
placement:
count: 2
[root@ceph141 ~]#
[root@ceph141 ~]# ceph orch ps --service_name rgw.lax
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
rgw.lax.ceph141.svisis ceph141 *:80 running (21m) 3m ago 21m 102M - 19.2.1 f2efb0401a30 1d3c29e587dd
rgw.lax.ceph142.tmtpzs ceph142 *:80 running (21m) 3m ago 21m 105M - 19.2.1 f2efb0401a30 4f90102b7675
[root@ceph141 ~]#
[root@ceph141 ~]# ceph orch ps --daemon_type rgw
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
rgw.lax.ceph141.svisis ceph141 *:80 running (21m) 3m ago 21m 102M - 19.2.1 f2efb0401a30 1d3c29e587dd
rgw.lax.ceph142.tmtpzs ceph142 *:80 running (21m) 3m ago 21m 105M - 19.2.1 f2efb0401a30 4f90102b7675
[root@ceph141 ~]#
- 7 Access the object storage endpoint in a browser (or check it with curl; see the sketch below)
http://10.0.0.142/
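A quick command-line check of the gateway (a sketch; the exact XML differs between versions):
curl -s http://10.0.0.142/
# A healthy RGW answers with an S3-style XML document such as <ListAllMyBucketsResult ...>
# listing the buckets visible to the (anonymous) caller.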
Initial configuration of the s3cmd tool
- 1. Install the s3cmd package (and add a hosts entry for the RGW endpoint)
[root@ceph141 ~]# echo 10.0.0.142 www.violet.com >> /etc/hosts
[root@ceph141 ~]#
[root@ceph141 ~]# apt -y install s3cmd
- 2 Create an RGW account
[root@ceph141 ~]# radosgw-admin user create --uid "xiaoming" --display-name "小明"
{
"user_id": "xiaoming",
"display_name": "小明",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"subusers": [],
"keys": [
{
"user": "xiaoming",
"access_key": "A3AGQ7XZLN2DL3NIR3GA",
"secret_key": "z0pbFr5riqbl40LmgzQqmLJf1aZC0xAD0KTlFkGm"
}
],
"swift_keys": [],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"default_storage_class": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"temp_url_keys": [],
"type": "rgw",
"mfa_ids": []
}
[root@ceph141 ~]#
- 3 Configure the s3cmd runtime environment, which generates the "/root/.s3cfg" configuration file (its key fields are checked in the sketch after the transcript)
[root@ceph141 ~]# ll /root/.s3cfg
ls: cannot access '/root/.s3cfg': No such file or directory
[root@ceph141 ~]#
[root@ceph141 ~]# s3cmd --configure
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.
Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: A3AGQ7XZLN2DL3NIR3GA # the access_key of the RGW account
Secret Key: z0pbFr5riqbl40LmgzQqmLJf1aZC0xAD0KTlFkGm # the secret_key of the RGW account
Default Region [US]: # just press Enter to accept the default
Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: www.violet.com # the address used to reach RGW
Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: www.violet.com/%(bucket) # DNS-style bucket addressing template
Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password: # leave empty (no file encryption); just press Enter
Path to GPG program [/usr/bin/gpg]: # custom gpg path if any; just press Enter
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: No # whether your RGW endpoint uses HTTPS; set to No if it does not
On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name: # proxy server address; no proxy is used here, so just press Enter
New settings: # a summary preview of the values entered above
Access Key: A3AGQ7XZLN2DL3NIR3GA
Secret Key: z0pbFr5riqbl40LmgzQqmLJf1aZC0xAD0KTlFkGm
Default Region: US
S3 Endpoint: www.violet.com
DNS-style bucket+hostname:port template for accessing a bucket: www.violet.com/%(bucket)
Encryption password:
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: False
HTTP Proxy server name:
HTTP Proxy server port: 0
Test access with supplied credentials? [Y/n] Y # enter Y if the information above looks correct
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)
Now verifying that encryption works...
Not configured. Never mind.
Save settings? [y/N] y # enter y to save the configuration (the default is not to save)
Configuration saved to '/root/.s3cfg'
[root@ceph141 ~]#
[root@ceph141 ~]#
[root@ceph141 ~]# ll /root/.s3cfg
-rw------- 1 root root 2269 Aug 23 09:59 /root/.s3cfg
[root@ceph141 ~]#
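The generated /root/.s3cfg is a plain INI file; the fields that matter most for an RGW endpoint can be double-checked with grep (these key names are standard s3cmd options):
grep -E '^(access_key|secret_key|host_base|host_bucket|use_https)' /root/.s3cfg
# host_base and host_bucket should point at the RGW endpoint configured above,
# and use_https should be False for a plain-HTTP gateway.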
Uploading test videos: simulating a Douyin/Kuaishou-style service
- 1. Create a bucket
[root@ceph141 ~]# s3cmd mb s3://lax-bucket
Bucket 's3://lax-bucket/' created
[root@ceph141 ~]#
Tip:
General-purpose bucket naming rules; the following rules apply to general-purpose buckets.
- 1. Bucket names must be between 3 (min) and 63 (max) characters long.
- 2. Bucket names can consist only of lowercase letters, numbers, periods (.), and hyphens (-).
- 3. Bucket names must begin and end with a letter or number.
- 4. Bucket names must not contain two adjacent periods.
- 5. Bucket names must not be formatted as an IP address (for example, 192.168.5.4).
- 6. Bucket names must not start with the prefix xn--.
- 7. Bucket names must not start with the prefix sthree-.
- 8. Bucket names must not start with the prefix sthree-configurator.
- 9. Bucket names must not start with the prefix amzn-s3-demo-.
- 10. Bucket names must not end with the suffix -s3alias. This suffix is reserved for access point aliases. For more information, see "Using a bucket-style alias for your S3 bucket access point".
- 11. Bucket names must not end with the suffix --ol-s3. This suffix is reserved for Object Lambda access point aliases. For more information, see "How to use a bucket-style alias for your S3 bucket Object Lambda access point".
- 12. Bucket names must not end with the suffix .mrap. This suffix is reserved for Multi-Region Access Point names. For more information, see "Rules for naming Amazon S3 Multi-Region Access Points".
- 13. Bucket names must not end with the suffix --x-s3. This suffix is reserved for directory buckets. For more information, see "Directory bucket naming rules".
- 14. Bucket names must be unique across all AWS accounts in all AWS Regions within a partition. A partition is a grouping of Regions; AWS currently has three partitions: aws (Standard Regions), aws-cn (China Regions), and aws-us-gov (AWS GovCloud (US)).
- 15. A bucket name cannot be reused by another AWS account in the same partition until the bucket is deleted.
- 16. Buckets used with Amazon S3 Transfer Acceleration cannot have periods (.) in their names.
For best compatibility, it is recommended to avoid periods (.) in bucket names, except for buckets used only for static website hosting. If a bucket name contains periods, virtual-hosted-style addressing over HTTPS cannot be used unless you perform your own certificate validation, because the security certificates used for virtual hosting of buckets do not work for bucket names containing periods.
This restriction does not affect buckets used for static website hosting, since static website hosting is served only over HTTP. For more on virtual-hosted-style addressing, see "Virtual hosting of buckets"; for static website hosting, see "Hosting a static website using Amazon S3".
Reference:
https://docs.aws.amazon.com/zh_cn/AmazonS3/latest/userguide/bucketnamingrules.html
- 2. List buckets
[root@ceph141 ~]# s3cmd ls
2025-04-03 04:01 s3://lax-bucket
[root@ceph141 ~]#
[root@ceph141 ~]# radosgw-admin buckets list
[
"lax-bucket"
]
[root@ceph141 ~]#
- 3. Upload data to the bucket with s3cmd
[root@ceph141 /]# ll -h 01-昨日内容回顾及今日内容预告.mp4 02-对象存储网关组件rgw概述.mp4
-rw-r--r-- 1 root root 54M Apr 3 12:01 01-昨日内容回顾及今日内容预告.mp4
-rw-r--r-- 1 root root 161M Apr 3 12:01 02-对象存储网关组件rgw概述.mp4
[root@ceph141 /]#
[root@ceph141 /]# s3cmd put 01-昨日内容回顾及今日内容预告.mp4 s3://lax-bucket
upload: '01-昨日内容回顾及今日内容预告.mp4' -> 's3://lax-bucket/01-昨日内容回顾及今日内容预告.mp4' [part 1 of 4, 15MB] [1 of 1]
15728640 of 15728640 100% in 3s 4.76 MB/s done
upload: '01-昨日内容回顾及今日内容预告.mp4' -> 's3://lax-bucket/01-昨日内容回顾及今日内容预告.mp4' [part 2 of 4, 15MB] [1 of 1]
15728640 of 15728640 100% in 0s 62.77 MB/s done
upload: '01-昨日内容回顾及今日内容预告.mp4' -> 's3://lax-bucket/01-昨日内容回顾及今日内容预告.mp4' [part 3 of 4, 15MB] [1 of 1]
15728640 of 15728640 100% in 0s 17.88 MB/s done
upload: '01-昨日内容回顾及今日内容预告.mp4' -> 's3://lax-bucket/01-昨日内容回顾及今日内容预告.mp4' [part 4 of 4, 8MB] [1 of 1]
8787432 of 8787432 100% in 0s 51.04 MB/s done
[root@ceph141 /]#
[root@ceph141 /]# echo 15728640/1024/1024 | bc
15
[root@ceph141 /]#
[root@ceph141 /]# s3cmd put 02-对象存储网关组件rgw概述.mp4 s3://lax-bucket
upload: '02-对象存储网关组件rgw概述.mp4' -> 's3://lax-bucket/02-对象存储网关组件rgw概述.mp4' [part 1 of 11, 15MB] [1 of 1]
15728640 of 15728640 100% in 0s 66.26 MB/s done
upload: '02-对象存储网关组件rgw概述.mp4' -> 's3://lax-bucket/02-对象存储网关组件rgw概述.mp4' [part 2 of 11, 15MB] [1 of 1]
15728640 of 15728640 100% in 0s 69.42 MB/s done
upload: '02-对象存储网关组件rgw概述.mp4' -> 's3://lax-bucket/02-对象存储网关组件rgw概述.mp4' [part 3 of 11, 15MB] [1 of 1]
15728640 of 15728640 100% in 0s 71.78 MB/s done
upload: '02-对象存储网关组件rgw概述.mp4' -> 's3://lax-bucket/02-对象存储网关组件rgw概述.mp4' [part 4 of 11, 15MB] [1 of 1]
15728640 of 15728640 100% in 0s 66.32 MB/s done
upload: '02-对象存储网关组件rgw概述.mp4' -> 's3://lax-bucket/02-对象存储网关组件rgw概述.mp4' [part 5 of 11, 15MB] [1 of 1]
15728640 of 15728640 100% in 0s 66.39 MB/s done
upload: '02-对象存储网关组件rgw概述.mp4' -> 's3://lax-bucket/02-对象存储网关组件rgw概述.mp4' [part 6 of 11, 15MB] [1 of 1]
15728640 of 15728640 100% in 0s 68.92 MB/s done
upload: '02-对象存储网关组件rgw概述.mp4' -> 's3://lax-bucket/02-对象存储网关组件rgw概述.mp4' [part 7 of 11, 15MB] [1 of 1]
15728640 of 15728640 100% in 0s 67.89 MB/s done
upload: '02-对象存储网关组件rgw概述.mp4' -> 's3://lax-bucket/02-对象存储网关组件rgw概述.mp4' [part 8 of 11, 15MB] [1 of 1]
15728640 of 15728640 100% in 0s 63.18 MB/s done
upload: '02-对象存储网关组件rgw概述.mp4' -> 's3://lax-bucket/02-对象存储网关组件rgw概述.mp4' [part 9 of 11, 15MB] [1 of 1]
15728640 of 15728640 100% in 0s 61.05 MB/s done
upload: '02-对象存储网关组件rgw概述.mp4' -> 's3://lax-bucket/02-对象存储网关组件rgw概述.mp4' [part 10 of 11, 15MB] [1 of 1]
15728640 of 15728640 100% in 0s 65.40 MB/s done
upload: '02-对象存储网关组件rgw概述.mp4' -> 's3://lax-bucket/02-对象存储网关组件rgw概述.mp4' [part 11 of 11, 10MB] [1 of 1]
11372996 of 11372996 100% in 0s 64.11 MB/s done
[root@ceph141 /]#
[root@ceph141 /]# s3cmd ls s3://lax-bucket
2025-04-03 04:04 55973352 s3://lax-bucket/01-昨日内容回顾及今日内容预告.mp4
2025-04-03 04:05 168659396 s3://lax-bucket/02-对象存储网关组件rgw概述.mp4
[root@ceph141 /]#
[root@ceph141 /]# ll 01-昨日内容回顾及今日内容预告.mp4 02-对象存储网关组件rgw概述.mp4
-rw-r--r-- 1 root root 55973352 Apr 3 12:01 01-昨日内容回顾及今日内容预告.mp4
-rw-r--r-- 1 root root 168659396 Apr 3 12:01 02-对象存储网关组件rgw概述.mp4
[root@ceph141 /]#
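The 15 MB parts seen in the uploads above are s3cmd's default multipart chunk size; it can be changed per upload with --multipart-chunk-size-mb (a sketch; 64 MB is an arbitrary example):
s3cmd put --multipart-chunk-size-mb=64 02-对象存储网关组件rgw概述.mp4 s3://lax-bucket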
- 4. Download data with s3cmd
[root@ceph141 /]# s3cmd get s3://lax-bucket/01-昨日内容回顾及今日内容预告.mp4 /tmp/
download: 's3://lax-bucket/01-昨日内容回顾及今日内容预告.mp4' -> '/tmp/01-昨日内容回顾及今日内容预告.mp4' [1 of 1]
55973352 of 55973352 100% in 0s 108.94 MB/s done
[root@ceph141 /]#
[root@ceph141 /]# ll /tmp/01-昨日内容回顾及今日内容预告.mp4
-rw-r--r-- 1 root root 55973352 Apr 3 04:04 /tmp/01-昨日内容回顾及今日内容预告.mp4
[root@ceph141 /]#
- 5. Prepare a bucket policy that grants anonymous read access
[root@ceph141 ~]# cat lax-anonymous-access-policy.json
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {"AWS": ["*"]},
"Action": "s3:GetObject",
"Resource": [
"arn:aws:s3:::lax-bucket/*"
]
}]
}
[root@ceph141 ~]#
Reference:
https://docs.ceph.com/en/squid/radosgw/bucketpolicy/
- 6. Apply the bucket policy
[root@ceph141 /]# s3cmd info s3://lax-bucket
s3://lax-bucket/ (bucket):
Location: default
Payer: BucketOwner
Expiration Rule: none
Policy: none
CORS: none
ACL: 刘安讯: FULL_CONTROL
[root@ceph141 /]#
[root@ceph141 /]#
[root@ceph141 /]# s3cmd setpolicy lax-anonymous-access-policy.json s3://lax-bucket
s3://lax-bucket/: Policy updated
[root@ceph141 /]#
[root@ceph141 /]# s3cmd info s3://lax-bucket
s3://lax-bucket/ (bucket):
Location: default
Payer: BucketOwner
Expiration Rule: none
Policy: {
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {"AWS": ["*"]},
"Action": "s3:GetObject",
"Resource": [
"arn:aws:s3:::lax-bucket/*"
]
}]
}
CORS: none
ACL: 刘安讯: FULL_CONTROL
[root@ceph141 /]#
- 7. Access test (anonymous downloads should now work; a curl check is sketched after the URLs)
http://10.0.0.142/lax-bucket/01-昨日内容回顾及今日内容预告.mp4
http://10.0.0.142/lax-bucket/02-对象存储网关组件rgw概述.mp4
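The same check can be scripted with curl (a sketch; 200 vs. 403 is the expected contrast, exact headers may vary):
curl -I "http://10.0.0.142/lax-bucket/01-昨日内容回顾及今日内容预告.mp4"
# Expect an HTTP 200 response while the s3:GetObject policy is attached;
# after the policy is deleted in step 8, the same request should return 403 (AccessDenied).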
- 8. Delete the policy
[root@ceph141 ~]# s3cmd info s3://lax-bucket
s3://lax-bucket/ (bucket):
Location: default
Payer: BucketOwner
Expiration Rule: none
Policy: {
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {"AWS": ["*"]},
"Action": "s3:GetObject",
"Resource": [
"arn:aws:s3:::lax-bucket/*"
]
}]
}
CORS: none
ACL: 刘安讯: FULL_CONTROL
[root@ceph141 ~]#
[root@ceph141 ~]#
[root@ceph141 ~]# s3cmd delpolicy s3://lax-bucket
s3://lax-bucket/: Policy deleted
[root@ceph141 ~]#
[root@ceph141 ~]# s3cmd info s3://lax-bucket
s3://lax-bucket/ (bucket):
Location: default
Payer: BucketOwner
Expiration Rule: none
Policy: none
CORS: none
ACL: 刘安讯: FULL_CONTROL
[root@ceph141 ~]#
- 9. Access test again (access is now denied)
http://10.0.0.142/lax-bucket/01-昨日内容回顾及今日内容预告.mp4
http://10.0.0.142/lax-bucket/02-对象存储网关组件rgw概述.mp4
Monitoring the Ceph cluster and common metrics overview
- 1. View the cluster's monitoring-related services
[root@ceph141 ~]# ceph orch ls
NAME PORTS RUNNING REFRESHED AGE PLACEMENT
alertmanager ?:9093,9094 1/1 9m ago 3d count:1
ceph-exporter 3/3 9m ago 3d *
crash 3/3 9m ago 3d *
grafana ?:3000 1/1 9m ago 3d count:1
mds.violet-cephfs 2/2 9m ago 29h count:2
mgr 2/2 9m ago 3d count:2
mon 3/5 9m ago 3d count:5
node-exporter ?:9100 3/3 9m ago 3d *
osd 9 9m ago - <unmanaged>
prometheus ?:9095 1/1 9m ago 3d count:1
rgw.lax ?:80 2/2 9m ago 5h count:2
[root@ceph141 ~]#
[root@ceph141 ~]# ceph orch ps --service_name alertmanager
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
alertmanager.ceph141 ceph141 *:9093,9094 running (2d) 9m ago 3d 30.0M - 0.25.0 c8568f914cd2 4b8bc4b1fc5b
[root@ceph141 ~]#
[root@ceph141 ~]# ceph orch ps --service_name grafana
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
grafana.ceph141 ceph141 *:3000 running (2d) 9m ago 3d 155M - 10.4.0 c8b91775d855 e5a398893001
[root@ceph141 ~]#
[root@ceph141 ~]# ceph orch ps --service_name node-exporter
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
node-exporter.ceph141 ceph141 *:9100 running (2d) 10m ago 3d 19.0M - 1.7.0 72c9c2088986 cd20eec4cf53
node-exporter.ceph142 ceph142 *:9100 running (26h) 10m ago 3d 18.9M - 1.7.0 72c9c2088986 08ef8871f112
node-exporter.ceph143 ceph143 *:9100 running (5m) 5m ago 2d 2811k - 1.7.0 72c9c2088986 3a895f87bad2
[root@ceph141 ~]#
[root@ceph141 ~]# ceph orch ps --service_name prometheus
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
prometheus.ceph141 ceph141 *:9095 running (2d) 10m ago 3d 121M - 2.51.0 1d3b7f56885b abeb2ed5eab2
[root@ceph141 ~]#
[root@ceph141 ~]# ceph orch ps --service_name ceph-exporter
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
ceph-exporter.ceph141 ceph141 running (2d) 10m ago 3d 15.7M - 19.2.1 f2efb0401a30 0cac3a5b6be7
ceph-exporter.ceph142 ceph142 running (26h) 10m ago 3d 8103k - 19.2.1 f2efb0401a30 f8d8762f14d0
ceph-exporter.ceph143 ceph143 running (2d) 5m ago 2d 23.2M - 19.2.1 f2efb0401a30 e85d535ca925
[root@ceph141 ~]#
- 2. Access the Prometheus WebUI
http://10.0.0.141:9095/targets
- 3. Access the Alertmanager WebUI
http://10.0.0.141:9093/#/alerts
- 4. Access the Grafana WebUI
https://10.0.0.141:3000/
- 5. Common metrics overview (a Prometheus query sketch follows the reference links)
References:
https://docs.ceph.com/en/squid/monitoring/#ceph-metrics
https://docs.ceph.com/en/squid/mgr/dashboard/
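A few metrics can also be pulled straight from the Prometheus HTTP API as a sanity check (a sketch; metric names such as ceph_health_status and ceph_osd_up are exported by the mgr prometheus module / ceph-exporter, but availability may vary by version):
curl -s 'http://10.0.0.141:9095/api/v1/query?query=ceph_health_status'   # 0 = HEALTH_OK, 1 = HEALTH_WARN, 2 = HEALTH_ERR
curl -s 'http://10.0.0.141:9095/api/v1/query?query=ceph_osd_up'          # per-OSD up/down flag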
Ceph cluster maintenance commands
- 1. List the services in the Ceph cluster
[root@ceph141 ~]# ceph orch ls
NAME PORTS RUNNING REFRESHED AGE PLACEMENT
alertmanager ?:9093,9094 1/1 6m ago 3d count:1
ceph-exporter 3/3 6m ago 3d *
crash 3/3 6m ago 3d *
grafana ?:3000 1/1 6m ago 3d count:1
mds.violet-cephfs 2/2 6m ago 28h count:2
mgr 2/2 6m ago 3d count:2
mon 3/5 6m ago 3d count:5
node-exporter ?:9100 3/3 6m ago 3d *
osd 9 6m ago - <unmanaged>
prometheus ?:9095 1/1 6m ago 3d count:1
rgw.lax ?:80 2/2 6m ago 5h count:2
[root@ceph141 ~]#
- 2. List the daemons in the Ceph cluster
[root@ceph141 ~]# ceph orch ps
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
alertmanager.ceph141 ceph141 *:9093,9094 running (2d) 8m ago 3d 29.8M - 0.25.0 c8568f914cd2 4b8bc4b1fc5b
ceph-exporter.ceph141 ceph141 running (2d) 8m ago 3d 15.7M - 19.2.1 f2efb0401a30 0cac3a5b6be7
ceph-exporter.ceph142 ceph142 running (26h) 8m ago 2d 8196k - 19.2.1 f2efb0401a30 f8d8762f14d0
ceph-exporter.ceph143 ceph143 running (2d) 8m ago 2d 23.1M - 19.2.1 f2efb0401a30 e85d535ca925
crash.ceph141 ceph141 running (2d) 8m ago 3d 3063k - 19.2.1 f2efb0401a30 05eb9f1bcb82
crash.ceph142 ceph142 running (26h) 8m ago 2d 6903k - 19.2.1 f2efb0401a30 3240c6d5bf73
crash.ceph143 ceph143 running (2d) 8m ago 2d 9048k - 19.2.1 f2efb0401a30 1337b15f0f1d
grafana.ceph141 ceph141 *:3000 running (2d) 8m ago 3d 155M - 10.4.0 c8b91775d855 e5a398893001
mds.violet-cephfs.ceph141.pthitg ceph141 running (28h) 8m ago 28h 28.6M - 19.2.1 f2efb0401a30 182602b46520
mds.violet-cephfs.ceph142.pmzglk ceph142 running (26h) 8m ago 28h 15.2M - 19.2.1 f2efb0401a30 e7aa3045e349
mgr.ceph141.mbakds ceph141 *:9283,8765,8443 running (2d) 8m ago 3d 279M - 19.2.1 f2efb0401a30 face10bad3d7
mgr.ceph142.qgifwo ceph142 *:8443,9283,8765 running (26h) 8m ago 2d 112M - 19.2.1 f2efb0401a30 76f84b998f74
mon.ceph141 ceph141 running (2d) 8m ago 3d 391M 2048M 19.2.1 f2efb0401a30 a7ca13016694
mon.ceph142 ceph142 running (26h) 8m ago 2d 409M 2048M 19.2.1 f2efb0401a30 7a3d9677b82c
mon.ceph143 ceph143 running (2d) 8m ago 2d 441M 2048M 19.2.1 f2efb0401a30 da2a9b89611b
node-exporter.ceph141 ceph141 *:9100 running (2d) 8m ago 3d 18.7M - 1.7.0 72c9c2088986 cd20eec4cf53
node-exporter.ceph142 ceph142 *:9100 running (26h) 8m ago 2d 19.1M - 1.7.0 72c9c2088986 08ef8871f112
node-exporter.ceph143 ceph143 *:9100 running (2d) 8m ago 2d 19.4M - 1.7.0 72c9c2088986 f76f9ef7be86
osd.0 ceph141 running (2d) 8m ago 2d 218M 4096M 19.2.1 f2efb0401a30 16decb320ba8
osd.1 ceph141 running (2d) 8m ago 2d 438M 4096M 19.2.1 f2efb0401a30 fb7711a31bd6
osd.2 ceph141 running (2d) 8m ago 2d 437M 4096M 19.2.1 f2efb0401a30 5b340b1c7c00
osd.3 ceph142 running (26h) 8m ago 2d 450M 4096M 19.2.1 f2efb0401a30 97e7a0376d4b
osd.4 ceph142 running (26h) 8m ago 2d 669M 4096M 19.2.1 f2efb0401a30 a7754adfc55e
osd.5 ceph142 running (26h) 8m ago 2d 748M 4096M 19.2.1 f2efb0401a30 b58fdbe55cbd
osd.6 ceph143 running (2d) 8m ago 2d 413M 4096M 19.2.1 f2efb0401a30 b61e1ab3edd3
osd.7 ceph143 running (2d) 8m ago 2d 440M 4096M 19.2.1 f2efb0401a30 63d3a5a3cdbc
osd.8 ceph143 running (2d) 8m ago 2d 871M 4096M 19.2.1 f2efb0401a30 38ad03158af7
prometheus.ceph141 ceph141 *:9095 running (2d) 8m ago 3d 120M - 2.51.0 1d3b7f56885b abeb2ed5eab2
rgw.lax.ceph141.svisis ceph141 *:80 running (5h) 8m ago 5h 51.9M - 19.2.1 f2efb0401a30 1d3c29e587dd
rgw.lax.ceph142.tmtpzs ceph142 *:80 running (5h) 8m ago 5h 150M - 19.2.1 f2efb0401a30 4f90102b7675
[root@ceph141 ~]#
[root@ceph141 ~]#
[root@ceph141 ~]# ceph orch ls rgw --export
service_type: rgw
service_id: lax
service_name: rgw.lax
placement:
count: 2
[root@ceph141 ~]#
[root@ceph141 ~]# ceph orch ps --daemon_type=rgw
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
rgw.lax.ceph141.svisis ceph141 *:80 running (5h) 8m ago 5h 51.9M - 19.2.1 f2efb0401a30 1d3c29e587dd
rgw.lax.ceph142.tmtpzs ceph142 *:80 running (5h) 8m ago 5h 150M - 19.2.1 f2efb0401a30 4f90102b7675
[root@ceph141 ~]#
[root@ceph141 ~]# ceph orch ps --daemon_type=osd
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
osd.0 ceph141 running (2d) 8m ago 2d 218M 4096M 19.2.1 f2efb0401a30 16decb320ba8
osd.1 ceph141 running (2d) 8m ago 2d 438M 4096M 19.2.1 f2efb0401a30 fb7711a31bd6
osd.2 ceph141 running (2d) 8m ago 2d 437M 4096M 19.2.1 f2efb0401a30 5b340b1c7c00
osd.3 ceph142 running (26h) 8m ago 2d 450M 4096M 19.2.1 f2efb0401a30 97e7a0376d4b
osd.4 ceph142 running (26h) 8m ago 2d 669M 4096M 19.2.1 f2efb0401a30 a7754adfc55e
osd.5 ceph142 running (26h) 8m ago 2d 748M 4096M 19.2.1 f2efb0401a30 b58fdbe55cbd
osd.6 ceph143 running (2d) 8m ago 2d 413M 4096M 19.2.1 f2efb0401a30 b61e1ab3edd3
osd.7 ceph143 running (2d) 8m ago 2d 440M 4096M 19.2.1 f2efb0401a30 63d3a5a3cdbc
osd.8 ceph143 running (2d) 8m ago 2d 871M 4096M 19.2.1 f2efb0401a30 38ad03158af7
[root@ceph141 ~]#
[root@ceph141 ~]# ceph orch ps --daemon_type=prometheus
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
prometheus.ceph141 ceph141 *:9095 running (2d) 8m ago 3d 120M - 2.51.0 1d3b7f56885b abeb2ed5eab2
[root@ceph141 ~]#
[root@ceph141 ~]# ceph orch ps --service_name=rgw.lax
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
rgw.lax.ceph141.svisis ceph141 *:80 running (5h) 9m ago 5h 51.9M - 19.2.1 f2efb0401a30 1d3c29e587dd
rgw.lax.ceph142.tmtpzs ceph142 *:80 running (5h) 9m ago 5h 150M - 19.2.1 f2efb0401a30 4f90102b7675
[root@ceph141 ~]#
- 3. List the daemons on a specific node
[root@ceph141 ~]# ceph orch ps ceph141
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
alertmanager.ceph141 ceph141 *:9093,9094 running (2d) 62s ago 3d 30.0M - 0.25.0 c8568f914cd2 4b8bc4b1fc5b
ceph-exporter.ceph141 ceph141 running (2d) 62s ago 3d 15.7M - 19.2.1 f2efb0401a30 0cac3a5b6be7
crash.ceph141 ceph141 running (2d) 62s ago 3d 3084k - 19.2.1 f2efb0401a30 05eb9f1bcb82
grafana.ceph141 ceph141 *:3000 running (2d) 62s ago 3d 155M - 10.4.0 c8b91775d855 e5a398893001
mds.violet-cephfs.ceph141.pthitg ceph141 running (28h) 62s ago 28h 28.9M - 19.2.1 f2efb0401a30 182602b46520
mgr.ceph141.mbakds ceph141 *:9283,8765,8443 running (2d) 62s ago 3d 284M - 19.2.1 f2efb0401a30 face10bad3d7
mon.ceph141 ceph141 running (2d) 62s ago 3d 394M 2048M 19.2.1 f2efb0401a30 a7ca13016694
node-exporter.ceph141 ceph141 *:9100 running (2d) 62s ago 3d 19.0M - 1.7.0 72c9c2088986 cd20eec4cf53
osd.0 ceph141 running (2d) 62s ago 2d 217M 4096M 19.2.1 f2efb0401a30 16decb320ba8
osd.1 ceph141 running (2d) 62s ago 2d 439M 4096M 19.2.1 f2efb0401a30 fb7711a31bd6
osd.2 ceph141 running (2d) 62s ago 2d 437M 4096M 19.2.1 f2efb0401a30 5b340b1c7c00
prometheus.ceph141 ceph141 *:9095 running (2d) 62s ago 3d 121M - 2.51.0 1d3b7f56885b abeb2ed5eab2
rgw.lax.ceph141.svisis ceph141 *:80 running (5h) 62s ago 5h 52.1M - 19.2.1 f2efb0401a30 1d3c29e587dd
[root@ceph141 ~]#
[root@ceph141 ~]#
[root@ceph141 ~]# ceph orch ps ceph143
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
ceph-exporter.ceph143 ceph143 running (2d) 65s ago 2d 22.9M - 19.2.1 f2efb0401a30 e85d535ca925
crash.ceph143 ceph143 running (2d) 65s ago 2d 9012k - 19.2.1 f2efb0401a30 1337b15f0f1d
mon.ceph143 ceph143 running (2d) 65s ago 2d 431M 2048M 19.2.1 f2efb0401a30 da2a9b89611b
node-exporter.ceph143 ceph143 *:9100 running (2d) 65s ago 2d 19.2M - 1.7.0 72c9c2088986 f76f9ef7be86
osd.6 ceph143 running (2d) 65s ago 2d 411M 4096M 19.2.1 f2efb0401a30 b61e1ab3edd3
osd.7 ceph143 running (2d) 65s ago 2d 438M 4096M 19.2.1 f2efb0401a30 63d3a5a3cdbc
osd.8 ceph143 running (2d) 65s ago 2d 861M 4096M 19.2.1 f2efb0401a30 38ad03158af7
[root@ceph141 ~]#
- 4. Restart a daemon on a specific node
[root@ceph141 ~]# ceph orch daemon restart node-exporter.ceph143
Scheduled to restart node-exporter.ceph143 on host 'ceph143'
[root@ceph141 ~]#
[root@ceph141 ~]# ceph orch ps ceph143
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
ceph-exporter.ceph143 ceph143 running (3d) 5s ago 3d 5820k - 19.2.0 37996728e013 ef35746c2790
crash.ceph143 ceph143 running (3d) 5s ago 3d 7084k - 19.2.0 37996728e013 2c7cf4d86cec
mgr.ceph143.ihhymg ceph143 *:8443,9283,8765 running (3d) 5s ago 3d 390M - 19.2.0 37996728e013 07c3adf66618
mon.ceph143 ceph143 running (3d) 5s ago 3d 457M 2048M 19.2.0 37996728e013 87811f5e96d8
node-exporter.ceph143 ceph143 *:9100 running 5s ago 3d - - <unknown> <unknown> <unknown>
osd.5 ceph143 running (3d) 5s ago 3d 117M 4096M 19.2.0 37996728e013 7f43160e7730
osd.6 ceph143 running (3d) 5s ago 3d 134M 4096M 19.2.0 37996728e013 30dce89758bf
[root@ceph141 ~]#
[root@ceph141 ~]# ceph orch ps ceph143
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID
ceph-exporter.ceph143 ceph143 running (3d) 30s ago 3d 5816k - 19.2.0 37996728e013 ef35746c2790
crash.ceph143 ceph143 running (3d) 30s ago 3d 7063k - 19.2.0 37996728e013 2c7cf4d86cec
mgr.ceph143.ihhymg ceph143 *:8443,9283,8765 running (3d) 30s ago 3d 389M - 19.2.0 37996728e013 07c3adf66618
mon.ceph143 ceph143 running (3d) 30s ago 3d 457M 2048M 19.2.0 37996728e013 87811f5e96d8
node-exporter.ceph143 ceph143 *:9100 running (35s) 30s ago 3d 2515k - 1.5.0 0da6a335fe13 ce23389f20e6
osd.5 ceph143 running (3d) 30s ago 3d 116M 4096M 19.2.0 37996728e013 7f43160e7730
osd.6 ceph143 running (3d) 30s ago 3d 134M 4096M 19.2.0 37996728e013 30dce89758bf
[root@ceph141 ~]#
- 5. List the devices available on each host
[root@ceph141 ~]# ceph orch device ls
HOST PATH TYPE DEVICE ID SIZE AVAILABLE REFRESHED REJECT REASONS
ceph141 /dev/sda hdd 300G No 16m ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
ceph141 /dev/sdc hdd 500G No 16m ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
ceph141 /dev/sdd hdd 1024G No 16m ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
ceph141 /dev/sr0 hdd VMware_Virtual_SATA_CDRW_Drive_01000000000000000001 1023M No 16m ago Failed to determine if device is BlueStore, Insufficient space (<5GB)
ceph142 /dev/sdb hdd 300G No 13m ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
ceph142 /dev/sdc hdd 500G No 13m ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
ceph142 /dev/sdd hdd 1024G No 13m ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
ceph142 /dev/sr0 hdd VMware_Virtual_SATA_CDRW_Drive_01000000000000000001 1023M No 13m ago Failed to determine if device is BlueStore, Insufficient space (<5GB)
ceph143 /dev/sdb hdd 300G No 13m ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
ceph143 /dev/sdc hdd 500G No 13m ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
ceph143 /dev/sdd hdd 1024G No 13m ago Has a FileSystem, Insufficient space (<10 extents) on vgs, LVM detected
ceph143 /dev/sr0 hdd VMware_Virtual_SATA_CDRW_Drive_01000000000000000001 1023M No 13m ago Failed to determine if device is BlueStore, Insufficient space (<5GB)
[root@ceph141 ~]#
- 6. List the hosts in the cluster
[root@ceph141 ~]# ceph orch host ls
HOST ADDR LABELS STATUS
ceph141 10.0.0.141 _admin
ceph142 10.0.0.142
ceph143 10.0.0.143
3 hosts in cluster
[root@ceph141 ~]#
- 7. Report the configured orchestrator backend and its status
[root@ceph141 ~]# ceph orch status
Backend: cephadm
Available: Yes
Paused: No
[root@ceph141 ~]#
[root@ceph141 ~]# ceph orch status --detail
Backend: cephadm
Available: Yes
Paused: No
Host Parallelism: 10
[root@ceph141 ~]#
[root@ceph141 ~]# ceph orch status --detail --format json
{"available": true, "backend": "cephadm", "paused": false, "workers": 10}
[root@ceph141 ~]#
- 8. Check service versions against the available and target container images (an upgrade command sketch follows the output)
[root@ceph141 ~]# docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/ceph/ceph v19 f2efb0401a30 8 weeks ago 1.3GB
quay.io/prometheus/prometheus v2.51.0 1d3b7f56885b 12 months ago 262MB
quay.io/ceph/grafana 10.4.0 c8b91775d855 13 months ago 430MB
quay.io/prometheus/node-exporter v1.7.0 72c9c2088986 16 months ago 22.7MB
quay.io/prometheus/alertmanager v0.25.0 c8568f914cd2 2 years ago 65.1MB
[root@ceph141 ~]#
[root@ceph141 ~]# ceph orch upgrade check quay.io/ceph/ceph:v19
{
"needs_update": {},
"non_ceph_image_daemons": [
"prometheus.ceph141",
"grafana.ceph141",
"node-exporter.ceph141",
"alertmanager.ceph141",
"node-exporter.ceph142",
"node-exporter.ceph143"
],
"target_digest": "quay.io/ceph/ceph@sha256:41d3f5e46ff7de28544cc8869fdea13fca824dcef83936cb3288ed9de935e4de",
"target_id": "f2efb0401a30ec7eda97b6da76b314bd081fcb910cc5dcd826bc7c72c9dfdd7d",
"target_name": "quay.io/ceph/ceph:v19",
"target_version": "ceph version 19.2.1 (58a7fab8be0a062d730ad7da874972fd3fba59fb) squid (stable)",
"up_to_date": [
"osd.1",
"crash.ceph141",
"osd.2",
"mgr.ceph141.mbakds",
"rgw.lax.ceph141.svisis",
"osd.0",
"mon.ceph141",
"mds.violet-cephfs.ceph141.pthitg",
"ceph-exporter.ceph141",
"osd.4",
"ceph-exporter.ceph142",
"rgw.lax.ceph142.tmtpzs",
"mon.ceph142",
"mgr.ceph142.qgifwo",
"crash.ceph142",
"mds.violet-cephfs.ceph142.pmzglk",
"osd.3",
"osd.5",
"osd.6",
"osd.8",
"osd.7",
"mon.ceph143",
"ceph-exporter.ceph143",
"crash.ceph143"
]
}
[root@ceph141 ~]#
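The upgrade check above only reports what would change; an actual upgrade would be driven by the orchestrator. A sketch, not executed here:
ceph orch upgrade start --image quay.io/ceph/ceph:v19   # begin a rolling upgrade to the target image
ceph orch upgrade status                                # follow the upgrade progress
ceph orch upgrade pause                                 # pause/resume/stop are also available if something goes wrong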