References

https://www.virtualtothecore.com/en/upgrade-ceph-cluster-luminous/
http://www.chinastor.com/distristor/11033L502017.html

Background

First, see the earlier installation and test write-ups: https://www.jianshu.com/p/379d313c2bf9 and https://www.jianshu.com/p/b11144ea407f. Cluster layout: mon on ceph0, ceph2, ceph3; osd on ceph0, ceph1, ceph2, ceph3; rgw on ceph1; deploy node ceph0.

I had previously tested a Jewel cluster on CentOS 7.5. As my understanding of Ceph deepened, I planned to run a newer LTS release in production and settled on Luminous. The original plan was to redeploy Luminous from scratch, but since this is a test environment with little risk to the data, I decided to try an in-place upgrade from Jewel to Luminous. The cluster had been installed with yum, so based on prior experience the upgrade in principle just replaces the binaries and restarts the services. The documented upgrade steps look simple, but during testing I found a pitfall that users in mainland China are bound to hit: the upgrade automatically rewrites the yum repo to point at the overseas site, and with high latency the connection is dropped after 300 seconds of no data, aborting the upgrade. For the remaining steps I therefore switched to a domestic mirror and upgraded the rpm packages by hand. Thanks to the dependency chain, a single yum install ceph ceph-radosgw pulls in every Ceph package; restarting the relevant services then completes the upgrade. In the end no data was lost and all functionality worked normally.
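For orientation, here is a condensed sketch of the workaround that the rest of this post walks through in detail, run on each node (the Aliyun mirror path matches the repo file shown later; treat this as a summary sketch, not a substitute for the step-by-step procedure below):

# point the Ceph repo at a domestic mirror and switch the release
sed -i 's#download.ceph.com#mirrors.aliyun.com/ceph#g' /etc/yum.repos.d/ceph.repo
sed -i 's/jewel/luminous/' /etc/yum.repos.d/ceph.repo
yum clean all
# dependencies pull in every other Ceph package
yum -y install ceph ceph-radosgw
# then restart the ceph-mon/ceph-osd/ceph-radosgw targets as shown below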

Upgrade Process

Follow the official upgrade guide step by step and proceed carefully. Note that the cluster must be in a healthy state before you upgrade.

1. Log in and confirm that sortbitwise is enabled:

[root@idcv-ceph0 yum.repos.d]# ceph osd set sortbitwise
set sortbitwise

2. Set the noout flag to tell Ceph not to rebalance the cluster. This is optional, but recommended: otherwise, every time a node is stopped, Ceph will try to rebalance the cluster by copying data to the other available nodes.

[root@idcv-ceph0 yum.repos.d]# ceph osd set noout
set noout
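To confirm that both flags took effect before going further, you can check the flags line of the osdmap (a quick sanity check; the same flags also appear in the ceph -s output later):

[root@idcv-ceph0 yum.repos.d]# ceph osd dump | grep flags
flags noout,sortbitwise,require_jewel_osds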

3. You can either upgrade each node by hand or let ceph-deploy automate it. For a manual upgrade on CentOS, first edit the Ceph yum repo to replace the old Jewel release with Luminous, which is a simple text substitution:

[root@idcv-ceph0 yum.repos.d]# sed -i 's/jewel/luminous/' /etc/yum.repos.d/ceph.repo
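After the substitution it is worth confirming the repo now points at Luminous and rebuilding the yum cache before installing anything (a minimal check, assuming the repo path above):

grep luminous /etc/yum.repos.d/ceph.repo
yum clean all && yum makecache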

4. Alternatively, ceph-deploy can upgrade the whole cluster with a single command. First upgrade ceph-deploy itself:

[root@idcv-ceph0 yum.repos.d]# yum install ceph-deploy python-pushy
Running transaction
  Updating   : ceph-deploy-2.0.0-0.noarch    1/2
  Cleanup    : ceph-deploy-1.5.39-0.noarch   2/2
  Verifying  : ceph-deploy-2.0.0-0.noarch    1/2
  Verifying  : ceph-deploy-1.5.39-0.noarch   2/2

Updated:
  ceph-deploy.noarch 0:2.0.0-0

Complete!
[root@idcv-ceph0 yum.repos.d]# rpm -qa |grep ceph-deploy
ceph-deploy-2.0.0-0.noarch

5. Once ceph-deploy is upgraded, the first thing to do is upgrade Ceph on the same machine. This is where the official steps stop working from inside China: the yum repo is switched back to the overseas source and the bandwidth cannot keep up, so from here on I combined manual upgrades with the official procedure.

[root@idcv-ceph0 yum.repos.d]# ceph -s
cluster 812d3acb-eaa8-4355-9a74-64f2cd5209b3
health HEALTH_WARN
noout flag(s) set
monmap e2: 3 mons at {idcv-ceph0=172.20.1.138:6789/0,idcv-ceph2=172.20.1.140:6789/0,idcv-ceph3=172.20.1.141:6789/0}
election epoch 8, quorum 0,1,2 idcv-ceph0,idcv-ceph2,idcv-ceph3
osdmap e49: 4 osds: 4 up, 4 in
flags noout,sortbitwise,require_jewel_osds
pgmap v53288: 272 pgs, 12 pools, 97496 MB data, 1785 kobjects
296 GB used, 84824 MB / 379 GB avail
272 active+clean
[root@idcv-ceph0 yum.repos.d]# ceph -v
ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)
[root@idcv-ceph0 yum.repos.d]# cd /root/cluster/
[root@idcv-ceph0 cluster]# ls
ceph.bootstrap-mds.keyring ceph.bootstrap-osd.keyring ceph.client.admin.keyring ceph-deploy-ceph.log
ceph.bootstrap-mgr.keyring ceph.bootstrap-rgw.keyring ceph.conf
[root@idcv-ceph0 cluster]# ceph-deploy install --release luminous idcv-ceph0
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.0): /usr/bin/ceph-deploy install --release luminous idcv-ceph0
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f38ae7a1d40>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] dev_commit : None
[ceph_deploy.cli][INFO ] install_mds : False
[ceph_deploy.cli][INFO ] stable : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] adjust_repos : True
[ceph_deploy.cli][INFO ] func : <function install at 0x7f38ae9d8ed8>
[ceph_deploy.cli][INFO ] install_mgr : False
[ceph_deploy.cli][INFO ] install_all : False
[ceph_deploy.cli][INFO ] repo : False
[ceph_deploy.cli][INFO ] host : ['idcv-ceph0']
[ceph_deploy.cli][INFO ] install_rgw : False
[ceph_deploy.cli][INFO ] install_tests : False
[ceph_deploy.cli][INFO ] repo_url : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] install_osd : False
[ceph_deploy.cli][INFO ] version_kind : stable
[ceph_deploy.cli][INFO ] install_common : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] dev : master
[ceph_deploy.cli][INFO ] nogpgcheck : False
[ceph_deploy.cli][INFO ] local_mirror : None
[ceph_deploy.cli][INFO ] release : luminous
[ceph_deploy.cli][INFO ] install_mon : False
[ceph_deploy.cli][INFO ] gpg_url : None
[ceph_deploy.install][DEBUG ] Installing stable version luminous on cluster ceph hosts idcv-ceph0
[ceph_deploy.install][DEBUG ] Detecting platform for host idcv-ceph0 ...
[idcv-ceph0][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph0][DEBUG ] detect platform information from remote host
[idcv-ceph0][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[idcv-ceph0][INFO ] installing Ceph on idcv-ceph0
[idcv-ceph0][INFO ] Running command: yum clean all
[idcv-ceph0][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph0][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates
[idcv-ceph0][DEBUG ] Cleaning up everything
[idcv-ceph0][DEBUG ] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
[idcv-ceph0][DEBUG ] Cleaning up list of fastest mirrors
[idcv-ceph0][INFO ] Running command: yum -y install epel-release
[idcv-ceph0][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph0][DEBUG ] Determining fastest mirrors
[idcv-ceph0][DEBUG ] * base: mirrors.huaweicloud.com
[idcv-ceph0][DEBUG ] * epel: mirrors.huaweicloud.com
[idcv-ceph0][DEBUG ] * extras: mirrors.neusoft.edu.cn
[idcv-ceph0][DEBUG ] * updates: mirrors.huaweicloud.com
[idcv-ceph0][DEBUG ] 8 packages excluded due to repository priority protections
[idcv-ceph0][DEBUG ] Package epel-release-7-11.noarch already installed and latest version
[idcv-ceph0][DEBUG ] Nothing to do
[idcv-ceph0][INFO ] Running command: yum -y install yum-plugin-priorities
[idcv-ceph0][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph0][DEBUG ] Loading mirror speeds from cached hostfile
[idcv-ceph0][DEBUG ] * base: mirrors.huaweicloud.com
[idcv-ceph0][DEBUG ] * epel: mirrors.huaweicloud.com
[idcv-ceph0][DEBUG ] * extras: mirrors.neusoft.edu.cn
[idcv-ceph0][DEBUG ] * updates: mirrors.huaweicloud.com
[idcv-ceph0][DEBUG ] 8 packages excluded due to repository priority protections
[idcv-ceph0][DEBUG ] Package yum-plugin-priorities-1.1.31-45.el7.noarch already installed and latest version
[idcv-ceph0][DEBUG ] Nothing to do
[idcv-ceph0][DEBUG ] Configure Yum priorities to include obsoletes
[idcv-ceph0][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[idcv-ceph0][INFO ] Running command: rpm --import https://download.ceph.com/keys/release.asc
[idcv-ceph0][INFO ] Running command: yum remove -y ceph-release
[idcv-ceph0][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph0][DEBUG ] Resolving Dependencies
[idcv-ceph0][DEBUG ] --> Running transaction check
[idcv-ceph0][DEBUG ] ---> Package ceph-release.noarch 0:1-1.el7 will be erased
[idcv-ceph0][DEBUG ] --> Finished Dependency Resolution
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] Dependencies Resolved
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] ================================================================================
[idcv-ceph0][DEBUG ] Package Arch Version Repository Size
[idcv-ceph0][DEBUG ] ================================================================================
[idcv-ceph0][DEBUG ] Removing:
[idcv-ceph0][DEBUG ] ceph-release noarch 1-1.el7 installed 535
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] Transaction Summary
[idcv-ceph0][DEBUG ] ================================================================================
[idcv-ceph0][DEBUG ] Remove 1 Package
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] Installed size: 535
[idcv-ceph0][DEBUG ] Downloading packages:
[idcv-ceph0][DEBUG ] Running transaction check
[idcv-ceph0][DEBUG ] Running transaction test
[idcv-ceph0][DEBUG ] Transaction test succeeded
[idcv-ceph0][DEBUG ] Running transaction
[idcv-ceph0][DEBUG ] Erasing : ceph-release-1-1.el7.noarch 1/1
[idcv-ceph0][DEBUG ] warning: /etc/yum.repos.d/ceph.repo saved as /etc/yum.repos.d/ceph.repo.rpmsave
[idcv-ceph0][DEBUG ] Verifying : ceph-release-1-1.el7.noarch 1/1
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] Removed:
[idcv-ceph0][DEBUG ] ceph-release.noarch 0:1-1.el7
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] Complete!
[idcv-ceph0][INFO ] Running command: yum install -y https://download.ceph.com/rpm-luminous/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[idcv-ceph0][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph0][DEBUG ] Examining /var/tmp/yum-root-dPpRu6/ceph-release-1-0.el7.noarch.rpm: ceph-release-1-1.el7.noarch
[idcv-ceph0][DEBUG ] Marking /var/tmp/yum-root-dPpRu6/ceph-release-1-0.el7.noarch.rpm to be installed
[idcv-ceph0][DEBUG ] Resolving Dependencies
[idcv-ceph0][DEBUG ] --> Running transaction check
[idcv-ceph0][DEBUG ] ---> Package ceph-release.noarch 0:1-1.el7 will be installed
[idcv-ceph0][DEBUG ] --> Finished Dependency Resolution
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] Dependencies Resolved
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] ================================================================================
[idcv-ceph0][DEBUG ] Package Arch Version Repository Size
[idcv-ceph0][DEBUG ] ================================================================================
[idcv-ceph0][DEBUG ] Installing:
[idcv-ceph0][DEBUG ] ceph-release noarch 1-1.el7 /ceph-release-1-0.el7.noarch 544
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] Transaction Summary
[idcv-ceph0][DEBUG ] ================================================================================
[idcv-ceph0][DEBUG ] Install 1 Package
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] Total size: 544
[idcv-ceph0][DEBUG ] Installed size: 544
[idcv-ceph0][DEBUG ] Downloading packages:
[idcv-ceph0][DEBUG ] Running transaction check
[idcv-ceph0][DEBUG ] Running transaction test
[idcv-ceph0][DEBUG ] Transaction test succeeded
[idcv-ceph0][DEBUG ] Running transaction
[idcv-ceph0][DEBUG ] Installing : ceph-release-1-1.el7.noarch 1/1
[idcv-ceph0][DEBUG ] Verifying : ceph-release-1-1.el7.noarch 1/1
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] Installed:
[idcv-ceph0][DEBUG ] ceph-release.noarch 0:1-1.el7
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] Complete!
[idcv-ceph0][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[idcv-ceph0][WARNIN] altered ceph.repo priorities to contain: priority=1
[idcv-ceph0][INFO ] Running command: yum -y install ceph ceph-radosgw
[idcv-ceph0][DEBUG ] Loaded plugins: fastestmirror, priorities
[idcv-ceph0][DEBUG ] Loading mirror speeds from cached hostfile
[idcv-ceph0][DEBUG ] * base: mirrors.huaweicloud.com
[idcv-ceph0][DEBUG ] * epel: mirrors.huaweicloud.com
[idcv-ceph0][DEBUG ] * extras: mirrors.neusoft.edu.cn
[idcv-ceph0][DEBUG ] * updates: mirrors.huaweicloud.com
[idcv-ceph0][DEBUG ] 8 packages excluded due to repository priority protections
[idcv-ceph0][DEBUG ] Resolving Dependencies
[idcv-ceph0][DEBUG ] --> Running transaction check
[idcv-ceph0][DEBUG ] ---> Package ceph.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package ceph.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] --> Processing Dependency: ceph-osd = 2:12.2.5-0.el7 for package: 2:ceph-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: ceph-mon = 2:12.2.5-0.el7 for package: 2:ceph-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: ceph-mgr = 2:12.2.5-0.el7 for package: 2:ceph-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: ceph-mds = 2:12.2.5-0.el7 for package: 2:ceph-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] ---> Package ceph-radosgw.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package ceph-radosgw.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] --> Processing Dependency: ceph-selinux = 2:12.2.5-0.el7 for package: 2:ceph-radosgw-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: librgw2 = 2:12.2.5-0.el7 for package: 2:ceph-radosgw-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: librados2 = 2:12.2.5-0.el7 for package: 2:ceph-radosgw-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: ceph-common = 2:12.2.5-0.el7 for package: 2:ceph-radosgw-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: libibverbs.so.1()(64bit) for package: 2:ceph-radosgw-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: libceph-common.so.0()(64bit) for package: 2:ceph-radosgw-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Running transaction check
[idcv-ceph0][DEBUG ] ---> Package ceph-common.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] --> Processing Dependency: ceph-common = 1:10.2.10-0.el7 for package: 1:ceph-base-10.2.10-0.el7.x86_64
[idcv-ceph0][DEBUG ] ---> Package ceph-common.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-rbd = 2:12.2.5-0.el7 for package: 2:ceph-common-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: libcephfs2 = 2:12.2.5-0.el7 for package: 2:ceph-common-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-rgw = 2:12.2.5-0.el7 for package: 2:ceph-common-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-cephfs = 2:12.2.5-0.el7 for package: 2:ceph-common-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-rados = 2:12.2.5-0.el7 for package: 2:ceph-common-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: librbd1 = 2:12.2.5-0.el7 for package: 2:ceph-common-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-prettytable for package: 2:ceph-common-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: libcephfs.so.2()(64bit) for package: 2:ceph-common-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] ---> Package ceph-mds.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package ceph-mds.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package ceph-mgr.x86_64 2:12.2.5-0.el7 will be installed
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-cherrypy for package: 2:ceph-mgr-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: pyOpenSSL for package: 2:ceph-mgr-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-pecan for package: 2:ceph-mgr-12.2.5-0.el7.x86_64
[idcv-ceph0][DEBUG ] ---> Package ceph-mon.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package ceph-mon.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package ceph-osd.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package ceph-osd.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package ceph-selinux.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package ceph-selinux.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package libibverbs.x86_64 0:15-7.el7_5 will be installed
[idcv-ceph0][DEBUG ] --> Processing Dependency: rdma-core(x86-64) = 15-7.el7_5 for package: libibverbs-15-7.el7_5.x86_64
[idcv-ceph0][DEBUG ] ---> Package librados2.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] --> Processing Dependency: librados2 = 1:10.2.10-0.el7 for package: 1:rbd-nbd-10.2.10-0.el7.x86_64
[idcv-ceph0][DEBUG ] --> Processing Dependency: librados2 = 1:10.2.10-0.el7 for package: 1:libradosstriper1-10.2.10-0.el7.x86_64
[idcv-ceph0][DEBUG ] ---> Package librados2.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package librgw2.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package librgw2.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] --> Running transaction check
[idcv-ceph0][DEBUG ] ---> Package ceph-base.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package ceph-base.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package libcephfs1.x86_64 1:10.2.10-0.el7 will be obsoleted
[idcv-ceph0][DEBUG ] ---> Package libcephfs2.x86_64 2:12.2.5-0.el7 will be obsoleting
[idcv-ceph0][DEBUG ] ---> Package libradosstriper1.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package libradosstriper1.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package librbd1.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package librbd1.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package pyOpenSSL.x86_64 0:0.13.1-3.el7 will be installed
[idcv-ceph0][DEBUG ] ---> Package python-cephfs.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package python-cephfs.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package python-cherrypy.noarch 0:3.2.2-4.el7 will be installed
[idcv-ceph0][DEBUG ] ---> Package python-pecan.noarch 0:0.4.5-2.el7 will be installed
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-webtest >= 1.3.1 for package: python-pecan-0.4.5-2.el7.noarch
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-webob >= 1.2 for package: python-pecan-0.4.5-2.el7.noarch
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-simplegeneric >= 0.8 for package: python-pecan-0.4.5-2.el7.noarch
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-mako >= 0.4.0 for package: python-pecan-0.4.5-2.el7.noarch
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-singledispatch for package: python-pecan-0.4.5-2.el7.noarch
[idcv-ceph0][DEBUG ] ---> Package python-prettytable.noarch 0:0.7.2-3.el7 will be installed
[idcv-ceph0][DEBUG ] ---> Package python-rados.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package python-rados.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package python-rbd.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package python-rbd.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package python-rgw.x86_64 2:12.2.5-0.el7 will be installed
[idcv-ceph0][DEBUG ] ---> Package rbd-nbd.x86_64 1:10.2.10-0.el7 will be updated
[idcv-ceph0][DEBUG ] ---> Package rbd-nbd.x86_64 2:12.2.5-0.el7 will be an update
[idcv-ceph0][DEBUG ] ---> Package rdma-core.x86_64 0:15-7.el7_5 will be installed
[idcv-ceph0][DEBUG ] --> Processing Dependency: pciutils for package: rdma-core-15-7.el7_5.x86_64
[idcv-ceph0][DEBUG ] --> Running transaction check
[idcv-ceph0][DEBUG ] ---> Package pciutils.x86_64 0:3.5.1-3.el7 will be installed
[idcv-ceph0][DEBUG ] ---> Package python-mako.noarch 0:0.8.1-2.el7 will be installed
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-beaker for package: python-mako-0.8.1-2.el7.noarch
[idcv-ceph0][DEBUG ] ---> Package python-simplegeneric.noarch 0:0.8-7.el7 will be installed
[idcv-ceph0][DEBUG ] ---> Package python-singledispatch.noarch 0:3.4.0.2-2.el7 will be installed
[idcv-ceph0][DEBUG ] ---> Package python-webob.noarch 0:1.2.3-7.el7 will be installed
[idcv-ceph0][DEBUG ] ---> Package python-webtest.noarch 0:1.3.4-6.el7 will be installed
[idcv-ceph0][DEBUG ] --> Running transaction check
[idcv-ceph0][DEBUG ] ---> Package python-beaker.noarch 0:1.5.4-10.el7 will be installed
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-paste for package: python-beaker-1.5.4-10.el7.noarch
[idcv-ceph0][DEBUG ] --> Running transaction check
[idcv-ceph0][DEBUG ] ---> Package python-paste.noarch 0:1.7.5.1-9.20111221hg1498.el7 will be installed
[idcv-ceph0][DEBUG ] --> Processing Dependency: python-tempita for package: python-paste-1.7.5.1-9.20111221hg1498.el7.noarch
[idcv-ceph0][DEBUG ] --> Running transaction check
[idcv-ceph0][DEBUG ] ---> Package python-tempita.noarch 0:0.5.1-6.el7 will be installed
[idcv-ceph0][DEBUG ] --> Finished Dependency Resolution
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] Dependencies Resolved
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] ================================================================================
[idcv-ceph0][DEBUG ] Package Arch Version Repository
[idcv-ceph0][DEBUG ] Size
[idcv-ceph0][DEBUG ] ================================================================================
[idcv-ceph0][DEBUG ] Installing:
[idcv-ceph0][DEBUG ] libcephfs2 x86_64 2:12.2.5-0.el7 Ceph 432 k
[idcv-ceph0][DEBUG ] replacing libcephfs1.x86_64 1:10.2.10-0.el7
[idcv-ceph0][DEBUG ] Updating:
[idcv-ceph0][DEBUG ] ceph x86_64 2:12.2.5-0.el7 Ceph 3.0 k
[idcv-ceph0][DEBUG ] ceph-radosgw x86_64 2:12.2.5-0.el7 Ceph 3.8 M
[idcv-ceph0][DEBUG ] Installing for dependencies:
[idcv-ceph0][DEBUG ] ceph-mgr x86_64 2:12.2.5-0.el7 Ceph 3.6 M
[idcv-ceph0][DEBUG ] libibverbs x86_64 15-7.el7_5 updates 224 k
[idcv-ceph0][DEBUG ] pciutils x86_64 3.5.1-3.el7 base 93 k
[idcv-ceph0][DEBUG ] pyOpenSSL x86_64 0.13.1-3.el7 base 133 k
[idcv-ceph0][DEBUG ] python-beaker noarch 1.5.4-10.el7 base 80 k
[idcv-ceph0][DEBUG ] python-cherrypy noarch 3.2.2-4.el7 base 422 k
[idcv-ceph0][DEBUG ] python-mako noarch 0.8.1-2.el7 base 307 k
[idcv-ceph0][DEBUG ] python-paste noarch 1.7.5.1-9.20111221hg1498.el7 base 866 k
[idcv-ceph0][DEBUG ] python-pecan noarch 0.4.5-2.el7 epel 255 k
[idcv-ceph0][DEBUG ] python-prettytable noarch 0.7.2-3.el7 base 37 k
[idcv-ceph0][DEBUG ] python-rgw x86_64 2:12.2.5-0.el7 Ceph 73 k
[idcv-ceph0][DEBUG ] python-simplegeneric noarch 0.8-7.el7 epel 12 k
[idcv-ceph0][DEBUG ] python-singledispatch noarch 3.4.0.2-2.el7 epel 18 k
[idcv-ceph0][DEBUG ] python-tempita noarch 0.5.1-6.el7 base 33 k
[idcv-ceph0][DEBUG ] python-webob noarch 1.2.3-7.el7 base 202 k
[idcv-ceph0][DEBUG ] python-webtest noarch 1.3.4-6.el7 base 102 k
[idcv-ceph0][DEBUG ] rdma-core x86_64 15-7.el7_5 updates 48 k
[idcv-ceph0][DEBUG ] Updating for dependencies:
[idcv-ceph0][DEBUG ] ceph-base x86_64 2:12.2.5-0.el7 Ceph 3.9 M
[idcv-ceph0][DEBUG ] ceph-common x86_64 2:12.2.5-0.el7 Ceph 15 M
[idcv-ceph0][DEBUG ] ceph-mds x86_64 2:12.2.5-0.el7 Ceph 3.6 M
[idcv-ceph0][DEBUG ] ceph-mon x86_64 2:12.2.5-0.el7 Ceph 5.0 M
[idcv-ceph0][DEBUG ] ceph-osd x86_64 2:12.2.5-0.el7 Ceph 13 M
[idcv-ceph0][DEBUG ] ceph-selinux x86_64 2:12.2.5-0.el7 Ceph 20 k
[idcv-ceph0][DEBUG ] librados2 x86_64 2:12.2.5-0.el7 Ceph 2.9 M
[idcv-ceph0][DEBUG ] libradosstriper1 x86_64 2:12.2.5-0.el7 Ceph 330 k
[idcv-ceph0][DEBUG ] librbd1 x86_64 2:12.2.5-0.el7 Ceph 1.1 M
[idcv-ceph0][DEBUG ] librgw2 x86_64 2:12.2.5-0.el7 Ceph 1.7 M
[idcv-ceph0][DEBUG ] python-cephfs x86_64 2:12.2.5-0.el7 Ceph 82 k
[idcv-ceph0][DEBUG ] python-rados x86_64 2:12.2.5-0.el7 Ceph 172 k
[idcv-ceph0][DEBUG ] python-rbd x86_64 2:12.2.5-0.el7 Ceph 105 k
[idcv-ceph0][DEBUG ] rbd-nbd x86_64 2:12.2.5-0.el7 Ceph 81 k
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] Transaction Summary
[idcv-ceph0][DEBUG ] ================================================================================
[idcv-ceph0][DEBUG ] Install 1 Package (+17 Dependent packages)
[idcv-ceph0][DEBUG ] Upgrade 2 Packages (+14 Dependent packages)
[idcv-ceph0][DEBUG ]
[idcv-ceph0][DEBUG ] Total download size: 57 M
[idcv-ceph0][DEBUG ] Downloading packages:
[idcv-ceph0][DEBUG ] Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
[idcv-ceph0][WARNIN] No data was received after 300 seconds, disconnecting...
[idcv-ceph0][INFO ] Running command: ceph --version
[idcv-ceph0][DEBUG ] ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)

Error 1

Delta RPMs disabled because /usr/bin/applydeltarpm not installed.

Solution

[root@idcv-ceph0 cluster]# yum install deltarpm -y
Loaded plugins: fastestmirror, priorities
Existing lock /var/run/yum.pid: another copy is running as pid 90654.
Another app is currently holding the yum lock; waiting for it to exit...
  The other application is: yum
    Memory : 132 M RSS (523 MB VSZ)
    Started: Tue Jul 10 16:15:59 2018 - 10:22 ago
    State  : Sleeping, pid: 90654
Another app is currently holding the yum lock; waiting for it to exit...
  The other application is: yum
    Memory : 132 M RSS (523 MB VSZ)
    Started: Tue Jul 10 16:15:59 2018 - 10:24 ago
    State  : Sleeping, pid: 90654
^C
Exiting on user cancel.
[root@idcv-ceph0 cluster]# kill -9 90654
[root@idcv-ceph0 cluster]# yum install deltarpm -y
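The pid killed here was the earlier ceph-deploy-driven yum run that had hung on the slow mirror. Before killing a yum lock holder it is safer to confirm what the process actually is, for example via the pid file named in the lock message:

[root@idcv-ceph0 cluster]# ps -fp $(cat /var/run/yum.pid)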

Error 2

No data was received after 300 seconds, disconnecting...

Solution

In addition, the Ceph repo must be changed to a domestic yum mirror, e.g. Aliyun's, otherwise you hit the error "No data was received after 300 seconds, disconnecting...":
[root@idcv-ceph0 cluster]# sed -i 's#download.ceph.com#mirrors.aliyun.com/ceph#g' /etc/yum.repos.d/ceph.repo
However, ceph-deploy rewrites the repo back to the overseas source. The workaround here is to check what is installed with rpm -qa | grep ceph, switch to the domestic yum repo, and then install directly with yum install (or run ceph-deploy install against the fixed repo):
[root@idcv-ceph0 ceph]# cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[root@idcv-ceph0 yum.repos.d]# yum -y install ceph ceph-radosgw
[root@idcv-ceph0 yum.repos.d]# rpm -qa |grep ceph
ceph-deploy-2.0.0-0.noarch
libcephfs2-12.2.5-0.el7.x86_64
python-cephfs-12.2.5-0.el7.x86_64
ceph-selinux-12.2.5-0.el7.x86_64
ceph-radosgw-12.2.5-0.el7.x86_64
ceph-release-1-1.el7.noarch
ceph-base-12.2.5-0.el7.x86_64
ceph-mon-12.2.5-0.el7.x86_64
ceph-osd-12.2.5-0.el7.x86_64
ceph-12.2.5-0.el7.x86_64
ceph-common-12.2.5-0.el7.x86_64
ceph-mds-12.2.5-0.el7.x86_64
ceph-mgr-12.2.5-0.el7.x86_64
[root@idcv-ceph0 yum.repos.d]# ceph -s
cluster 812d3acb-eaa8-4355-9a74-64f2cd5209b3
health HEALTH_WARN
noout flag(s) set
monmap e2: 3 mons at {idcv-ceph0=172.20.1.138:6789/0,idcv-ceph2=172.20.1.140:6789/0,idcv-ceph3=172.20.1.141:6789/0}
election epoch 8, quorum 0,1,2 idcv-ceph0,idcv-ceph2,idcv-ceph3
osdmap e49: 4 osds: 4 up, 4 in
flags noout,sortbitwise,require_jewel_osds
pgmap v53473: 272 pgs, 12 pools, 97496 MB data, 1785 kobjects
296 GB used, 84819 MB / 379 GB avail
272 active+clean
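Step 8 below assumes that every node, not just idcv-ceph0, already has the Luminous repo and the upgraded binaries. A small loop from the deploy node would take care of that (a sketch, relying on the passwordless SSH that ceph-deploy already requires):

for host in idcv-ceph1 idcv-ceph2 idcv-ceph3; do
  scp /etc/yum.repos.d/ceph.repo $host:/etc/yum.repos.d/ceph.repo
  ssh $host "yum clean all && yum -y install ceph ceph-radosgw"
done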

6. On every monitor node, restart the mon service as follows:

[root@idcv-ceph0 cluster]# systemctl restart ceph-mon.target
[root@idcv-ceph0 cluster]# systemctl status ceph-mon.target
● ceph-mon.target - ceph target allowing to start/stop all ceph-mon@.service instances at once
Loaded: loaded (/usr/lib/systemd/system/ceph-mon.target; enabled; vendor preset: enabled)
Active: active since Tue 2018-07-10 17:27:39 CST; 11s ago
Jul 10 17:27:39 idcv-ceph0 systemd[1]: Reached target ceph target allowing to start/stop all ceph-mon@.service instances at once.
Jul 10 17:27:39 idcv-ceph0 systemd[1]: Starting ceph target allowing to start/stop all ceph-mon@.service instances at once.
[root@idcv-ceph0 cluster]# ceph -v
ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable)
[root@idcv-ceph0 cluster]# ceph -s
cluster:
id: 812d3acb-eaa8-4355-9a74-64f2cd5209b3
health: HEALTH_WARN
too many PGs per OSD (204 > max 200)
noout flag(s) set

services:
mon: 3 daemons, quorum idcv-ceph0,idcv-ceph2,idcv-ceph3
mgr: no daemons active
osd: 4 osds: 4 up, 4 in
flags noout

data:
pools: 12 pools, 272 pgs
objects: 1785k objects, 97496 MB
usage: 296 GB used, 84817 MB / 379 GB avail
pgs: 272 active+clean
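The restart above is only on idcv-ceph0; idcv-ceph2 and idcv-ceph3 need the same treatment. To keep quorum, restart the mons one at a time, for example with a rolling loop like this (a sketch, assuming SSH access from the deploy node):

for host in idcv-ceph0 idcv-ceph2 idcv-ceph3; do
  ssh $host systemctl restart ceph-mon.target
  # wait until the restarted mon is back in the quorum line of ceph -s
  until ceph -s | grep quorum | grep -q "$host"; do sleep 5; done
done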

7. The Ceph manager was introduced back in the Kraken release. From Luminous onward the ceph-mgr daemon is required for normal operation, whereas in Kraken it was optional. My Jewel cluster therefore had no manager, so we have to deploy one now:


[root@idcv-ceph0 cluster]# ceph-deploy mgr create idcv-ceph0 idcv-ceph1 idcv-ceph2 idcv-ceph3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.0): /usr/bin/ceph-deploy mgr create idcv-ceph0 idcv-ceph1 idcv-ceph2 idcv-ceph3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] mgr : [('idcv-ceph0', 'idcv-ceph0'), ('idcv-ceph1', 'idcv-ceph1'), ('idcv-ceph2', 'idcv-ceph2'), ('idcv-ceph3', 'idcv-ceph3')]
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f229723e320>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func :
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts idcv-ceph0:idcv-ceph0 idcv-ceph1:idcv-ceph1 idcv-ceph2:idcv-ceph2 idcv-ceph3:idcv-ceph3
[idcv-ceph0][DEBUG ] connected to host: idcv-ceph0
[idcv-ceph0][DEBUG ] detect platform information from remote host
[idcv-ceph0][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to idcv-ceph0
[idcv-ceph0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph0][WARNIN] mgr keyring does not exist yet, creating one
[idcv-ceph0][DEBUG ] create a keyring file
[idcv-ceph0][DEBUG ] create path recursively if it doesn't exist
[idcv-ceph0][INFO ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.idcv-ceph0 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-idcv-ceph0/keyring
[idcv-ceph0][INFO ] Running command: systemctl enable ceph-mgr@idcv-ceph0
[idcv-ceph0][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@idcv-ceph0.service to /usr/lib/systemd/system/ceph-mgr@.service.
[idcv-ceph0][INFO ] Running command: systemctl start ceph-mgr@idcv-ceph0
[idcv-ceph0][INFO ] Running command: systemctl enable ceph.target
[idcv-ceph1][DEBUG ] connection detected need for sudo
[idcv-ceph1][DEBUG ] connected to host: idcv-ceph1
[idcv-ceph1][DEBUG ] detect platform information from remote host
[idcv-ceph1][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to idcv-ceph1
[idcv-ceph1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph1][WARNIN] mgr keyring does not exist yet, creating one
[idcv-ceph1][DEBUG ] create a keyring file
[idcv-ceph1][DEBUG ] create path recursively if it doesn't exist
[idcv-ceph1][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.idcv-ceph1 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-idcv-ceph1/keyring
[idcv-ceph1][INFO ] Running command: sudo systemctl enable ceph-mgr@idcv-ceph1
[idcv-ceph1][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@idcv-ceph1.service to /usr/lib/systemd/system/ceph-mgr@.service.
[idcv-ceph1][INFO ] Running command: sudo systemctl start ceph-mgr@idcv-ceph1
[idcv-ceph1][INFO ] Running command: sudo systemctl enable ceph.target
[idcv-ceph2][DEBUG ] connection detected need for sudo
[idcv-ceph2][DEBUG ] connected to host: idcv-ceph2
[idcv-ceph2][DEBUG ] detect platform information from remote host
[idcv-ceph2][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to idcv-ceph2
[idcv-ceph2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph2][WARNIN] mgr keyring does not exist yet, creating one
[idcv-ceph2][DEBUG ] create a keyring file
[idcv-ceph2][DEBUG ] create path recursively if it doesn't exist
[idcv-ceph2][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.idcv-ceph2 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-idcv-ceph2/keyring
[idcv-ceph2][INFO ] Running command: sudo systemctl enable ceph-mgr@idcv-ceph2
[idcv-ceph2][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@idcv-ceph2.service to /usr/lib/systemd/system/ceph-mgr@.service.
[idcv-ceph2][INFO ] Running command: sudo systemctl start ceph-mgr@idcv-ceph2
[idcv-ceph2][INFO ] Running command: sudo systemctl enable ceph.target
[idcv-ceph3][DEBUG ] connection detected need for sudo
[idcv-ceph3][DEBUG ] connected to host: idcv-ceph3
[idcv-ceph3][DEBUG ] detect platform information from remote host
[idcv-ceph3][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to idcv-ceph3
[idcv-ceph3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[idcv-ceph3][WARNIN] mgr keyring does not exist yet, creating one
[idcv-ceph3][DEBUG ] create a keyring file
[idcv-ceph3][DEBUG ] create path recursively if it doesn't exist
[idcv-ceph3][INFO ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.idcv-ceph3 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-idcv-ceph3/keyring
[idcv-ceph3][INFO ] Running command: sudo systemctl enable ceph-mgr@idcv-ceph3
[idcv-ceph3][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@idcv-ceph3.service to /usr/lib/systemd/system/ceph-mgr@.service.
[idcv-ceph3][INFO ] Running command: sudo systemctl start ceph-mgr@idcv-ceph3
[idcv-ceph3][INFO ] Running command: sudo systemctl enable ceph.target
[root@idcv-ceph0 cluster]# ceph -s
  cluster:
    id: 812d3acb-eaa8-4355-9a74-64f2cd5209b3
    health: HEALTH_WARN
            too many PGs per OSD (204 > max 200)
            noout flag(s) set

  services:
    mon: 3 daemons, quorum idcv-ceph0,idcv-ceph2,idcv-ceph3
    mgr: idcv-ceph0(active), standbys: idcv-ceph1, idcv-ceph2, idcv-ceph3
    osd: 4 osds: 4 up, 4 in
         flags noout

  data:
    pools: 12 pools, 272 pgs
    objects: 1785k objects, 97496 MB
    usage: 296 GB used, 84816 MB / 379 GB avail
    pgs: 272 active+clean
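If ceph-deploy were not available, the log above shows exactly what it does, so an mgr could also be created by hand on each node, roughly like this (a sketch for idcv-ceph0, using the admin keyring instead of the bootstrap-mgr one; the ceph:ceph ownership is an assumption based on the standard packaging):

mkdir -p /var/lib/ceph/mgr/ceph-idcv-ceph0
ceph auth get-or-create mgr.idcv-ceph0 mon 'allow profile mgr' osd 'allow *' mds 'allow *' \
    -o /var/lib/ceph/mgr/ceph-idcv-ceph0/keyring
chown -R ceph:ceph /var/lib/ceph/mgr/ceph-idcv-ceph0
systemctl enable ceph-mgr@idcv-ceph0
systemctl start ceph-mgr@idcv-ceph0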

8. Restart the OSDs.
The prerequisite is that every node has already been switched to the domestic Luminous yum repo (as set up in the solution to Error 2 above) and has run yum -y install ceph ceph-radosgw, which upgrades the binaries.

[root@idcv-ceph0 ceph]# systemctl restart ceph-osd.target
[root@idcv-ceph0 ceph]# ceph versions
{
    "mon": {
        "ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable)": 3
    },
    "mgr": {
        "ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable)": 4
    },
    "osd": {
        "ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable)": 4
    },
    "mds": {},
    "overall": {
        "ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable)": 11
    }
}
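Only the idcv-ceph0 restart is shown above; the OSDs on the other nodes were restarted the same way. Since noout is still set, a rolling restart that waits for all PGs to return to active+clean between nodes keeps data available throughout (a sketch; 272 is this cluster's PG count as shown in ceph -s):

for host in idcv-ceph0 idcv-ceph1 idcv-ceph2 idcv-ceph3; do
  ssh $host systemctl restart ceph-osd.target
  # wait for all 272 PGs to be active+clean before touching the next node
  until ceph pg stat | grep -q '272 active+clean'; do sleep 10; done
done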

9. Now that all components are on 12.2.5, we can forbid pre-Luminous OSDs and enable the Luminous-only functionality:

[root@idcv-ceph0 ceph]# ceph osd require-osd-release luminous
recovery_deletes is set

This also means that from now on only Luminous nodes can join the cluster.
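The new requirement can be verified in the osdmap, where require_osd_release replaces the old require_jewel_osds flag seen earlier (expected output shown as an illustration):

[root@idcv-ceph0 ceph]# ceph osd dump | grep require_osd_release
require_osd_release luminous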

10. The rgw service also needs a restart; in this cluster it runs on ceph1:

[root@idcv-ceph1 system]# systemctl restart ceph-radosgw.target
[root@idcv-ceph1 system]# systemctl status ceph-radosgw.target
● ceph-radosgw.target - ceph target allowing to start/stop all ceph-radosgw@.service instances at once
Loaded: loaded (/usr/lib/systemd/system/ceph-radosgw.target; enabled; vendor preset: enabled)
Active: active since Tue 2018-07-10 18:02:25 CST; 6s ago
Jul 10 18:02:25 idcv-ceph1 systemd[1]: Reached target ceph target allowing to start/stop all ceph-radosgw@.service instances at once.
Jul 10 18:02:25 idcv-ceph1 systemd[1]: Starting ceph target allowing to start/stop all ceph-radosgw@.service instances at once.
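A quick smoke test after the restart is to hit the gateway's HTTP endpoint. Assuming rgw is listening on civetweb's default port 7480 (adjust if your rgw_frontends setting differs), an anonymous request should return the ListAllMyBucketsResult XML:

[root@idcv-ceph1 system]# curl http://localhost:7480
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">...</ListAllMyBucketsResult>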

11. Enable the dashboard


[root@idcv-ceph0 ceph]# rpm -qa |grep mgr
ceph-mgr-12.2.5-0.el7.x86_64
[root@idcv-ceph0 ceph]# ceph mgr module enable dashboard
[root@idcv-ceph0 ceph]# ceph mgr dump
{
    "epoch": 53,
    "active_gid": 34146,
    "active_name": "idcv-ceph0",
    "active_addr": "172.20.1.138:6804/95951",
    "available": true,
    "standbys": [
        {
            "gid": 44129,
            "name": "idcv-ceph2",
            "available_modules": ["balancer", "dashboard", "influx", "localpool", "prometheus", "restful", "selftest", "status", "zabbix"]
        },
        {
            "gid": 44134,
            "name": "idcv-ceph1",
            "available_modules": ["balancer", "dashboard", "influx", "localpool", "prometheus", "restful", "selftest", "status", "zabbix"]
        },
        {
            "gid": 44135,
            "name": "idcv-ceph3",
            "available_modules": ["balancer", "dashboard", "influx", "localpool", "prometheus", "restful", "selftest", "status", "zabbix"]
        }
    ],
    "modules": ["balancer", "dashboard", "restful", "status"],
    "available_modules": ["balancer", "dashboard", "influx", "localpool", "prometheus", "restful", "selftest", "status", "zabbix"],
    "services": {
        "dashboard": "http://idcv-ceph0:7000/"
    }
}

Then open http://172.20.1.138:7000 in a browser.
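By default the Luminous dashboard binds to every address on port 7000 on the active mgr. If you need to pin the address or port, the module reads both from the config-key store; for example (shown as an illustration; restart the active mgr afterwards for the change to take effect):

ceph config-key set mgr/dashboard/server_addr 172.20.1.138
ceph config-key set mgr/dashboard/server_port 7000
systemctl restart ceph-mgr@idcv-ceph0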

12. The last step: unset noout, so that from now on the cluster can rebalance itself whenever it needs to:


[root@idcv-ceph0 ceph]# ceph osd unset noout
noout is unset
[root@idcv-ceph0 ceph]# ceph -s
  cluster:
    id: 812d3acb-eaa8-4355-9a74-64f2cd5209b3
    health: HEALTH_WARN
            application not enabled on 1 pool(s)

  services:
    mon: 3 daemons, quorum idcv-ceph0,idcv-ceph2,idcv-ceph3
    mgr: idcv-ceph0(active), standbys: idcv-ceph2, idcv-ceph1, idcv-ceph3
    osd: 4 osds: 4 up, 4 in
    rgw: 1 daemon active

  data:
    pools: 12 pools, 272 pgs
    objects: 1785k objects, 97496 MB
    usage: 296 GB used, 84830 MB / 379 GB avail
    pgs: 272 active+clean

  io:
    client: 0 B/s rd, 0 op/s rd, 0 op/s wr

Error: health: HEALTH_WARN application not enabled on 1 pool(s)

Solution

[root@idcv-ceph0 ceph]# ceph health detail
HEALTH_WARN application not enabled on 1 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 1 pool(s)
    application not enabled on pool 'test_pool'
    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
[root@idcv-ceph0 ceph]# ceph osd pool application enable test_pool
Invalid command: missing required parameter app(<string(goodchars [A-Za-z0-9-_.])>)
osd pool application enable <poolname> <app> {--yes-i-really-mean-it} : enable use of an application [cephfs,rbd,rgw] on pool <poolname>
Error EINVAL: invalid command
[root@idcv-ceph0 ceph]# ceph osd pool application enable test_pool rbd
enabled application 'rbd' on pool 'test_pool'
[root@idcv-ceph0 ceph]# ceph -s
  cluster:
    id: 812d3acb-eaa8-4355-9a74-64f2cd5209b3
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum idcv-ceph0,idcv-ceph2,idcv-ceph3
    mgr: idcv-ceph0(active), standbys: idcv-ceph2, idcv-ceph1, idcv-ceph3
    osd: 4 osds: 4 up, 4 in
    rgw: 1 daemon active

  data:
    pools: 12 pools, 272 pgs
    objects: 1785k objects, 97496 MB
    usage: 296 GB used, 84829 MB / 379 GB avail
    pgs: 272 active+clean

  io:
    client: 0 B/s rd, 0 op/s rd, 0 op/s wr

Summary

The whole upgrade took about an hour, most of it spent working around the domestic network situation; otherwise the upgrade itself is fairly simple. The L release also adds the mgr daemon and the dashboard. After the upgrade I tested both object storage and block storage, and everything worked normally.