Test environment

The test environment has no real SSD disks, so for this exercise we pretend that each host has one SSD and manually adjust the class label of the corresponding OSDs.

Modifying the crush class

1. View the current OSD layout
  # ceph osd tree
  ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
  -1 0.74797 root default
  -15 0.21999 rack rack1
  -9 0.08800 host node1
  3 hdd 0.04399 osd.3 up 1.00000 1.00000
  10 hdd 0.04399 osd.10 up 0.98001 1.00000
  11 hdd 0.04399 osd.11 up 0.96002 1.00000
  -3 0.13199 host storage-0
  4 hdd 0.04399 osd.4 up 0.96002 1.00000
  5 hdd 0.04399 osd.5 up 0.96002 1.00000
  14 hdd 0.04399 osd.14 up 0.98001 1.00000
  -16 0.26399 rack rack2
  -5 0.13199 host node2
  0 hdd 0.04399 osd.0 up 0.98628 1.00000
  6 hdd 0.04399 osd.6 up 1.00000 1.00000
  16 hdd 0.04399 osd.16 up 1.00000 1.00000
  -7 0.13199 host storage-1
  2 hdd 0.04399 osd.2 up 1.00000 1.00000
  8 hdd 0.04399 osd.8 up 1.00000 1.00000
  12 hdd 0.04399 osd.12 up 1.00000 1.00000
  -17 0.26399 rack rack3
  -11 0.13199 host node3
  1 hdd 0.04399 osd.1 up 1.00000 1.00000
  7 hdd 0.04399 osd.7 up 1.00000 1.00000
  15 hdd 0.04399 osd.15 up 1.00000 1.00000
  -13 0.13199 host storage-2
  9 hdd 0.04399 osd.9 up 1.00000 1.00000
  13 hdd 0.04399 osd.13 up 1.00000 1.00000
  17 hdd 0.04399 osd.17 up 0.96002 1.00000
2. List the crush classes currently defined in the cluster
  # ceph osd crush class ls
  [
      "hdd"
  ]
3. Remove the existing class from osd.0, osd.1, osd.2, osd.3, osd.4 and osd.9 (a device class must be removed before a new one can be set)
  # for i in 0 1 2 3 4 9 ; do ceph osd crush rm-device-class osd.$i ; done
  done removing class of osd(s): 0
  done removing class of osd(s): 1
  done removing class of osd(s): 2
  done removing class of osd(s): 3
  done removing class of osd(s): 4
  done removing class of osd(s): 9
  # ceph osd tree
  ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
  -1 0.74797 root default
  -15 0.21999 rack rack1
  -9 0.08800 host node1
  3 0.04399 osd.3 up 1.00000 1.00000
  10 hdd 0.04399 osd.10 up 0.98001 1.00000
  11 hdd 0.04399 osd.11 up 0.96002 1.00000
  -3 0.13199 host storage-0
  4 0.04399 osd.4 up 0.96002 1.00000
  5 hdd 0.04399 osd.5 up 0.96002 1.00000
  14 hdd 0.04399 osd.14 up 0.98001 1.00000
  -16 0.26399 rack rack2
  -5 0.13199 host node2
  0 0.04399 osd.0 up 0.98628 1.00000
  6 hdd 0.04399 osd.6 up 1.00000 1.00000
  16 hdd 0.04399 osd.16 up 1.00000 1.00000
  -7 0.13199 host storage-1
  2 0.04399 osd.2 up 1.00000 1.00000
  8 hdd 0.04399 osd.8 up 1.00000 1.00000
  12 hdd 0.04399 osd.12 up 1.00000 1.00000
  -17 0.26399 rack rack3
  -11 0.13199 host node3
  1 0.04399 osd.1 up 1.00000 1.00000
  7 hdd 0.04399 osd.7 up 1.00000 1.00000
  15 hdd 0.04399 osd.15 up 1.00000 1.00000
  -13 0.13199 host storage-2
  9 0.04399 osd.9 up 1.00000 1.00000
  13 hdd 0.04399 osd.13 up 1.00000 1.00000
  17 hdd 0.04399 osd.17 up 0.96002 1.00000
4. Set the class of osd.0, osd.1, osd.2, osd.3, osd.4 and osd.9 to ssd
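A minimal sketch of this step, mirroring the loop from step 3 (ceph osd crush set-device-class is the standard counterpart of rm-device-class; the OSD ids are the ones whose class was just removed):

  # for i in 0 1 2 3 4 9 ; do ceph osd crush set-device-class ssd osd.$i ; done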
5. List the crush classes again; an ssd class has been added
  # ceph osd crush class ls
  [
      "hdd",
      "ssd"
  ]
6. Create a replicated crush rule named rule-ssd that takes the default root, uses rack as the failure domain and selects only devices of class ssd, then dump the rule to check it
  # ceph osd crush rule create-replicated rule-ssd default rack ssd
  # ceph osd crush rule dump rule-ssd
  {
      "rule_id": 1,
      "rule_name": "rule-ssd",
      "ruleset": 1,
      "type": 1,
      "min_size": 1,
      "max_size": 10,
      "steps": [
          {
              "op": "take",
              "item": -30,
              "item_name": "default~ssd"
          },
          {
              "op": "chooseleaf_firstn",
              "num": 0,
              "type": "rack"
          },
          {
              "op": "emit"
          }
      ]
  }
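As a quick check that the rule is registered, the rule list can be printed; a minimal sketch (ceph osd crush rule ls is a standard command; replicated_rule is the default rule that usually occupies rule_id 0, so the exact listing may differ on other clusters):

  # ceph osd crush rule ls
  replicated_rule
  rule-ssd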

Verification

Method 1:
1. Export the crushmap
  # ceph osd getcrushmap -o monmap
  60
2. Decompile the crushmap
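A minimal sketch of the decompile step, assuming the binary crushmap was saved to the file monmap as in the previous step (crushtool -d converts a compiled crushmap into editable text):

  # crushtool -d monmap -o crushmap.txt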

The decompiled crushmap now contains an additional crush rule named rule-ssd.
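Based on the rule dump above, the new rule should look roughly like this in the decompiled text (a sketch of the CRUSH text form, not the literal file contents):

  rule rule-ssd {
      id 1
      type replicated
      min_size 1
      max_size 10
      step take default class ssd
      step chooseleaf firstn 0 type rack
      step emit
  }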

3. Test the rule with crushtool
  # crushtool -i monmap --test --min-x 0 --max-x 9 --num-rep 3 --ruleset 1 --show_mappings
  CRUSH rule 1 x 0 [3,2,9]
  CRUSH rule 1 x 1 [2,4,9]
  CRUSH rule 1 x 2 [1,4,0]
  CRUSH rule 1 x 3 [9,0,3]
  CRUSH rule 1 x 4 [2,9,3]
  CRUSH rule 1 x 5 [1,2,4]
  CRUSH rule 1 x 6 [1,3,0]
  CRUSH rule 1 x 7 [1,0,4]
  CRUSH rule 1 x 8 [0,4,1]
  CRUSH rule 1 x 9 [0,1,3]
Every mapping uses only OSDs 0, 1, 2, 3, 4 and 9, which are exactly the OSDs whose class was set to ssd.
Method 2:
1. Create a pool named ssdtest and set its crush rule to rule-ssd
  # ceph osd pool create ssdtest 64 64 rule-ssd
  pool 'ssdtest' created
  [root@node1 ~]# ceph osd pool get ssdtest crush_rule
2. Upload an object
  # rados -p ssdtest put init.txt init.sh
3. Look up which OSDs hold the object
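A minimal sketch of the lookup, assuming the object name init.txt used above (ceph osd map prints the placement group and the up/acting OSD set for a given pool/object pair; the exact pg id and epoch will vary per cluster):

  # ceph osd map ssdtest init.txt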

The query result shows that the OSDs holding the object all belong to the ssd class.