
I've created a small Ceph cluster: 3 servers, each with 5 disks for OSDs and one monitor per server.

The setup itself seems to have gone OK: the mons are in quorum and all 15 OSDs are up and in. However, when I create a pool, the PGs get stuck inactive and never actually finish creating.

I've read as many threads and tutorials as I could find and still can't work out why they're stuck creating and never complete.

I could really use some suggestions of things to look for (errors, issues), or is pool creation really this slow? The system has been set up and running like this for 2 weeks now, and the pgmap from ceph -w shows the MB used value increasing very, very slowly, roughly 1 MB every 2 minutes.

Output of ceph -w

cephadmin@cnc:~$ ceph -w
    cluster 7908651c-252e-4761-8a83-4b1cfcf90522
     health HEALTH_ERR
            700 pgs are stuck inactive for more than 300 seconds
            700 pgs stuck inactive
     monmap e1: 3 mons at {ceph1=10.0.80.10:6789/0,ceph2=10.0.80.11:6789/0,ceph3=10.0.80.12:6789/0}
            election epoch 18, quorum 0,1,2 ceph1,ceph2,ceph3
     osdmap e304359: 15 osds: 15 up, 15 in
            flags sortbitwise,require_jewel_osds
      pgmap v1097264: 700 pgs, 1 pools, 0 bytes data, 0 objects
            90932 MB used, 55699 GB / 55788 GB avail
                 700 creating

2017-02-02 11:20:10.774943 mon.0 [INF] pgmap v1097264: 700 pgs: 700 creating; 0 bytes data, 90932 MB used, 55699 GB / 55788 GB avail
2017-02-02 11:20:11.152412 mon.0 [INF] mds.? 10.0.80.10:6800/1746 up:boot
2017-02-02 11:20:11.152632 mon.0 [INF] fsmap e304293:, 1 up:standby
2017-02-02 11:20:11.853221 mon.0 [INF] pgmap v1097265: 700 pgs: 700 creating; 0 bytes data, 90932 MB used, 55699 GB / 55788 GB avail
2017-02-02 11:20:12.931001 mon.0 [INF] pgmap v1097266: 700 pgs: 700 creating; 0 bytes data, 90932 MB used, 55699 GB / 55788 GB avail
2017-02-02 11:20:14.097210 mon.0 [INF] pgmap v1097267: 700 pgs: 700 creating; 0 bytes data, 90932 MB used, 55699 GB / 55788 GB avail
2017-02-02 11:20:14.707583 mon.0 [INF] osdmap e304360: 15 osds: 15 up, 15 in
2017-02-02 11:20:14.774994 mon.0 [INF] pgmap v1097268: 700 pgs: 700 creating; 0 bytes data, 90932 MB used, 55699 GB / 55788 GB avail
2017-02-02 11:20:15.197354 mon.0 [INF] mds.? 10.0.80.10:6801/2222 up:boot
2017-02-02 11:20:15.197528 mon.0 [INF] fsmap e304294:, 1 up:standby
2017-02-02 11:20:15.875919 mon.0 [INF] pgmap v1097269: 700 pgs: 700 creating; 0 bytes data, 90932 MB used, 55699 GB / 55788 GB avail
2017-02-02 11:20:16.975746 mon.0 [INF] pgmap v1097270: 700 pgs: 700 creating; 0 bytes data, 90932 MB used, 55699 GB / 55788 GB avail
2017-02-02 11:20:18.075955 mon.0 [INF] pgmap v1097271: 700 pgs: 700 creating; 0 bytes data, 90932 MB used, 55699 GB / 55788 GB avail
2017-02-02 11:20:18.708059 mon.0 [INF] osdmap e304361: 15 osds: 15 up, 15 in
2017-02-02 11:20:18.775552 mon.0 [INF] pgmap v1097272: 700 pgs: 700 creating; 0 bytes data, 90932 MB used, 55699 GB / 55788 GB avail
2017-02-02 11:20:19.253143 mon.0 [INF] mds.? 10.0.80.10:6800/1746 up:boot
2017-02-02 11:20:19.253314 mon.0 [INF] fsmap e304295:, 1 up:standby
2017-02-02 11:20:19.853348 mon.0 [INF] pgmap v1097273: 700 pgs: 700 creating; 0 bytes data, 90932 MB used, 55699 GB / 55788 GB avail
2017-02-02 11:20:20.988606 mon.0 [INF] pgmap v1097274: 700 pgs: 700 creating; 0 bytes data, 90932 MB used, 55699 GB / 55788 GB avail
2017-02-02 11:20:22.188444 mon.0 [INF] pgmap v1097275: 700 pgs: 700 creating; 0 bytes data, 90932 MB used, 55699 GB / 55788 GB avail
2017-02-02 11:20:22.709647 mon.0 [INF] osdmap e304362: 15 osds: 15 up, 15 in
2017-02-02 11:20:22.777063 mon.0 [INF] pgmap v1097276: 700 pgs: 700 creating; 0 bytes data, 90932 MB used, 55699 GB / 55788 GB avail
2017-02-02 11:20:23.288351 mon.0 [INF] mds.? 10.0.80.10:6801/2222 up:boot
2017-02-02 11:20:23.288498 mon.0 [INF] fsmap e304296:, 1 up:standby
2017-02-02 11:20:23.855536 mon.0 [INF] pgmap v1097277: 700 pgs: 700 creating; 0 bytes data, 90932 MB used, 55699 GB / 55788 GB avail
2017-02-02 11:20:25.533595 mon.0 [INF] pgmap v1097278: 700 pgs: 700 creating; 0 bytes data, 90932 MB used, 55699 GB / 55788 GB avail
2017-02-02 11:20:26.610728 mon.0 [INF] pgmap v1097279: 700 pgs: 700 creating; 0 bytes data, 90932 MB used, 55699 GB / 55788 GB avail
2017-02-02 11:20:26.743563 mon.0 [INF] osdmap e304363: 15 osds: 15 up, 15 in
2017-02-02 11:20:26.743636 mon.0 [INF] mds.? 10.0.80.10:6800/1746 up:boot
2017-02-02 11:20:26.743722 mon.0 [INF] fsmap e304297:, 1 up:standby
2017-02-02 11:20:26.822333 mon.0 [INF] pgmap v1097280: 700 pgs: 700 creating; 0 bytes data, 90932 MB used, 55699 GB / 55788 GB avail
2017-02-02 11:20:27.900114 mon.0 [INF] pgmap v1097281: 700 pgs: 700 creating; 0 bytes data, 90932 MB used, 55699 GB / 55788 GB avail
2017-02-02 11:20:29.111348 mon.0 [INF] pgmap v1097282: 700 pgs: 700 creating; 0 bytes data, 90933 MB used, 55699 GB / 55788 GB avail
2017-02-02 11:20:30.188991 mon.0 [INF] pgmap v1097283: 700 pgs: 700 creating; 0 bytes data, 90933 MB used, 55699 GB / 55788 GB avail
2017-02-02 11:20:30.721728 mon.0 [INF] osdmap e304364: 15 osds: 15 up, 15 in
2017-02-02 11:20:30.778195 mon.0 [INF] pgmap v1097284: 700 pgs: 700 creating; 0 bytes data, 90933 MB used, 55699 GB / 55788 GB avail

ceph.conf

[global]
public network = 10.0.80.0/23
cluster network = 10.0.80.0/23

fsid = 7908651c-252e-4761-8a83-4b1cfcf90522
mon_initial_members = ceph1, ceph2, ceph3
mon_host = 10.0.80.10,10.0.80.11,10.0.80.12
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 750
osd pool default pgp num = 750
osd crush chooseleaf type = 2

[mon.ceph1]
mon addr = 10.0.80.10:6789
host = ceph1
[mon.ceph2]
mon addr = 10.0.80.11:6789
host = ceph2
[mon.ceph3]
mon addr = 10.0.80.12:6789
host = ceph3

[mds]
keyring = /var/lib/ceph/mds/ceph-ceph1/keyring
[mds.ceph1]
host = ceph1

[osd.0]
cluster addr = 10.0.80.13
host = ceph1
[osd.1]
cluster addr = 10.0.80.13
host = ceph1
[osd.2]
cluster addr = 10.0.80.13
host = ceph1
[osd.3]
cluster addr = 10.0.80.13
host = ceph1
[osd.4]
cluster addr = 10.0.80.13
host = ceph1

[osd.5]
cluster addr = 10.0.80.14
host = ceph2
[osd.6]
cluster addr = 10.0.80.14
host = ceph2
[osd.7]
cluster addr = 10.0.80.14
host = ceph2
[osd.8]
cluster addr = 10.0.80.14
host = ceph2
[osd.9]
cluster addr = 10.0.80.14
host = ceph2

[osd.10]
cluster addr = 10.0.80.15
host = ceph3
[osd.11]
cluster addr = 10.0.80.15
host = ceph3
[osd.12]
cluster addr = 10.0.80.15
host = ceph3
[osd.13]
cluster addr = 10.0.80.15
host = ceph3
[osd.14]
cluster addr = 10.0.80.15
host = ceph3

ceph df

cephadmin@cnc:~$ ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    55788G     55699G       90973M          0.16
POOLS:
    NAME              ID     USED     %USED     MAX AVAIL     OBJECTS
    rbd_vmstorage     4         0         0        27849G           0
cephadmin@cnc:~$

ceph osd tree

cephadmin@cnc:~$ ceph osd tree
ID WEIGHT   TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 54.47983 root default
-2 18.15994     host ceph1
 0  3.63199         osd.0       up  1.00000          1.00000
 1  3.63199         osd.1       up  1.00000          1.00000
 2  3.63199         osd.2       up  1.00000          1.00000
 3  3.63199         osd.3       up  1.00000          1.00000
 4  3.63199         osd.4       up  1.00000          1.00000
-3 18.15994     host ceph2
 5  3.63199         osd.5       up  1.00000          1.00000
 6  3.63199         osd.6       up  1.00000          1.00000
 7  3.63199         osd.7       up  1.00000          1.00000
 8  3.63199         osd.8       up  1.00000          1.00000
 9  3.63199         osd.9       up  1.00000          1.00000
-4 18.15994     host ceph3
10  3.63199         osd.10      up  1.00000          1.00000
11  3.63199         osd.11      up  1.00000          1.00000
12  3.63199         osd.12      up  1.00000          1.00000
13  3.63199         osd.13      up  1.00000          1.00000
14  3.63199         osd.14      up  1.00000          1.00000

crushmap (decompiled)

# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable straw_calc_version 1

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5
device 6 osd.6
device 7 osd.7
device 8 osd.8
device 9 osd.9
device 10 osd.10
device 11 osd.11
device 12 osd.12
device 13 osd.13
device 14 osd.14

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host ceph1 {
        id -2           # do not change unnecessarily
        # weight 18.160
        alg straw
        hash 0  # rjenkins1
        item osd.0 weight 3.632
        item osd.1 weight 3.632
        item osd.2 weight 3.632
        item osd.3 weight 3.632
        item osd.4 weight 3.632
}
host ceph2 {
        id -3           # do not change unnecessarily
        # weight 18.160
        alg straw
        hash 0  # rjenkins1
        item osd.5 weight 3.632
        item osd.6 weight 3.632
        item osd.7 weight 3.632
        item osd.8 weight 3.632
        item osd.9 weight 3.632
}
host ceph3 {
        id -4           # do not change unnecessarily
        # weight 18.160
        alg straw
        hash 0  # rjenkins1
        item osd.10 weight 3.632
        item osd.11 weight 3.632
        item osd.12 weight 3.632
        item osd.13 weight 3.632
        item osd.14 weight 3.632
}
root default {
        id -1           # do not change unnecessarily
        # weight 54.480
        alg straw
        hash 0  # rjenkins1
        item ceph1 weight 18.160
        item ceph2 weight 18.160
        item ceph3 weight 18.160
}

# rules
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type chassis
        step emit
}

# end crush map

Should it really take over a week to create a pool? Have I done something wrong in a config somewhere so that the components aren't talking to each other? I'll run any commands you want if you need more information; just post the command and I'll run it. I just need some ideas, as I really do want to try Ceph, but I'm currently stuck at the limits of my knowledge and am struggling to find similar issues trawling Google.
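
For reference, the pool was created with something along these lines (from memory, so the exact invocation may be slightly off; the pool name and PG count match the outputs above):

cephadmin@cnc:~$ ceph osd pool create rbd_vmstorage 700 700 replicated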

Dave

2 Answers


You can refer to this commit:

https://github.com/ceph/ceph/commit/b73d0d325d382e32662ba5fab3c3f4d3a1b1681b

We used to have a complicated pg creation process in which we would query any previous mappings for the pg before we created the new 'empty' pg locally. The tracking of the prior mappings was very simple (and broken), but it didn't really matter because the mon would resend pg create messages periodically. Now it doesn't, so that broke.
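
In practice, a quick way to see whether the create messages ever reached the OSDs is to check where a stuck PG is mapped and query it (a rough sketch using Jewel-era commands; the PG ID is just an example from the question):

ceph pg dump_stuck inactive     # list the PGs stuck in "creating"
ceph pg map 5.c9                # which OSDs should this PG map to?
ceph pg 5.c9 query              # ask the acting OSDs about the PG's state

If the up/acting sets come back empty, no OSD has been assigned to the PG, so nothing ever creates it.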

itfanr

I would start by looking into the OSDs:

  1. ceph tell osd.0 injectargs '--debug-osd 0/5'
  2. Have a look here for pool commands: http://docs.ceph.com/docs/jewel/rados/operations/pools/

If that does not work, raise everything to the maximum debug level (http://docs.ceph.com/docs/master/rados/troubleshooting/log-and-debug/), then check the log files described in the documentation.
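
For example, something along these lines should raise OSD and monitor logging at runtime (a sketch only; adjust subsystems and levels as needed, and remember to turn them back down afterwards):

ceph tell osd.* injectargs '--debug-osd 20 --debug-ms 1'
ceph tell mon.* injectargs '--debug-mon 20 --debug-ms 1'
# then watch the logs on the OSD hosts, e.g.
tail -f /var/log/ceph/ceph-osd.0.log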

With my limited knowledge of Ceph, I think it's best to go by the online documentation (Ceph versions evolve quickly), try to understand what each piece does, add debugging where you can, and have a look in the logs.
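
One more thing that might be worth trying (a rough sketch, not verified against your setup) is testing the CRUSH map offline with crushtool, to see whether the rule can actually map PGs onto your OSDs:

ceph osd getcrushmap -o crushmap.bin       # grab the compiled CRUSH map
crushtool -d crushmap.bin -o crushmap.txt  # decompile it for inspection
# simulate placements for rule 0 with 2 replicas (your pool size)
crushtool -i crushmap.bin --test --rule 0 --num-rep 2 --show-mappings
crushtool -i crushmap.bin --test --rule 0 --num-rep 2 --show-bad-mappings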

Let me know what errors you find.

Alex H
  • OSD logs are showing empty even after setting debug high; mon logs just show the same as ceph -w – Dave Feb 07 '17 at 09:32
  • Interestingly, the PGs never seem to get assigned to OSDs even though the OSDs are up and in. `root@cnc:~# ceph pg map 5.c9 osdmap e411368 pg 5.c9 (5.c9) -> up [] acting []` – Dave Feb 07 '17 at 10:24
  • Have you followed the documentation and added all the required options for pool creation? – Alex H Feb 07 '17 at 14:49
  • Yep, that's what's so annoying: not only did I follow the docs, but someone else followed exactly the same commands as me and it worked fine on his servers :( – Dave Feb 07 '17 at 16:34
  • Can you run `ceph health detail` and let me know the outcome? – Alex H Feb 08 '17 at 11:09
  • I have nuked the cluster now and given up on it :) – Dave Feb 08 '17 at 14:52
  • You can try Proxmox; maybe it's easier and preconfigured. – Alex H Feb 08 '17 at 18:25