Erasure Coding and multi-hypervisor

Alex,

So I have taken a stab at getting erasure coding configured.

I was following along with the info in this link:

I ran the following commands within the ceph-mon/0* container.

    Create erasure code profile 

        ceph osd erasure-code-profile set ec-51-profile k=5 m=1 plugin=jerasure technique=reed_sol_van crush-failure-domain=host

    Create erasure coded pool

        ceph osd pool create ec51 64 erasure ec-51-profile

    Enable overwrites

        ceph osd pool set ec51 allow_ec_overwrites true

    Tag EC pool as RBD pool
        
        ceph osd pool application enable ec51 rbd
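
To sanity-check the result, the following standard Ceph commands should confirm the profile and pool settings (using the ec-51-profile and ec51 names from above):

    # show the parameters stored in the profile
    ceph osd erasure-code-profile get ec-51-profile
    # confirm the pool is bound to the profile and allows overwrites
    ceph osd pool get ec51 erasure_code_profile
    ceph osd pool get ec51 allow_ec_overwrites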

However, when I got to the section on “Using EC pools with RBD”, I realized that there were no RBD pools in the default install, so I could not continue; I only see RGW pools.

    root@juju-a1eb13-1:~# ceph osd lspools
    1 default.rgw.buckets
    2 default.rgw
    3 default.rgw.root
    4 default.rgw.control
    5 default.rgw.gc
    6 default.rgw.buckets.index
    7 default.rgw.buckets.extra
    8 default.log
    9 default.intent-log
    10 default.usage
    11 default.users
    12 default.users.email
    13 default.users.swift
    14 default.users.uid
    15 .rgw.root
    16 gnocchi
    17 glance
    18 default.rgw.meta
    19 default.rgw.log
    20 cinder-ceph
    21 ec51
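
For reference, my understanding of the “Using EC pools with RBD” pattern is that the image metadata has to stay in a replicated pool and only the data objects go to the EC pool via --data-pool, along the lines of the following (the pool and image names here are just placeholders):

    # metadata lives in a replicated pool; data objects land on the EC pool
    rbd create --size 10G --data-pool ec51 replicated-rbd-pool/test-image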

So I Googled some more and found this link:

It seems straightforward, but I do have some questions.

First, am I correct that I should be working with RGW pools rather than RBD pools?

Second, it appears from the article that this is a “create, copy, rename” process rather than an in-place conversion. Although I have no images or instances in my environment yet, I will probably take that approach for recovery purposes.
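
If it helps to see it spelled out, my rough, untested sketch of that flow for one pool would be something like the following, using default.rgw.buckets as the example (I gather rados cppool has caveats, such as not copying omap data, so this would need validating first):

    # create an EC pool alongside the existing replicated one
    ceph osd pool create default.rgw.buckets.ec 64 erasure ec-51-profile
    # copy the objects across
    rados cppool default.rgw.buckets default.rgw.buckets.ec
    # swap the names, keeping the original around for recovery
    ceph osd pool rename default.rgw.buckets default.rgw.buckets.old
    ceph osd pool rename default.rgw.buckets.ec default.rgw.buckets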

Third, what about the default glance, gnocchi, and cinder-ceph pools? Do I need to convert those to EC pools as well? I would think so, as I assume they are currently using replication.
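
(As a quick way to confirm that assumption, ceph osd dump lists each pool as either replicated or erasure:)

    # replicated pools print "replicated size N"; EC pools print "erasure"
    ceph osd dump | grep "^pool"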

Fourth, once I have converted the RGW pools to use EC, do I need to change anything in the other service containers, given that I will eventually be renaming the EC pools back to their default install names?

Lastly, I am doing all of this from inside the ceph-mon/0* container; is that a workable approach? I looked at the Juju actions for the ceph-mon charm and was not sure how to accomplish all the necessary steps correctly.
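
For what it is worth, these Juju commands should at least show what the charm exposes; the exact action names and parameters would need checking against the output (list-pools is my guess at one of them):

    # list the actions the ceph-mon charm provides
    juju actions ceph-mon
    # example invocation against the leader unit
    juju run-action ceph-mon/0 list-pools --wait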

I appreciate any guidance.
Greg