CEPH RadosGW – Choosing Placement Targets

In CEPH object storage you can create placement targets, but the documentation on how to set them up is rather poor.

 


What are placement targets?

With placement targets you can create multiple pools of storage to be used by the same RADOS Gateways. You can define a default placement target and allow some users to write data to another placement target (another data pool). The default placement target can be specified per user.

Our use case

We are using CEPH v12.2.5 (Luminous) and run S3 object storage via RadosGW with the default pools. We want to add a new data pool and allow users to choose which pool they want to use. Right now we have default.rgw.buckets.data, and we want to add another pool, ssd.rgw.buckets.data, to store data there: fast storage next to the default slow storage.

Let's start

At this point you need a healthy cluster, a working RGW with default settings, and CRUSH rules defined. Try this how-to on your test cluster first!
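A quick sanity check before changing anything (all of these are read-only commands):

ceph -s
ceph osd tree
ceph osd crush rule ls
radosgw-admin zonegroup list
radosgw-admin zone list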

Current ceph osd tree:

ID  CLASS WEIGHT  TYPE NAME            STATUS REWEIGHT PRI-AFF
 -1       5.59684 root default
 -3       0.46689     host ceph-test-1
  0   hdd 0.46689         osd.0            up  1.00000 1.00000
 -7       0.47600     host ceph-test-2
  2   hdd 0.47600         osd.2            up  1.00000 1.00000
 -5       0.93079     host ceph-test-3
  1   hdd 0.93079         osd.1            up  1.00000 1.00000
 -9       0.93079     host ceph-test-4
  3   hdd 0.93079         osd.3            up  1.00000 1.00000
-30       0.93079     host ceph-test-5
  4   ssd 0.93079         osd.4            up  1.00000 1.00000
-62       0.93079     host ceph-test-7
  5   ssd 0.93079         osd.5            up  1.00000 1.00000
-52       0.93079     host ceph-test-6
  6   ssd 0.93079         osd.6            up  1.00000 1.00000

Current crush rules:

rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}

rule ssd-rule {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default class ssd
        step chooseleaf firstn 0 type host
        step emit
}
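For reference, on Luminous you do not have to write the SSD rule by hand: because the OSDs already carry the ssd device class (see the tree above), an equivalent rule can be generated in one step. This is just a sketch, assuming the root is called default and the failure domain is host:

ceph osd crush rule create-replicated ssd-rule default host ssd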

 

We need to define placement targets in the zone and the zonegroup. But first we need to create a realm and a zonegroup, and place the default zone into that zonegroup.

First we create a realm called "firstrealm" and make it the default:

radosgw-admin realm create --rgw-realm=firstrealm --default

 

Next we create a zonegroup and place it into our "firstrealm". This zonegroup will be the new default and the master. If you already have a zonegroup (you can list them with "radosgw-admin zonegroup list"), skip this step.

radosgw-admin zonegroup create --rgw-zonegroup=default --master --default

 

Then edit the zonegroup and add a "placement_targets" entry for SSD:

radosgw-admin zonegroup get > myzonegroup
{
    "name": "default",
    "api_name": "",
    "is_master": "true",
    "endpoints": [],
    "hostnames": [],
    "master_zone": "",
    "zones": [
        {
            "name": "default",
            "endpoints": [],
            "log_meta": "false",
            "log_data": "false",
            "bucket_index_max_shards": 0
        }
    ],
    "placement_targets": [
        {
            "name": "default-placement",
            "tags": ["default-placement"]
        },
        {
            "name": "ssd",
            "tags": ["ssd"]
        }
    ],
    "default_placement": "default-placement"
}
radosgw-admin zonegroup set < myzonegroup
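If you prefer not to edit the JSON by hand, the placement target can also be added with the radosgw-admin placement subcommands; a sketch, assuming the zonegroup is really called default:

radosgw-admin zonegroup placement add --rgw-zonegroup=default --placement-id=ssd --tags=ssd
radosgw-admin zonegroup get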

 

Now we can edit the zone map.

radosgw-admin zone get > ourzonemap
{
    "domain_root": ".rgw",
    "control_pool": ".rgw.control",
    "gc_pool": ".rgw.gc",
    "log_pool": ".log",
    "intent_log_pool": ".intent-log",
    "usage_log_pool": ".usage",
    "user_keys_pool": ".users",
    "user_email_pool": ".users.email",
    "user_swift_pool": ".users.swift",
    "user_uid_pool": ".users.uid",
    "system_key": {
        "access_key": "",
        "secret_key": ""
    },
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": "default.rgw.buckets.index",
                "data_pool": "default.rgw.buckets.data",
                "data_extra_pool": "default.rgw.buckets.non-ec"
            }
        },
        {
            "key": "ssd",
            "val": {
                "index_pool": "ssd.rgw.buckets.index",
                "data_pool": "ssd.rgw.buckets.data",
                "data_extra_pool": "ssd.rgw.buckets.extra"
            }
        }
    ]
}
radosgw-admin zone set < ourzonemap
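The same applies to the zone map: instead of editing the file, the pools can be attached to the placement target directly. Again only a sketch, using the pool names from this article (the pools themselves are created in the next step):

radosgw-admin zone placement add --rgw-zone=default --placement-id=ssd --index-pool=ssd.rgw.buckets.index --data-pool=ssd.rgw.buckets.data --data-extra-pool=ssd.rgw.buckets.extra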

Now we need to create the three new pools and assign them to the ssd-rule CRUSH rule (on Luminous the pool option is crush_rule and takes the rule name).

ceph osd pool create ssd.rgw.buckets.index 8
ceph osd pool create ssd.rgw.buckets.data 32
ceph osd pool create ssd.rgw.buckets.extra 8
ceph osd pool set ssd.rgw.buckets.index crush_rule ssd-rule
ceph osd pool set ssd.rgw.buckets.data crush_rule ssd-rule
ceph osd pool set ssd.rgw.buckets.extra crush_rule ssd-rule
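Luminous also warns about pools that have no application assigned, so it does not hurt to tag the new pools for RGW:

ceph osd pool application enable ssd.rgw.buckets.index rgw
ceph osd pool application enable ssd.rgw.buckets.data rgw
ceph osd pool application enable ssd.rgw.buckets.extra rgw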

Now commit the changes so RadosGW picks them up. The command has to finish without an error; if it fails, fix the configuration before continuing.

radosgw-admin period update --commit
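To double-check, the committed configuration should now contain the ssd placement target:

radosgw-admin period get
radosgw-admin zonegroup get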

Test it!

Create a new S3 user:

radosgw-admin user create --uid=testuser --display-name="testuser"

For test purposes, set default_placement for user "testuser" to "ssd":

radosgw-admin metadata get user:testuser > testuser
{
    "key": "user:testuser",
    [..]
    "default_placement": "ssd",
    "placement_tags": ["default-placement", "ssd"],
    [...]
}
radosgw-admin metadata put user:testuser < testuser
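Verify that the change was really applied; the output should now show "default_placement": "ssd":

radosgw-admin user info --uid=testuser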

 

Now you can test it, for example with the Radula S3 client. The user will be able to create buckets and upload files, and the data will be stored in the ssd data pool!
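To confirm on the Ceph side that the objects really end up on the SSDs, check the bucket placement and list the data pool directly (the bucket name testbucket is just an example):

radosgw-admin bucket stats --bucket=testbucket
rados -p ssd.rgw.buckets.data ls
ceph df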

Source 1: https://blog.thoughtsandbeers.com/2015/11/06/Ceph-RadosGW-Placement-Targets/ (an older post, suitable for CEPH versions before Jewel)
Source 2: http://docs.ceph.com/docs/jewel/radosgw/multisite/
