
Commit

docs: update
hhk7734 committed Feb 14, 2025
1 parent 3799129 commit eb07dda
Showing 2 changed files with 48 additions and 2 deletions.
30 changes: 29 additions & 1 deletion docs/mlops/storage/ceph/osd.mdx
@@ -174,8 +174,36 @@ kubectl rook-ceph ceph osd df osd.<OSDID>
kubectl rook-ceph rook purge-osd <OSDID> --force
```

After the OSD has been removed, verify that the cluster status is **HEALTH_OK**. When the `failureDomain` is `host`, remove at most one Node at a time; when it is `osd`, remove only one OSD at a time, and wait for all operations to finish before moving on to the next one.

```shell
kubectl rook-ceph ceph status
```
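
If you are unsure which failure domain applies, it can be read from the pool's CRUSH rule (the `type` used in the `chooseleaf` step); a minimal sketch, with `<rule-name>` as a placeholder:

```shell
kubectl rook-ceph ceph osd crush rule dump <rule-name>
```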

:::tip

To speed up backfilling, you can apply the settings below. The `high_recovery_ops` mClock profile gives recovery and backfill operations a larger share of each OSD's I/O capacity, and raising `osd_max_backfills` allows more concurrent backfill operations per OSD.

```shell
kubectl rook-ceph ceph config set osd osd_mclock_profile high_recovery_ops
```

```shell
kubectl rook-ceph ceph tell 'osd.*' injectargs '--osd-max-backfills 20'
```
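
You can verify the value applied to a specific OSD: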

```shell
kubectl rook-ceph ceph config show osd.<OSDID> osd_max_backfills
```
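
Similarly, the active mClock profile can be checked; a sketch using `ceph config get`:

```shell
kubectl rook-ceph ceph config get osd osd_mclock_profile
```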

Once the work is finished, revert to the original settings.

```shell
kubectl rook-ceph ceph tell 'osd.*' injectargs '--osd-max-backfills 1'
```

```shell
kubectl rook-ceph ceph config rm osd osd_mclock_profile
```

:::
20 changes: 19 additions & 1 deletion docs/mlops/storage/ceph/pg.mdx
@@ -12,9 +12,23 @@ keywords:

- [Data flow](/docs/mlops/storage/ceph/osd#data-흐름)

Around 100 PGs per OSD is recommended, and the PG count (**pg_num**) should be set to a power of two.

```shell
            (OSDs) * 100
Total PGs = ------------
              pool size
```

The pool size is the number of replicas, or K+M for an EC (erasure-coded) pool. For a pool with 200 OSDs and 3 replicas, the PG count can be calculated as `(200 * 100) / 3 = 6667`, which rounds up to the next power of two: `8192`.
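
If `pg_num` is managed manually rather than by the autoscaler, the calculated value could be applied as shown below; a sketch, with `<pool>` as a placeholder:

```shell
kubectl rook-ceph ceph osd pool set <pool> pg_num 8192
```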

## PG AutoScale

:::info[Reference]

- [Ceph / Operations / Placement Groups # Autoscaling placement groups](https://docs.ceph.com/en/latest/rados/operations/placement-groups/#autoscaling-placement-groups)

:::

```shell
kubectl rook-ceph ceph osd pool set <pool> <option> <value>
```

@@ -28,6 +42,10 @@ kubectl rook-ceph ceph osd pool set <pool> <option> <value>
- `bulk`: `<bool>`
  - `true`: indicates a pool that is expected to be large.
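
For example, a pool expected to hold most of the cluster's data could be flagged as bulk; a sketch, with `<pool>` as a placeholder:

```shell
kubectl rook-ceph ceph osd pool set <pool> bulk true
```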

```shell
kubectl rook-ceph ceph config set global mon_target_pg_per_osd 100
```
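
This adjusts `mon_target_pg_per_osd`, the per-OSD PG count that the autoscaler targets when sizing pools.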

```shell
kubectl rook-ceph ceph osd pool autoscale-status
```
