diff --git a/docs/demo.md b/docs/demo.md
deleted file mode 100644
index 4286a44..0000000
--- a/docs/demo.md
+++ /dev/null
@@ -1,93 +0,0 @@
-# Use Case: Sharding a User Collection by `userId`
-
-## 1. **Enable Sharding on a Database**
-
-First, we will create a database called `testdb` and enable sharding on it.
-
-1. Connect to the `mongos` instance:
-
-   ```bash
-   kubectl exec -it $(kubectl get pods -l app=mongos -o jsonpath="{.items[0].metadata.name}") -- mongosh --port 27100
-   ```
-
-2. Enable sharding on the `testdb` database:
-
-   ```javascript
-   sh.enableSharding("testdb")
-   ```
-
-## 2. **Create a Collection with a Shard Key**
-
-Next, we will create a collection called `users` and shard it on the `userId` field, either with a ranged shard key or, alternatively, with a hashed one.
-
-```javascript
-db.createCollection("users")
-sh.shardCollection("testdb.users", { "userId": 1 })
-// Or, alternatively, shard on a hashed key:
-db.users.createIndex({ "userId": "hashed" })
-sh.shardCollection("testdb.users", { "userId": "hashed" })
-
-// To verify that the indexes were created:
-db.users.getIndexes()
-```
-
-## 3. **Generate a Large Dataset**
-
-Now, we’ll generate a large dataset of users to observe the sharding behavior. We’ll use a simple loop to insert documents whose `userId` values span a wide range, giving the balancer chunks to spread across the shards.
-
-In the `mongos` shell, run the following script to insert 100,000 user documents:
-
-```javascript
-let batch = [];
-for (let i = 1; i <= 100000; i++) {
-  batch.push({ userId: i, name: "User " + i, age: Math.floor(Math.random() * 50) + 18 });
-  if (batch.length === 1000) { // Insert in batches of 1,000 documents
-    db.users.insertMany(batch);
-    batch = [];
-  }
-}
-if (batch.length > 0) {
-  db.users.insertMany(batch); // Insert the remaining documents
-}
-```
-
-This will insert 100,000 users into the `users` collection with random ages. The `userId` field is the shard key, so it determines how the documents are distributed across your shards.
-
-## 4. **Check Shard Distribution**
-
-Once the dataset is inserted, you can verify how the chunks have been distributed across the shards. Use the following command in the `mongos` shell:
-
-```javascript
-db.adminCommand({ balancerStatus: 1 })
-```
-
-This will show whether the balancer is actively distributing chunks across shards.
-
-Next, you can check the chunk distribution for the `users` collection:
-
-```javascript
-db.printShardingStatus()
-db.users.getShardDistribution()
-```
-
-Look for the `testdb.users` section in the output, which displays the chunk distribution across your shards. Each chunk represents a range of `userId` values, and you should see how many chunks are assigned to each shard.
-
-## 5. **Test Queries to Ensure Sharding Works**
-
-You can run a few test queries to see how they are routed across the shards.
-
-For example, to query users within a specific `userId` range and see how MongoDB handles it across shards:
-
-```javascript
-db.users.find({ userId: { $gte: 1000, $lt: 2000 } }).explain("executionStats")
-```
-
-The output will show how many shards were involved in the query.
-
-## Summary of Steps
-
-1. **Enable sharding** on the `testdb` database.
-2. **Shard the `users` collection** by `userId`.
-3. **Insert 100,000 users** with `userId` values.
-4. **Check chunk distribution** using `db.printShardingStatus()`.
-5. **Run queries** to observe how the data is distributed across shards.
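Beyond `db.printShardingStatus()` and `getShardDistribution()`, per-shard chunk counts can also be read straight from the cluster metadata through `mongos`. A minimal sketch, assuming the `testdb.users` collection from the demo above and a pre-5.0 MongoDB where `config.chunks` is keyed by namespace (newer releases key chunks by collection UUID instead):

```javascript
// Run in mongosh connected to mongos (port 27100 in this setup).
// Groups the cluster's chunk metadata by owning shard for testdb.users.
db.getSiblingDB("config").chunks.aggregate([
  { $match: { ns: "testdb.users" } },                // pre-5.0: chunk docs carry the namespace
  { $group: { _id: "$shard", chunks: { $sum: 1 } } },
  { $sort: { chunks: -1 } }
])
```

With a monotonically increasing ranged key, expect the counts to stay skewed until the balancer has finished migrating chunks; a hashed key should look roughly even almost immediately.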
diff --git a/docs/run.md b/docs/run.md
deleted file mode 100644
index 20c8b8f..0000000
--- a/docs/run.md
+++ /dev/null
@@ -1,78 +0,0 @@
-# Run docs
-
-## Configs
-
-kubectl apply -f config-server/config-server-statefulset.yaml
-
-kubectl exec -it config-server-0 -- mongosh --port 27201
-
-rs.initiate({
-  _id: "cfgrs",
-  configsvr: true,
-  members: [
-    { _id: 0, host: "config-server-0.config-server:27201" },
-    { _id: 1, host: "config-server-1.config-server:27202" },
-    { _id: 2, host: "config-server-2.config-server:27203" }
-  ]
-})
-
-## Shards
-
-kubectl apply -f shards/shard1-statefulset.yaml
-kubectl exec -it shard1-0 -- mongosh --port 27301
-rs.initiate({
-  _id: "shard1rs",
-  members: [
-    { _id: 0, host: "shard1-0.shard1:27301" },
-    { _id: 1, host: "shard1-1.shard1:27302" },
-    { _id: 2, host: "shard1-2.shard1:27303" }
-  ]
-})
-
-kubectl apply -f shards/shard2-statefulset.yaml
-kubectl exec -it shard2-0 -- mongosh --port 27401
-rs.initiate({
-  _id: "shard2rs",
-  members: [
-    { _id: 0, host: "shard2-0.shard2:27401" },
-    { _id: 1, host: "shard2-1.shard2:27402" },
-    { _id: 2, host: "shard2-2.shard2:27403" }
-  ]
-})
-
-kubectl apply -f shards/shard3-statefulset.yaml
-kubectl exec -it shard3-0 -- mongosh --port 27501
-rs.initiate({
-  _id: "shard3rs",
-  members: [
-    { _id: 0, host: "shard3-0.shard3:27501" },
-    { _id: 1, host: "shard3-1.shard3:27502" },
-    { _id: 2, host: "shard3-2.shard3:27503" }
-  ]
-})
-
-kubectl apply -f shards/shard4-statefulset.yaml
-kubectl exec -it shard4-0 -- mongosh --port 27601
-rs.initiate({
-  _id: "shard4rs",
-  members: [
-    { _id: 0, host: "shard4-0.shard4:27601" },
-    { _id: 1, host: "shard4-1.shard4:27602" },
-    { _id: 2, host: "shard4-2.shard4:27603" }
-  ]
-})
-
-## Mongos
-
-kubectl exec -it shard1-0 -- mongosh --port 27301 --eval "db.serverStatus().connections"
-
-kubectl exec -it $(kubectl get pods -l app=mongos -o jsonpath="{.items[0].metadata.name}") -- mongosh --port 27100
-
-sh.addShard("shard1rs/shard1-0.shard1:27301,shard1-1.shard1:27302")
-sh.addShard("shard2rs/shard2-0.shard2:27401,shard2-1.shard2:27402,shard2-2.shard2:27403")
-sh.addShard("shard3rs/shard3-0.shard3:27501,shard3-1.shard3:27502,shard3-2.shard3:27503")
-sh.addShard("shard4rs/shard4-0.shard4:27601,shard4-1.shard4:27602")
-
-## Case issues
-
-mongosh --host config-server-0.config-server.default.svc.cluster.local:27201
diff --git a/docs/test.md b/docs/test.md
deleted file mode 100644
index 6f4efc0..0000000
--- a/docs/test.md
+++ /dev/null
@@ -1,27 +0,0 @@
-### Step 1: **Deploy a Test Pod with Networking Tools**
-
-You can run a simple `busybox` or `alpine` pod that includes `telnet` or `nc`. Here’s how to create a test pod:
-
-```bash
-kubectl run test-network-a --image=busybox --restart=Never -- sh -c "sleep 3600"
-```
-
-### Step 2: **Exec into the Test Pod and Test Connectivity**
-
-Once the pod is running, exec into it and check connectivity from there:
-
-```bash
-kubectl exec -it test-network-a -- sh
-```
-
-Inside the pod, run the following command to test connectivity to `config-server-2`:
-
-```bash
-telnet config-server-2.config-server.default.svc.cluster.local 27201
-```
-
-or
-
-```bash
-nc -zv config-server-2.config-server.default.svc.cluster.local 27201
-```
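One step the run notes leave implicit is confirming that each replica set actually elected a primary before it is registered with `sh.addShard`. A minimal sketch, assuming the shard1 session from the run docs (host names and ports as defined there):

```javascript
// Run in the mongosh session opened against shard1-0 (port 27301),
// after rs.initiate() has been issued for shard1rs.
// Prints each member's host together with its current replica-set state
// (PRIMARY, SECONDARY, STARTUP2, ...).
rs.status().members.forEach(m => print(m.name, m.stateStr));
```

Once every shard reports a primary and the `sh.addShard` calls have been run, `sh.status()` on `mongos` should list all four shards; if it does not, the connectivity checks from `test.md` are a reasonable first debugging step.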