Commit 363d884

Merge pull request #5664 from saintube/patch-4
Fix typo in KEP 4671
2 parents e6986ed + c0d94b6

File tree

1 file changed: +6 −6 lines changed

  • keps/sig-scheduling/4671-gang-scheduling


keps/sig-scheduling/4671-gang-scheduling/README.md

Lines changed: 6 additions & 6 deletions
@@ -281,8 +281,8 @@ spec:
 ```yaml
 apiVersion: v1
 kind: Pod
-name:
-jobset-job-1-abc123
+metadata:
+  name: jobset-job-1-abc123
 spec:
   ...
   workload:
@@ -293,15 +293,15 @@ spec:
 
 ```
 
-We decided for this option because it is more succint and makes the role of a pod clear just
+We decided for this option because it is more succinct and makes the role of a pod clear just
 from inspecting the pod (and simple/efficient to group).
 We acknowledge the fact that this option may require additional minor changes in the controllers
 to adopt this pattern (e.g. for LeaderWorkerSet we will need to populate the pod template
 similarly that we currently populate the labels).
 
-The primary alternative we consider was to introduce the the `PodGroupSelector` on each `PodGroup`
+The primary alternative we consider was to introduce the `PodGroupSelector` on each `PodGroup`
 to identify pods belonging to it. However, with this pattern:
-- there are additional corner cases (e.g. a pod links to a workload but none of its PodGroups matching
+- there are additional corner cases (e.g. a pod links to a workload but none of its PodGroups match
   that pod)
 - for replicated gang, we can't use the full label selector, but rather support specifying only the
   label key, similar to `MatchLabelKeys` in pod affinity
@@ -438,7 +438,7 @@ For `Beta`, we want to also touch requirements (2) and (3) by extending the sche
 a new dedicated phase (tentatively called Workload). In that phase,
 kube-scheduler will be looking at all pods from a gang (part of `Workload`) and compute the placement
 for all of these pods in a single scheduling cycle. Those placements will be stored only in-memory and
-block the required resources from scheduling. Tentively we plan to use `NominatedNodeName` field for it.
+block the required resources from scheduling. Tentatively we plan to use `NominatedNodeName` field for it.
 After that, pods will go through regular pod-by-pod scheduling phases (including Filter and Score)
 with a nomination as a form of validation the proposed placement and execution of this placement decision.
 Therefore we expect the order of processing pods won't ever be important, but all-or-nothing nature of
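For context on the first hunk: in a Kubernetes manifest the pod's `name` belongs under `metadata`, which is what the corrected KEP example now shows. Reassembled from the hunk (the elided `spec` fields are kept as `...` from the excerpt), the fixed snippet reads roughly:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: jobset-job-1-abc123  # name now correctly nested under metadata
spec:
  ...                        # remaining spec fields elided in the KEP excerpt
  workload:
```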

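The last hunk mentions the `NominatedNodeName` field. This is the existing `status.nominatedNodeName` field on a Pod, today used to record preemption nominations; the KEP tentatively plans to reuse it so the in-memory gang placement blocks the required resources. A minimal illustrative sketch, not taken from the KEP (the node name is made up):

```yaml
# Illustrative only: a gang member's status once the proposed Workload phase
# has nominated a node for it; "node-a" is a hypothetical node name.
status:
  nominatedNodeName: node-a
```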