src/cohere/base_client.py (+40 lines: 40 additions, 0 deletions)
@@ -128,6 +128,7 @@ def chat_stream(
         max_tokens: typing.Optional[int] = OMIT,
         k: typing.Optional[int] = OMIT,
         p: typing.Optional[float] = OMIT,
+        seed: typing.Optional[float] = OMIT,
         frequency_penalty: typing.Optional[float] = OMIT,
         presence_penalty: typing.Optional[float] = OMIT,
         raw_prompting: typing.Optional[bool] = OMIT,
@@ -210,6 +211,8 @@ def chat_stream(
             - p: typing.Optional[float]. Ensures that only the most likely tokens, with total probability mass of `p`, are considered for generation at each step. If both `k` and `p` are enabled, `p` acts after `k`.
               Defaults to `0.75`. min value of `0.01`, max value of `0.99`.

+            - seed: typing.Optional[float]. If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed.
+
             - frequency_penalty: typing.Optional[float]. Defaults to `0.0`, min value of `0.0`, max value of `1.0`.

               Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.
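The "same seed, same result" behavior promised by the new docstring can be illustrated with a plain seeded RNG. This sketch is purely illustrative and does not touch the Cohere backend; `sample_tokens` and its vocabulary are made up for the demonstration:

```python
import random

# Illustrative only: two samplers constructed with the same seed make
# identical choices, which is the reproducibility `seed` aims for
# (the real backend offers this on a best-effort basis).
def sample_tokens(seed: int, vocab: list, n: int) -> list:
    rng = random.Random(seed)
    return [rng.choice(vocab) for _ in range(n)]

vocab = ["the", "cat", "sat", "on", "mat"]
assert sample_tokens(42, vocab, 5) == sample_tokens(42, vocab, 5)
```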
@@ -353,6 +356,8 @@ def chat_stream(
             _request["k"] = k
         if p is not OMIT:
             _request["p"] = p
+        if seed is not OMIT:
+            _request["seed"] = seed
         if frequency_penalty is not OMIT:
             _request["frequency_penalty"] = frequency_penalty
         if presence_penalty is not OMIT:
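The guard pattern in this hunk relies on the SDK's `OMIT` sentinel, which distinguishes "argument not passed" from an explicit `None`. A minimal self-contained sketch of the idea, assuming a sentinel built from `Ellipsis` (the helper name `build_request` is hypothetical):

```python
import typing

# Sentinel meaning "the caller did not pass this argument at all".
OMIT = typing.cast(typing.Any, ...)

def build_request(*, p: typing.Optional[float] = OMIT,
                  seed: typing.Optional[float] = OMIT) -> dict:
    _request: dict = {}
    # Only keys the caller actually supplied end up in the JSON body,
    # so the server can apply its own defaults for everything else.
    if p is not OMIT:
        _request["p"] = p
    if seed is not OMIT:
        _request["seed"] = seed
    return _request

assert build_request(seed=42) == {"seed": 42}   # omitted p stays out
assert build_request(p=None) == {"p": None}     # explicit None is still sent
```

Using identity (`is not OMIT`) rather than equality is what lets `None`, `0`, and other falsy values pass through to the request untouched.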
@@ -420,6 +425,7 @@ def chat(
         max_tokens: typing.Optional[int] = OMIT,
         k: typing.Optional[int] = OMIT,
         p: typing.Optional[float] = OMIT,
+        seed: typing.Optional[float] = OMIT,
         frequency_penalty: typing.Optional[float] = OMIT,
         presence_penalty: typing.Optional[float] = OMIT,
         raw_prompting: typing.Optional[bool] = OMIT,
@@ -502,6 +508,8 @@ def chat(
             - p: typing.Optional[float]. Ensures that only the most likely tokens, with total probability mass of `p`, are considered for generation at each step. If both `k` and `p` are enabled, `p` acts after `k`.
               Defaults to `0.75`. min value of `0.01`, max value of `0.99`.

+            - seed: typing.Optional[float]. If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed.
+
             - frequency_penalty: typing.Optional[float]. Defaults to `0.0`, min value of `0.0`, max value of `1.0`.

               Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.
             - temperature: typing.Optional[float]. A non-negative float that tunes the degree of randomness in generation. Lower temperatures mean less random generations. See [Temperature](/temperature-wiki) for more details.
               Defaults to `0.75`, min value of `0.0`, max value of `5.0`.

+            - seed: typing.Optional[float]. If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed.
+
             - preset: typing.Optional[str]. Identifier of a custom preset. A preset is a combination of parameters, such as prompt, temperature etc. You can create presets in the [playground](https://dashboard.cohere.ai/playground/generate).
               When a preset is specified, the `prompt` parameter becomes optional, and any included parameters will override the preset's parameters.
             - temperature: typing.Optional[float]. A non-negative float that tunes the degree of randomness in generation. Lower temperatures mean less random generations. See [Temperature](/temperature-wiki) for more details.
               Defaults to `0.75`, min value of `0.0`, max value of `5.0`.

+            - seed: typing.Optional[float]. If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed.
+
             - preset: typing.Optional[str]. Identifier of a custom preset. A preset is a combination of parameters, such as prompt, temperature etc. You can create presets in the [playground](https://dashboard.cohere.ai/playground/generate).
               When a preset is specified, the `prompt` parameter becomes optional, and any included parameters will override the preset's parameters.
@@ -917,6 +935,8 @@ def generate(
             _request["truncate"] = truncate
         if temperature is not OMIT:
             _request["temperature"] = temperature
+        if seed is not OMIT:
+            _request["seed"] = seed
         if preset is not OMIT:
             _request["preset"] = preset
         if end_sequences is not OMIT:
@@ -1608,6 +1628,7 @@ async def chat_stream(
         max_tokens: typing.Optional[int] = OMIT,
         k: typing.Optional[int] = OMIT,
         p: typing.Optional[float] = OMIT,
+        seed: typing.Optional[float] = OMIT,
         frequency_penalty: typing.Optional[float] = OMIT,
         presence_penalty: typing.Optional[float] = OMIT,
         raw_prompting: typing.Optional[bool] = OMIT,
@@ -1690,6 +1711,8 @@ async def chat_stream(
             - p: typing.Optional[float]. Ensures that only the most likely tokens, with total probability mass of `p`, are considered for generation at each step. If both `k` and `p` are enabled, `p` acts after `k`.
               Defaults to `0.75`. min value of `0.01`, max value of `0.99`.

+            - seed: typing.Optional[float]. If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed.
+
             - frequency_penalty: typing.Optional[float]. Defaults to `0.0`, min value of `0.0`, max value of `1.0`.

               Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.
@@ -1833,6 +1856,8 @@ async def chat_stream(
             _request["k"] = k
         if p is not OMIT:
             _request["p"] = p
+        if seed is not OMIT:
+            _request["seed"] = seed
         if frequency_penalty is not OMIT:
             _request["frequency_penalty"] = frequency_penalty
         if presence_penalty is not OMIT:
@@ -1900,6 +1925,7 @@ async def chat(
         max_tokens: typing.Optional[int] = OMIT,
         k: typing.Optional[int] = OMIT,
         p: typing.Optional[float] = OMIT,
+        seed: typing.Optional[float] = OMIT,
         frequency_penalty: typing.Optional[float] = OMIT,
         presence_penalty: typing.Optional[float] = OMIT,
         raw_prompting: typing.Optional[bool] = OMIT,
@@ -1982,6 +2008,8 @@ async def chat(
             - p: typing.Optional[float]. Ensures that only the most likely tokens, with total probability mass of `p`, are considered for generation at each step. If both `k` and `p` are enabled, `p` acts after `k`.
               Defaults to `0.75`. min value of `0.01`, max value of `0.99`.

+            - seed: typing.Optional[float]. If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed.
+
             - frequency_penalty: typing.Optional[float]. Defaults to `0.0`, min value of `0.0`, max value of `1.0`.

               Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.
             - temperature: typing.Optional[float]. A non-negative float that tunes the degree of randomness in generation. Lower temperatures mean less random generations. See [Temperature](/temperature-wiki) for more details.
               Defaults to `0.75`, min value of `0.0`, max value of `5.0`.

+            - seed: typing.Optional[float]. If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed.
+
             - preset: typing.Optional[str]. Identifier of a custom preset. A preset is a combination of parameters, such as prompt, temperature etc. You can create presets in the [playground](https://dashboard.cohere.ai/playground/generate).
               When a preset is specified, the `prompt` parameter becomes optional, and any included parameters will override the preset's parameters.
             - temperature: typing.Optional[float]. A non-negative float that tunes the degree of randomness in generation. Lower temperatures mean less random generations. See [Temperature](/temperature-wiki) for more details.
               Defaults to `0.75`, min value of `0.0`, max value of `5.0`.

+            - seed: typing.Optional[float]. If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed.
+
             - preset: typing.Optional[str]. Identifier of a custom preset. A preset is a combination of parameters, such as prompt, temperature etc. You can create presets in the [playground](https://dashboard.cohere.ai/playground/generate).
               When a preset is specified, the `prompt` parameter becomes optional, and any included parameters will override the preset's parameters.