mbpp: OSError: [WinError 6] / AttributeError: Can't pickle local object 'execution.<locals>._execution' #1630
-
The API side is fine, because the prediction data is all there: {
"0": {
"origin_prompt": "You are an expert Python programmer, and here is your task: Write a function to find the similar elements from the given two tuple lists. Your code should pass these tests:\n\n assert similar_elements((3, 4, 5, 6),(5, 7, 4, 10)) == (4, 5)\n assert similar_elements((1, 2, 3, 4),(5, 4, 3, 7)) == (3, 4) \n assert similar_elements((11, 12, 14, 13),(17, 15, 14, 13)) == (13, 14) \n\n[BEGIN]\n 'def similar_elements(test_tup1, test_tup2):\r\n res = tuple(set(test_tup1) & set(test_tup2))\r\n return (res)' \n[DONE] \n\n \nYou are an expert Python programmer, and here is your task: Write a python function to identify non-prime numbers. Your code should pass these tests:\n\n assert is_not_prime(2) == False \n assert is_not_prime(10) == True \n assert is_not_prime(35) == True \n\n[BEGIN]\n 'import math\r\ndef is_not_prime(n):\r\n result = False\r\n for i in range(2,int(math.sqrt(n)) + 1):\r\n if n % i == 0:\r\n result = True\r\n return result' \n[DONE] \n\n \nYou are an expert Python programmer, and here is your task: Write a function to find the largest integers from a given list of numbers using heap queue algorithm. Your code should pass these tests:\n\n assert heap_queue_largest( [25, 35, 22, 85, 14, 65, 75, 22, 58],3)==[85, 75, 65] \n assert heap_queue_largest( [25, 35, 22, 85, 14, 65, 75, 22, 58],2)==[85, 75] \n assert heap_queue_largest( [25, 35, 22, 85, 14, 65, 75, 22, 58],5)==[85, 75, 65, 58, 35] \n\n[BEGIN]\n 'import heapq as hq\r\ndef heap_queue_largest(nums,n):\r\n largest_nums = hq.nlargest(n, nums)\r\n return largest_nums' \n[DONE] \n\n \nYou are an expert Python programmer, and here is your task: Write a python function to remove first and last occurrence of a given character from the string. Your code should pass these tests:\n\n assert remove_Occ(\"hello\",\"l\") == \"heo\"\nassert remove_Occ(\"abcda\",\"a\") == \"bcd\"\nassert remove_Occ(\"PHP\",\"P\") == \"H\" \n\n[BEGIN]\n",
"prediction": " Here's the solution to remove the first and last occurrence of a given character from the string:\n```python\ndef remove_Occ(s, c):\n if c in s:\n first_index = s.find(c)\n last_index = s.rfind(c)\n if first_index == last_index:\n return s.replace(c, \"\")\n else:\n return s[:first_index] + s[first_index+1:last_index] + s[last_index+1:]\n else:\n return s\n```\nThis function first checks if the given character is present in the string. If it is, it finds the index of the first and last occurrence of the character in the string. If the first and last indices are the same, it simply removes all occurrences of the character from the string. Otherwise, it removes the first and last occurrences and returns the modified string. If the character is not present in the string, the original string is returned.",
"gold": "assert remove_Occ(\"hello\",\"l\") == \"heo\"\nassert remove_Occ(\"abcda\",\"a\") == \"bcd\"\nassert remove_Occ(\"PHP\",\"P\") == \"H\""
}, request completed. D:\proj\python\OpenCompass2\outputs\api_openbuddy\20241022_094149\logs\infer\chatglm_pro\mbpp.out
Here is the model wrapper I am using:
class OpenBuddy(BaseAPIModel):
"""Model wrapper around ZhiPuAI.
Args:
path (str): The name of OpenAI's model.
key (str): Authorization key.
query_per_second (int): The maximum queries allowed per second
between two consecutive calls of the API. Defaults to 1.
max_seq_len (int): Unused here.
meta_template (Dict, optional): The model's meta prompt
template if needed, in case the requirement of injecting or
wrapping of any meta instructions.
retry (int): Number of retires if the API call fails. Defaults to 2.
"""
def __init__(
self,
path: str,
key: str,
query_per_second: int = 2,
max_seq_len: int = 2048,
meta_template: Optional[Dict] = None,
retry: int = 2,
url=""
):
super().__init__(path=path,
max_seq_len=max_seq_len,
query_per_second=query_per_second,
meta_template=meta_template,
retry=retry)
self.model = path
self.client = OpenAI(
base_url=url,
api_key=key,
)
def generate(
self,
inputs: List[PromptType],
max_out_len: int = 512,
) -> List[str]:
"""Generate results given a list of inputs.
Args:
inputs (List[PromptType]): A list of strings or PromptDicts.
The PromptDict should be organized in OpenCompass'
API format.
max_out_len (int): The maximum length of the output.
Returns:
List[str]: A list of generated strings.
"""
with ThreadPoolExecutor() as executor:
results = list(
executor.map(self._generate, inputs,
[max_out_len] * len(inputs)))
self.flush()
return results
def _generate(
self,
input: PromptType,
max_out_len: int = 512,
) -> str:
"""Generate results given an input.
Args:
            input (PromptType): A string or PromptDict.
The PromptDict should be organized in OpenCompass'
API format.
max_out_len (int): The maximum length of the output.
Returns:
str: The generated string.
"""
assert isinstance(input, (str, PromptList))
if isinstance(input, str):
messages = [{'role': 'user', 'content': input}]
else:
messages = []
for item in input:
msg = {'content': item['prompt']}
if item['role'] == 'HUMAN':
msg['role'] = 'user'
elif item['role'] == 'BOT':
# msg['role'] = 'assistant'
msg['role'] = 'system'
messages.append(msg)
data = {'model': self.model, 'prompt': messages}
max_num_retries = 0
while max_num_retries < self.retry:
self.acquire()
max_num_retries += 1
print("开始询问模型... connect_192_openai ")
try:
completion = self.client.chat.completions.create(
model=self.model,
messages=messages,
temperature=0.8,
                    max_tokens=5000  # max_tokens caps the number of tokens in the model's output
)
            except Exception as e:
                print("model query err:", e)
                self.release()  # release the rate limiter before retrying
                continue
self.release()
try:
response = completion.choices[0]
return response.message.content
except Exception as e:
self.wait()
continue
raise RuntimeError("wrong") |
-
I'm running into the same problem. Did you manage to solve it?
-
run:
D:\env\anaconda3\python run.py configs/api_examples/eval_api_openbuddy.py
conf:
D:\proj\python\OpenCompass2\configs\api_examples\eval_api_openbuddy.py
log:
2024-10-22 10:14:44.818402: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
(the same oneDNN message is logged twice more, at 10:14:44.853959 and 10:14:44.932806)
Traceback (most recent call last):
File "", line 1, in
File "D:\env\anaconda3\Lib\multiprocessing\spawn.py", line 113, in spawn_main
new_handle = reduction.duplicate(pipe_handle,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\env\anaconda3\Lib\multiprocessing\reduction.py", line 79, in duplicate
return _winapi.DuplicateHandle(
^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [WinError 6] The handle is invalid
Traceback (most recent call last):
File "", line 1, in
File "D:\env\anaconda3\Lib\multiprocessing\spawn.py", line 113, in spawn_main
new_handle = reduction.duplicate(pipe_handle,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\env\anaconda3\Lib\multiprocessing\reduction.py", line 79, in duplicate
return _winapi.DuplicateHandle(
^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [WinError 6] The handle is invalid
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\proj\python\OpenCompass2\opencompass\tasks\openicl_eval.py", line 462, in
inferencer.run()
File "D:\proj\python\OpenCompass2\opencompass\tasks\openicl_eval.py", line 114, in run
self._score()
File "D:\proj\python\OpenCompass2\opencompass\tasks\openicl_eval.py", line 250, in _score
result = icl_evaluator.score(**preds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\env\anaconda3\Lib\site-packages\opencompass\datasets\mbpp.py", line 266, in score
index, ret = future.result()
^^^^^^^^^^^^^^^
File "D:\env\anaconda3\Lib\concurrent\futures_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "D:\env\anaconda3\Lib\concurrent\futures_base.py", line 401, in __get_result
raise self._exception
AttributeError: Can't pickle local object 'execution.<locals>._execution'
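For what it's worth, the traceback pattern can be reproduced with a minimal sketch. The names execution, _execution, program, and timeout are taken from the error message and are otherwise illustrative; the sketch only assumes the MBPP scorer starts a multiprocessing.Process whose target is a function nested inside execution(). On Windows, multiprocessing uses the spawn start method, which has to pickle the process target, and a function defined inside another function cannot be pickled:

# Minimal sketch of the failure mode, not OpenCompass's actual code:
# under Windows' "spawn" start method the Process target is pickled,
# and a nested (local) function is not picklable.
import multiprocessing

def execution(program: str, timeout: float):
    def _execution():  # local object -> unpicklable under spawn
        exec(program)
    p = multiprocessing.Process(target=_execution)
    p.start()  # on Windows: AttributeError: Can't pickle local object
    p.join(timeout)

if __name__ == '__main__':  # guard required on Windows with spawn
    execution("print('hello')", 10)

On Linux the default fork start method skips this pickling step, which would explain why the error only shows up on Windows. The usual workarounds for this class of error are moving the nested function to module scope so it is importable (and therefore picklable), or running the evaluation under Linux/WSL.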