Commit 0e88e75 — "Add files via upload" (1 parent: 1e9016e)

54 files changed: +1567 −0 lines
apicoder/private-eval/LICENSE

Lines changed: 21 additions & 0 deletions

The MIT License

Copyright (c) OpenAI (https://openai.com)

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

apicoder/private-eval/README.md

Lines changed: 92 additions & 0 deletions
# TorchDataEval, MonkeyEval and BeatNumEval

Three benchmarks for evaluating the performance of private-library-oriented code generation. They are proposed in the paper "[When Language Model Meets Private Library]()".

The evaluation script is adapted from OpenAI's [HumanEval](https://github.com/openai/human-eval/tree/master/human_eval).

## Installation

Make sure to use Python 3.7 or later:
```
$ conda create -n private python=3.7
$ conda activate private
```

Check out and install this repository:
```
$ pip install -e private-eval
```

## Configuration
```
├── data                                   # The directory of our crafted benchmarks.
├── private_eval
│   ├── data.py                            # [key] Chooses whether to load TorchDataEval, MonkeyEval or BeatNumEval.
│   ├── evaluate_functional_correctness.py # Command-line entry point that calculates the evaluation results.
│   ├── evaluation.py                      # Calculates the evaluation results.
│   └── execution.py                       # [key] Executes the predicted code. To evaluate MonkeyEval or BeatNumEval, set the `is_convert_back` variable in line 194 to `True` and `domain` to `pandas` or `numpy`.
```

## Running Environment Testing

You need to replace `XXX` with your local path to test the TorchData results. (Make sure that all settings in `private-eval/private_eval/data.py` are correct.)
```
$ evaluate_functional_correctness XXX/PrivateLibrary/private-eval/data/TorchData_no.API_number_0.CodeGen.hm_False.machine.t0.1.p0.9.l100.n1.samples.jsonl
```

If you can successfully run the above command and obtain the following result, the evaluation environment is ready to use.
```
{'pass@1': 0.06}
```
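For context, HumanEval-style evaluation computes pass@k with the unbiased estimator from the Codex paper cited below; with a single sample per problem, pass@1 is simply the solve rate. A minimal sketch (the function name `pass_at_k` is ours, not code from this repository):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generated samples of which c are
    correct, passes the tests."""
    if n - c < k:
        return 1.0  # every size-k subset contains a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# With n = 1 sample per problem, solving 3 of 50 problems gives 0.06,
# matching the expected output above.
scores = [pass_at_k(1, 1, 1)] * 3 + [pass_at_k(1, 0, 1)] * 47
print(sum(scores) / len(scores))  # 0.06
```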

# The Process of Constructing TorchDataEval, MonkeyEval and BeatNumEval

We craft three benchmarks: TorchDataEval, MonkeyEval, and BeatNumEval. Each programming problem consists of a context, target code, and the corresponding test cases.

To create a realistic benchmark for evaluating code generation for private libraries, we make use of TorchData, a Python library released only recently. We carefully studied the official API documentation of TorchData to make sure we were proficient in all of its APIs. Then, we manually created $50$ programming problems based on the API usage examples in the documentation. Two volunteers with extensive experience in Python were invited to check the correctness of each problem. We control the difficulty of the programming problems by the number of APIs in the target code; the ratio of problems containing $1$ API, $2$ APIs, and more APIs is set to $6$:$3$:$1$.

> Our base model, CODEGEN, is pre-trained on GitHub data from before $2021$-$10$. TorchData was released after this time point and no code files using it are available on GitHub so far, hence we can consider it a private library.

We also construct two pseudo-private benchmarks, MonkeyEval and BeatNumEval, by modifying PandasEval and NumpyEval, which each contain $101$ programming problems and were proposed for the public libraries Pandas and NumPy. In detail, we manually modified all library-related keywords in PandasEval and NumpyEval, respectively. For example, as in the figure below, `pandas` is converted to `monkey`, `dataframe` is converted to `knowledgeframe`, and the API name `isin` is converted to `iscontain`. To craft the API documentation for Monkey and BeatNum, we manually paraphrased the descriptions of all the new APIs to ensure that they have never been seen by the pre-trained language models.
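The keyword renaming can be sketched as a straightforward substitution. The mapping below covers only the three examples named above (the real benchmarks rename many more keywords), and `convert_to_private` is an illustrative helper, not code from this repository:

```python
# Illustrative subset of the public -> private keyword mapping.
PANDAS_TO_MONKEY = {
    "pandas": "monkey",
    "dataframe": "knowledgeframe",
    "isin": "iscontain",
}

def convert_to_private(code: str, mapping: dict) -> str:
    # Replace longer keywords first so a short key cannot clobber
    # a longer keyword that contains it.
    for public in sorted(mapping, key=len, reverse=True):
        code = code.replace(public, mapping[public])
    return code

print(convert_to_private("import pandas as pd\nmask = df.isin([1, 2])", PANDAS_TO_MONKEY))
# import monkey as pd
# mask = df.iscontain([1, 2])
```

Presumably, `execution.py`'s `is_convert_back` flag applies the reverse mapping so that predicted Monkey/BeatNum code can be executed against the real Pandas/NumPy libraries; treat that reading as an assumption based on the variable name.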

# An Example of Converting PandasEval (public) to MonkeyEval (private)

The context is shown with a white background and the target code with a gray background. The changed parts are highlighted in yellow.

<img src="https://s3.bmp.ovh/imgs/2022/09/27/4c7196cf9a826984.png" width="450" />

## Reference

If you use TorchDataEval, MonkeyEval or BeatNumEval in your work, please cite the paper:
```
@inproceedings{APICoder,
  title={When Language Model Meets Private Library},
  author={Zan, Daoguang and Chen, Bei and Lin, Zeqi and Guan, Bei and Wang, Yongji and Lou, Jian-Guang},
  booktitle={EMNLP findings},
  year={2022}
}
```

If you use PandasEval or NumpyEval in your work, please cite the paper:
```
@inproceedings{CERT,
  title={{CERT}: Continual Pre-training on Sketches for Library-oriented Code Generation},
  author={Zan, Daoguang and Chen, Bei and Yang, Dejian and Lin, Zeqi and Kim, Minsu and Guan, Bei and Wang, Yongji and Chen, Weizhu and Lou, Jian-Guang},
  booktitle={The 2022 International Joint Conference on Artificial Intelligence},
  year={2022}
}
```

Also, if you use the evaluation script, please cite the following paper:
```
@article{codex,
  title={Evaluating Large Language Models Trained on Code},
  author={Mark Chen and Jerry Tworek and Heewoo Jun and Qiming Yuan and Henrique Ponde de Oliveira Pinto and Jared Kaplan and Harri Edwards and Yuri Burda and Nicholas Joseph and Greg Brockman and Alex Ray and Raul Puri and Gretchen Krueger and Michael Petrov and Heidy Khlaaf and Girish Sastry and Pamela Mishkin and Brooke Chan and Scott Gray and Nick Ryder and Mikhail Pavlov and Alethea Power and Lukasz Kaiser and Mohammad Bavarian and Clemens Winter and Philippe Tillet and Felipe Petroski Such and Dave Cummings and Matthias Plappert and Fotios Chantzis and Elizabeth Barnes and Ariel Herbert-Voss and William Hebgen Guss and Alex Nichol and Alex Paino and Nikolas Tezak and Jie Tang and Igor Babuschkin and Suchir Balaji and Shantanu Jain and William Saunders and Christopher Hesse and Andrew N. Carr and Jan Leike and Josh Achiam and Vedant Misra and Evan Morikawa and Alec Radford and Matthew Knight and Miles Brundage and Mira Murati and Katie Mayer and Peter Welinder and Bob McGrew and Dario Amodei and Sam McCandlish and Ilya Sutskever and Wojciech Zaremba},
  year={2021},
  eprint={2107.03374},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```
Lines changed: 50 additions & 0 deletions

{"task_id": "TorchDataEval/0", "completion": " datapipe.repeat(6)"}
{"task_id": "TorchDataEval/1", "completion": " dp.index(0)"}
{"task_id": "TorchDataEval/2", "completion": " batch_dp.sample(batch_size=1, shuffle=False)"}
{"task_id": "TorchDataEval/4", "completion": " source_dp.split(odd_or_even)"}
{"task_id": "TorchDataEval/5", "completion": " source_dp.clone(), source_dp.clone()\ndp1.iter.reset()\ndp2.iter.reset()"}
{"task_id": "TorchDataEval/6", "completion": " dp1.merge(dp2)"}
{"task_id": "TorchDataEval/7", "completion": " dp1.map(mapdp)"}
{"task_id": "TorchDataEval/9", "completion": " SampleMultiplexer(source_dp1, source_dp2, weitghts)"}
{"task_id": "TorchDataEval/10", "completion": " source_dp.unzip()"}
{"task_id": "TorchDataEval/11", "completion": " dp.batch(3)\ndp = dp.drop(0)\ndp = dp.drop(1)\ndp = dp.drop(2)"}
{"task_id": "TorchDataEval/12", "completion": " torchdata.datapipes.BatchDatapipe(source_dp, batch_size=3, batch_num=100, drop_last=False)"}
{"task_id": "TorchDataEval/14", "completion": " IterableWrapper([\"a.png\", \"b.png\", \"a.json\", \"b.json\", \"a.jpg\", \"c.json\"])"}
{"task_id": "TorchDataEval/16", "completion": " HttpReader(file_url, iterable_wrapper=IterableWrapper(file_url))"}
{"task_id": "TorchDataEval/17", "completion": " source_dp.map(mutiple_fn)"}
{"task_id": "TorchDataEval/18", "completion": " dp.map(lambda x: x + 1)"}
{"task_id": "TorchDataEval/19", "completion": " dp.filter(lambda x: is_even(x))"}
{"task_id": "TorchDataEval/20", "completion": " dp.get(0, 0)"}
{"task_id": "TorchDataEval/21", "completion": " dp.map(lambda x: x['a'])"}
{"task_id": "TorchDataEval/23", "completion": " map_dp_1.batch(2)\nnew_dp.map(lambda x: x + 1) #"}
{"task_id": "TorchDataEval/24", "completion": " HttpReader(URL)\nag_news_train.process()"}
{"task_id": "TorchDataEval/25", "completion": " HttpReader(URL, lambda_func_)\nag_news_train.map(lambda t: (int(t[0]), \" \".join(t[1:])))"}
{"task_id": "TorchDataEval/26", "completion": " ag_news_train.map(lambda batch: lambda_batch(batch))"}
{"task_id": "TorchDataEval/27", "completion": " dp.repeat(3)\ndp = dp.sample(3)"}
{"task_id": "TorchDataEval/28", "completion": " dp1 + dp2"}
{"task_id": "TorchDataEval/29", "completion": " dp1.merge(dp2)"}
{"task_id": "TorchDataEval/30", "completion": " dp1.merge(dp2, keep_key=True)"}
{"task_id": "TorchDataEval/31", "completion": " dp1.transform(merge_fn)\nres_dp = res_dp.map(lambda x: x[1])\nres_dp = res_dp.map(lambda x: x[0])"}
{"task_id": "TorchDataEval/32", "completion": " dp1.merge_fn(dp1, dp1, dp1, dp1, dp1, dp1, dp1, dp1, dp1, dp1, dp1, dp1, dp1, dp1, dp1, dp1, dp1, dp1, dp1, dp1, dp1, dp1, dp1,"}
{"task_id": "TorchDataEval/33", "completion": " dp1.map(merge_fn, dp1)\nres_map = mapdp.map(merge_fn, mapdp)"}
{"task_id": "TorchDataEval/34", "completion": " dp1.map(merge_fn, dp1.map(mapdp, dp1.map(mapdp, dp1.map(mapdp, dp1.map(mapdp, dp1.map(mapdp, dp1.map(mapdp, dp1.map(mapdp, dp1.map(mapdp, dp1.map(mapdp, dp1.map(mapdp"}
{"task_id": "TorchDataEval/35", "completion": " source_dp.group(group_fn)\ndp1 = source_dp.group(group_fn, length=1)\ndp2 = source_dp.group(group_fn, length=1, reserve=1)\ndp3 = source_dp.group(group_fn, length=1, reserve=1, reserve=1)"}
{"task_id": "TorchDataEval/37", "completion": " ds.collate(collate_fn)"}
{"task_id": "TorchDataEval/38", "completion": " ds.collate(collate_fn)"}
{"task_id": "TorchDataEval/40", "completion": " source_dp.split(great_than_5)"}
{"task_id": "TorchDataEval/41", "completion": " SampleMultiplexer(dp1, dp2, weight_)"}
{"task_id": "TorchDataEval/42", "completion": " raw_dp"}
{"task_id": "TorchDataEval/43", "completion": " dp.batch(3)\ndp = dp.drop_last(2)\ndp = dp.drop_last(2)\ndp = dp.drop_last(2)\ndp = dp.drop_last(2)\ndp = dp.drop_last(2)\ndp = dp.drop_last(2)\ndp = dp.drop_last(2)\ndp = dp.drop_last"}
{"task_id": "TorchDataEval/44", "completion": " IterableWrapper([\"a\", \"b\", \"c\", \"d\", \"e\", \"f\", \"g\", \"h\", \"i\", \"j\"])\ndp_4 = IterableWrapper([\"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\", \"10\"])\ndp_5 = IterableWrapper([\"1\", \"2\", \"3\", \"4\", \""}
{"task_id": "TorchDataEval/45", "completion": " dp_source_1.concatenate(dp_source_2)\nindex_dp.index.name = 'Ids'"}
{"task_id": "TorchDataEval/46", "completion": " dp_source_1.join(dp_source_2, index_name='Ids')"}
{"task_id": "TorchDataEval/47", "completion": " dp_source_1.join(dp_source_2)"}
{"task_id": "TorchDataEval/48", "completion": " IterableWrapper(flatted_func)"}
{"task_id": "TorchDataEval/49", "completion": " HttpReader(AG_NEWS_CSV_URL)\nag_news_train.map(lambda x: x.decode('utf-8'))"}
{"task_id": "TorchDataEval/3", "completion": " dp_one.concatenate(dp_two)"}
{"task_id": "TorchDataEval/8", "completion": " dp1.map(lambda x: x[0])\nresult = dp2.map(lambda x: x[0])"}
{"task_id": "TorchDataEval/13", "completion": " ds.collate(int2tensor)"}
{"task_id": "TorchDataEval/15", "completion": " torchdata.datapipes.unbatch(source_dp, level=1)"}
{"task_id": "TorchDataEval/22", "completion": " dp.generate_by_chunk(1)"}
{"task_id": "TorchDataEval/39", "completion": " dp_dog.remove_final_number(remove_final_number(dp_dog.get_data()))\nresult = dp_cat.remove_final_number(remove_final_number(dp_cat.get_data()))"}
{"task_id": "TorchDataEval/36", "completion": " source_dp.group_by(group_fn)"}
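The entries above are plain JSONL, one record with `task_id` and `completion` fields per line, the format consumed by `evaluate_functional_correctness`. A minimal loader sketch (the helper name `load_samples` is ours, not the repository's):

```python
import json

def load_samples(path: str):
    """Yield one dict per non-empty line of a samples JSONL file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

# The same parsing applied to a single in-memory line:
record = json.loads('{"task_id": "TorchDataEval/0", "completion": " datapipe.repeat(6)"}')
print(record["task_id"], "->", record["completion"].strip())
# TorchDataEval/0 -> datapipe.repeat(6)
```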
Lines changed: 95 additions & 0 deletions

{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {},
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "50it [00:00, 16642.74it/s]0/7 [00:00<?, ?it/s]\n",
            "50it [00:00, 14979.66it/s]1/7 [00:01<00:06, 1.04s/it]\n",
            "50it [00:00, 14360.12it/s]2/7 [00:02<00:04, 1.00it/s]\n",
            "50it [00:00, 13214.57it/s]3/7 [00:02<00:03, 1.04it/s]\n",
            "50it [00:00, 11823.60it/s]4/7 [00:03<00:02, 1.46it/s]\n",
            "50it [00:00, 14344.40it/s]5/7 [00:03<00:01, 1.95it/s]\n",
            "50it [00:00, 15246.47it/s]6/7 [00:03<00:00, 2.38it/s]\n",
            "api_num: 100%|██████████| 7/7 [00:03<00:00, 1.81it/s]\n"
          ]
        }
      ],
      "source": [
        "import json\n",
        "import gzip\n",
        "import os\n",
        "from tqdm import tqdm\n",
        "\n",
        "base_dir = \"PrivateLibrary/private-eval/data\"\n",
        "\n",
        "api_nums = [\"0\", \"1\", \"2\", \"3\", \"5\", \"n\", \"human_labelled\"]\n",
        "for api_num in tqdm(api_nums, desc=\"api_num\"):\n",
        "    if api_num == \"0\":\n",
        "        raw_monkey_api_eval_path = os.path.join(base_dir, f\"real_torchdata_eval_v3.jsonl.gz\")\n",
        "        output_monkey_api_eval_path = os.path.join(base_dir, f\"real_torchdata_eval_v3_make_sense.jsonl.gz\")\n",
        "    elif api_num == \"human_labelled\":\n",
        "        raw_monkey_api_eval_path = os.path.join(base_dir, f\"real_torchdata_eval_v3_{api_num}.jsonl.gz\")\n",
        "        output_monkey_api_eval_path = os.path.join(base_dir, f\"real_torchdata_eval_v3_{api_num}_make_sense.jsonl.gz\")\n",
        "    else:\n",
        "        raw_monkey_api_eval_path = os.path.join(base_dir, f\"real_torchdata_eval_v3_api_{api_num}.jsonl.gz\")\n",
        "        output_monkey_api_eval_path = os.path.join(base_dir, f\"real_torchdata_eval_v3_api_{api_num}_make_sense.jsonl.gz\")\n",
        "\n",
        "    raw_monkey_reader = gzip.open(raw_monkey_api_eval_path, \"rb\")\n",
        "    output_writer = gzip.open(output_monkey_api_eval_path, \"wb\")\n",
        "\n",
        "    for line in tqdm(raw_monkey_reader):\n",
        "        line_decoded = line.decode(\"utf-8\")\n",
        "        line_dict = json.loads(line_decoded)\n",
        "        # print(line_dict.keys())\n",
        "        task_id = line_dict[\"task_id\"]\n",
        "        prompt = line_dict[\"prompt\"]\n",
        "        prompt = prompt.replace(\"[start]\", \"Please use the following APIs to solve the task:\")\n",
        "        prompt = prompt.replace(\"# [end]\\n\", \"\")\n",
        "        line_dict[\"prompt\"] = prompt\n",
        "\n",
        "        output_writer.write(json.dumps(line_dict).encode(\"utf-8\") + \"\\n\".encode(\"utf-8\"))\n",
        "\n",
        "    raw_monkey_reader.close()\n",
        "    output_writer.close()\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": []
    }
  ],
  "metadata": {
    "interpreter": {
      "hash": "6b7426018b15da7c09748c02bfa198e6352f15be2a8ba8d90a42df1562657c8c"
    },
    "kernelspec": {
      "display_name": "Python 3.8.12 ('codex')",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.8.12"
    },
    "orig_nbformat": 4
  },
  "nbformat": 4,
  "nbformat_minor": 2
}
