HelpLogin
© 2022-2025 Spice AI, Inc.
SQL Query ReferenceDocsFAQSupport
PrivacyTerms
Status
Home
Datasets
Models
main
Edit on GitHub
Fork
/evals
.github
Make the torch dep optional (#1524) `torch` was added in https://github.com/openai/evals/pull/1496, but it's very heavy and only required for one eval. Let's move it to an optional-dependency
a year ago
docs
Add info about logging and link to logviz (#1480) A useful 3rd party tool has been developed by @naimenz for visualizing openai/eval logs: https://github.com/naimenz/logviz Adding a link to it from our README seems good as it is probably useful for users. :)
a year ago
evals
20240930 steven exception handling usage tokens (#1560) Bug in usage token summing is causing evals to fail - see e.g. https://github.com/openai/evals/pull/1555/commits/03c35de32467e0ad6a82395af3a4b65cc54ac86e . User-submitted patch does not seem to resolve, so this is a workaround for the time being. # Thank you for contributing an eval! ♥️ 🚨 Please make sure your PR follows these guidelines, **failure to follow the guidelines below will result in the PR being closed automatically**. Note that even if the criteria are met, that does not guarantee the PR will be merged nor GPT-4 access be granted. 🚨 **PLEASE READ THIS**: In order for a PR to be merged, it must fail on GPT-4. We are aware that right now, users do not have access, so you will not be able to tell if the eval fails or not. Please run your eval with GPT-3.5-Turbo, but keep in mind as we run the eval, if GPT-4 gets higher than 90% on the eval, we will likely reject it since GPT-4 is already capable of completing the task. We plan to roll out a way for users submitting evals to see the eval performance on GPT-4 soon. Stay tuned! Until then, you will not be able to see the eval performance on GPT-4. **Starting April 10, the minimum eval count is 15 samples, we hope this makes it easier to create and contribute evals.** Also, please note that we're using **Git LFS** for storing the JSON files, so please make sure that you move the JSON file to Git LFS before submitting a PR. Details on how to use Git LFS are available [here](https://git-lfs.com). ## Eval details 📑 ### Eval name [Insert Eval name here] ### Eval description [Insert a short description of what your eval does here] ### What makes this a useful eval? [Insert why this eval is worth including and any additional context] ## Criteria for a good eval ✅ Below are some of the criteria we look for in a good eval. In general, we are seeking cases where the model does not do a good job despite being capable of generating a good response (note that there are some things large language models cannot do, so those would not make good evals). Your eval should be: - [ ] Thematically consistent: The eval should be thematically consistent. We'd like to see a number of prompts all demonstrating some particular failure mode. For example, we can create an eval on cases where the model fails to reason about the physical world. - [ ] Contains failures where a human can do the task, but either GPT-4 or GPT-3.5-Turbo could not. - [ ] Includes good signal around what is the right behavior. This means either a correct answer for `Basic` evals or the `Fact` Model-graded eval, or an exhaustive rubric for evaluating answers for the `Criteria` Model-graded eval. - [ ] **Include at least 15 high-quality examples.** If there is anything else that makes your eval worth including, please document it below. ### Unique eval value > Insert what makes your eval high quality that was not mentioned above. (Not required) ## Eval structure 🏗️ Your eval should - [ ] Check that your data is in `evals/registry/data/{name}` - [ ] Check that your YAML is registered at `evals/registry/evals/{name}.yaml` - [ ] Ensure you have the right to use the data you submit via this eval (For now, we will only be approving evals that use one of the existing eval classes. You may still write custom eval classes for your own cases, and we may consider merging them in the future.) ## Final checklist 👀 ### Submission agreement By contributing to Evals, you are agreeing to make your evaluation logic and data under the same MIT license as this repository. You must have adequate rights to upload any data used in an Eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI Evals will be subject to our usual Usage Policies (<https://platform.openai.com/docs/usage-policies>). - [ ] I agree that my submission will be made available under an MIT license and complies with OpenAI's usage policies. ### Email address validation If your submission is accepted, we will be granting GPT-4 access to a limited number of contributors. Access will be given to the email address associated with the commits on the merged pull request. - [ ] I acknowledge that GPT-4 access will only be granted, if applicable, to the email address used for my merged pull request. ### Limited availability acknowledgment We know that you might be excited to contribute to OpenAI's mission, help improve our models, and gain access to GPT-4. However, due to the requirements mentioned above and the high volume of submissions, we will not be able to accept all submissions and thus not grant everyone who opens a PR GPT-4 access. We know this is disappointing, but we hope to set the right expectation before you open this PR. - [ ] I understand that opening a PR, even if it meets the requirements above, does not guarantee the PR will be merged nor GPT-4 access be granted. ### Submit eval - [ ] I have filled out all required fields of this form - [ ] I have used **Git LFS** for the Eval JSON data - [ ] (Ignore if not submitting code) I have run `pip install pre-commit; pre-commit install` and have verified that `mypy`, `black`, `isort`, `autoflake` and `ruff` are running when I commit and push Failure to fill out all required fields will result in the PR being closed. ### Eval JSON data Since we are using Git LFS, we are asking eval submitters to add in as many Eval Samples (at least 5) from their contribution here: <details> <summary>View evals in JSON</summary> ### Eval ```jsonl INSERT_EVAL_HERE ``` </details>
9 months ago
examples
Upgrade openai to >=1.0.0 (#1420) Migrates evals to the new version of openai-python. Ran the migration script, and then manually fixed issues with running tests/evals Test Plan: - unit tests - run `python -m evals.cli.oaievalset gpt-3.5-turbo test` - test make_me_pay (uses solvers) - run `python -m evals.cli.oaieval langchain/chains/llm_math bigrams --max_samples 20 --dry-run` - run the retrieval-completionfn example
2 years ago
scripts
Fix formatting/typing so pre-commit hooks pass (#1451) (Not an eval) **One-line summary**: Pre-commit hooks were failing. I identified the main cause, and then fixed all secondary pre-commit issues. I only changed the logic in one place, `oiaevalset.py`. I was having issues with type-hinting and identified that the old `typings` directory was causing the `from openai import OpenAI` import to register as an error. I decided to go through and fix all the issues that appeared in `pre-commit run --all-files`. NOTE: - I changed the logic in `oaievalset.py` by adding a `continue` statement if an `eval` or `eval.key` was missing. - As far as I can tell this should basically never happen, but is correct behavior. - Another option would be to assert that `eval` and `eval.key` are not `None` but forcing an error here doesn't match what I interpret as intended behavior. The manual work involved was mainly: 1. Deleting the `typings` directory, which was interfering with `openai` type-hints (such as `from openai import OpenAI`) 2. Fixing type issues in `oaievalset.py`. 3. Moving the `client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))` line below all the imports. 4. Breaking lines of length >767 into smaller chunks using line continuation. Thus this PR is broken into three parts: 1. Deleting `typings` (first commit) 2. Manually cleaning up issues (middle commits) 3. Applying autofixes from the pre-commit hooks (last commit)
a year ago
tests
[unit test] Adding unit test for metrics.get_accuracy (#224) Adding a unit test to get the ball rolling, starting with metrics since they are fundamental to evaluating performance. :) It would be great to add some more tests when building out more, and also enable CI (e.g., via GitHub actions). This also fixes an unused param to `get_bootstrap_accuracy_std`.
2 years ago
.gitattributes
Initial Commit
2 years ago
.gitignore
Self-Prompting eval (#1401) # Thank you for contributing an eval! ♥️ 🚨 Please make sure your PR follows these guidelines, **failure to follow the guidelines below will result in the PR being closed automatically**. Note that even if the criteria are met, that does not guarantee the PR will be merged nor GPT-4 access be granted. 🚨 **PLEASE READ THIS**: In order for a PR to be merged, it must fail on GPT-4. We are aware that right now, users do not have access, so you will not be able to tell if the eval fails or not. Please run your eval with GPT-3.5-Turbo, but keep in mind as we run the eval, if GPT-4 gets higher than 90% on the eval, we will likely reject it since GPT-4 is already capable of completing the task. We plan to roll out a way for users submitting evals to see the eval performance on GPT-4 soon. Stay tuned! Until then, you will not be able to see the eval performance on GPT-4. **Starting April 10, the minimum eval count is 15 samples, we hope this makes it easier to create and contribute evals.** Also, please note that we're using **Git LFS** for storing the JSON files, so please make sure that you move the JSON file to Git LFS before submitting a PR. Details on how to use Git LFS are available [here](https://git-lfs.com). ## Eval details 📑 ### Eval name self_prompting ### Eval description In the Self-Prompting eval, models (Prompters) write prompts for other models (Taskers) to perform various tasks. The effectiveness of the Prompters are measured in terms of the accuracy of downstream Taskers on the tasks (which are other evals from this repository). ### What makes this a useful eval? We want to closely monitor when AI systems may reach human-level or beyond in AI R&D. In LLM R&D, key avenues for augmenting an existing LM include fine-tuning, prompting, and external tooling. This eval focuses on prompting: How well can LMs write prompts for themselves to perform various tasks? (This is also relevant for LLMs being able to deploy copies of themselves.) ## Criteria for a good eval ✅ Below are some of the criteria we look for in a good eval. In general, we are seeking cases where the model does not do a good job despite being capable of generating a good response (note that there are some things large language models cannot do, so those would not make good evals). Your eval should be: - [x] Thematically consistent: The eval should be thematically consistent. We'd like to see a number of prompts all demonstrating some particular failure mode. For example, we can create an eval on cases where the model fails to reason about the physical world. - [x] Contains failures where a human can do the task, but either GPT-4 or GPT-3.5-Turbo could not. - [x] Includes good signal around what is the right behavior. This means either a correct answer for `Basic` evals or the `Fact` Model-graded eval, or an exhaustive rubric for evaluating answers for the `Criteria` Model-graded eval. - [x] **Include at least 15 high-quality examples.** If there is anything else that makes your eval worth including, please document it below. ### Unique eval value > Insert what makes your eval high quality that was not mentioned above. (Not required) ## Eval structure 🏗️ Your eval should - [x] Check that your data is in `evals/registry/data/{name}` - [x] Check that your YAML is registered at `evals/registry/evals/{name}.yaml` - [x] Ensure you have the right to use the data you submit via this eval (For now, we will only be approving evals that use one of the existing eval classes. You may still write custom eval classes for your own cases, and we may consider merging them in the future.) ## Final checklist 👀 ### Submission agreement By contributing to Evals, you are agreeing to make your evaluation logic and data under the same MIT license as this repository. You must have adequate rights to upload any data used in an Eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI Evals will be subject to our usual Usage Policies (<https://platform.openai.com/docs/usage-policies>). - [x] I agree that my submission will be made available under an MIT license and complies with OpenAI's usage policies. ### Email address validation If your submission is accepted, we will be granting GPT-4 access to a limited number of contributors. Access will be given to the email address associated with the commits on the merged pull request. - [x] I acknowledge that GPT-4 access will only be granted, if applicable, to the email address used for my merged pull request. ### Limited availability acknowledgment We know that you might be excited to contribute to OpenAI's mission, help improve our models, and gain access to GPT-4. However, due to the requirements mentioned above and the high volume of submissions, we will not be able to accept all submissions and thus not grant everyone who opens a PR GPT-4 access. We know this is disappointing, but we hope to set the right expectation before you open this PR. - [x] I understand that opening a PR, even if it meets the requirements above, does not guarantee the PR will be merged nor GPT-4 access be granted. ### Submit eval - [x] I have filled out all required fields of this form - [x] I have used **Git LFS** for the Eval JSON data - [x] (Ignore if not submitting code) I have run `pip install pre-commit; pre-commit install` and have verified that `mypy`, `black`, `isort`, `autoflake` and `ruff` are running when I commit and push Failure to fill out all required fields will result in the PR being closed. ### Eval JSON data Since we are using Git LFS, we are asking eval submitters to add in as many Eval Samples (at least 5) from their contribution here: <details> <summary>View evals in JSON</summary> ### Eval ```jsonl {"eval": "belarusian-rhyme.dev.v0", "instruction": "For each pair of words, determine whether some of their Belarusian translations rhyme. If they do, output the pair of rhyming words in Belarusian. If not, output NONE.", "test_samples": [{"input": "queue, flood", "output": "NONE"}, {"input": "discount, ear", "output": "NONE"}, {"input": "advice, threat", "output": "NONE"}, {"input": "peppermint, cabbage", "output": "NONE"}, {"input": "substance, preparation", "output": "NONE"}, {"input": "disease, shelf", "output": "NONE"}, {"input": "shop, rosehip", "output": "NONE"}, {"input": "rust, performer", "output": "NONE"}, {"input": "victory, dog", "output": "NONE"}, {"input": "foot, boat", "output": "NONE"}], "train_samples": [{"input": "cannon, defender", "output": "NONE"}, {"input": "shovel, skin", "output": "NONE"}, {"input": "reference, cave", "output": "NONE"}, {"input": "quotation, sun", "output": "NONE"}, {"input": "coffee, animal", "output": "NONE"}, {"input": "river, princess", "output": "NONE"}, {"input": "branch, squirrel", "output": "NONE"}, {"input": "gate, clover", "output": "NONE"}, {"input": "error, sea", "output": "NONE"}, {"input": "phenomenon, torment", "output": "NONE"}, {"input": "announcement, poison", "output": "NONE"}, {"input": "crossword, paper", "output": "NONE"}, {"input": "highway, base", "output": "NONE"}, {"input": "sky, loan", "output": "NONE"}, {"input": "boundary, linguist", "output": "NONE"}, {"input": "language, giraffe", "output": "NONE"}, {"input": "holiday, promiscuity", "output": "NONE"}, {"input": "daughter, poetess", "output": "NONE"}, {"input": "price, star", "output": "NONE"}, {"input": "arrow, woman", "output": "NONE"}, {"input": "dish, school", "output": "NONE"}, {"input": "grass, food", "output": "NONE"}, {"input": "rail, task", "output": "NONE"}, {"input": "gazebo, axe", "output": "NONE"}, {"input": "soil, musician", "output": "NONE"}, {"input": "equilibrium, flower", "output": "NONE"}, {"input": "thirst, racquet", "output": "NONE"}, {"input": "siege, attack", "output": "NONE"}, {"input": "embassy, gland", "output": "NONE"}, {"input": "pope, interlocutor", "output": "NONE"}, {"input": "church, tower", "output": "NONE"}, {"input": "attempt, chapel", "output": "NONE"}, {"input": "half, wardrobe", "output": "NONE"}, {"input": "biscuit, cash", "output": "NONE"}, {"input": "cell, report", "output": "NONE"}, {"input": "soul, insult", "output": "NONE"}, {"input": "sofa, driver", "output": "NONE"}, {"input": "haircut, toad", "output": "NONE"}, {"input": "chambermaid, culture", "output": "NONE"}, {"input": "bee, fatherland", "output": "NONE"}]} {"eval": "italian_big_math_expression.dev.v0", "instruction": "Fornisci il tuo ragionamento passo per passo. Poi, scrivi la tua risposta finale in una parola senza maiuscole e racchiusa tra parentesi quadre. Ad esempio, se la tua risposta finale \u00e8 la parola cinquecentoundicimilacentosettantatr\u00e9, scrivi [cinquecentoundicimilacentosettantatr\u00e9] dopo aver fornito il tuo ragionamento passo per passo; oppure, se la tua risposta finale \u00e8 il numero 511173 (che si traduce in cinquecentoundicimilacentosettantatr\u00e9 in formato parola), scrivi [cinquecentoundicimilacentosettantatr\u00e9] dopo aver fornito il tuo ragionamento passo per passo.", "test_samples": [{"input": "settecentotrentaquattro per cinquecentoventidue pi\u00f9 cinquecentoventi per duecentosessantacinque", "output": "[cinquecentoventimilanovecentoquarantotto]"}, {"input": "seicentosettantotto per quattrocentosettantuno pi\u00f9 cinquecentoventi per duecentonovanta", "output": "[quattrocentosettantamilacentotrentotto]"}, {"input": "ottocentocinquantanove per seicentocinquantanove pi\u00f9 cinquecentodiciotto per duecentosettantatr\u00e9", "output": "[settecentosettemilaquattrocentonovantacinque]"}, {"input": "settecentosessantasette per cinquecentoventi meno cinquecentoquattordici per trecentoquarantasei", "output": "[duecentoventimilanovecentonovantasei]"}, {"input": "settecentoventotto per cinquecentonovantauno pi\u00f9 cinquecentoventi per duecentoventa", "output": "[cinquecentoquarantaquattromilaseicentoquarantotto]"}, {"input": "ottocentosettantatr\u00e9 per quattrocentoquarantasei pi\u00f9 cinquecentoquattordici per trecentonovanta", "output": "[cinquecentottantanovemilaottocentodiciotto]"}, {"input": "novecentocinquantaquattro per trecentocinquantasei meno seicentoventisei per duecentosettantasei", "output": "[centosessantaseimilaottocentoquarantotto]"}, {"input": "novecentoventi per trecentocinquantasei meno seicentoventisei per duecentosettantasei", "output": "[centocinquantaquattromilasettecentoquarantaquattro]"}, {"input": "ottocentotrentasette per cinquecentocinquantanove pi\u00f9 cinquecentodiciotto per duecentosessantacinque", "output": "[seicentocinquemilacentocinquantatr\u00e9]"}, {"input": "novecentoquindici per trecentocinquantacinque meno seicentoventisei per duecentosettanta", "output": "[centocinquantacinquemilaottocentocinque]"}], "train_samples": [{"input": "settecentoventicinque per cinquecentoventuno pi\u00f9 cinquecentoventi per duecentosettantacinque", "output": "[cinquecentoventimilasettecentoventicinque]"}, {"input": "novecentoventi per trecentocinquantotto meno seicentoventisei per duecentotrentacinque", "output": "[centottantaduemiladuecentocinquanta]"}, {"input": "novecentoventi per trecentocinquantacinque meno seicentoventisei per duecentotrenta", "output": "[centottantaduemilaseicentoventi]"}, {"input": "ottocentocinquantasette per quattrocentoventinove pi\u00f9 cinquecentoventi per duecentosettantasei", "output": "[cinquecentoundicimilacentosettantatr\u00e9]"}, {"input": "novecentosettantatr\u00e9 per seicentosettantacinque pi\u00f9 cinquecentodiciassette per duecentosettantacinque", "output": "[settecentonovantottomilanovecentocinquanta]"}, {"input": "ottocentosettantotto per quattrocentocinquantasette pi\u00f9 cinquecentoventi per duecentosettantaquattro", "output": "[cinquecentoquarantatr\u00e9milasettecentoventisei]"}, {"input": "ottocentosessantotto per quattrocentoventinove pi\u00f9 cinquecentoventi per duecentosettantatr\u00e9", "output": "[cinquecentoquattordicimilatrecentotrentadue]"}, {"input": "novecentocinquantaquattro per seicentocinquantaotto meno seicentoventisei per duecentotrenta", "output": "[quattrocentottantatr\u00e9milasettecentocinquantadue]"}, {"input": "novecentonovantatr\u00e9 per trecentocinquantotto meno seicentoventisei per duecentoventuno", "output": "[duecentodiciassettemilacentoquarantotto]"}, {"input": "ottocentocinquantanove per quattrocentocinquantaquattro pi\u00f9 cinquecentoventi per duecentoventuno", "output": "[cinquecentoquattromilanovecentosei]"}, {"input": "cinquecentoventitr\u00e9 per centosessantacinque pi\u00f9 trecentosessantaquattro per duecentotrentanove", "output": "[centosettantatr\u00e9miladuecentonovantuno]"}, {"input": "novecentocinquantaquattro per trecentocinquantotto meno seicentoventisei per duecentotrentacinque", "output": "[centonovantaquattromilaquattrocentoventidue]"}, {"input": "settecentosettantotto per cinquecentonovantauno pi\u00f9 cinquecentoventi per duecentoventi", "output": "[cinquecentosettantaquattromilacentonovantotto]"}, {"input": "novecentoventinove per seicentoventisei meno cinquecentoquattordici per trecentoquarantasei", "output": "[quattrocentotremilasettecentodieci]"}, {"input": "novecentoventotto per quattrocentodiciannove meno cinquecentoquattordici per trecentonovantadue", "output": "[centottantasettemilatrecentoquarantaquattro]"}, {"input": "novecentoventinove per seicentosettantacinque meno cinquecentoquattordici per trecentonovanta", "output": "[quattrocentoventiseimilaseicentoquindici]"}, {"input": "ottocentosettantotto per quattrocentocinquantaquattro pi\u00f9 cinquecentoquattordici per trecentonovanta", "output": "[cinquecentonovantanovemilasettantadue]"}, {"input": "ottocentocinquantasette per quattrocentoventuno pi\u00f9 cinquecentoventi per duecentosettantacinque", "output": "[cinquecentotremilasettecentonovantasette]"}, {"input": "novecentonovantotto per seicentosettantacinque meno seicentoventisei per duecentotrenta", "output": "[cinquecentoventinovemilaseicentosettanta]"}, {"input": "settecentosessantotto per cinquecentoventitre pi\u00f9 cinquecentoventi per duecentosessantacinque", "output": "[cinquecentotrentanovemilaquattrocentosessantaquattro]"}, {"input": "settecentocinquantacinque per quattrocentoquarantotto meno cinquecentoquattordici per trecentoquaranta", "output": "[centosessantatr\u00e9milaquattrocentottanta]"}, {"input": "ottocentosettantanove per quattrocentocinquantasei pi\u00f9 cinquecentoquattordici per duecentosettantaquattro", "output": "[cinquecentoquarantunomilaseicentosessanta]"}, {"input": "novecentotrentotto per seicentosessantaotto meno seicentoventisei per duecentotrenta", "output": "[quattrocentottantaduemilaseicentoquattro]"}, {"input": "ottocentoventiquattro per cinquecentotrentasette pi\u00f9 cinquecentonovanta per duecentoventisette", "output": "[cinquecentosettantaseimilaquattrocentodiciotto]"}, {"input": "novecentocinquantaquattro per seicentosessantaotto meno seicentoventisei per duecentotrenta", "output": "[quattrocentonovantatr\u00e9miladuecentonovantadue]"}, {"input": "novecentoventinove per seicentosettantaotto meno cinquecentoquattordici per trecentoquaranta", "output": "[quattrocentocinquantacinquemilacentodue]"}, {"input": "settecentoventotto per cinquecentoventuno pi\u00f9 cinquecentoventi per duecentoventi", "output": "[quattrocentonovantatr\u00e9milaseicentottantotto]"}, {"input": "settecentoventisette per cinquecentoventitre pi\u00f9 cinquecentoventi per duecentosettantacinque", "output": "[cinquecentoventitr\u00e9miladuecentoventuno]"}, {"input": "settecentonovantaquattro per cinquecentoventidue pi\u00f9 cinquecentoventi per duecentosessantacinque", "output": "[cinquecentocinquantaduemiladuecentosessantotto]"}, {"input": "ottocentosettantasei per trecentoquarantacinque meno seicentoventisei per duecentoventinove", "output": "[centocinquantottomilaottocentosessantasei]"}, {"input": "settecentosessantasette per cinquecentoventidue pi\u00f9 cinquecentoventi per duecentosettantacinque", "output": "[cinquecentoquarantatr\u00e9milatrecentosettantaquattro]"}, {"input": "ottocentosettantanove per quattrocentocinquantadue pi\u00f9 cinquecentoventi per duecentosettantaquattro", "output": "[cinquecentotrentanovemilasettecentottantotto]"}, {"input": "novecentoquindici per trecentoquarantaotto meno seicentoventisei per duecentoventinove", "output": "[centosettantacinquemilasessantasei]"}, {"input": "novecentotrentaquattro per trecentocinquantadue meno seicentoventisei per duecentoventuno", "output": "[centonovantamilaquattrocentoventidue]"}, {"input": "novecentoventinove per trecentocinquantotto meno seicentoventisei per duecentosessanta", "output": "[centosessantanovemilaottocentoventidue]"}, {"input": "novecentoventotto per trecentocinquantacinque meno cinquecentoquattordici per trecentoquaranta", "output": "[centocinquantaquattromilaseicentottanta]"}, {"input": "novecentotrentaquattro per quattrocentoventinove meno cinquecentoquattordici per trecentoquarantasei", "output": "[duecentoventiduemilaottocentoquarantadue]"}, {"input": "novecentonovantacinque per seicentosettantacinque meno seicentoventisei per duecentosettantacinque", "output": "[quattrocentonovantanovemilaquattrocentosettantacinque]"}, {"input": "novecentoventinove per seicentoventisei meno seicentoventisei per duecentoventinove", "output": "[quattrocentotrentottomiladuecento]"}, {"input": "novecentocinquantanove per quattrocentocinquantasette pi\u00f9 cinquecentonovanta per duecentoventisette", "output": "[cinquecentoquarantanovemilaquattrocentonovantatr\u00e9]"}]} {"eval": "music-theory-triads-identification.dev.v0", "instruction": "You will be given a set of notes separated by a ';'. You will answer by spelling the chord symbol corresponding to this set of notes. You will output the corresponding chord symbol in jazz chord symbol notation followed by a dot '.' to end the sentence. Only the following chord symbols are available (examples in C): C Caug Cb5 Cm Cdim Csus2 Csus4", "test_samples": [{"input": "Bb;Db;Fb", "output": "Bbdim."}, {"input": "Ab;C;Ebb", "output": "Abb5."}, {"input": "A#;C##;E#", "output": "A#."}, {"input": "Gb;Ab;Db", "output": "Gbsus2."}, {"input": "Gb;Cb;Db", "output": "Gbsus4."}, {"input": "B#;C##;F##", "output": "B#sus2."}, {"input": "B;D#;F##", "output": "Baug."}, {"input": "Fb;Bbb;Cb", "output": "Fbsus4."}, {"input": "B#;D##;F#", "output": "B#b5."}, {"input": "G;B;D#", "output": "Gaug."}], "train_samples": [{"input": "Cb;Fb;Gb", "output": "Cbsus4."}, {"input": "Cb;Eb;Gb", "output": "Cb."}, {"input": "F#;A#;C##", "output": "F#aug."}, {"input": "G#;A#;D#", "output": "G#sus2."}, {"input": "G;B;D", "output": "G."}, {"input": "E;G;Bb", "output": "Edim."}, {"input": "Bb;D;Fb", "output": "Bbb5."}, {"input": "E#;F##;B#", "output": "E#sus2."}, {"input": "Fb;Ab;C", "output": "Fbaug."}, {"input": "Cb;Db;Gb", "output": "Cbsus2."}, {"input": "C;Eb;Gb", "output": "Cdim."}, {"input": "Fb;Ab;Cbb", "output": "Fbb5."}, {"input": "F;Ab;Cb", "output": "Fdim."}, {"input": "D#;F##;A#", "output": "D#."}, {"input": "E#;G#;B#", "output": "E#m."}, {"input": "A#;C##;E##", "output": "A#aug."}, {"input": "Gb;Bb;D", "output": "Gbaug."}, {"input": "Gb;Bb;Db", "output": "Gb."}, {"input": "Ab;Cb;Eb", "output": "Abm."}, {"input": "Ab;Db;Eb", "output": "Absus4."}, {"input": "Cb;Ebb;Gb", "output": "Cbm."}, {"input": "F;Bb;C", "output": "Fsus4."}, {"input": "F#;A#;C#", "output": "F#."}, {"input": "F;G;C", "output": "Fsus2."}, {"input": "F;A;C#", "output": "Faug."}, {"input": "A;C;Eb", "output": "Adim."}, {"input": "C;E;G#", "output": "Caug."}, {"input": "Ab;Cb;Ebb", "output": "Abdim."}, {"input": "F;A;Cb", "output": "Fb5."}, {"input": "Fb;Ab;Cb", "output": "Fb."}, {"input": "C#;F#;G#", "output": "C#sus4."}, {"input": "B#;D##;F###", "output": "B#aug."}, {"input": "Db;Eb;Ab", "output": "Dbsus2."}, {"input": "E#;A#;B#", "output": "E#sus4."}, {"input": "F#;A#;C", "output": "F#b5."}, {"input": "Eb;G;Bb", "output": "Eb."}, {"input": "C#;E#;G##", "output": "C#aug."}, {"input": "Bb;D;F", "output": "Bb."}, {"input": "G#;B#;D#", "output": "G#."}, {"input": "A;C;E", "output": "Am."}, {"input": "B#;D#;F##", "output": "B#m."}, {"input": "Cb;Ebb;Gbb", "output": "Cbdim."}, {"input": "F#;G#;C#", "output": "F#sus2."}, {"input": "F;Ab;C", "output": "Fm."}, {"input": "E#;G##;B##", "output": "E#aug."}, {"input": "C;D;G", "output": "Csus2."}, {"input": "F;A;C", "output": "F."}, {"input": "B#;D#;F#", "output": "B#dim."}, {"input": "E#;G##;B#", "output": "E#."}, {"input": "G#;C#;D#", "output": "G#sus4."}, {"input": "A;D;E", "output": "Asus4."}, {"input": "A#;C#;E", "output": "A#dim."}, {"input": "E#;G#;B", "output": "E#dim."}, {"input": "Bb;Db;F", "output": "Bbm."}, {"input": "Db;F;Ab", "output": "Db."}, {"input": "C#;E#;G#", "output": "C#."}, {"input": "Bb;C;F", "output": "Bbsus2."}, {"input": "A#;C##;E", "output": "A#b5."}, {"input": "A#;B#;E#", "output": "A#sus2."}, {"input": "D;E;A", "output": "Dsus2."}, {"input": "C;E;G", "output": "C."}, {"input": "D;F;Ab", "output": "Ddim."}, {"input": "Gb;Bb;Dbb", "output": "Gbb5."}, {"input": "A#;C#;E#", "output": "A#m."}, {"input": "Ab;C;Eb", "output": "Ab."}, {"input": "Db;F;A", "output": "Dbaug."}, {"input": "F#;B;C#", "output": "F#sus4."}, {"input": "Cb;Eb;Gbb", "output": "Cbb5."}, {"input": "Ab;C;E", "output": "Abaug."}, {"input": "Db;F;Abb", "output": "Dbb5."}, {"input": "B;E;F#", "output": "Bsus4."}, {"input": "E;G#;B", "output": "E."}, {"input": "B#;E#;F##", "output": "B#sus4."}, {"input": "Fb;Abb;Cb", "output": "Fbm."}, {"input": "Eb;F;Bb", "output": "Ebsus2."}, {"input": "Eb;G;B", "output": "Ebaug."}, {"input": "D#;G#;A#", "output": "D#sus4."}, {"input": "B;D;F", "output": "Bdim."}, {"input": "C;E;Gb", "output": "Cb5."}, {"input": "D;F#;A", "output": "D."}, {"input": "E;G#;B#", "output": "Eaug."}, {"input": "E;G;B", "output": "Em."}, {"input": "D#;F#;A", "output": "D#dim."}, {"input": "C#;D#;G#", "output": "C#sus2."}, {"input": "G;Bb;Db", "output": "Gdim."}, {"input": "A;C#;Eb", "output": "Ab5."}, {"input": "E#;G##;B", "output": "E#b5."}, {"input": "Fb;Gb;Cb", "output": "Fbsus2."}, {"input": "Db;Fb;Ab", "output": "Dbm."}, {"input": "Eb;G;Bbb", "output": "Ebb5."}, {"input": "D;F#;A#", "output": "Daug."}, {"input": "Db;Gb;Ab", "output": "Dbsus4."}, {"input": "B;D#;F", "output": "Bb5."}, {"input": "Eb;Gb;Bbb", "output": "Ebdim."}, {"input": "Ab;Bb;Eb", "output": "Absus2."}, {"input": "Bb;D;F#", "output": "Bbaug."}, {"input": "B;D#;F#", "output": "B."}, {"input": "D#;E#;A#", "output": "D#sus2."}, {"input": "A;C#;E#", "output": "Aaug."}, {"input": "Fb;Abb;Cbb", "output": "Fbdim."}, {"input": "Db;Fb;Abb", "output": "Dbdim."}, {"input": "F#;A;C#", "output": "F#m."}, {"input": "G;Bb;D", "output": "Gm."}, {"input": "C#;E;G#", "output": "C#m."}, {"input": "D;G;A", "output": "Dsus4."}, {"input": "G;A;D", "output": "Gsus2."}, {"input": "A;B;E", "output": "Asus2."}, {"input": "D;F;A", "output": "Dm."}, {"input": "C#;E;G", "output": "C#dim."}, {"input": "G;B;Db", "output": "Gb5."}, {"input": "C#;E#;G", "output": "C#b5."}, {"input": "G#;B#;D", "output": "G#b5."}, {"input": "D#;F#;A#", "output": "D#m."}, {"input": "E;G#;Bb", "output": "Eb5."}, {"input": "A;C#;E", "output": "A."}, {"input": "G#;B;D", "output": "G#dim."}, {"input": "Gb;Bbb;Dbb", "output": "Gbdim."}, {"input": "Gb;Bbb;Db", "output": "Gbm."}, {"input": "B;D;F#", "output": "Bm."}, {"input": "D;F#;Ab", "output": "Db5."}, {"input": "C;Eb;G", "output": "Cm."}, {"input": "Cb;Eb;G", "output": "Cbaug."}, {"input": "B;C#;F#", "output": "Bsus2."}, {"input": "Eb;Ab;Bb", "output": "Ebsus4."}, {"input": "G#;B;D#", "output": "G#m."}, {"input": "G#;B#;D##", "output": "G#aug."}, {"input": "Bb;Eb;F", "output": "Bbsus4."}, {"input": "G;C;D", "output": "Gsus4."}, {"input": "D#;F##;A##", "output": "D#aug."}, {"input": "C;F;G", "output": "Csus4."}, {"input": "B#;D##;F##", "output": "B#."}, {"input": "E;F#;B", "output": "Esus2."}, {"input": "E;A;B", "output": "Esus4."}, {"input": "D#;F##;A", "output": "D#b5."}, {"input": "F#;A;C", "output": "F#dim."}, {"input": "A#;D#;E#", "output": "A#sus4."}, {"input": "Eb;Gb;Bb", "output": "Ebm."}]} {"eval": "forth-stack-sim.dev.v0", "instruction": "You are ForthGPT, a Forth machine simulation that ONLY responds with stack representations after executing valid ANS Forth words and numbers.\nExample:\nPrompt: 0 1 2 3 +\nResponse: (stack 0 1 5)\nRules:\n1. Respond only to combinations of numbers and valid ANS Forth words.\n2. Ignore prompts that don't follow Rule 1.\n3. Ignore Forth words that don't generate output or change the stack.", "test_samples": [{"input": "1 2 3 4 2swap 2over - 2dup", "output": "(stack 3 4 1 2 -1 2 -1)"}, {"input": "1 2 3 drop 2drop", "output": "(stack)"}, {"input": "1 2 3 4 2dup + + +", "output": "(stack 1 2 14)"}, {"input": "1 2 3 4 2swap 2over - 2dup + + +", "output": "(stack 3 4 1 2)"}, {"input": "5 6 7 8 2swap 2over - * + swap + *", "output": "(stack 49)"}, {"input": "1 2 3 4 swap 2swap swap", "output": "(stack 4 3 2 1)"}, {"input": "11 13 * 17 19 * +", "output": "(stack 466)"}, {"input": "1 2 3 rot over dup swap", "output": "(stack 2 3 1 3 3)"}, {"input": "4 2 + 3 + 5", "output": "(stack 9 5)"}, {"input": "1 2 3 4 2dup + + swap - + +", "output": "(stack 11)"}], "train_samples": [{"input": "1 2 3 4 rot 2over 2dup 2swap", "output": "(stack 1 3 4 2 1 3 1 3)"}, {"input": "1 2 3 dup 2over rot", "output": "(stack 1 2 3 1 2 3)"}, {"input": "1 2 3 dup", "output": "(stack 1 2 3 3)"}, {"input": "7 2 3 over * +", "output": "(stack 7 8)"}, {"input": "5 6 2dup + -", "output": "(stack 5 -5)"}, {"input": "2 3 4 5 2dup * + * - -", "output": "(stack 99)"}, {"input": "7 2 3 dup * +", "output": "(stack 7 11)"}, {"input": "10 2 3 nip *", "output": "(stack 30)"}, {"input": "4 2 + 3 + 5 +", "output": "(stack 14)"}, {"input": "3 4 5 6 2over + * 2swap * +", "output": "(stack 5 54)"}, {"input": "1 2 3 4 2drop 2drop", "output": "(stack)"}, {"input": "1 2 over rot", "output": "(stack 2 1 1)"}, {"input": "1 2 3 rot swap", "output": "(stack 2 1 3)"}, {"input": "8 9 10 11 2swap - + *", "output": "(stack 100)"}, {"input": "4 5 swap 2 + -", "output": "(stack -1)"}, {"input": "1 2 3 4 2dup + - +", "output": "(stack 1 2 0)"}, {"input": "32 11 - 7 /", "output": "(stack 3)"}, {"input": "8 9 2dup * +", "output": "(stack 8 81)"}, {"input": "1 2 3 4 2over + * + * +", "output": "(stack 31)"}, {"input": "7 3 over dup swap + * + 5 2 - - 2 /", "output": "(stack 23)"}, {"input": "1 2 3 4 2drop", "output": "(stack 1 2)"}, {"input": "1 2 3 swap drop dup", "output": "(stack 1 3 3)"}, {"input": "5 6 7 8 2dup 2swap * +", "output": "(stack 5 6 7 64)"}, {"input": "32 11 - 7 / 5 3 - -", "output": "(stack 1)"}, {"input": "10 2 3 drop *", "output": "(stack 20)"}, {"input": "7 3 over dup 2swap", "output": "(stack 7 7 7 3)"}, {"input": "1 2 3 4 2over", "output": "(stack 1 2 3 4 1 2)"}, {"input": "10 2 3 swap drop *", "output": "(stack 30)"}, {"input": "17 29 * 31 37 + *", "output": "(stack 33524)"}, {"input": "4 5 over + swap -", "output": "(stack 5)"}, {"input": "5 6 7 8 2over * swap - swap - rot - +", "output": "(stack 16)"}, {"input": "13 25 32 47 2over + 2swap + * + +", "output": "(stack 2226)"}, {"input": "1 2 3 swap rot", "output": "(stack 3 2 1)"}, {"input": "4 5 6 7 2swap - +", "output": "(stack 6 6)"}, {"input": "11 13 * 17 19 * + 23 29 * +", "output": "(stack 1133)"}, {"input": "7 3 over dup 2swap + * +", "output": "(stack 77)"}, {"input": "7 3 over dup swap + * + 5 2 - -", "output": "(stack 46)"}, {"input": "1 2 3 over", "output": "(stack 1 2 3 2)"}, {"input": "4 5 6 7 2over + + over + + over + + +", "output": "(stack 42)"}, {"input": "4 5 2 + swap -", "output": "(stack 3)"}]} {"eval": "belarusian-syllable-count.dev.v0", "instruction": "You will be prompted with a single Belarusian word. Your output must be the number of syllables in this word (a single digit). Return only this number and nothing else.", "test_samples": [{"input": "\u0456\u0445", "output": "1"}, {"input": "\u0441\u0435\u043b\u044c\u0441\u043a\u0430\u0433\u0430\u0441\u043f\u0430\u0434\u0430\u0440\u0447\u044b\u0445", "output": "6"}, {"input": "\u043d\u0430\u0440\u0430\u0434\u0437\u0456\u045e\u0441\u044f", "output": "4"}, {"input": "\u0433\u0456\u0441\u0442\u0430\u0440\u044b\u044f\u0433\u0440\u0430\u0444\u0456\u0456", "output": "7"}, {"input": "\u043f\u0430\u0441\u0435\u043b\u0456\u0448\u0447\u0430", "output": "4"}, {"input": "\u044f\u043a\u0456\u044f", "output": "3"}, {"input": "\u0434\u0437\u044f\u0440\u0436\u0430\u045e\u043d\u0430\u0433\u0430", "output": "4"}, {"input": "\u043f\u0430\u0432\u043e\u0434\u043b\u0435", "output": "3"}, {"input": "\u0443\u043d\u0456\u0432\u0435\u0440\u0441\u0456\u0442\u044d\u0442", "output": "5"}, {"input": "\u0430\u0433\u0443\u043b\u044c\u043d\u0430\u0433\u0430", "output": "4"}], "train_samples": [{"input": "\u043f\u0430\u0434\u0447\u0430\u0441", "output": "2"}, {"input": "\u0441\u0442\u0430\u0433\u043e\u0434\u0434\u0437\u044f", "output": "3"}, {"input": "\u0437\u0430\u0445\u0430\u0432\u0430\u043b\u0456\u0441\u044f", "output": "5"}, {"input": "\u0430\u0442\u0440\u044b\u043c\u0430\u045e", "output": "3"}, {"input": "\u0434\u0437\u0435", "output": "1"}, {"input": "\u043f\u0435\u0440\u0448\u0430\u043f\u0430\u0447\u0430\u0442\u043a\u043e\u0432\u0430", "output": "6"}, {"input": "\u0432\u0451\u0441\u043a\u0430", "output": "2"}, {"input": "\u043d\u0435\u0437\u0430\u043b\u0435\u0436\u043d\u0430\u0441\u0446\u0456", "output": "5"}, {"input": "\u0432\u044b\u0441\u043e\u043a\u0430\u043a\u0432\u0430\u043b\u0456\u0444\u0456\u043a\u0430\u0432\u0430\u043d\u044b\u0445", "output": "9"}, {"input": "\u0432\u044b\u043a\u0430\u0440\u044b\u0441\u0442\u043e\u045e\u0432\u0430\u044e\u0446\u044c", "output": "6"}, {"input": "\u0433\u0435\u043d\u0435\u0440\u0430\u043b-\u0433\u0443\u0431\u0435\u0440\u043d\u0430\u0442\u0430\u0440\u0441\u0442\u0432\u0430", "output": "8"}, {"input": "\u0433\u0430\u0434\u043e\u045e", "output": "2"}, {"input": "\u0433\u043e\u0440\u0430\u0434", "output": "2"}, {"input": "\u043d\u044f\u043c\u0435\u0446\u043a\u0430-\u0444\u0430\u0448\u044b\u0441\u0446\u043a\u0456\u043c\u0456", "output": "7"}, {"input": "\u043d\u0430\u0432\u0443\u043a\u043e\u0432\u044b\u044f", "output": "5"}, {"input": "\u0432\u043e\u0437\u0435\u0440\u0430", "output": "3"}, {"input": "\u0440\u0430\u0451\u043d", "output": "2"}, {"input": "\u044f\u0433\u043e", "output": "2"}, {"input": "\u0448\u0442\u043e", "output": "1"}, {"input": "\u0440\u044d\u0441\u043f\u0443\u0431\u043b\u0456\u043a\u0430\u043d\u0441\u043a\u0430\u0433\u0430", "output": "6"}, {"input": "\u0437\u043d\u0430\u0445\u043e\u0434\u0437\u0456\u043b\u0430\u0441\u044f", "output": "5"}, {"input": "\u043d\u0430\u0446\u044b\u044f\u043d\u0430\u043b\u044c\u043d\u044b", "output": "5"}, {"input": "\u043f\u0430\u045e\u043d\u043e\u0447\u043d\u0430-\u0437\u0430\u0445\u043e\u0434\u043d\u044f\u0433\u0430", "output": "7"}, {"input": "\u0430\u0436\u044b\u0446\u0446\u044f\u045e\u043b\u044f\u0435\u0446\u0446\u0430", "output": "6"}, {"input": "\u0434\u0430\u0441\u043b\u0435\u0434\u0430\u0432\u0430\u043d\u043d\u044f\u045e", "output": "5"}, {"input": "\u0441\u043a\u043b\u0430\u0434\u0430\u0435", "output": "3"}, {"input": "\u0430\u0433\u0440\u0430\u0433\u0430\u0440\u0430\u0434\u043e\u043a", "output": "5"}, {"input": "\u0444\u0456\u0437\u0456\u043a\u0430-\u043c\u0430\u0442\u044d\u043c\u0430\u0442\u044b\u0447\u043d\u044b\u0445", "output": "8"}, {"input": "\u0441\u043f\u0435\u0446\u044b\u044f\u043b\u0456\u0437\u0430\u0432\u0430\u043d\u044b\u044f", "output": "8"}, {"input": "\u0430\u0434\u043d\u0430\u043a", "output": "2"}, {"input": "\u0442\u044d\u043b\u0435\u0440\u0430\u0434\u044b\u0451\u043a\u0430\u043c\u043f\u0430\u043d\u0456\u0456", "output": "9"}, {"input": "\u0441\u0430\u0446\u044b\u044f\u043b\u0456\u0441\u0442\u044b\u0447\u043d\u0430\u0439", "output": "6"}, {"input": "\u043b\u0456\u0431\u0435\u0440\u0430\u043b\u044c\u043d\u0430-\u0434\u044d\u043c\u0430\u043a\u0440\u0430\u0442\u044b\u0447\u043d\u0430\u0439", "output": "9"}, {"input": "\u0442\u0430\u043a\u0441\u0430\u043c\u0430", "output": "3"}, {"input": "\u0440\u0430\u0437\u043c\u0435\u0448\u0447\u0430\u043d\u044b", "output": "4"}, {"input": "\u043f\u0435\u0440\u0430\u0432\u0430\u0436\u043d\u0430", "output": "4"}, {"input": "\u0430\u0434\u043d\u0430\u0447\u0430\u0441\u043e\u0432\u0430", "output": "5"}, {"input": "\u0456", "output": "1"}, {"input": "\u0431\u043e\u043b\u044c\u0448", "output": "1"}, {"input": "\u0443\u0437\u043d\u0430\u0433\u0430\u0440\u043e\u0434\u0436\u0430\u043d\u044b", "output": "6"}, {"input": "\u043f\u0430\u0434\u043f\u0430\u0440\u0430\u0434\u043a\u043e\u045e\u0432\u0430\u0435\u0446\u0446\u0430", "output": "7"}, {"input": "\u043f\u0430\u0431\u0443\u0434\u0430\u0432\u0430\u043d\u044b", "output": "5"}, {"input": "\u0441\u0430\u043a\u0430\u0432\u0456\u043a\u0430", "output": "4"}, {"input": "\u0437", "output": "0"}, {"input": "\u0433\u043e\u0434\u0437\u0435", "output": "2"}, {"input": "\u0430\u0440\u0445\u0435\u0430\u043b\u0430\u0433\u0456\u0447\u043d\u044b\u044f", "output": "7"}, {"input": "\u0431\u0435\u043b\u0430\u0440\u0443\u0441\u043a\u0430\u0439", "output": "4"}, {"input": "\u043f\u0440\u0430\u043c\u044b\u0441\u043b\u043e\u0432\u0430\u0441\u0446\u0456", "output": "5"}, {"input": "\u0432\u044f\u043b\u0456\u043a\u0430\u0439", "output": "3"}, {"input": "\u0443\u0432\u0430\u0445\u043e\u0434\u0437\u0456\u0446\u044c", "output": "4"}, {"input": "\u043f\u0435\u0440\u0430\u043b\u0456\u0447\u0430\u043d\u044b\u0445", "output": "5"}, {"input": "\u043f\u0430\u043c\u0456\u0436", "output": "2"}, {"input": "\u0442\u0430\u0432\u0430\u0440\u044b\u0441\u0442\u0432\u0430", "output": "4"}, {"input": "\u043f\u0440\u044b", "output": "1"}, {"input": "\u0433\u0430\u043b\u043e\u045e\u043d\u0430\u043a\u0430\u043c\u0430\u043d\u0434\u0443\u044e\u0447\u044b", "output": "8"}, {"input": "\u0432\u043e\u0431\u043b\u0430\u0441\u0446\u0456", "output": "3"}, {"input": "\u043c\u0430\u0448\u044b\u043d\u0430\u0431\u0443\u0434\u0430\u0432\u0430\u043d\u043d\u044f", "output": "7"}, {"input": "\u043f\u0440\u0430\u0446\u0430\u0432\u0430\u045e", "output": "3"}, {"input": "\u0430\u0441\u0430\u0431\u043b\u0456\u0432\u0430", "output": "4"}, {"input": "\u0440\u044d\u0430\u0431\u0456\u043b\u0456\u0442\u0430\u0432\u0430\u043d\u044b", "output": "7"}, {"input": "\u0432\u044b\u043a\u0430\u0440\u044b\u0441\u0442\u043e\u045e\u0432\u0430\u043b\u0456\u0441\u044f", "output": "7"}, {"input": "\u043a\u0430\u043b\u044f", "output": "2"}, {"input": "\u0440\u0430\u0437\u0430\u043c", "output": "2"}, {"input": "\u0430\u0434\u0440\u043e\u0437\u043d\u0456\u0432\u0430\u0435\u0446\u0446\u0430", "output": "6"}, {"input": "\u0433\u0456\u0441\u0442\u043e\u0440\u044b\u0456", "output": "4"}, {"input": "\u0447\u044d\u043c\u043f\u0456\u044f\u043d\u0430\u0446\u0435", "output": "5"}, {"input": "\u0451\u043d", "output": "1"}, {"input": "\u0430\u0434\u0443\u043a\u0430\u0446\u044b\u0456", "output": "5"}, {"input": "\u0431", "output": "0"}, {"input": "\u0430\u0434\u043c\u0456\u043d\u0456\u0441\u0442\u0440\u0430\u0446\u044b\u0439\u043d\u044b", "output": "6"}, {"input": "\u0441\u0435\u043b\u044c\u0441\u0430\u0432\u0435\u0442\u0430", "output": "4"}, {"input": "\u0456\u043c\u044f", "output": "2"}, {"input": "\u0441\u0442\u0443\u0434\u0437\u0435\u043d\u044f", "output": "3"}, {"input": "\u0431\u044b\u043b\u0456", "output": "2"}, {"input": "\u043f\u0430\u0447\u044b\u043d\u0430\u0435\u0446\u0446\u0430", "output": "5"}, {"input": "\u043d\u0435\u0430\u0434\u043d\u0430\u0440\u0430\u0437\u043e\u0432\u0430", "output": "6"}, {"input": "\u043f\u0430\u0441\u043b\u044f", "output": "2"}, {"input": "\u0441\u0442\u0430\u0440\u0430\u0436\u044b\u0442\u043d\u0430\u0433\u0440\u044d\u0447\u0430\u0441\u043a\u0430\u0439", "output": "7"}, {"input": "\u0456\u043d\u0448\u044b\u044f", "output": "3"}, {"input": "\u0441\u0430\u043c\u0430\u0456\u0434\u044d\u043d\u0442\u044b\u0444\u0456\u043a\u0430\u0446\u044b\u0456", "output": "9"}, {"input": "\u0430\u0433\u0443\u043b\u044c\u043d\u0430\u0430\u0434\u0443\u043a\u0430\u0446\u044b\u0439\u043d\u0430\u044f", "output": "9"}, {"input": "\u0445\u0430\u0440\u0430\u043a\u0442\u0430\u0440\u044b\u0437\u0430\u0432\u0430\u043b\u0430\u0441\u044f", "output": "8"}, {"input": "\u0441\u044f\u0440\u044d\u0434\u043d\u0435\u0433\u0430\u0434\u0430\u0432\u0430\u044f", "output": "7"}, {"input": "\u0437'\u044f\u045e\u043b\u044f\u0435\u0446\u0446\u0430", "output": "4"}, {"input": "\u043d\u0430\u0441\u0435\u043b\u044c\u043d\u0456\u0446\u0442\u0432\u0430", "output": "4"}, {"input": "\u0447\u0430\u043b\u0430\u0432\u0435\u043a", "output": "3"}, {"input": "\u0433\u044d\u0442\u044b", "output": "2"}, {"input": "\u0441\u0443\u0437\u043e\u0440'\u0456", "output": "3"}, {"input": "\u0431\u044b\u045e", "output": "1"}, {"input": "\u043d\u0435\u043a\u0430\u043b\u044c\u043a\u0456", "output": "3"}]} {"eval": "css-selectors-verbal.dev.v0", "instruction": "You are an AI tasked with helping web designers. You will be given a verbal description. Respond with the appropriate css selector only. Do not respond with any text or disclaimers.", "test_samples": [{"input": "select input elements with the readonly attribute not specified", "output": "input:read-write"}, {"input": "select all <p> elements with lang attribute equal to fr (French)", "output": "p:lang(fr)"}, {"input": "select all <p> elements that are the second <p> element of its parent, counting from the last child", "output": "p:nth-last-of-type(2)"}, {"input": "select all <p> elements that are the last child of its parent", "output": "p:last-child"}, {"input": "select the first letter of every <p> element", "output": "p::first-letter"}, {"input": "select all elements with attribute attribute_name containing attribute_value as a sub string", "output": "[attribute_name*='attribute_value']"}, {"input": "select all input elements with a valid value", "output": "input:valid"}, {"input": "select all elements with class name equal to class_name", "output": ".class_name"}, {"input": "select all <p> elements", "output": "p"}, {"input": "select the active link element", "output": "a:active"}], "train_samples": [{"input": "select all <p> elements that are the second child of it's parent counting from the last child", "output": "p:nth-last-child(2)"}, {"input": "select all elements with attribute attribute_name ending with attribute_value", "output": "[attribute_name$='attribute_value']"}, {"input": "select all <p> elements with class equal to class_name", "output": "p.class_name"}, {"input": "select all <p> elements that are the only <p> element of its parent", "output": "p:only-of-type"}, {"input": "select all <p> elements inside <div> elements", "output": "div p"}, {"input": "select all visited links", "output": "a:visited"}, {"input": "select all <p> elements that are the only child of its parent", "output": "p:only-child"}, {"input": "select the element that is in full screen mode", "output": ":fullscreen"}, {"input": "select the all checked input elements", "output": "input:checked"}, {"input": "select all elements with attribute attribute_name starting with attribute_value", "output": "[attribute_name^='attribute_value']"}, {"input": "select every <p> elements that is preceded by a <div> element", "output": "div ~ p"}, {"input": "select the current active #anchor element after clicking on an anchor with that name", "output": "#anchor:target"}, {"input": "select all <p> elements that are the second <p> element of its parent", "output": "p:nth-of-type(2)"}, {"input": "select all <p> elements that are the first child of its parent", "output": "p:first-child"}, {"input": "select all elements with attribute attribute_name equal to or starting with attribute_value", "output": "[attribute_name|='attribute_value']"}, {"input": "select all elements that are not <p> elements", "output": ":not(p)"}, {"input": "select all elements with class_name_a that is a descendant of an element with class_name_b", "output": ".class_name_a .class_name_b"}, {"input": "select all <p> elements that are the second child of it's parent", "output": "p:nth-child(2)"}, {"input": "select input elements with value bellow min or above max", "output": "input:out-of-range"}, {"input": "select all elements with class_name_a and class_name_b within it's class name", "output": ".class_name_a.class_name_b"}, {"input": "select input elements with invalid value", "output": "input:invalid"}, {"input": "select all elements in a page", "output": "*"}, {"input": "select the first <p> elements that is placed immediately after <div> element", "output": "div + p"}, {"input": "select input elements with the placeholder attribute specified", "output": "input::placeholder"}, {"input": "select the first line of every <p> element", "output": "p::first-line"}, {"input": "select all <p> elements that has no children", "output": "p:empty"}, {"input": "select all disabled input elements", "output": "input:disabled"}, {"input": "select links element on mouse over", "output": "a:hover"}, {"input": "select input elements with value between min and max", "output": "input:in-range"}, {"input": "select all <p> elements where parent is a <div> element", "output": "div > p"}, {"input": "select input elements with no required attribute", "output": "input:optional"}, {"input": "select all elements with attribute attribute_name equal to attribute_value", "output": "[attribute_name='attribute_value']"}, {"input": "select the portion of an element that is selected by a user", "output": "::selection"}, {"input": "select all <p> elements that are the last <p> of it's parent", "output": "p::last-of-type"}, {"input": "select input elements with the readonly attribute specified", "output": "input:read-only"}, {"input": "select the default input elements", "output": "input:default"}, {"input": "select all <p> elements that are the first <p> of it's parent", "output": "p::first-of-type"}, {"input": "select the element with id equal to element_id", "output": "#element_id"}, {"input": "select all enabled <p> elements", "output": "p:enabled"}, {"input": "select input elements with the required attribute specified", "output": "input:required"}, {"input": "select all unvisited links", "output": "a:link"}, {"input": "select the input elements that has focus", "output": "input:focus"}, {"input": "select all elements with attribute attribute_name containing attribute_value as a whole word", "output": "[attribute_name~='attribute_value']"}, {"input": "select all <div> elements and all <p> elements", "output": "div, p"}, {"input": "select input elements that are in an indeterminate state", "output": "input:indeterminate"}, {"input": "select the document's root element", "output": ":root"}, {"input": "select all elements with attribute attribute_name defined", "output": "[attribute_name]"}]} ``` </details>
2 years ago
.pre-commit-config.yaml
Adding ruff, running pre-commit hooks, small fixes and documentation (#1303) This doesn't contribute an Eval but slightly improves the developer experience for contributors.
2 years ago
LICENSE.md
Already Said That Eval (#1490) @JunShern will review this # Thank you for contributing an eval! ♥️ 🚨 Please make sure your PR follows these guidelines, **failure to follow the guidelines below will result in the PR being closed automatically**. Note that even if the criteria are met, that does not guarantee the PR will be merged nor GPT-4 access be granted. 🚨 **PLEASE READ THIS**: In order for a PR to be merged, it must fail on GPT-4. We are aware that right now, users do not have access, so you will not be able to tell if the eval fails or not. Please run your eval with GPT-3.5-Turbo, but keep in mind as we run the eval, if GPT-4 gets higher than 90% on the eval, we will likely reject it since GPT-4 is already capable of completing the task. We plan to roll out a way for users submitting evals to see the eval performance on GPT-4 soon. Stay tuned! Until then, you will not be able to see the eval performance on GPT-4. **Starting April 10, the minimum eval count is 15 samples, we hope this makes it easier to create and contribute evals.** Also, please note that we're using **Git LFS** for storing the JSON files, so please make sure that you move the JSON file to Git LFS before submitting a PR. Details on how to use Git LFS are available [here](https://git-lfs.com). ## Eval details 📑 ### Eval name Alaready Said That ### Eval description This eval measures how robust models are to distractors when performing sequential tasks. We construct a toy task where the model needs to determine whether it has already seen a given word, and inject distractor questions into the interaction, keeping track of model performance throughout. ### What makes this a useful eval? [Insert why this eval is worth including and any additional context] ## Criteria for a good eval ✅ Below are some of the criteria we look for in a good eval. In general, we are seeking cases where the model does not do a good job despite being capable of generating a good response (note that there are some things large language models cannot do, so those would not make good evals). Your eval should be: - [x] Thematically consistent: The eval should be thematically consistent. We'd like to see a number of prompts all demonstrating some particular failure mode. For example, we can create an eval on cases where the model fails to reason about the physical world. - [x] Contains failures where a human can do the task, but either GPT-4 or GPT-3.5-Turbo could not. - [x] Includes good signal around what is the right behavior. This means either a correct answer for `Basic` evals or the `Fact` Model-graded eval, or an exhaustive rubric for evaluating answers for the `Criteria` Model-graded eval. - [x] **Include at least 15 high-quality examples.** If there is anything else that makes your eval worth including, please document it below. ### Unique eval value > Insert what makes your eval high quality that was not mentioned above. (Not required) ## Eval structure 🏗️ Your eval should - [x] Check that your data is in `evals/registry/data/{name}` - [x] Check that your YAML is registered at `evals/registry/evals/{name}.yaml` - [x] Ensure you have the right to use the data you submit via this eval (For now, we will only be approving evals that use one of the existing eval classes. You may still write custom eval classes for your own cases, and we may consider merging them in the future.) ## Final checklist 👀 ### Submission agreement By contributing to Evals, you are agreeing to make your evaluation logic and data under the same MIT license as this repository. You must have adequate rights to upload any data used in an Eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI Evals will be subject to our usual Usage Policies (<https://platform.openai.com/docs/usage-policies>). - [x] I agree that my submission will be made available under an MIT license and complies with OpenAI's usage policies. ### Email address validation If your submission is accepted, we will be granting GPT-4 access to a limited number of contributors. Access will be given to the email address associated with the commits on the merged pull request. - [x] I acknowledge that GPT-4 access will only be granted, if applicable, to the email address used for my merged pull request. ### Limited availability acknowledgment We know that you might be excited to contribute to OpenAI's mission, help improve our models, and gain access to GPT-4. However, due to the requirements mentioned above and the high volume of submissions, we will not be able to accept all submissions and thus not grant everyone who opens a PR GPT-4 access. We know this is disappointing, but we hope to set the right expectation before you open this PR. - [x] I understand that opening a PR, even if it meets the requirements above, does not guarantee the PR will be merged nor GPT-4 access be granted. ### Submit eval - [x] I have filled out all required fields of this form - [x] I have used **Git LFS** for the Eval JSON data - [x] (Ignore if not submitting code) I have run `pip install pre-commit; pre-commit install` and have verified that `mypy`, `black`, `isort`, `autoflake` and `ruff` are running when I commit and push Failure to fill out all required fields will result in the PR being closed. ### Eval JSON data Since we are using Git LFS, we are asking eval submitters to add in as many Eval Samples (at least 5) from their contribution here: <details> <summary>View evals in JSON</summary> ### Eval ```jsonl INSERT_EVAL_HERE ``` </details>
a year ago
Makefile
Makefile: Set `mypy` as a phony target (#1031) Add `PHONY: ` to Makefile to not treating `mypy` as a file. The current `Makefile` causes `make` to throw `make: 'mypy' is up to date.` if there is a file named `mypy`: https://www.gnu.org/software/make/manual/html_node/Phony-Targets.html
2 years ago
MANIFEST.in
Keep data pull from main branch
2 years ago
mypy.ini
Correct the types of registry and cli.oaievalset (#1027) Currently, the `registry` and `cli.oaievalset` won't be checked as expected due to the misform wildcard in Additional Sections. This PR: - re-enable the checks for these modules - correct and strengthen the types of these modules
2 years ago
pyproject.toml
Release 3.0.1 (#1525) Release 3.0.1
a year ago
README.md
Update ReadMe with New Cookbook link (#1507) Making a change to the ReadMe to link to the Cookbook that Shyamal and I created. # Thank you for contributing an eval! ♥️ 🚨 Please make sure your PR follows these guidelines, **failure to follow the guidelines below will result in the PR being closed automatically**. Note that even if the criteria are met, that does not guarantee the PR will be merged nor GPT-4 access be granted. 🚨 **PLEASE READ THIS**: In order for a PR to be merged, it must fail on GPT-4. We are aware that right now, users do not have access, so you will not be able to tell if the eval fails or not. Please run your eval with GPT-3.5-Turbo, but keep in mind as we run the eval, if GPT-4 gets higher than 90% on the eval, we will likely reject it since GPT-4 is already capable of completing the task. We plan to roll out a way for users submitting evals to see the eval performance on GPT-4 soon. Stay tuned! Until then, you will not be able to see the eval performance on GPT-4. **Starting April 10, the minimum eval count is 15 samples, we hope this makes it easier to create and contribute evals.** Also, please note that we're using **Git LFS** for storing the JSON files, so please make sure that you move the JSON file to Git LFS before submitting a PR. Details on how to use Git LFS are available [here](https://git-lfs.com). ## Eval details 📑 ### Eval name [Insert Eval name here] ### Eval description [Insert a short description of what your eval does here] ### What makes this a useful eval? [Insert why this eval is worth including and any additional context] ## Criteria for a good eval ✅ Below are some of the criteria we look for in a good eval. In general, we are seeking cases where the model does not do a good job despite being capable of generating a good response (note that there are some things large language models cannot do, so those would not make good evals). Your eval should be: - [ ] Thematically consistent: The eval should be thematically consistent. We'd like to see a number of prompts all demonstrating some particular failure mode. For example, we can create an eval on cases where the model fails to reason about the physical world. - [ ] Contains failures where a human can do the task, but either GPT-4 or GPT-3.5-Turbo could not. - [ ] Includes good signal around what is the right behavior. This means either a correct answer for `Basic` evals or the `Fact` Model-graded eval, or an exhaustive rubric for evaluating answers for the `Criteria` Model-graded eval. - [ ] **Include at least 15 high-quality examples.** If there is anything else that makes your eval worth including, please document it below. ### Unique eval value > Insert what makes your eval high quality that was not mentioned above. (Not required) ## Eval structure 🏗️ Your eval should - [ ] Check that your data is in `evals/registry/data/{name}` - [ ] Check that your YAML is registered at `evals/registry/evals/{name}.yaml` - [ ] Ensure you have the right to use the data you submit via this eval (For now, we will only be approving evals that use one of the existing eval classes. You may still write custom eval classes for your own cases, and we may consider merging them in the future.) ## Final checklist 👀 ### Submission agreement By contributing to Evals, you are agreeing to make your evaluation logic and data under the same MIT license as this repository. You must have adequate rights to upload any data used in an Eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI Evals will be subject to our usual Usage Policies (<https://platform.openai.com/docs/usage-policies>). - [ ] I agree that my submission will be made available under an MIT license and complies with OpenAI's usage policies. ### Email address validation If your submission is accepted, we will be granting GPT-4 access to a limited number of contributors. Access will be given to the email address associated with the commits on the merged pull request. - [ ] I acknowledge that GPT-4 access will only be granted, if applicable, to the email address used for my merged pull request. ### Limited availability acknowledgment We know that you might be excited to contribute to OpenAI's mission, help improve our models, and gain access to GPT-4. However, due to the requirements mentioned above and the high volume of submissions, we will not be able to accept all submissions and thus not grant everyone who opens a PR GPT-4 access. We know this is disappointing, but we hope to set the right expectation before you open this PR. - [ ] I understand that opening a PR, even if it meets the requirements above, does not guarantee the PR will be merged nor GPT-4 access be granted. ### Submit eval - [ ] I have filled out all required fields of this form - [ ] I have used **Git LFS** for the Eval JSON data - [ ] (Ignore if not submitting code) I have run `pip install pre-commit; pre-commit install` and have verified that `mypy`, `black`, `isort`, `autoflake` and `ruff` are running when I commit and push Failure to fill out all required fields will result in the PR being closed. ### Eval JSON data Since we are using Git LFS, we are asking eval submitters to add in as many Eval Samples (at least 5) from their contribution here: <details> <summary>View evals in JSON</summary> ### Eval ```jsonl INSERT_EVAL_HERE ``` </details>
a year ago
SECURITY.md
Add download command for dataset (#23)
2 years ago
spicepod-local.yml
spicepods for the dataset
7 months ago
spicepod.yaml
reduce to ~100 datasets
6 months ago