Finetuning CodeLLaMA on LeanDojo #5
-
I'm transferring this issue to GitHub Discussions (see https://github.com/lean-dojo/LeanDojo#questions-and-bugs). Regarding the questions themselves, the answers can be found in the paper by searching for "commit", or in our data generation script: https://github.com/lean-dojo/LeanDojo/blob/main/scripts/generate-benchmark-lean3.ipynb
-
Hi @Some-random, do you have any results on LLaMA 7B running with LeanDojo by now? It would be valuable to see how well ReProver works on a larger language model. And hi @yangky11, great work here! Are you planning on running ReProver with larger language models?
-
In my ATP research, I've fine-tuned a LLaMA 7B model using lean_proof_recording and used lean-gym to interact with Lean. Now I plan to switch to LeanDojo for improved data collection and Lean interaction. Hence, I have two questions:
1. Which exact version of miniF2F is used in the LeanDojo paper? Was the version with Meta's bug fixes employed?
2. Which repository and commit ID can I use to fully trace mathlib3, as lean_proof_recording does, so I can assess my fine-tuned LLaMA's performance? (A sketch of the workflow I have in mind is below.)
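For concreteness, here is a minimal sketch of what I mean, assuming LeanDojo's public Python API (`LeanGitRepo`, `trace`, `Theorem`, `Dojo.run_tac`); the commit hash and the theorem are placeholders for illustration, not values from the paper:

```python
# Minimal sketch: trace mathlib3 at a pinned commit with LeanDojo, then
# interact with a theorem the way lean-gym would. The commit hash and the
# theorem below are placeholders, not the ones used in the LeanDojo paper.
from lean_dojo import Dojo, LeanGitRepo, Theorem, trace

# Pin mathlib (Lean 3) to a specific commit so tracing is reproducible.
repo = LeanGitRepo(
    "https://github.com/leanprover-community/mathlib",
    "COMMIT_FROM_THE_PAPER",  # placeholder; take the real hash from the paper
)

# Trace the whole repo to extract proof states and tactics as training data.
traced_repo = trace(repo)

# Interact with a single theorem (hypothetical file/name, for illustration),
# replacing the lean-gym interaction loop.
theorem = Theorem(repo, "src/algebra/group/basic.lean", "mul_one")
with Dojo(theorem) as (dojo, init_state):
    result = dojo.run_tac(init_state, "simp")
    print(result)
```

If the paper's commit is pinned this way, the traced data and the evaluation environment should match the benchmark exactly.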