Submitted by starstruckmon t3_10qhgmv in MachineLearning
mlresearchoor t1_j6r8x7y wrote
nice find! It would also be helpful to compare with the similar papers from 2022 that this paper cites but did not compare against in the results section:
("We note that our work is concurrent with Chen et al. (2022) and Gao et al. (2022), both generating the reasoning chain in Python code and calling a Python interpreter to derive the answer. While we do not compare with them empirically since they are not yet published...")
Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks (Chen)
https://arxiv.org/abs/2211.12588
PAL: Program-aided Language Models (Gao)
https://arxiv.org/abs/2211.10435
LetterRip t1_j6shnin wrote
The prompts for those two papers are so specific to the datasets that they don't seem very useful. We'll have to wait for the code to see whether FCoT is a similar case or not.
axm92 t1_j6tw995 wrote
Thanks! Can you please clarify what you mean by the prompts being specific to the datasets for PaL?
As an example, for the ~10 math reasoning datasets used in PaL, identical prompts were used (the same prompt for all datasets, without changing anything). The prompts/code are also open-sourced at https://reasonwithpal.com/ if you want to check it out!
Incidentally, the idea that Python programs lead to faithful reasoning chains was used in PaL to create a new split of GSM, called GSM-hard. GSM-hard is available on Hugging Face.
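To make the setup concrete, here is a rough sketch of the PaL-style loop: the model writes a Python reasoning chain and the interpreter, not the model, produces the final answer. This is a simplified illustration, not the exact released prompts (those are in the repo linked above), and `call_llm` is a placeholder for whatever model API you use.

```python
# Rough sketch of PaL-style prompting (simplified; not the exact released prompts).
# `call_llm` is a placeholder for whatever LLM API you use.

FEW_SHOT_EXEMPLAR = '''Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?

# solution in Python:
def solution():
    money_initial = 23
    bagels = 5
    bagel_cost = 3
    money_spent = bagels * bagel_cost
    money_left = money_initial - money_spent
    return money_left

'''

def pal_answer(question, call_llm):
    """Prompt for a Python program, execute it, and return its result."""
    prompt = FEW_SHOT_EXEMPLAR + f"Q: {question}\n\n# solution in Python:\n"
    generated_code = call_llm(prompt)   # model completes a solution() function
    namespace = {}
    exec(generated_code, namespace)     # run the generated reasoning chain
    return namespace["solution"]()      # arithmetic is done by the interpreter
```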
(I'm a co-author of the PaL paper.)
LetterRip t1_j6u7cu9 wrote
In my view, a prompt like "Let's think things through step by step" is extremely generic and requires no knowledge specific to the upcoming questions.
I was basing my comment mostly on the contents of this folder:
https://github.com/reasoning-machines/pal/tree/main/pal/prompt
Each of the prompts seems to require extensive knowledge of the test set to have been formulated.
This seems more akin to Watson, where the computer scientists analyzed the forms of a variety of questions and wrote programs for each type of question.
axm92 t1_j6uf2a7 wrote
Ah I see, thanks for clarifying. I see your point, but I wouldn't say that the prompts require extensive knowledge of the test set. After all:
> As an example, for the ~10 math reasoning datasets used in PaL, identical prompts were used (same prompt for all datasets, without changing anything).
In particular, take a look at the section on GSM-hard (Section 4.1). You may also enjoy the analysis in the new version of the paper (Section 6: https://arxiv.org/pdf/2211.10435.pdf).
Further, "Let's think step by step" is outperformed by "Write Python code to solve this." We'll add the numbers in the next version, but if you are interested please lmk and I can share the results earlier.
Thanks again for reading our work and sharing your feedback, I really appreciate it.
LetterRip t1_j6uj087 wrote
> Further, "Let's think step by step" is outperformed by "Write Python code to solve this."
Interesting. I was just wondering, while reading that paper, how well that would work compared to the n-shot prompts.
> Ah I see, thanks for clarifying. I see your point, but I wouldn't say that the prompts require an extensive knowledge of the test set. After all:
>> As an example, for the ~10 math reasoning datasets used in PaL, identical prompts were used (same prompt for all datasets, without changing anything).
That's fair. My thoughts were mostly directed at the "Table 2: Solve rate on three symbolic reasoning datasets and two algorithmic datasets" items. I think you could be right that my comments don't apply to the results in Figure 5 (GSM8K, GSM-HARD, SVAMP, ASDIV, SINGLEEQ, SINGLEOP, ADDSUB, MULTIARITH).
I'd be curious how well the "Write Python code to solve this" prompt performs in and of itself vs. the "Let's think things through step by step" prompt.