Code example for the paper "Self-Prompting Large Language Models for Zero-Shot Open-Domain QA" (NAACL 2024).

Requirements:
- python 3.7
- openai==0.25.0
- sentence-transformers==2.2.2
- torch==1.13.1
- transformers==4.28.1

Steps:

Preparation: Save your openai api key into ./related_files/openai_api.txt. We pr...
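Given the file layout described in the Preparation step, the saved key can be loaded like this (a minimal sketch; the helper name `load_api_key` is mine, not part of the repo, while the path comes from the README):

```python
def load_api_key(path="./related_files/openai_api.txt"):
    """Read the OpenAI API key saved during the Preparation step."""
    with open(path) as f:
        return f.read().strip()

# With openai==0.25.0 (per the requirements above), the key is set globally:
# import openai
# openai.api_key = load_api_key()
```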
In this paper, we utilize the understanding and generative abilities of large language models (LLMs) to automatically produce customized lesson plans. This addresses the common challenge where conventional plans may not sufficiently meet the distinct requirements of various teaching contexts and student ...
This technique is very simple to use: just add the sentence "Let's think step by step" at the end of the question, and the model's answer will be more accurate. The technique comes from the 2022 paper by Kojima et al., Large Language Models are Zero-Shot Reasoners. The paper observes that when we ask the model a logical-reasoning question, it returns a wrong answer, but if we add "Let's think step by step" at the end of the question, the model then generates the correct answer:
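In code, the trick amounts to appending the trigger phrase to the prompt before sending it to the model. A minimal sketch (the function name and the sample question are mine for illustration; the phrase itself is the one proposed by Kojima et al.):

```python
COT_TRIGGER = "Let's think step by step."

def zero_shot_cot_prompt(question: str) -> str:
    """Append the zero-shot chain-of-thought trigger from
    Kojima et al. (2022) to a question."""
    return f"{question}\n{COT_TRIGGER}"

# Example usage: the augmented prompt is what gets sent to the LLM.
prompt = zero_shot_cot_prompt(
    "A juggler has 16 balls. Half are golf balls, and half of "
    "the golf balls are blue. How many blue golf balls are there?"
)
```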
In general, we are seeking cases where the model does not do a good job despite being capable of generating a good response (note that there are some things large language models cannot do, so those would not make good evals). Your eval should be:
- [x] Thematically consistent: The ...
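Under the criteria above, an individual eval case can be sketched as a prompt paired with an ideal answer, plus a grading function (the structure and names here are mine, not a specific eval framework's; real evals often use fuzzy or model-based grading rather than exact match):

```python
from dataclasses import dataclass

@dataclass
class EvalCase:
    prompt: str  # an input today's model tends to get wrong
    ideal: str   # an answer a capable model could produce

def grade(case: EvalCase, model_output: str) -> bool:
    """Exact-match grading, normalized for whitespace and case."""
    return model_output.strip().lower() == case.ideal.strip().lower()

# Hypothetical case for illustration.
case = EvalCase(prompt="What is 17 * 23?", ideal="391")
```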
A hallmark of modern large language models (LLMs) is their impressive general zero-shot and few-shot abilities, often elicited through in-context learning (ICL) via prompting. However, while highly coveted and the most general setting, zero-shot performance in LLMs is still typically weaker ...
Recently, very large language models (LLMs) have shown exceptional performance on several English NLP tasks with just in-context learning (ICL), but their utility in other languages is still underexplored. We investigate their effectiveness for NLP tasks in low-resource languages (LRLs), especially...