large language model (the text-davinci-003 variant of Generative Pre-trained Transformer (GPT)-3) on a range of analogical tasks, including a non-visual matrix reasoning task based on the rule structure of Raven’s Standard Progressive Matrices. We found that GPT-3 displayed a surprisingly ...
Large language models (LLMs) have a substantial capacity for high-level analogical reasoning: reproducing patterns in linear text that occur in their training data (zero-shot evaluation) or in the provided context (few-shot in-context learning). However, recent studies show that even the...
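The distinction drawn above between zero-shot evaluation and few-shot in-context learning can be sketched with two prompts; the analogy strings below are illustrative examples, not taken from the study:

```python
# Zero-shot: the model must complete the pattern using only what it
# learned during training -- no demonstrations appear in the prompt.
ZERO_SHOT = "Complete the analogy: hot is to cold as tall is to"

# Few-shot in-context learning: the prompt itself contains worked
# exemplars of the pattern, and the model is expected to continue it.
FEW_SHOT = (
    "big is to small as fast is to slow\n"
    "up is to down as left is to right\n"
    "hot is to cold as tall is to"
)
```

Either string would be sent to the model unchanged; the only difference is whether exemplars of the target pattern are present in the context.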
Large language models (LLMs) have developed impressive performance and strong explainability across various reasoning scenarios, marking a significant stride towards mimicking human-like intelligence. Despite this, when tasked with simple questions supported by a generic fact, LLMs often fail to provide ...
and version 4, which was the state-of-the-art model with enhanced reasoning, creativity and comprehension relative to previous models (https://chat.openai.com/). Each test was delivered in a separate chat: GPT is capable of learning within a chat session, as it can...
phenomena inspired by the psychological literature, including analogical reasoning (Webb, Holyoak, & Lu, 2022), pragmatic reasoning (Lipkin, Wong, Grand, & Tenenbaum, 2023), causal reasoning (Kıcıman, Ness, Sharma, & Tan, 2023) and social reasoning (Shapira et al., 2023; Ullman, ...
CoT prompting, as introduced in a recent paper, is a method that encourages LLMs to explain their reasoning process. This is achieved by providing the model with a few-shot exemplar where the reasoning process is explicitly outlined. The LLM is then expected to follow a similar reasoning proces...
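A minimal sketch of what such a CoT prompt looks like in practice; the worked arithmetic exemplar and the helper function below are illustrative assumptions, not taken from the paper:

```python
# Chain-of-thought (CoT) prompting: the few-shot exemplar spells out its
# reasoning step by step before giving the answer, and the model is
# expected to imitate that format when answering the new question.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend the worked exemplar so the model sees the reasoning format
    before the unanswered question (hypothetical helper for illustration)."""
    return COT_EXEMPLAR + "\nQ: " + question + "\nA:"

prompt = build_cot_prompt("A farm has 3 pens with 4 sheep each. How many sheep?")
```

The trailing "A:" leaves the model to generate both the intermediate reasoning and the final answer, mirroring the exemplar's structure.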
Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning. Linhao Luo, Yuan-Fang Li, Gholamreza Haffari, Shirui Pan. Preprint, 2023.10. [PDF] [Code]
Thought Propagation: An Analogical Approach to Complex Reasoning with Large Language Models. Preprint. Junchi Yu, Ran He, Re...
This repo provides the source code & data of our paper: Unleashing the Potential of Large Language Models as Prompt Optimizers: An Analogical Analysis with Gradient-based Model Optimizers.
😀 Overview
Highlights:
1️⃣ We are the first to conduct a systematic study of LLM-based prompt optimizers...
Recently released large language models (LLMs), including ChatGPT and GPT-4, have exhibited remarkable proficiency in tasks involving natural language generation and understanding. These models possess the capacity to generate language that is both fluent and coherent, catering effectively to human requi...
Visual CoT "Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought Reasoning". Shao H, Qian S, Xiao H, et al. arXiv 2024. [Paper] [Github]. MagnifierBench "OtterHD: A High-Resolution Multi-modality Model". Li B, Zhang P, ...