Exploring the Gap: The Challenge of Achieving Human-like Generalization for Concept-based Translation Instruction Using Large Language Models

Authors

  • Ming Qian, Charles River Analytics
  • Chuiqing Kong, Independent Researcher

DOI:

https://doi.org/10.1609/aaaiss.v3i1.31283

Keywords:

Human-AI Collaboration, Concept-driven Prompt, Large Language Model, Concept-centric Machine Translation Memory, Few-shot Learning, Human-centered AI, Prompt Engineering, Human-Computer Interaction, Translator-machine Interaction, Inductive Generalization, Instance-based Learning, Concept Forming, Learning

Abstract

Our study uses concept-description instructions and few-shot learning examples to examine how effectively a large language model (GPT-4) generates Chinese-to-English translations that embody the related translation concepts. We found that human language experts possess stronger abductive reasoning skills than GPT-4. Whereas a human expert can grasp a concept intuitively, guiding the model effectively requires humans to apply abductive reasoning to craft more detailed instructions and to build additional logic into the example prompts. This makes the prompt engineering process more complicated and less human-like. Domain-specific abductive reasoning therefore stands out as a crucial aspect of human-like learning that AI/ML systems based on large language models should aim to replicate.

Published

2024-05-20

Section

Symposium on Human-Like Learning