An Analysis of Large Language Models and Langchain in Mathematics Education
Date
2023
Authors
Oğuz, Damla
Publisher
Association for Computing Machinery
Green Open Access
No
Publicly Funded
No
Abstract
The development of large language models (LLMs) has prompted the consideration of new approaches, particularly in education. Word problems in subjects such as mathematics, which must be solved through a sequence of distinct reasoning steps, raise the question of whether LLMs can succeed in this area as well. In our study, we posed mathematics questions, especially word problems, to ChatGPT, which is based on recent language models such as the Generative Pretrained Transformer (GPT). We then posed the same questions to LLMMathChain, a mathematics-focused chain from the LangChain framework, and compared the correct and incorrect answers. The answers obtained with ChatGPT (GPT-3.5) were more successful, particularly in mathematics. However, both approaches fell below expectations on word problems in particular, and suggestions for improvement are provided. © 2023 ACM.
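As background to the comparison in the abstract: LLMMathChain's general design is to have the LLM translate a word problem into a plain arithmetic expression and then evaluate that expression numerically, rather than trusting the model's own arithmetic. Below is a minimal, self-contained sketch of that pattern; the stub LLM, helper names, and extraction regex are hypothetical illustrations, and LangChain itself evaluates expressions with `numexpr` rather than the AST walker shown here.

```python
import ast
import operator
import re

def fake_llm(prompt: str) -> str:
    """Stand-in for a real LLM call (hypothetical; a real chain calls an API).
    Returns the arithmetic expression inside a fenced block."""
    return "```text\n37593 * 67\n```"

# Allowed arithmetic operators for the safe evaluator.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expr: str):
    """Evaluate an arithmetic expression via the AST, permitting only
    numeric literals and basic operators (no names or calls)."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"disallowed expression: {expr!r}")
    return walk(ast.parse(expr, mode="eval"))

def solve_word_problem(question: str, llm=fake_llm):
    """Ask the LLM to translate the problem into an expression,
    extract the expression, and compute the answer numerically."""
    reply = llm(f"Translate into a single arithmetic expression:\n{question}")
    match = re.search(r"```(?:text)?\n(.*?)\n```", reply, re.DOTALL)
    expr = match.group(1).strip() if match else reply.strip()
    return safe_eval(expr)

print(solve_word_problem("A box holds 67 pens. How many pens are in 37593 boxes?"))
# 2518731
```

The split between "reason to an expression" and "compute the expression" is the point of the design: it sidesteps the arithmetic errors that LLMs make on large numbers, while leaving the harder step, translating the word problem correctly, to the model.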
Keywords
ChatGPT, LangChain, Large Language Models (LLMs), Mathematics Education
WoS Q
N/A
Scopus Q
N/A

OpenCitations Citation Count
6
Source
ACM International Conference Proceeding Series, 7th International Conference on Advances in Artificial Intelligence (ICAAI 2023), Istanbul, 13-15 October 2023
Start Page
92
End Page
97
PlumX Metrics
Citations
CrossRef : 9
Scopus : 11
Captures
Mendeley Readers : 34
Page Views
131
checked on Apr 26, 2026