Template-Type: ReDIF-Paper 1.0
Author-Name: Fernando Perez-Cruz
Author-X-Name-First: Fernando
Author-X-Name-Last: Perez-Cruz
Author-Name: Hyun Song Shin
Author-X-Name-First: Hyun Song
Author-X-Name-Last: Shin
Title: Testing the cognitive limits of large language models
Abstract: When posed with a logical puzzle that demands reasoning about the knowledge of others and about counterfactuals, large language models (LLMs) display a distinctive and revealing pattern of failure. The LLM performs flawlessly when presented with the original wording of the puzzle available on the internet but performs poorly when incidental details are changed, suggesting a lack of true understanding of the underlying logic. Our findings do not detract from the considerable progress in central bank applications of machine learning to data management, macro analysis and regulation/supervision. They do, however, suggest that caution should be exercised in deploying LLMs in contexts that demand rigorous reasoning in economic analysis.
Length: 9 pages
Creation-Date: 2024-01-04
File-URL: https://www.bis.org/publ/bisbull83.pdf
File-Format: Application/pdf
File-Function: Full PDF document
File-URL: https://www.bis.org/publ/bisbull83.htm
File-Format: text/html
Number: 83
Handle: RePEc:bis:bisblt:83