(Replying to PARENT post)

GPT-3's failure at larger addition sizes is almost entirely due to BPE, which is incredibly pathological (392 is a single β€˜digit’ token, 393 is not; GPT-3 is also never told about the BPE scheme). When using commas, GPT-3 does OK at larger sizes. Not perfect, but certainly better than one should expect of it, given how bad BPEs are. A quick way to see the tokenization behavior is sketched below.

http://gptprompts.wikidot.com/logic:math
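
A minimal sketch of how to inspect this yourself, using the tiktoken library's GPT-2/GPT-3 BPE vocabulary (not part of the original thread). Whether any particular three-digit string is one token or several is an empirical property of the vocabulary, so the exact splits you see may differ from the examples named above; the point is only that comma-grouped digits tend to split into shorter, more regular pieces.

```python
# Sketch: inspect how the GPT-2/GPT-3 BPE splits numbers, with and without commas.
# Assumes the tiktoken package is installed; "r50k_base" is the BPE used by GPT-2/GPT-3.
import tiktoken

enc = tiktoken.get_encoding("r50k_base")

for text in ["392", "393", "2934", "2,934"]:
    ids = enc.encode(text)
    pieces = [enc.decode([i]) for i in ids]  # the individual BPE pieces as strings
    print(f"{text!r:>8} -> {pieces}")

# Numbers that happen to be single tokens look like atomic 'digits' to the model,
# while others are split arbitrarily; comma-separating digit groups gives the
# model a more consistent view, which is why the comma prompts above do better.
```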

πŸ‘€VeedracπŸ•‘5yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

My thinking there was that it isn't because of BPEs; I think it's a graph traversal issue.
πŸ‘€mlb_hnπŸ•‘5yπŸ”Ό0πŸ—¨οΈ0