As previously reported, new research reveals inconsistencies in ChatGPT models over time. A Stanford and UC Berkeley study compared the March and June versions of GPT-3.5 and GPT-4 across a range of tasks. The results show significant drift in performance, even over just a few months. For example, GPT-4’s accuracy at identifying prime numbers plunged from 97.6% to 2.4% between […]
The post Op-ed: Benchmarking ChatGPT’s capabilities against alternatives including Anthropic’s Claude 2, Google’s Bard, and Meta’s Llama2 appeared first on CryptoSlate.