
ChatGPT may not have mastered this test yet, but experts warn that it may outperform humans soon.

According to recent research, ChatGPT would not make a competent accountant despite its many skills. But that might change.

Since its launch last November, ChatGPT has raised the bar for what we think machines can do. It has passed the United States Medical Licensing Examination and a Wharton MBA exam. So is there anything it can’t do better than humans? Yes, accounting, as it turns out.

Researchers used accounting exam questions to test ChatGPT in a study published in a journal of the American Accounting Association.

“When this technology first came out, everyone was worried that students could now use it to cheat. But opportunities to cheat have always existed. So for us, we’re trying to focus on what we can do with this technology now that we couldn’t do before to improve the teaching process for faculty and the learning process for students. Testing it out was eye-opening,” said lead study author David Wood, a professor of accounting at Brigham Young University (BYU), in a press statement.

According to BYU, ChatGPT’s performance was impressive, but the students performed better. The average score for students who passed the test was 76.7 percent, compared to 47.7 percent for ChatGPT.

ChatGPT scored higher than the student average on 11.3 percent of questions, and it did particularly well in accounting information systems and auditing. But the AI bot fared worse on the tax, finance, and governance assessments. This may be because it struggled with the mathematical processes those subjects require.

Moreover, when it comes to question type, ChatGPT did better on true/false and multiple-choice questions, but it struggled with short-answer questions. The chatbot also did worse on higher-order questions. Interestingly, it even provided authoritative-sounding written answers that were inaccurate.
