Why the Military Can’t Trust AI: Large Language Models Can Make Bad Decisions—and Could Trigger Nuclear War (Max Lamparth and Jacquelyn Schneider)


In 2022, OpenAI unveiled ChatGPT, a chatbot that uses large language models (LLMs) to mimic human conversation and answer users' questions. The chatbot's extraordinary abilities sparked a debate about how LLMs might be used to perform other tasks, including fighting a war. Some researchers, such as Professor Yvonne McDermott Rees at Swansea University, have shown how generative AI technologies might be used to enforce discriminate, and therefore ethical, uses of force. Others, including advisers from the International Committee of the Red Cross, have warned that these technologies could remove human decision-making from the most vital questions.
