Abstract
This paper explores the ethical implications of AI use in warfare, focusing on two key issues: responsibility and potential limits to human autonomy. The design of autonomous weapons should prioritize transparency and human autonomy in decision-making, especially in morally challenging situations. Automation bias highlights the risk of treating AI as infallible because of its mathematical programming. This bias undermines human deliberation and conflicts with ethical theories at both the metaethical and normative levels. Starting from the human-computer interaction (HCI) model, an ethics-by-design approach to AI is necessary, one that provides support while allowing users to retain responsibility. Trusting autonomous weapons requires ensuring that machine autonomy does not compromise human decision-making, preserving the complexity of ethical choices and avoiding the reduction of ethics to mathematical generalizations. Users should remain free to disregard AI advice and act according to the situation, thereby assuming responsibility.