I think that while the adoption of AI by the armed forces is inevitable, I do not think the militarisation of AI is a good thing. Other countries will exploit the military benefits of AI, so an arms race over its militarisation (much like the nuclear arms race of the Cold War) is only a matter of time, if it has not begun already; we therefore cannot afford to fall behind potential adversaries in military capability. Even so, I see two serious problems. Firstly, it will make defence contractors and the military-industrial complex [who are far removed from any harm, danger and suffering their products cause] even more powerful, richer and more influential than they already are. Secondly, it will remove the human factor from warfare. To explain what I mean by this: would a hypothetical military AI be able to distinguish between combatant and non-combatant? How would it know not to fire on a surrendering combatant if it was programmed to kill that combatant? Would it even make sense to fight wars if AI can independently and autonomously identify, seek, attack and destroy human targets, thereby reducing human involvement in war? To summarise: the militarisation of AI is inevitable, so it would be disadvantageous not to use it when a potential adversary readily would. But that is not to say it is a good thing; quite the opposite, as my earlier rhetorical questions suggest.