Artificial intelligence (AI) makes it possible for machines to learn from experience, adjust to new inputs, and perform human-like tasks. Lethal autonomous weapons systems use artificial intelligence to identify and kill human targets without human intervention. Russia, the United States, and China have all recently invested billions of dollars in secretly developing AI weapons systems, sparking fears of an eventual “AI Cold War.” In April 2024, +972 Magazine published a report detailing the Israel Defense Forces’ intelligence-based program known as “Lavender.” Israeli intel…
@ISIDEWITH 1yr
Yes
@B3HWGPX 2mos
I don’t have more data or stats, but I have a very basic understanding that if fewer people are involved in the fighting, there is less chance of deaths among our military.
@B4K53N8 2wks
Yes, as long as we understand how and why the AI made the decisions it made, and we retain the ability to fully control, override, and/or shut it down at any time.
@B45LXSB 1mo
Only in situations where it might be needed, such as counter-missile systems that are already controlled by AI during defensive situations.
@B3RGH4V 2mos
Artificial intelligence, as it stands, is a marketing term. The words “artificial intelligence” are not the same as the technology that would be used for military weapons.
@B3MBJCF 2mos
Yes, as long as it is highly regulated and protected so we can prevent a takeover
For non-kinetic roles such as search and target acquisition, yes.
Yes, provided they are tested and used under strict supervision.
@B2WGS2X 2mos
No, absolutely not. We should not further trivialise killing or destruction, or allow programs with a limited to non-existent understanding of nuance to facilitate it. If it has to be done, it must be done by a human being, who can understand the weight of their decisions.
@B2SXP9S 3mos
Yes, provided they are strictly monitored to avoid unnecessary damage.
@B2R8MS4 3mos
If the US, Russia, and China are all tripping over themselves to be the best at it, it’s pure evil. Losers.
@B2QBWS3 3mos
This is such a nuanced question that it can’t be answered with a yes or no.
@9RTTQWK 9mos
Only when absolutely necessary to protect and preserve the immediate security of Australia.
Yes, but there should be extreme caution to their use.
@B22G8ZN 5mos
No, I don’t trust its accuracy nor that humans will use it ethically.
Yes, but only after rigorous testing to prevent collateral damage.
@9ZM933G 5mos
Get out of other countries’ wars. Stop using our children for your dirty work.
Yes, but it must always have human oversight and final human decision-making, and it must not be used to target civilians or break international laws.
@9WD555W 6mos
Yes, as long as there is proof that the concept works without fault and no human loses a job over it.
@9VRB6S3 7mos
Only if it’s to protect Australia, not any other country.
AI can aid the military’s strategy but cannot have sole control of weapon systems.
@9TQ5HHD 7mos
Yes, but with strict regulation to prevent ethical misuse.
@9QS3S9L 10mos
Yes, but with strict regulation and accountability.
@9PSLVTY 10mos
Yes, but with strict regulations to ensure ethical use and accountability.
@9M3ZMN6 12mos
Yes, provided the technology has passed rigorous testing requirements to ensure it functions safely.
@9992HTR 11mos
No, this will lead to unintended downsides and consequences.
@9MYFHGP 11mos
I think the implementation of AI into the armed forces is inevitable. Because other countries will utilise the benefits of AI in their militaries, an arms race for the militarisation of AI (much like the nuclear arms race during the Cold War) is only a matter of time, if it hasn’t begun already, and therefore we should not be left behind in military capability relative to potential adversaries. Even so, I do not think the militarisation of AI is a good thing. Firstly, it will only serve to make defence contractors and the military-industrial complex [who are…
@B2SGP8Z 3mos
Yes, but only if these weapons, and how they are altered by AI, are fully understood by those operating said weapons.
@B3BKXMY 2mos
No, because there should remain a human factor in armed engagement.