By Nils Adler
In a scene reminiscent of a computer war game, three battle-fatigued soldiers, dressed in white snow camouflage, emerge from a war-torn alley with their hands raised above their heads. They crouch down, following the orders being blasted at them, fear and shock etched across their faces as they stare down the barrel of a machine gun mounted on a so-called ground robot.
This footage, released in January by Ukrainian defence company DevDroid, is said to show the moment Russian soldiers were captured by a Ukrainian robot using artificial intelligence.
In April, Ukrainian President Volodymyr Zelenskyy said that, for the “first time in the history of this war, an enemy position was taken exclusively by unmanned platforms – ground systems and drones”.
“Ground robotic systems have already carried out more than 22,000 missions on the front in just three months,” he wrote in a post on X, alongside images of green machines with tank tracks and weapons mounted on top.
But for analysts who have studied the intersection of artificial intelligence (AI) and warfare, the footage reflects an expected evolution – one that will unfold far beyond the front lines in Ukraine as the world wrestles with the ethical implications of controlling such technology.
<snip>
In the case of ground robotics in Ukraine, a human operator has, so far, remained in control, directing machines that can still be halted by obstacles such as uneven terrain.
However, when AI is involved in the decision-making process, as in Israel’s attacks on Gaza and the wider region, the resulting scale of attacks, which have caused “huge collateral damage and civilian casualties for a small number of military targets”, challenges the rules of international humanitarian law and, in particular, the principle of proportionality, Walsh said.