The US is rallying East African countries to adopt its proposal on the responsible use of artificial intelligence in the military, signalling concern that the new technology could pose risks to traditional defence cooperation between Washington and its allies around the world.
So far, only a few African countries have signed up to the new ‘Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy’, a document that lists ten points for innovating, regulating and limiting the use of AI in the military.
Of the 50 countries that have endorsed the declaration, launched on November 9, only four are African: Libya, Malawi, Morocco and Liberia. Among other things, these countries have committed to strong and transparent “norms that apply across military norms”, regardless of the system of operation or the range of potential effects. They have also committed to regular discussions on the development, deployment and responsible use of AI capabilities in the military.
However, they have also indicated that they will reserve the right to self-defence “and state’s ability to responsibly develop and use AI in their military domain.”

Arms control

Paul Dean, the US Principal Deputy Assistant Secretary at the Bureau for Arms Control, Deterrence and Stability, told a virtual press briefing last Friday that Washington wants more countries to join to prevent potential misuse of AI in the future, especially as warfare turns to technology.

“I would stress that the door is open to the constructive participation from all states who have a shared interest in developing this framework of responsibility,” he said during the virtual briefing from Nigeria. He toured Ghana, Cameroon and Nigeria last week.

“And we have many close partners in East Africa, and I think their participation would be invaluable and highly constructive, because I know that many of our East African partners have a deep and sustained commitment to responsibility, and this is a real opportunity for all countries to demonstrate their leadership on this issue.”

Artificial intelligence, the development of computer systems to perform tasks that would normally require human intelligence, has recently become a topic of discussion around the world, including at the UN. Some experts have pointed out that AI could make the world work better in areas where human involvement could be risky, such as rescue missions. But they also warn that removing human control, for example by giving computers the ability to handle visual perception, speech recognition, decision-making and translation between languages, could distort normal human boundaries and invade privacy.
In September, UN Secretary-General Antonio Guterres formed a special task force to draft rules for the use of AI. Other countries have followed suit, holding forums to discuss regulations.
Mr Dean said the idea was to have “a rule of responsibility in which we all agree that AI in the military must be designed for a specific purpose.”

AI applications

“We don’t want states purchasing AI applications and incorporating them into their militaries… for functions they were not designed to fulfil,” he told the briefing.
The US, whose tech companies have pioneered AI in areas including writing and problem-solving, admits there is a risk of “unintended consequences” if the technology is used in the military sector. The declaration is not yet binding on signatories, but Washington says more signatures will show a willingness to establish a framework of responsibility.

“We are deeply committed to engaging our partners in multilateral venues and bilaterally,” he said.

“The benefits of AI to militaries will be quite profound. And these benefits are not limited to battlefield applications, and we very much wanted to design, in concert with our partners, these ten measures in the political declaration to apply much more broadly than battlefield applications. We wanted these responsibility principles to apply to the full range of military applications of AI.”

Last month, the State Department launched its first-ever Enterprise Artificial Intelligence Strategy (EAIS) to establish what it called “a centralised vision for artificial intelligence (AI) innovation, infrastructure, policy, governance, and culture.” It wants developers to design responsibly and ethically, and users to adopt only appropriate innovations.
Source: Zawya