
    A humanitarian perspective on military AI

    This year’s Beijing Xiangshan Forum comes at an important time for regional and international security and provides a valued opportunity for a neutral, independent, and impartial humanitarian organisation such as the International Committee of the Red Cross (ICRC) to share its perspectives on contemporary challenges in armed conflict, including in relation to emerging technologies.

    The event comes two months after ICRC President Ms. Mirjana Špoljarić had the honour of meeting President Xi Jinping in Beijing, where she emphasised China’s important role in global humanitarian affairs and the weight of its voice in promoting respect for international humanitarian law – the law of war. President Xi stated that China is willing to cooperate more closely with the ICRC and make important contributions to promoting the cause of peace and the progress of mankind, and that China will continue to support the ICRC, including in the fields of technology and human resources. President Xi’s words were warmly welcomed by the ICRC President.

    In her lecture at Suzhou University, Ms. Špoljarić emphasised that respect for international humanitarian law and peace are mutually reinforcing – a message that ICRC Vice President Gilles Carbonnier will surely reiterate during his keynote speech at this Beijing Xiangshan Forum, which will be held later this month under the theme of ‘Common Security, Lasting Peace’.

    The ICRC has been pleased to participate for many years in the Beijing Xiangshan Forum, bringing its humanitarian voice and expanding on its long-term dialogue with China. From the ICRC’s perspective, the Forum is a unique opportunity to engage with states, militaries, and other experts from the region and beyond on varied humanitarian issues, including on the impact of weapons and current disarmament initiatives.

    On the theme of weapons, the ICRC’s sharp focus remains on assessing the impact on people affected by armed conflict, considering questions of compliance with international humanitarian law and ethical acceptability, and ultimately promoting policies, practices and legal clarification – or development of the law where necessary – to reduce adverse consequences for both civilians and combatants.

    When I first participated in the Beijing Xiangshan Forum in 2019, our expert panel discussed developments in science and technology and international security. The ICRC continues to pursue significant efforts to address the serious harm caused to civilians in armed conflict by conventional weapons based on long-established technologies, such as through advocating for the Arms Trade Treaty (which China joined in 2020) and promoting strengthened protection for civilians by parties to armed conflict avoiding the use of heavy explosive weapons in populated areas. At the same time, the ICRC seeks to examine and address the implications of weapons and methods of warfare based on emerging technologies, such as cyber and other digital operations, military operations in outer space, and military applications of robotics and artificial intelligence.

    Arguably some of the most significant developments in science and technology since my last visit to Beijing have been in machine learning, including rapid advances in large language models – AI tools that have gained widespread public attention both for their capabilities to generate text and images or write code, and for the risks that may accompany certain uses.

    And so, I am particularly excited to join a discussion at this year’s Forum on strengthening international security governance of AI. The ICRC has drawn attention to several applications of AI in armed conflict with significant implications from a humanitarian perspective. For example, certain military applications of AI risk increasing the dangers posed by autonomous weapons, exacerbating harm to civilians from cyber operations and information warfare, and negatively impacting the crucial role of human judgement in armed conflict. Among the most worrying has been the suggestion that AI could be used to direct decision-making on the use of nuclear weapons, potentially exacerbating the risk of nuclear weapons being used, with the catastrophic consequences that would result.

    In recent years, China has published position papers on ‘Strengthening Ethical Governance of Artificial Intelligence’ and ‘Regulating Military Applications of Artificial Intelligence’, and in its recent ‘A Global Community of Shared Future: China’s Proposals and Actions’ it expressed the concern that “sound governance of artificial intelligence is lacking.” It is encouraging to see China’s emphasis on a ‘people-centred approach’ to AI and the principle of AI for good. This has connections to the ICRC’s ‘human-centred approach’ to AI in armed conflict, which focuses on retaining human control and judgement based on the legal obligations and ethical responsibilities of persons conducting armed conflict.

    The ICRC’s immediate concerns are with high-risk applications of AI in armed conflict today. A significant priority, therefore, is addressing the dangers posed by increasing military development of autonomous weapons – weapons that select and apply force to targets without human intervention. Such weapons raise serious humanitarian, legal and ethical concerns, which are exacerbated by AI, and so the ICRC has been urging states to adopt new international rules with specific prohibitions and restrictions.

    On 5 October the United Nations Secretary-General, António Guterres, and the ICRC President issued an urgent joint appeal to States to negotiate new rules on autonomous weapons and to conclude those negotiations by 2026. The ICRC has emphasised the need to prohibit both autonomous weapons that target persons directly and unpredictable autonomous weapons, such as those controlled by machine learning, while setting clear and specific restrictions on all others. There are parallels with China’s approach of specifying ‘unacceptable autonomous weapons systems’ (which must be prohibited) and ‘acceptable autonomous weapons systems’ (which must be regulated).

    With the rapid development and acquisition of such weapons, and their use in conflicts today, the ICRC welcomes China’s initiatives and encourages it to intensify its efforts to build regional and international support and cooperation for the establishment of clear legal red lines on autonomous weapons that will protect people now and for generations to come. Forums such as the Beijing Xiangshan Forum offer a unique opportunity to pursue dialogue in support of multilateral initiatives that will effectively address contemporary challenges.

    The ICRC greatly values its continued dialogue with China, whether via the permanent ICRC delegation in Beijing, at the United Nations in New York, or with the Chinese Permanent Mission in Geneva. Personally, I am very much looking forward to an in-depth exchange on these issues at this important 10th edition of the Beijing Xiangshan Forum.
