Archive for May 27, 2018



Here is an open letter sent by the Future of Life Institute concerning autonomous weapons:-

“As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm. We warmly welcome the decision of the UN’s Conference of the Convention on Certain Conventional Weapons (CCW) to establish a Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems. Many of our researchers and engineers are eager to offer technical advice to your deliberations.

We commend the appointment of Ambassador Amandeep Singh Gill of India as chair of the GGE. We entreat the High Contracting Parties participating in the GGE to work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies. We regret that the GGE’s first meeting, which was due to start today (August 21, 2017), has been cancelled due to a small number of states failing to pay their financial contributions to the UN. We urge the High Contracting Parties therefore to double their efforts at the first meeting of the GGE now planned for November.

Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close. We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.”

They are writing to the UN, and they note the UN’s problem: it has no teeth, primarily because the US refuses to pay the globally agreed percentage of GDP. The US also controls the Security Council, and yet the UN is the only organisation that could possibly exercise such control.

This is what we must fear. In this blog I warned about the US and NATO labelling their others as terrorists, and using that label as an excuse for deploying AI-bombs. The UN could stop them by introducing global robotics laws, but the US controls the UN.

We need to be afraid of western-sponsored AI-weapons and do something.

“3 laws & drones” <– Previous Post “Superintelligence – myth?” Next Post –>


Science fiction has pointed out fears concerning AI and various apocalyptic scenarios; this is the job of the creatives. Perhaps the most famous examples are Isaac Asimov’s books on robotics, in which he established the following 3 laws to protect humanity:-

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In humanity’s development of AI so far, have we followed these laws or anything like them? And the answer, unequivocally, is NO.

And the problem is racism: humanity does not have respect for the life of the other. And where does the problem lie? With the power in the West. It is the nationalism of the West that is the problem, a nationalism that is prepared to use any weapons technology possible to further its own interests.

The main despicable weapons technology that uses AI is drones, and NATO’s use of drones is shameful. These are unmanned craft being used to murder people in territories where there are no westerners. If you look at Asimov’s laws, then western use of drones breaks the first 2 laws.

This is the problem with AI if it is not programmed for safety; here is a Reuters article which says just that. The article is hopeful because it suggests that appropriate protective programming will be put in place. But it does not face the fact that AI is already being used in weapons, as in drones and other technology. Such protection is not yet in place.

In other words these people have their heads in the sand.

You could argue that drones are fired by soldiers in the Nevada desert, but according to Asimov’s 3 laws that would not be allowed. The drones are killing, the drones are AI. Smart bombs are killing, smart bombs are AI. Applying Asimov’s 3 laws, neither drones nor smart bombs would be able to kill, even though humans are pushing the switch.

Protective laws need to be universal. They need to be programmed into AI now so that no-one can use AI-based weapons technology to kill other humans. And if there isn’t such programming, then scientists working on AI are Oppenheimers.
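As a purely illustrative sketch of what “programming the laws in” could mean, consider the toy check below. Everything in it is hypothetical (the names, the structure, the single yes/no gate) and corresponds to no real robotics or weapons software; it simply applies the 3 laws in priority order before any human order is obeyed:-

```python
# A toy, hypothetical sketch only: none of these names correspond to any
# real robotics or weapons API. It encodes Asimov's 3 laws as a
# priority-ordered check that runs before any order is carried out.
from dataclasses import dataclass


@dataclass
class ProposedAction:
    description: str
    harms_human: bool        # would the action injure a human being?
    ordered_by_human: bool   # was the action ordered by a human?
    protects_robot: bool     # does the action preserve the robot itself?


def allowed_by_three_laws(action: ProposedAction) -> bool:
    """Return True only if the action passes the laws in priority order."""
    # First Law: refuse any action that would injure a human being
    # (the "through inaction" clause is not modelled in this toy).
    if action.harms_human:
        return False
    # Second Law: obey human orders, but only once the First Law is satisfied.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation is allowed only if the higher laws permit it.
    return action.protects_robot


# A drone strike ordered by a human operator still fails the First Law:
strike = ProposedAction("release weapon on target",
                        harms_human=True, ordered_by_human=True,
                        protects_robot=False)
print(allowed_by_three_laws(strike))  # prints: False (the order is refused)
```

Nothing like this gate sits in front of the drones and smart bombs already in use; that is exactly the problem.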

What we have to recognise now is that AI is already killing; the robots are killing. It is important to recognise the Vietnam factor. Vietnam was a war fought on foreign soil, in my view for dubious reasons. The US establishment in particular suffered from internal protest and strife because of its involvement in the war – especially when US citizens came home dead. From that time, technology was developed so that war could be waged without involving US troops on the ground. Decisions for war are already being taken when no lives are being lost at home; decisions for war are being made using drones and smart bombs, not people. This is AI. Recognise now that AI is killing where NATO has political difficulty justifying war with troops. We need Asimov’s 3 laws now.

No more drones, no more AI-based weaponry.

As I said, the problem is nationalism and racism. Because of the arbitrary “War on Terror”, all people who suffer at the hands of the US are labelled terrorists, and that label justifies the use of AI to kill them. The 3 laws need to be applied globally so that no AI-enabled weapon can be used to kill people. If we allow one nation or alliance such as the US or NATO to define terrorism and legitimise the murder of terrorists by AI, we are already in the doomsday scenario that science fiction writers have described.

Scientists need to stop being Oppenheimers. Oppenheimer did not drop the bomb on Hiroshima, but he did create the bomb for people who were hawkish enough to drop such bombs. Those hawkish people are westerners, NATO. Oppenheimers need to stop working for NATO hawks until the laws of AI robotics have been established.

Of course that will not happen, because so many scientists’ jobs and families depend on their being ostriches.

“Synthesising catness” <– Previous Post “Open letter” Next Post –>
