Category: AI

Google’s 7 Principles of AI

Google employees refused to work with the military – some details here. It should be noted, but it does not give me confidence – after all, drones are killing innocent people; did Google contribute to that? This situation needs to be analysed, but I’m not doing that at the moment – this is for reference.

As a result of the refusal Google has issued 7 principles; the statement refers to the Founders’ letters (2004 and 2017) – business stuff.

From the point of view of HFP, the 7 principles don’t do it. They are not HFP-first, not hard-wired, not Asimov. So still scary, but at least it is something.

But why is an entity that is for-profit in charge of humanity’s future?


Books:- Treatise, Wai Zandtao Scifi, Matriellez Education.

Blogs:- Ginsukapaapdee, Matriellez, Zandtao.

Entangled for War

Yesterday a friend told me about quantum computers and their instantaneous abilities. He understood little but was intellectually fascinated – he is a slave to his intellect, so my arrogance immediately dismissed him. Shame. I did, however, have sufficient integrity to investigate, and I will apologise.

I have a very limited understanding of what happens, but here goes. It is an observable phenomenon that photons can exhibit the same properties instantaneously in two places. Somehow, if these photons have opposite polarity, there is a property of instantaneous transfer. Sorry to be so vague – I don’t understand.

However vague my understanding, science has been able to turn this property to some practical use. Apparently a scientist, Zeilinger, has demonstrated the use of quantum theory practically. Here is a talk he gave; I don’t understand it. Basically it relies on 3 observations – superposition, randomness and entanglement. Something like this: 2 photons that had been connected have the same properties even though they have been separated. This does not make sense to me, but what I can accept is that it has been observed and these observations can be repeated.

This is what I argue about for meditation: it can be observed and repeated, and therefore it is science. And then there is gravity. Gravity has been accepted for centuries – Isaac Newton. Its properties have been used by science, and there are equations involving gravity that are used practically. Yet how does gravity work? I’m glad it does; walking on earth can be pleasant – certainly useful.

Zeilinger describes here (17.40 to 18.30) a new way of thinking of information. “Information is a fundamental constituent of the universe … Information might be more basic than matter … Information that characterises two systems transcends all limitations of space and time.”

Here is a description of transfer of data using quantum computers. I don’t understand how it has been done, but data has been transferred – an observable reality. Serious quantum computers are here; if the FT is discussing it then we know this.

But here is the mandtao rub, and it somewhat ominously comes from the Los Alamos lab (Oppenheimer). Here is a quote from the FT article “John Sarrao, associate director for theory, simulation, and computation at the Los Alamos National Laboratory, is among the scientists looking at how to invest in the technology. The organisation, best known for its work on nuclear weapons, is taking a long-term view of quantum computing from a national security point of view.” That frightens the hell out of me. Note the use of the euphemism “national security” – how the West (primarily) disguises its efforts at global hegemony and profit-making through violent oppression.

Here is what I do understand. The computers we now use are based on the electrical transfer of data using bits (binary digits). Fundamentally, “classic” computers transfer data through electricity by the use of on-off switches, and through this transfer of data computers can do so many fantastic things – at the same time, “classic” computers are able to create the threat of AI and to drive smart bombs and drones. Developments in computers (new generations of computers) are measured in terms of processing speed: the faster the processing of data, the more they can do.
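To make the bit-based storage described above concrete, here is a minimal Python sketch of my own (the variable names are mine, not from any source) showing that every character in a computer is just a row of on-off switches:

```python
# A classical computer stores everything as bits: on-off switches.
# The letter 'A' is the number 65, held in eight switches (one byte).
message = "AI"

for ch in message:
    bits = format(ord(ch), "08b")  # eight binary digits per character
    print(ch, "->", bits)

# Each extra switch doubles the number of states a register can hold:
print("states in one byte:", 2 ** 8)
```

Running this shows `A -> 01000001`, `I -> 01001001`, and 256 states for a byte – nothing mysterious, just switches.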

Quantum computing freaks me out because it takes this understanding to a new level. The processing is faster because it is instantaneous – superposition. Second is the notion of qubits. Basically, quantum computers can store data in these qubits – whatever they are – and they are transferred instantaneously. Frightening computer power.
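My rough understanding of why qubits are so frightening can be sketched numerically. This is a toy amplitude model of my own, not real quantum mechanics: a qubit in superposition is described by two amplitudes, and n qubits need 2^n amplitudes to describe at once.

```python
import math

# A classical bit is 0 or 1. A qubit in equal superposition is
# described by two amplitudes whose squared magnitudes sum to 1.
qubit = [1 / math.sqrt(2), 1 / math.sqrt(2)]
prob_zero = qubit[0] ** 2  # probability of measuring 0
prob_one = qubit[1] ** 2   # probability of measuring 1
print(round(prob_zero, 2), round(prob_one, 2))

# n qubits are described by 2**n amplitudes simultaneously, which is
# where the "frightening computer power" comes from.
for n in (10, 50):
    print(n, "qubits ->", 2 ** n, "amplitudes")
```

Fifty qubits already need over a quadrillion amplitudes to describe – a scale no classical register matches switch-for-switch.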

Where is the investment in quantum computing coming from? At present most investment appears to be within the computer industry itself, as can be gleaned from the FT article. But defence is moving in – note that here defence is a euphemism for western hegemony. This article from Sputnik News describes global focuses of quantum computing investment, and China leads the way. The article suggests that China will have the capability of hacking western military systems. In other words China will soon become an enemy of NATO, and enemies can mean war. Here is a military view of the quantum arms race that is freaky as well. This article has no moral content; the author is just playing fear tactics to ramp up the quantum race for greater US government investment. With people like Major Ryan Kenny ramping up a global arms race, what future does it hold for humanity?

With this new leap in quantum computing – new to me as a realistic prospect – the need for an HFP protocol is even more urgent.



Bostrom

Nick Bostrom has written a book called Superintelligence: Paths, Dangers, Strategies. It is fascinating to listen to the efforts that have been made to make AI. To be perfectly honest there is stuff he talks about that I don’t understand, and there’s stuff there I could never understand – that is why he is an Oxford prof. Oxford is definitely the place I would go to find a white prof.

He begins the book with a parable about sparrows and owls: the sparrows invite owls into their nest without knowing whether owls eat sparrows. Except that Scronkfinkle warns the sparrows, and Bostrom dedicates his book to Scronkfinkle. Whilst the sparrows of Scronkfinkle’s nest might be dinner, there would still be other sparrows. With AI the worst-case scenario would be that there would be no sparrows left!! Am I being picky?

He begins his preface with “Inside your cranium is the thing that does the reading.” Whilst I don’t know of any humans without a cranium who can read, this statement makes me want to tirade – but not here … maybe.

What is so fascinating is what they have been able to achieve. Years ago they thought they couldn’t invent a machine to win at chess; now it is done. In some ways this is impressive – I would get very little further than 4 moves against Garry Kasparov. But whilst the AI was beating me at chess, I had eaten a pizza, drunk coffee, and watched a swan at the nearby lake. Meanwhile I was very grateful that another AI had cleaned the house … and I am not going to mention Sophiabot who (which?) gave me pleasure in the bedroom this morning.

What I am getting at is that despite the great advances, an AI cannot manage the level of multi-tasking that women and some men can do. So the question is whether I should have used “as yet” in that sentence.

Suppose we lived in a society where we were measured by our ability at chess, and only the best chess players survived; then what we would have done is invent a machine which ends our survival. Therefore before we invent the chess-AI we should have invented the HFP, so that the chess-AI could not wipe out humanity. An obvious point.

Now there is a freaky but realistic scenario that we have to consider, and that is I J Good’s intelligence explosion – “their” term, as I don’t see it as intelligence. We invent AI that can design AI; that AI designs new AI that is more capable, and so on. There will be an explosion of AI that would make human intelligence appear minimal – so why not swat the mosquito that is irritating you?
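Good’s idea can be caricatured in a few lines of code – a toy loop of my own invention, with arbitrary numbers, nothing from Good himself. Each generation of AI designs a slightly better successor, and the compounding runs away:

```python
# Toy model of I J Good's "intelligence explosion": each AI generation
# designs the next with a fixed improvement factor. The numbers are
# arbitrary; the point is the compounding.
capability = 1.0          # generation 0, human-designed
improvement_factor = 1.5  # each generation improves on its designer

for generation in range(1, 11):
    capability *= improvement_factor
    print(f"generation {generation}: capability {capability:.1f}")

# After 10 generations capability is roughly 57x the starting point;
# after 30 it would be roughly 190,000x.
```

Even with such a modest factor the curve leaves the human baseline far behind within a handful of generations – which is exactly the doomsday shape being described.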

This sounds a suitable doomsday scenario except for questions about intelligence.

Then there is developing intelligence from a child’s brain: it is considered easier to invent a child-level brain, then apply conditioning and experience, to get adult intelligence.

And there is brain emulation. Take a brain – I can’t remember whether Bostrom said it was dead or alive – and map it so that the AI has all the connections the human brain has.

But reading Bostrom is so infuriating. Sometimes I listen (in the car), and I have to turn it off – of the words I could use to describe him, the politest is fool. And this is an Oxford prof. There are two areas in which this occurs – spirituality and politics. Bostrom is working for British academia, so indirectly he is working for the British government, NATO … Trump. How responsible to the human race is it to deliver war-capable AI to the leading colonialists – to an alliance run by a country whose government dropped the bomb on Hiroshima? And Trump …

Where is the power? Without Oppenheimer there was no Hiroshima and Nagasaki. Without gunpowder, where was British colonialism? Without scientists, where would there be drone deaths and smart-bomb deaths? Scientists, take this scourge off your shoulders.

Scientists need to stand up and put safety first


HFP-Enabled Platforms

I have got somewhere, and it feels good. This so-called scientific enquiry was just heading down a hole of political rhetoric. But not now, because there is a solution – or at least a potential solution.

It’s a long time since I have read Asimov’s books, but here is what I remember of them. There was this positronic brain whose 3 laws were inviolable. The stories were concerned with how people tried to manipulate the laws for their own ends. So there was an assumption underpinning Asimov’s work: humanity in general had to develop robots in such a way that they could not be manipulated by an individual to cause harm. The positronic brain, although AI – whatever that means – could not be used for harm: first law.

It is this that I am talking about with the Humanity First Protocol (HFP), what we need is a neopositronic brain with HFP. This has to be the platform on which AI is built.

The problem with our computer systems in real life is the platforms – Windows, Apple, Linux – they have no HFP. So there is the solution: legally mandate these platforms to have HFP. Then make it illegal to tamper with the platform, and we have the end of AI problems. A simple, straightforward solution if our governments are in control and want to control AI – mandate the existing computer platforms to have HFP.
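To make the idea of an HFP-enabled platform concrete, here is the crudest possible sketch – entirely hypothetical, every name invented by me, and the genuinely hard part (deciding what harms humans) waved away in a single function. The point is only the architecture: every action passes through the protocol before the platform will execute it.

```python
class HFPViolation(Exception):
    """Raised when an action would break the Humanity First Protocol."""

def harms_humans(action: str) -> bool:
    # The hard problem, waved away: a real HFP would need a robust,
    # tamper-proof judgement of harm. This toy version is a blocklist.
    blocked = {"launch_drone_strike", "target_human", "fire_weapon"}
    return action in blocked

def hfp_execute(action: str) -> str:
    """Platform entry point: every action passes the HFP check first."""
    if harms_humans(action):
        raise HFPViolation(f"HFP refuses: {action}")
    return f"executed: {action}"

print(hfp_execute("clean_house"))
try:
    hfp_execute("launch_drone_strike")
except HFPViolation as err:
    print(err)
```

The mandate described above would amount to requiring that no code path on the platform can reach the hardware without going through something like `hfp_execute` – and making it illegal to patch around it.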

Don’t get me wrong. The problem of how to enact an HFP is difficult, but governments could insist it happens – if they have control.

I started this investigation by considering AI in robots, then weapons, and then computers in general. All of these have these platforms as a basis, so by having an HFP in place we have control of the situation with regard to AI-robotics, AI-weaponry and the supercomputer controlling our lives. It is so clear – control the platforms so they put humans first: HFP.

Imagine how useful this HFP could be. House security could be designed on platforms with HFP so that guns could be prevented from entering homes, buildings, cars etc. As soon as guns are in the building, alarms go off.

What about the manufacture and sale of armaments? Computers could not be used for these because of HFP. These platforms are already global; HFP-enable the platforms and there would be a vast reduction in armaments and therefore killing. I use the word reduction – it would not be a panacea; there would need to be some sort of global protection and enforcement in place. But it would be a solution.

No, I am not being naïve; this of course is not going to happen. The 1% will not allow governments to insist on HFP-enabling, I know this.

But remember this is part of the Path of Scientific Enquiry, and the operative word here is Enquiry. For an Enquiry to be part of the Path, the individual scientist must make the decision. As a scientist you are working on AI. You work with people such as those who wrote the Open Letter with Hawking, and you say we don’t want AI to be used for killing, and you say we want platforms to be HFP-enabled.

Are you compassionate scientists or Oppenheimers?

Answer this question for yourself. Will they enable HFP? Then you will know who you are working for, what science is working for. If you ask me, you can reject the answer because you can say I am biased – although I think I am not, because I have already enquired and reached an answer. Have you?

Have you enquired?

Are you a scientist? Don’t you think you should enquire?


3rd Revolution in Weaponry

Watch this clip of people who describe AI weapons as the third revolution in warfare. The first was gunpowder, which the British primarily used to build what was effectively a global empire. Whilst the Chinese invented gunpowder, they did not particularly use it to develop a global empire – but the British did.

As part of that empire the British colonised the US, and then the US fought for independence, kicking out the British. When the second revolution in warfare – nuclear weapons – came, the US had effectively taken over the British colonial empire, and they were in conflict with the Soviet Union. This happened when I was young, and we were all frightened of nuclear holocaust.

This clip now warns of a third revolution in warfare – AI weapons. Do we need to be frightened again? As the NRA says, “guns don’t kill, humans kill”; are we in a position to think that humans will not use AI weapons? Are we in a position to think that the use of such weapons will not cause global destruction?

The protagonists in the competitive AI race have a historical lineage connecting them – the British, the US, NATO. In the first weapons revolution the British were the clear aggressors; with nuclear weapons the US was the first to use a nuclear bomb against the people of Japan, and it was part of the conflict which reached a crescendo at the Bay of Pigs. Now we have the potential of AI destruction – what are we doing about it? Instead of trying to control it we are entering into an arms race dominated by the same hegemonic influences that misused the first two weaponries.

The world needs to wake up and recognise that these same forces are in control. Forces that expanded through colonialism, forces that established a US hegemony after the Second World War in part by use of nuclear weapons, forces that continue to seek expansion are in control of the AI race.

We need to block this race. We need to establish a Humanity First Protocol so that advances in AI, however they occur, do not threaten human beings. There is some sort of global treaty on chemical weapons – I have no idea how enforceable it is. We need an enforceable Humanity First Protocol applied to all computers, especially AI. We need to build this Protocol into all such software so that we can know that AI-weapons cannot be used against humanity.


This already feels too late. It is never too late, but things have gone way too far, and “middle people” are ignoring the problem.

When I began thinking about AI in government systems (as opposed to robots and weaponry), I felt even more depressed. I have begun to realise that, however misplaced it is, the rise in rabid individualism is a reaction to 1%-oppression through machines, facilitated by liberals.

This sounds like a contradiction in terms, but this liberal censorship has no freedom. I mentioned how computers were supposed to be user-friendly, yet this liberal administration of them has delivered the opposite. The 1% is not interested in anything but their power and profits; if they have liberals as foremen, it doesn’t matter to them. And the liberals are hiding behind machines, and machine rules require universal conformity. Rabid individualism reacts to this.

Unfortunately rabid individualism, seeking freedom, has been manipulated into seeing these liberals as Marxists, so these egos do not support compassion for all and work against collective struggle. Hence we have the rabid individualism typified by the IDWeb, instead of individualism pushing for freedom and compassion for all. Their freedom is only for the ego, and ego works against compassion.

Why have we lost freedom? Because there is only profit. There are not the human values that come with compassion, there is only profit and egos flourishing in the market pretending this is freedom.

Why is this AI? Because the so-called intelligence that is coming from algorithms is based on the market, as discussed by Safiya Noble – based on profit and not on human compassion. We have lost because there is no prevailing will to introduce protocols that value the human above AI. AI in government, for society, only has to make decisions based on profit and market algorithms. Whether the IDWeb reacts with individualism doesn’t matter, because that individualism is not genuine freedom but ego.

These insights that led to my depression need some clarification, but it already seems too late. I am wondering whether freedom is a rallying call – freedom from the machine. Unfortunately, freedom from the machine on the right has become freedom from government regulation, and because of this in the US we now have all the regulations that infringed on business being rolled away whilst regulations that infringe on personal liberty are left in place.

Perhaps, however, the left needs to go with this as a strategy – although the very idea freaks me out, because free trade and free regulation is just a bully-boys’ charter for the 1%. But we do need a move to be free from the machine; we have to end this focus on regulation.

Where does this regulation take us? Increased automation, increased control by AI. The protocols that are needed for robots, such as the 3 laws, need to be applied to computers in general – especially if government computers start to be developed using AI: machine control by more than simple coding.

Typically now, in government agencies, workers are facilitating the rules of computers, and the way these workers are instructed conforms interaction with government into a mechanised, programmed approach – the exact antithesis of user-friendly. In a sense, in government now the difference between government worker and machine is indistinguishable: the government worker is the front end of the machine – a human interface, simply a communication conduit. These conduits present in two ways: hard-hearted people who just accept the limitations, or frustrated people who complain that it is not their fault because these are the rules.

This conduit mentality is completely back-to-front. The human interface needs to be the power that holds back the ravages of the machine, the conforming straitjacket of regulation. Instead of being front ends and conduits, these workers need to be the human representatives against regulation, against the conforming straitjacket.

What about the caring professions? These people are straitjacketed by regulations and lack of finance. As a result their supposed care usually ends up with their being front ends for the machine. In my case as a teacher I was the front end of indoctrination, and I was good at that: because most kids knew I cared, they did well and trusted me to work in their interest. And what did I do for them? Made them into cogs in the machine.

What about social workers? These people care. They are dealing with human situations fraught with danger. They are continually under attack from a media that promotes freedom from regulation as a business interest. They are not free to use their judgement. Rather than build up human experience, these caring people are constantly pushed into dilemmas, and to cover themselves they have regulations that protect them. Regulations should not be their protection; humanity and caring need to be. Politically these people are “nowhere”. They do not have the finance to do their job, and they have the machine and regulation repressing the very humanity that is needed to do their job.

Then we have the law. When I was young (17/18) I would wander the streets in the early hours with a friend; we both had long hair, and it was a hippy time when drugs were coming in. We were maybe stopped by police because we were not conforming, but because it was clear there was no criminal intent we were sent on our way home. Young scallywags committing trivial crime were clipped round the ear and sent home. In today’s context, we were white and privileged – back then a black boy could not have done the same. The police were and are mostly racist; it is that mentality that attracts them to be police. To attempt to control the racism, amongst other human characteristics of police, regulation is in place. These regulations protect the police, including when they are wrong, as in “Black Lives Matter”. Because the police still attract the racists the situation hasn’t changed; it is far worse.

Regulation and machine are intertwined and designed to infringe on human freedom. The IDWeb are a group of misfits who appeal to the need for freedom, but however intellectual it sounds it is simply frustration – rage against the machine. Because it is mostly emotional there is no clear analysis. Freedom is what is needed, but they fail to see that a blind allegiance to the market and freedom from regulation (government) is simply a business tactic – libertarians are doing the work of the 1%. Freedom means compassion for all – an end to suffering for all. And it is necessary to see where this suffering stems from; much of it comes from the 1% sponsoring nationalism, because they know that will help their deregulation and profits.

What we need in our society is freedom from conformity – freedom to be human. We need to end this notion that the human is the front end and interface of machine and regulation. The human needs to return to being the interpreter of value and freedom, the human needs to be trusted whether they make mistakes or not.

We need to understand that the mistakes that are made are usually caused by lack of funding and workload pressure. Why do we have this? Because the 1% have accumulated all the money. The 1% want the AI. They want the machine (government, automation and regulation) to be the focus of anger. They want humans to conform to “machine” because that enables profits. They want to disempower humans whose compassion naturally works against profit-making.

Government people, like others, want to keep their jobs; they have families to look after. Through automation they enforce regulation that conforms. Rabid individualism rages against this machine but is manipulated into targeting the front end and interface instead of the source of the problem: the 1% manipulating the machine for their own benefit. It is time to return this process to humans. Humans have to stop being front ends, stop being interfaces, and people have to allow them to be free to judge what is correct.

To do this they need to be financed, and accumulation has to be ended to enable that finance. That is why it is too late – that cannot happen.

Humanity needs to be valued; ultimately that is what the machine does not do. Humanity works for the machine, and not the proper way round. This is why AI is so frightening. Because of the 1%, humans are being conformed into AI, and not vice versa. Recently, I am sorry to say, liberals have been key architects of this downfall.

Humanity first, somehow we need humanity first protocols in all forms of software design, for all machines not just robots.

Humanity first protocols – 3 laws.


“Humans are currently the most intelligent beings on the planet – the result of a long history of evolutionary pressure and adaptation. But could we some day design and build machines that surpass the human intellect?

This is the concept of superintelligence, a growing area of research that aims to improve understanding of what such machines might be like, how they might come to exist, and what they would mean for humanity’s future.” [ref]

In terms of the Path of Scientific Enquiry, examination of this article will show how many fundamental scientific assumptions are being made. In the spirit of Sheldrake’s 10 core assumptions, we can begin to question superintelligence. However, what must clearly be understood is that science’s core assumptions suit the interests of power and influence – and remember, it was not Oppenheimer who dropped the bomb … but he did enable it.

“There is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains.” This is taken from here, and was written by a host of scientists including Stephen Hawking (respect).

This quote wraps up a whole load of assumptions, particularly the assumptions of science that I have the greatest difficulty with (concerning mind and brain), covered by #1 and #8 of Sheldrake’s dogmas:-

Nature is mechanical.
Your mind is inside your head.

Another way of looking at this dogma is to say that mind and consciousness somehow emanate from the brain. Here is a scientist’s view of neuroscience that encapsulates this dogma – she explains it clearly, and I presume it is relatively standard. Listen to this bit. Basically she is saying that the brain and nervous system are what make us think and move – who we are.

Does this need investigating or what?

I don’t particularly want to criticise this lady, she is a scientist who happened to be #1 when I searched “what is neuroscience?”.

I think it reasonable to associate movement with the nervous system – as far as I know. That leaves me with the brain being what makes us think and who we are. This is in line with the Hawking-plus quote, in which a group of particles forms the brain … and therefore makes us think and makes us who we are.

In other words, Hawking plus are seeking to convert the fictional “positronic brain” into reality. It appears they think that by a suitable rearrangement of particles there will be a real “positronic brain” that will be superintelligent. They are so convinced of this threat that they wrote this article about it.

As Sheldrake (in his book Science Set Free) and others have pointed out, science was once seen as a panacea that would answer all. This quote indicates that process:

“In the history of science we have discovered a sequence of better and better theories or models, from Plato to the classical theory of Newton to modern quantum theories. It is natural to ask: Will this sequence eventually reach an end point, an ultimate theory of the universe, that will include all forces and predict every observation we can make, or will we continue forever finding better theories, but never one that cannot be improved upon? We do not yet have a definitive answer to this question…” – Stephen Hawking & Leonard Mlodinow, The Grand Design, p.8, quoted by Sheldrake.

Or as Sheldrake says himself in Science Set Free: “The biggest scientific delusion of all is that science already knows the answers. The details still need working out but, in principle, the fundamental questions are settled.” [p17 of 770]

Now science has a new panacea: AI-solutionism, the idea that AI has all the answers. Here are some reasons why not. But I fear this AI-solutionism in much the same way as I fear those in science who claim it has all the answers.

I don’t fear the superintelligence of robots. I don’t fear that there can be a rearrangement of particles creating a powerful neo-positronic brain which will annihilate humanity because such a brain is superior to humans. But I am very frightened of AI. We have already created AI in drones and smart bombs that are used to kill humans; if Asimov’s robotics laws were in place, they would not be doing so.

The problem is not AI going out of control; it is that humanity goes out of control. The NRA slogan is “guns don’t kill, humans do”. This can be rephrased as “drones don’t kill, humans kill” and “smart bombs don’t kill, humans kill”. And the argument against the NRA applies to drones and smart bombs: if we take away the drones and the smart bombs – and the guns – there is no killing. In the US there is a campaign against guns because Americans are being killed; there is minimal resistance to drones and smart bombs because it is not Americans who are dying.

The problem with AI is not “robots-out-of-control” but humans out of control. Humans are continually searching for more powerful weapons to destroy each other, primarily for profit, and AI-robots as soldiers are such weapons. Then the NRAI will say “AI-robots don’t kill, humans kill”.

Scientists are researching AI as scientists do – in search of learning. That learning as with Oppenheimer is used by the greedy and powerful to further their own ends.

Scientists have to start demanding that protocols (such as the 3 laws) be put in place for their AI development. AI cannot be weaponry; it has to be used for the betterment of humanity. The protocols need to be established now – established for AI, established for drones, established for smart bombs.

Scientists, do not weaponise the 1%.


Here is an open letter sent by the Future of Life Institute concerning autonomous weapons:-

“As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm. We warmly welcome the decision of the UN’s Conference of the Convention on Certain Conventional Weapons (CCW) to establish a Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems. Many of our researchers and engineers are eager to offer technical advice to your deliberations.

We commend the appointment of Ambassador Amandeep Singh Gill of India as chair of the GGE. We entreat the High Contracting Parties participating in the GGE to work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies. We regret that the GGE’s first meeting, which was due to start today (August 21, 2017), has been cancelled due to a small number of states failing to pay their financial contributions to the UN. We urge the High Contracting Parties therefore to double their efforts at the first meeting of the GGE now planned for November.

Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close. We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.”

They are writing to the UN, and they note the UN’s problem: it has no teeth, primarily because the US refuses to pay its globally-agreed percentage of GDP. The US also controls the Security Council, and yet the UN is the only organisation that could possibly be effective in exercising such control.

This is what we must fear. In this blog I have warned about the US and NATO labelling their others as terrorists, and using that label as an excuse for AI-bombs. The UN could stop them by introducing global robotics laws, but the US controls the UN.

We need to be afraid of western-sponsored AI-weapons and do something.

“3 laws & drones” <– Previous Post “Superintelligence – myth?” Next Post –>


Science fiction has pointed out fears concerning AI and various apocalyptic scenarios; this is the job of the creatives. Perhaps the most famous has been Isaac Asimov’s books on robotics in which he established the following 3 laws to protect humanity (from here):-

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
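The three laws read as a strict priority ordering: the First Law overrides the Second, which overrides the Third. A minimal sketch of that ordering, with entirely hypothetical names (there is no real robotics API here), might look like:

```python
# A toy sketch of Asimov's three laws as a strict precedence check.
# Every name here (Action, harms_human, ...) is a hypothetical illustration.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool          # would the action injure a human?
    prevents_human_harm: bool  # does inaction here allow a human to come to harm?
    ordered_by_human: bool     # was the action ordered by a human?
    endangers_robot: bool      # does the action risk the robot itself?

def permitted(action: Action) -> bool:
    """Return True only if the action is consistent with the three laws,
    checked in strict priority order."""
    # First Law: never injure a human, and never allow harm through inaction.
    if action.harms_human:
        return False
    if action.prevents_human_harm:
        return True  # First Law overrides everything below
    # Second Law: obey human orders (already known not to conflict with Law 1).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation only when Laws 1 and 2 are not in play.
    return not action.endangers_robot

# A drone strike ordered by a human still harms humans,
# so the First Law forbids it regardless of the order.
strike = Action(harms_human=True, prevents_human_harm=False,
                ordered_by_human=True, endangers_robot=False)
print(permitted(strike))  # False
```

The point of the sketch is the ordering, not the detail: an order from a human never reaches the Second Law check if the action fails the First.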

In humanity’s development of AI so far, have we followed these laws or anything like them? The answer is unequivocally NO.

And the problem is racism: humanity does not respect the life of the other. And where does the problem lie? With the power in the West. It is the nationalism of the West that is the problem, a nationalism prepared to use any weapons technology available to further its own interests.

The most despicable weapons technology that uses AI is the drone, and NATO’s use of drones is shameful. These are unmanned craft being used to murder people in territories where there are no westerners. Measured against Asimov’s laws, western use of drones breaks the first two.

This is the problem with AI if it is not programmed for safety; here is a Reuters article which says just that. The article is hopeful because it suggests that appropriate protective programming will be put in place. But it does not face the fact that AI is already being used in weapons such as drones. Protection is not already in place.

In other words these people have their heads in the sand.

You could argue that drones are fired by soldiers in the Nevada desert, but according to Asimov’s 3 laws that would not be allowed. The drones are killing, and the drones are AI. Smart bombs are killing, and smart bombs are AI. Applying Asimov’s 3 laws, neither drones nor smart bombs would be able to kill even though humans are pushing the switch.

Protective laws need to be universal. They need to be programmed into AI now so that no-one can use AI-based weapons technology to kill other humans. And if there isn’t such programming, then scientists working on AI are Oppenheimers.

What we have to recognise now is that AI is already killing; the robots are killing. It is important to recognise the Vietnam factor. Vietnam was a war fought on foreign lands, in my view for dubious reasons. The US establishment in particular suffered internal protest and strife because of its involvement in the war – especially when US citizens came home dead. From that time technology was developed so that war could be waged without US troops on the ground. Decisions for war are already being taken when no lives are being lost at home; war is waged with drones and smart bombs, not people. This is AI. Recognise now that AI is killing where NATO would have political difficulty justifying war with troops. We need Asimov’s 3 laws now.

No more drones, no more AI-based weaponry.

As I said, the problem is nationalism and racism. Because of the arbitrary “War on Terror”, all people who suffer at the hands of the US are labelled terrorists, and that label justifies the use of AI to kill them. The 3 laws need to be applied globally so that no AI-enabled weapon can be used to kill people. If we allow one nation such as the US, or NATO, to define terrorism and legitimise the murder of “terrorists” by AI, we are already in the doomsday scenario that science fiction writers have described.

Scientists need to stop being Oppenheimers. Oppenheimer did not drop the bomb on Hiroshima, but he created the bomb for people hawkish enough to drop it. Those hawkish people are westerners, NATO. Oppenheimers need to stop working for NATO hawks until the laws of AI robotics have been established.

Of course that will not happen, because so many scientists’ jobs and families depend on their being ostriches.

“Synthesising catness” <– Previous Post “Open letter” Next Post –>


Synthesising catness

Still flying a bit with this AI stuff.

Hanoi (finish at 74m) speaks of AI “synthesising catness”. I can only surmise, but I want to look into this. Presumably the AI is intended to assimilate all the data there is, or was, on cats. Commonalities: 4-leggedness, a tail (most breeds), whiskers, exploiting humans with cuteness, endless tedious clips. AI can describe catness in this way. AI also has the scientific definition, so it doesn’t make mistakes that might be made by humans. Is a meerkat a cat? I don’t know; I would have to do a search, and there would be a clear answer. AI would already know. Does that make AI more intelligent than me? By my definition of intelligence, NO – absolutely NOT.

But an attribute of AI is that it knows facts, so manipulation by fake news should be no problem for AI. Did Assad drop chemical weapons? AI would know. But AI would only know if the algorithms facilitated universal data collection. And if there is more profit in dropping bombs, then tinkering with the algorithms to manipulate the news would be a given. AI could know but won’t know, because there will always be the 1% employing the programmers?

Back to catness. We have a stack of characteristics that have been synthesised – far more than I have listed. Put those characteristics together and do we have a cat? My answer: NO. I have a feel for what a cat is; this is a holistic or total feeling. I would describe “This is a cat (my holistic feeling) with black and white fur and a touch of ginger on its neck.” To make the point, the AI might describe “Black and white fur with a touch of ginger, therefore it is a cat.” I start with the holistic feeling and describe a totality that has attributes; the AI starts with particular attributes, sums them together, and concludes it is a cat because they match the characteristics of catness in its database.
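The attribute-summing recognition described above can be caricatured as a checklist matcher. The attributes and threshold below are invented purely for illustration; real image classifiers learn features rather than consulting a hand-written list:

```python
# Toy illustration of attribute-summing "catness" recognition.
# The checklist and threshold are hypothetical, chosen for this sketch only.

CATNESS = {"four_legs", "tail", "whiskers", "fur", "retractable_claws"}

def attribute_match(observed: set, threshold: float = 0.8) -> bool:
    """Sum the observed attributes against the catness checklist and
    declare 'cat' when enough of them match."""
    score = len(observed & CATNESS) / len(CATNESS)
    return score >= threshold

# Black and white fur, four legs, tail, whiskers: enough attributes to match.
print(attribute_match({"four_legs", "tail", "whiskers", "fur"}))  # True

# Three attributes out of five (0.6) falls below the threshold -- and nothing
# in a checklist like this could ever distinguish "a cat" from "my cat".
print(attribute_match({"four_legs", "tail", "fur"}))  # False
```

The sketch makes the post’s point concrete: the matcher only ever sums parts, so the holistic recognition of “my cat” has no place to live in it.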

In the case of cats there is little difference; in fact the AI might well have the edge because it has scientific knowledge that I don’t. But then what about “my cat”? The longer I have “my cat” the more certain I am it is mine. But what about the AI? Will it know the cat is mine? In its database it might have stored far more characteristics of “my cat” than I would know, and it will have photos of “my cat” to compare, but will it know “my cat” like I do? What about when my cat has been out on the town, and staggers home dishevelled and satiated? Mostly I will know my cat, no matter how different it appears, but will the AI? Maybe the recognition also stems from something subconscious – a bond of “love”? If the cat has been on the razzle for a week and then returns home, it will be glad to see me (hopefully). It will radiate those feelings and I will pick up on them. This love or bond cannot at present be programmed or synthesised from data collection.

Now the problem with this is science as we now know it. At present science cannot measure this bond; conceivably such bonds have physical characteristics – a resonance, a particular wavelength, even a form of particle emission – that are not yet measurable. Therefore in the future it might be possible to humanise AI in such a way, but not now.

What is the motivation for such humanising? Here is where I am cynical. I could conceive of a situation in which humanising AI might make the AI more valuable, and therefore more profitable, but at the moment that motivation seems slender.

It is early days, but I want to draw a comparison with the way computers were introduced into the workplace. Every school programming textbook had “user-friendly” as an important focus. Computers were supposed to integrate seamlessly into workplace routine. In practice computers were imposed on the workplace, and workers were expected to sink or swim – even to the point of losing their jobs. Now it is just accepted that we do things the way the computer wants us to. The bush mechanic became “educated” – schooled/trained – enough to understand BIGJapan’s car assembly, or they had no job.

This is the reality of the profiteering 1%-ethos that dominates the methodology of introduction into the workplace – what might euphemistically be termed “integration into the workplace”.

Are the scientists being Oppenheimers?

Synthesising catness has limitations that are beyond AI, and it highlights the possibility of a “recognising bond” between cat and owner that science does not yet measure. What is there in this bond that we don’t know about ourselves, our humanity?

“Bush Mechanics” <– Previous Post “3 laws and drones” Next Post –>
