Archive for May, 2018


3rd Revolution in Weaponry

Watch this clip of people who describe AI weapons as the third revolution in warfare. The first was gunpowder, which the British in particular used to build what was effectively a global empire. Whilst the Chinese invented gunpowder, they did not particularly use it to build a global empire, but the British did.

As part of that empire the British colonised the US, and then the US fought for independence, kicking out the British. When the second revolution in warfare – nuclear weapons – came, the US had effectively taken over the British colonial empire, and they were in conflict with the Soviet Union. This happened when I was young, and we were all frightened of nuclear holocaust.

This clip now warns of a third revolution in warfare – AI weapons. Do we need to be frightened again? As the NRA says, “guns don’t kill, humans kill”. Are we in a position to think that humans will not use AI weapons? Are we in a position to think that the use of such weapons will not cause global destruction?

The protagonists in the competitive AI race have a historical lineage connecting them – the British, the US, NATO. In the first weapons revolution the British were the clear aggressors; with nuclear weapons the US were the first to use a nuclear bomb, against the people of Japan, and they were part of the conflict which reached a crescendo in the Cuban Missile Crisis. Now we have the potential of AI destruction – what are we doing about it? Instead of trying to control it we are entering into an arms race dominated by the same hegemonic influences that misused the first two weaponries.

The world needs to wake up and recognise that these same forces are in control. Forces that expanded through colonialism, forces that established a US hegemony after the Second World War in part by use of nuclear weapons, forces that continue to seek expansion are in control of the AI race.

We need to block this race. We need to establish a Humanity First Protocol whereby advances in AI, however they occur, do not threaten human beings. There is a global treaty on chemical weapons – the Chemical Weapons Convention – though I have no idea how enforceable it is. We need an enforceable Humanity First Protocol to be applied to all computers, especially AI. We need to build this Protocol into all such software so that we can know that AI weapons cannot be used against humanity.

“HFP – too late” <– Previous Post “HFP Platforms” Next Post –>

Books:- Treatise, Wai Zandtao Scifi, Matriellez Education.

Blogs:- Ginsukapaapdee, Matriellez, Zandtao.



This already feels too late. It is never too late but things have gone way too far, and “middle people” are ignoring the problem.

When I began thinking about AI in government systems (as opposed to robots and weaponry), I felt even more depressed. I have begun to realise that, however misplaced it is, the rise in rabid individualism is a reaction to 1%-oppression through machines, facilitated by liberals.

This sounds like a contradiction in terms, but this liberal censorship allows no freedom. I mentioned how computers were supposed to be user-friendly, yet it is liberals who have overseen their introduction. The 1% is not interested in anything but their power and profits; if they have liberals as foremen, it doesn’t matter to them. And the liberals are hiding behind machines, whose rules require universal conformity. Rabid individualism reacts to this.

Unfortunately rabid individualism seeking freedom has been manipulated into seeing these liberals as Marxists, so these egos do not support compassion for all and work against collective struggle. Hence we have the rabid individualism typified by the IDWeb, instead of individualism pushing for freedom and compassion for all. Their freedom is only for the ego, and ego works against compassion.

Why have we lost freedom? Because there is only profit. There are none of the human values that come with compassion; there is only profit, and egos flourishing in the market pretending this is freedom.

Why is this AI? Because the so-called intelligence that is coming from algorithms is based on the market, as discussed by Safiya Noble – based on profit and not on human compassion. We have lost because there is no prevailing will to introduce protocols that value what is human above AI. AI in government, for society, only has to make decisions based on profit and market algorithms. Whether the IDWeb reacts with individualism doesn’t matter, because that individualism is not genuine freedom but ego.

These insights that led to my depression need some clarification, but it already seems too late. I am wondering whether freedom is a rallying call – freedom from the machine. Unfortunately, freedom from the machine on the right has become freedom from government regulation, and because of this in the US we now have the regulations that infringed on business being rolled back, whilst regulations that infringe on personal liberty are left in place.

Perhaps, however, the left needs to go with this as a strategy – although the very idea freaks me out, because free trade and deregulation are just a bully-boys’ charter for the 1%. But we do need a move to be free from the machine; we have to end this focus on regulation.

Where does this regulation take us? Increased automation, increased control by AI. The protocols that are needed for robots, such as the 3 laws, need to be applied to computers in general, especially if government computers start to be developed using AI – machine control by more than simple coding.

Typically now in government agencies, workers are facilitating the rules of computers, and the way these workers are being instructed is a methodology that reduces interaction with government to a mechanised, programmed approach – the exact antithesis of user-friendly. In a sense, in government now the worker and the machine are indistinguishable; the government worker is the front end of the machine – a human interface, simply a communication conduit. These conduits present in two ways: hard-hearted people who just accept the limitations, or frustrated people who complain that it is not their fault because these are the rules.

This conduit mentality is completely back-to-front; the human interface needs to be the power that holds back the ravages of the machine, the conforming straitjacket of regulation. Instead of being front ends and conduits, these workers need to be the human representatives against regulation, against the conforming straitjacket.

What about the caring professions? These people are straitjacketed by regulations and lack of finance. As a result their supposed care usually ends up with their being front ends for the machine. In my case as a teacher I was the front end of indoctrination, and I was good at it: because most kids knew I cared, they did well and trusted me to work in their interest. And what did I do for them? Made them into cogs in the machine.

What about social workers? These people care. They are dealing with human situations fraught with danger. They are continually under attack from a media promoting freedom from regulation as a business interest. They are not free to use their judgement. Rather than build up human experience, these caring people are constantly pushed into dilemmas, and to cover themselves they have regulations that protect them. Regulations should not be their protection; humanity and caring need to be. Politically these people are “nowhere”. They do not have the finance to do their job, and they have the machine and regulation repressing the very humanity that is needed to do their job.

Then we have the law. When I was young (17/18) I would wander the streets in the early hours with a friend; we both had long hair, and it was a hippy time when drugs were coming in. We were sometimes stopped by police because we were not conforming, but because it was clear there was no criminal intent we were sent on our way home. Young scallywags committing trivial crime were clipped round the ear and sent home. In today’s context we were white and privileged. Back then a black boy could not have done the same. The police were and are mostly racist; it is that mentality that attracts them to be police. Regulation is in place to attempt to control the racism, amongst other human characteristics of police. These regulations protect the police even when they are wrong, as “Black Lives Matter” has shown. Because the police still attract the racists the situation hasn’t changed; it is far worse.

Regulation and machine are intertwined, and designed to infringe on human freedom. The IDWeb are a group of misfits who appeal to the need for freedom, but however intellectual it sounds it is simply frustration – rage against the machine. Because it is mostly emotional there is no clear analysis. Freedom is what is needed, but they fail to see that blind allegiance to the market and freedom from regulation (government) is simply a business tactic – libertarians are doing the work of the 1%. Freedom means compassion for all – an end to suffering for all. And it is necessary to see where this suffering stems from. Much of it comes from the 1% sponsoring nationalism, because they know that will help their deregulation and profits.

What we need in our society is freedom from conformity – freedom to be human. We need to end this notion that the human is the front end and interface of machine and regulation. The human needs to return to being the interpreter of value and freedom, the human needs to be trusted whether they make mistakes or not.

We need to understand that the mistakes that are made are usually caused by lack of funding and workload pressure. Why do we have this? Because the 1% have accumulated all the money. The 1% want the AI. They want the machine (government, automation and regulation) to be the focus of anger. They want humans to conform to “machine” because that enables profits. They want to disempower humans whose compassion naturally works against profit-making.

Government people, like everyone else, want to keep their jobs; they have families to look after. Through automation they enforce regulation that conforms. Rabid individualism rages against this machine, but is manipulated into targeting the front end and interface instead of the source of the problem: the 1%, who manipulate the machine for their own benefit. It is time to return this process to humans. Humans have to stop being front ends, stop being interfaces, and people have to allow them to be free to judge what is correct.

To do this they need to be financed, and accumulation has to be ended to enable that finance. That is why it is too late: that cannot happen.

Humanity needs to be valued; ultimately that is what the machine does not do. Humanity works for the machine, and not the other way round as it should be. This is why AI is so frightening. Because of the 1%, humans are being conformed to AI, and not vice versa. Recently, I am sorry to say, liberals have been key architects of this downfall.

Humanity first – somehow we need humanity first protocols in all forms of software design, for all machines, not just robots.

Humanity first protocols – 3 laws.

“Superintelligence – myth?” <– Previous Post “3rd Revolutionary Weaponry” Next Post –>



“Humans are currently the most intelligent beings on the planet – the result of a long history of evolutionary pressure and adaptation. But could we some day design and build machines that surpass the human intellect?

This is the concept of superintelligence, a growing area of research that aims to improve understanding of what such machines might be like, how they might come to exist, and what they would mean for humanity’s future.” [ref]

In terms of the Path of Scientific Enquiry, examination of this article will reveal how many fundamental scientific assumptions are being made. In the spirit of Sheldrake’s 10 core assumptions, we can begin to question superintelligence. What must clearly be understood, however, is that science’s core assumptions suit the interests of power and influence – and remember, it was not Oppenheimer who dropped the bomb … but he did enable it.

“There is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains.” This is taken from here, and was written by a host of scientists including Stephen Hawking (respect).

This quote wraps up a whole load of assumptions, particularly the assumptions of science that I have the greatest difficulty with (concerning mind and brain), covered by #1 and #8 of Sheldrake’s dogmas:-

Nature is mechanical.
Your mind is inside your head.

Another way of looking at this dogma is to say that mind and consciousness somehow emanate from the brain. Here is a scientist’s view of neuroscience that encapsulates this dogma – she explains it clearly, and I presume it is relatively standard. Listen to this bit. Basically she is saying that the brain and nervous system are what make us think and move – who we are.

Does this need investigating or what?

I don’t particularly want to criticise this lady, she is a scientist who happened to be #1 when I searched “what is neuroscience?”.

I think it reasonable to associate movement with the nervous system – as far as I know, this leaves me with the brain being what makes us think and who we are. This is in line with the Hawking plus quote in which a group of particles forms the brain … and therefore makes us think and who we are.

In other words, Hawking plus are seeking to convert the fictional “positronic brain” into reality. It appears they think that by a suitable rearrangement of particles there will be a real “positronic brain” that will be superintelligent. They are so convinced of this threat that they wrote this article about it.

As Sheldrake (in his book Science Set Free) and others have pointed out, science was once seen as a panacea that would answer everything. This quote indicates that process: “In the history of science we have discovered a sequence of better and better theories or models, from Plato to the classical theory of Newton to modern quantum theories. It is natural to ask: Will this sequence eventually reach an end point, an ultimate theory of the universe, that will include all forces and predict every observation we can make, or will we continue forever finding better theories, but never one that cannot be improved upon? We do not yet have a definitive answer to this question…” —Stephen Hawking & Leonard Mlodinow, The Grand Design, p.8, quoted by Sheldrake. Or as Sheldrake says himself in Science Set Free: “The biggest scientific delusion of all is that science already knows the answers. The details still need working out but, in principle, the fundamental questions are settled.” [p17 of 770]

Now science has a new panacea, AI-solutionism: the idea that AI has all the answers. Here are some reasons why not. But I fear this AI-solutionism in much the same way as I fear those in science who claim it has all the answers.

I don’t fear the superintelligence of robots. I don’t fear that there can be a rearrangement of particles creating a powerful neo-positronic brain that will annihilate humanity because it is superior to humans. But I am very frightened of AI. We have already created AI in drones and smart bombs that are used to kill humans; if Asimov’s laws of robotics were in place, they would not be doing so.

The problem is not AI going out of control; it is humanity going out of control. The NRA slogan is “guns don’t kill, humans do”. This can be rephrased as “drones don’t kill, humans kill”, “smart bombs don’t kill, humans kill”. And the argument against the NRA applies to drones and smart bombs: if we take away the drones and the smart bombs – and the guns – there is no killing. In the US there is a campaign against guns because Americans are being killed; there is minimal resistance to drones and smart bombs because it is not Americans who are dying.

The problem with AI is not “robots out of control” but humans out of control. Humans are continually searching for more powerful weapons to destroy each other, primarily for profit. And AI-robots as soldiers are such weapons. Then the NRAI will say, “AI-robots don’t kill, humans kill”.

Scientists are researching AI as scientists do – in search of learning. That learning as with Oppenheimer is used by the greedy and powerful to further their own ends.

Scientists have to start demanding that protocols (such as the 3 laws) be put in place for their AI development. AI cannot be weaponry; it has to be used for the betterment of humanity. The protocols need to be established now – established for AI, established for drones, established for smart bombs.

Scientists, do not weaponise the 1%.

“Open Letter” <– Previous Post “Humanity First Protocols” Next Post –>



Here is an open letter sent by the Future of Life institute concerning automated weapons:-

“As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm. We warmly welcome the decision of the UN’s Conference of the Convention on Certain Conventional Weapons (CCW) to establish a Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems. Many of our researchers and engineers are eager to offer technical advice to your deliberations.

We commend the appointment of Ambassador Amandeep Singh Gill of India as chair of the GGE. We entreat the High Contracting Parties participating in the GGE to work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies. We regret that the GGE’s first meeting, which was due to start today (August 21, 2017), has been cancelled due to a small number of states failing to pay their financial contributions to the UN. We urge the High Contracting Parties therefore to double their efforts at the first meeting of the GGE now planned for November.

Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close. We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.”

They are writing to the UN, and they note the problem with the UN: it has no teeth, primarily because the US refuses to pay its globally-agreed contribution. The US also controls the Security Council, and yet the UN is the only possibly effective organisation for such control.

This is what we must fear. In this blog I warned about the US and NATO labelling their others as terrorists, and using the label of terrorist as an excuse for using AI-bombs. The UN could stop them by introducing global robotics laws but the US controls the UN.

We need to be afraid of western-sponsored AI-weapons and do something.

“3 laws & drones” <– Previous Post “Superintelligence – myth?” Next Post –>


Science fiction has pointed out fears concerning AI and various apocalyptic scenarios; this is the job of the creatives. Perhaps the most famous has been Isaac Asimov’s books on robotics in which he established the following 3 laws to protect humanity (from here):-

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
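The ordering of these laws – each later law yielding to the earlier ones – is essentially a priority veto, which could be sketched as follows. This is purely illustrative: the action flags and the function are my own invention, not any real robotics API.

```python
# Illustrative sketch only: Asimov's three laws as an ordered veto check.
# The boolean action flags below are invented for this example.

def permitted(action):
    """Return True only if the action passes the three laws, checked in priority order."""
    # First Law: may not injure a human, or through inaction allow harm.
    if action.get("harms_human") or action.get("inaction_harms_human"):
        return False
    # Second Law: must obey human orders, unless obeying would break the First Law.
    if action.get("disobeys_order") and not action.get("order_harms_human"):
        return False
    # Third Law: must protect itself, unless that conflicts with Laws One or Two.
    if action.get("abandons_self_protection") and not action.get("self_protection_conflicts"):
        return False
    return True

# A drone strike harms humans, so it fails at the very first check,
# no matter what orders were given.
```

Under this reading, any AI weapon fails at the First Law before orders even enter into it – which is precisely the point of applying the laws beyond robots.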

In humanity’s development of AI so far, have we followed these laws or anything similar? The answer is unequivocally NO.

And the problem is racism: humanity does not have respect for the life of the other. Where does the problem lie? With the power in the West. It is the nationalism of the West that is the problem, a nationalism that is prepared to use any weapons technology possible to further its own interests.

The main despicable weapons technology that uses AI is drones, and NATO’s use of drones is shameful. These are unmanned craft being used to murder people in territories where there are no westerners. If you look at Asimov’s laws, western use of drones breaks the first two.

This is the problem with AI if it is not programmed for safety; here is a Reuters article which says just that. The article is hopeful because it suggests that appropriate protective programming will be put in place. But it does not face the fact that AI is already being used in weapons, such as drones and other technology. Protection is not already in place.

In other words these people have their heads in the sand:-

You could argue that drones are fired by soldiers in the Nevada desert, but according to Asimov’s 3 laws even that would not be allowed. The drones are killing, and the drones are AI. Smart bombs are killing, and smart bombs are AI. Applying Asimov’s 3 laws, neither drones nor smart bombs would be able to kill, even though humans are pushing the switch.

Protective laws need to be universal. They need to be programmed into AI now so that no-one can use AI-based weapons technology to kill other humans. And if there isn’t such programming then scientists working on AI are Oppenheimers:-

What we have to recognise now is that AI is already killing; the robots are killing. It is important to recognise the Vietnam factor. Vietnam was a war fought on foreign lands, in my view for dubious reasons. The US establishment in particular suffered from internal protest and strife because of its involvement in the war – especially when US citizens came home dead. From that time, technology was developed so that war could be waged without involving US troops on the ground. Decisions for war are already being taken when home lives are not being lost; decisions for war are being made using drones and smart bombs – not people. This is AI. Recognise now that AI is killing where politically NATO has difficulty justifying war with troops. We need Asimov’s 3 laws now.

No more drones, no more AI-based weaponry.

As I said, the problem is nationalism and racism. Because of the arbitrary “War on Terror”, all people who suffer at the hands of the US are labelled terrorists, and that justifies the use of AI to kill people. The 3 laws need to be applied globally so that no AI-enabled weapon can be used to kill people. If we allow one nation such as the US, or NATO, to define terrorism and legitimise the murder of terrorists by AI, we are already in the doomsday scenario that science fiction writers have described.

Scientists need to stop being Oppenheimers. Oppenheimer did not drop the bomb on Hiroshima, but he did create the bomb for people hawkish enough to drop such bombs. Those hawkish people are westerners – NATO. Oppenheimers need to stop working for NATO hawks until the laws of AI robotics have been established.

Of course that will not happen, because so many scientists’ jobs and families depend on their being ostriches.

“Synthesising catness” <– Previous Post “Open letter” Next Post –>


Synthesising catness

Still flying a bit with this AI stuff.

Hanoi (finish at 74m) speaks of AI “synthesising catness”. I can only surmise, but I want to look into this. Presumably the AI is intended to assimilate all the data there is, or was, on cats. Commonalities: 4-leggedness, a tail (most breeds), whiskers, exploiting humans with cuteness, endless tedious clips. Catness can be described in this way. The AI also has a scientific definition, so it doesn’t make the mistakes that might be made by humans. Is a meerkat a cat? I don’t know; I would have to do a search, and there would be a clear answer. The AI would already know. Does that make the AI more intelligent than me? By my definition of intelligent, NO – absolutely NOT.

But an attribute of AI is that it knows facts, so there would be no problem of fake-news manipulation with AI. Did Assad drop chemical weapons? AI would know. But AI would only know if the algorithms facilitated universal data collection. And if there is more profit in dropping bombs, then tinkering with the algorithms to manipulate the news would be a given. AI could know, but won’t know, because there will always be the 1% employing the programmers.

Back to catness. We have a stack of characteristics that have been synthesised – far more than I have listed. Put those characteristics together: do we have a cat? My answer: NO. I have a feel for what is a cat. This is a holistic or total feeling. I would describe “This is a cat (my holistic feeling) with black and white fur with a touch of ginger on its neck.” To make a point, the AI might describe “Black and white fur with a touch of ginger, therefore it is a cat.” I start with the holistic feeling; the AI starts with particular attributes which it sums together, concluding it is a cat because the attributes match the characteristics of catness in its database. I describe the cat from a totality that has attributes; the AI recognises attributes, sums them together, and determines from its database that they correspond to a cat.
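This attribute-summing recognition can be caricatured in a few lines of Python. To be clear, this is a deliberately naive sketch of the idea as I have described it, not how any real classifier works; the feature names and the threshold are my own inventions.

```python
# Naive "catness" recogniser: sum matching attributes against a checklist
# and declare "cat" when enough of them match. Feature set and threshold
# are invented for illustration.

CATNESS = {"four_legs", "tail", "whiskers", "fur", "exploits_humans_with_cuteness"}

def is_cat(observed, threshold=0.8):
    """Fraction of catness attributes observed must reach the threshold."""
    matched = len(CATNESS & set(observed))
    return matched / len(CATNESS) >= threshold

# "Black and white fur, four legs, tail, whiskers - therefore it is a cat":
# the machine works from attributes up, never from a holistic feeling down.
```

The point of the caricature is the direction of reasoning: the machine can only sum attributes towards a conclusion, whereas I start from the holistic feeling.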

In the case of cats there is little difference; in fact the AI might well have the edge because it has scientific knowledge that I don’t have. But then what about “my cat”? The longer I have “my cat” the more certain I am it is mine. But what about the AI? Will it know the cat is mine? In its database it might have stored far more characteristics of “my cat” than I would know, and it will have photos of “my cat” to compare, but will it know “my cat” like I do? What about when my cat has been out on the town, and staggers home dishevelled and satiated? Mostly I will know my cat, no matter how different it appears, but will the AI? Maybe the recognition also stems from something subconscious – a bond of “love”? If the cat has been on the razzle for a week and then returns home, it will be glad to see me (hopefully). It will radiate those feelings and I will pick up on them. This love or bond at present cannot be programmed or synthesised from data collection.

Now the problem with this is science as we currently know it. At present science cannot measure this bond. Conceivably such bonds might have physical characteristics – a resonance, a particular wavelength, or even a form of particle emission – none of them as yet measurable. Therefore in the future it might be possible to humanise AI in such a way, but not now.

What is the motivation for such humanising? Here is where I am cynical. I could conceive of a situation in which humanising AI might make the AI more valuable – and therefore more profitable – but at the moment that motivation seems slender.

It is early days, but I want to draw a comparison with the way computers were introduced into the workplace. Every school programming textbook had “user-friendly” as an important focus. Computers were supposed to integrate seamlessly into the workplace routine. In practice computers were imposed on the workplace, and workers were expected to sink or swim – even losing their jobs. Now it is just accepted that we do things the way the computer wants us to. The bush mechanic became “educated” – schooled and trained – enough that they understood BIGJapan’s car assembly, or they had no job.

This is the reality of the profiteering 1%-ethos that dominates the methodology of introduction into the workplace – what might euphemistically be termed “integration into the workplace”.

Are the scientists being Oppenheimer?

Synthesising catness has limitations that are beyond AI, and it highlights the possibility of a “recognising bond” between cat and owner that is not currently measured by science. What is there in this bond that we don’t know about ourselves, about our humanity?

“Bush Mechanics” <– Previous Post “3 laws and drones” Next Post –>


I am a huge Pirsig fan – I even thought of studying with the Liverpool crowd (Anthony McWatt) (I might have started if there had been distance learning, but not now). Pirsig died last year. To begin with his only book was Zen and the Art of Motorcycle Maintenance (ZAMM); I quoted him in my teaching dissertation – loved it (Corgi, 1976). When Lila first came out I couldn’t get into it. Years later I did, and now think it is the better book. I started a thing where I was bouncing off Pirsig – reading a bit then writing – and I might well continue the Pirsig Platform. When I saw he had died, I wanted to show my respect.

In ZAMM Pirsig was questioning something like AI even before there were personal computers (I surmise). Let me explain what I mean. If you read ZAMM there is tons of stuff, and one theme was motorcycle maintenance. I am not mechanical so I didn’t personally get it, but I think my intuition did. He would go on about how you must CARE for your motorbike. He would talk of mechanics who just read the spec, did what they were supposed to, then got stuck and were unable to go anywhere. And there were mechanics for whom it was an art, for whom fixing the motorbike was a feeling of love (my words). They got into it, and the bike was fixed properly; the spec mechanics couldn’t do it.

In the 90s I was in Botswana driving a wreck. I am a getting-from-A-to-B driver; that is what my car means to me – none of Pirsig and his bike. I was also relatively broke so couldn’t afford a decent car, yet apart from the women, driving to game parks was my greatest memory of my time there. This part of ZAMM was never a part of me, even though I think I got it. But the bush mechanics were magical. You would walk into their lots and there would just be a load of junk. But they could fix cars. Of course the cars broke down again, and they could fix them again. These bush guys got ZAMM, and have now been made obsolete.

Now I have a new car, and it is all different. There are maybe car bush guys around in Thailand who could work ZAMM magic, but that is not profit for BIGJapan. They have designed these bush guys out of existence. Bush guys cannot now open the bonnet and tinker the car into working. Under the bonnet is now an intersecting connection of assembly units. If something goes wrong the mechanics find the unit and replace it. These are trained mechanics. BIGJapan trains them with specs. Symptom, spec, change unit. Car fixed. I made a decision to pay this way, and go to the showroom. My car is fine.

I had a motorbike here, and I had a bike bush mechanic – there are lots of them here. I damaged the motorbike by driving it without oil – fool. A bush mechanic pulled it apart, put it back together again, and it worked. Something else happened; he did the same. I had screwed the bike, and he could always bush-fix it. But it was not the way I wanted to drive – not A to B. Now my car is serviced, and A to B is fine. For the big bikes these bike bush guys have already been designed out of existence; as small bikes are driven by the poor in all sorts of condition, the bike bush guys might survive.

The difference between the bush guys and BIGJapan is that I am now paying far more money. BIGJapan has designed people out of the process, and I pay far larger amounts of money to drive. In Botswana I had no money, so I couldn’t do that and I needed the bush guys. In Thailand I didn’t want a car – until I did. I had a small bike and the bike bush guy was fine. I got a bigger bike and there were problems, until I bought a big bike from BIGJapan, paid far more, and had no problems.

There is a relationship between my incompetence, quality bush guys and BIGprofit. And BIGprofit designs out human quality. Money wants quality now, it has no patience, and it doesn’t care about people. If I can pay the big bucks to the showroom it is not my problem that the bush guy’s family struggles.

So where do you go with this? Do I demand rich guys hang around the old car lots I used to waste time in whilst the bush guys tinker? How can I?

I see AI in this way. Human quality will be designed out of the process. Our world will be changed to suit tasks that can be carried out by AI. Do the assembled cars have intelligence? No. Does it work? Yes, if you have the money. Will the AI have intelligence? No. Will it work? Yes, if you have the money.

But where are people in all this? Where are the bush guys with their skills? Earning scraps, whilst a few are trained and fitted in with the assembly units. And the showrooms have pretty girls.

When your boffs are in the labs working on AI for BIGprofit to make human qualities obsolete, are they thinking Oppenheimer?


Books:- Treatise, Wai Zandtao Scifi, Matriellez Education.

Blogs:- Ginsukapaapdee, Matriellez, Zandtao.

Racist AI

A bit of a buzz this morning – curtailed meditation. It started with Brian, Hanoi and artificial intelligence. AI will be part of the path of scientific enquiry as will (might) become clear.

Not true – it started with Safiya Noble and her talk on “Algorithms of Oppression”. In it she strongly indicated how search engines are racist and sexist. Now this is because of how I assume the search engine algorithm works – I don’t know how it works, and there is a certain amount of secrecy about the algorithms because each business wants to be number one for advertising.

Fundamentally search engine rankings are based on what is loosely called the market: the most visited sites are highest up the rankings – assuming nothing too nefarious. Google is an advertising business, and high rankings mean advertising revenue. Rankings are not based on human values, e.g. the most creative site has no component within the Google model.
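To make that concrete, here is a minimal sketch of a popularity-only ranking. This is only an assumption about how such rankings broadly work, not Google’s actual (secret) algorithm, and the site names and visit counts are invented:

```python
# Toy popularity-only ranking: sites ordered purely by visit count.
# An illustrative assumption, not Google's real, secret algorithm.

def rank_by_popularity(visits):
    """Return site names ordered by visit count, highest first."""
    return sorted(visits, key=lambda site: visits[site], reverse=True)

# Hypothetical sites and visit counts
visits = {
    "creative-blog.example": 120,
    "big-retailer.example": 50000,
    "community-site.example": 900,
}

print(rank_by_popularity(visits))
# The heavily marketed site tops the list; creativity plays no part.
```

The point of the sketch is only that nothing in such a model measures creativity or human value – whatever attracts the most traffic, i.e. the most marketing, wins.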

What are the implications of this marketing model? I contend that marketing and bell hooks’ wonderful “white supremacist patriarchal society” are symbiotically linked. I am quite happy to say, as I am not a scientist, that marketing is white supremacist and patriarchal.

But I don’t want to get too bogged down in language because bell’s language turns the ignorant off. “Marketing is the 1%-system” might not raise so many hackles; it is not a huge leap to see that owners of big companies selling products are interested in marketing. And what are the characteristics of the 1%-system? Quite simply, the 1% exploit the 99% for profit. Not a biggy to say that. And it’s not a big jump to then say that marketing exploits the 99%.

For me that does not need explaining, but let’s examine it anyway. I need to buy, so I have to learn about what choices I have. Let’s take food. I remember a TED talk, I think by Dean Ornish, in which he describes the supermarket shelf as having many names but no real difference in food choices. I know as a person who would love to eat 100% organic that I cannot go to a supermarket and do that. In other words in a supermarket I cannot choose to eat healthily; I cannot eat healthily although my choices can improve my diet. Market apologists turn round and say people are not choosing organic so it is not available. To counter that I argue that people are conditioned not to eat what is good for them, and we end in confrontation.

Marketing fashions what we know is available, and the results of search engines are based on the fashioning of the marketers. How can this be changed? Regulate search engines?

Search engines reflect the market, search engines reflect society. And that is why search engines are racist and sexist as Safiya Noble says. The algorithms reflect the way humanity acts. I did not say it reflects the way humanity is, and I will get into that later – that is the nub of AI and the path of scientific enquiry.

AI has a similar problem, and we come to Hanoi. In this clip (finish at 74m), he describes AI learning models as basically models that synthesise universal data collection. To learn about cats the AI trawls the net for all that there is to know about cats and then synthesises some kind of understanding of cats. Put simply, the AI learns from all that there is to know about cats – good and bad. Sounds reasonable.

So what about learning models concerning race? Based on universal data collection in 1%-Trump-world, what kind of racist is our AI machine? What kind of sexist?
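The worry can be shown with a toy “learning model” in which learning is nothing more than counting co-occurrences in the collected data. The mini-corpus below is invented to stand in for “universal data collection”, with the skew deliberately built in; a model trained this way simply inherits it:

```python
from collections import Counter

# Invented mini-corpus standing in for "universal data collection".
# The gender/occupation skew is built into the data on purpose.
corpus = [
    "he is an engineer", "he is an engineer", "he is a nurse",
    "she is a nurse", "she is a nurse", "she is an engineer",
]

# "Learning" here is just counting which pronoun appears with which job.
counts = Counter()
for sentence in corpus:
    words = sentence.split()
    counts[(words[0], words[-1])] += 1

# The model "knows" engineers are mostly "he" and nurses mostly "she" -
# not because that is true, but because that is what the data says.
print(counts[("he", "engineer")], counts[("she", "engineer")])
```

Real learning models are vastly more sophisticated than a counter, but the principle stands: whatever biases sit in 1%-Trump-world data sit in the synthesised “understanding”.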

So what happens to those humans who do not want to be racist or sexist? If we can understand the answer to this, can we then add it to our AI learning model? Racism is conditioning; if we assume we are all born equal then it is simply conditioning. So as humans we unlearn our conditioning to stop being racist. This unlearning process is difficult, and has many stages of understanding including language, removing false delusions and removing institutional biases (institutional racism). But there is still more. Around us racist conditioning continues, so we have to counter the continuing conditioning processes.

But ultimately we need detached minds that will prevent us from sinking back into the conditioning. And where do we get such a detached mind? One way is meditation, although for some such a detached mind could be natural.

Of course not all would agree that this anti-racist process is what all humans should be striving for.

This anti-racist model of unlearning could be written in stages as:-

Language
Removing false delusions
Removing institutional racism
Avoiding reconditioning
Remaining detached

How can AI be conceived with these stages?

Language is easy to stop if we choose.
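At the software level that stage really is simple – something like a blocklist filter. The blocked terms below are placeholders, not a real lexicon, which is an assumption of this sketch; the harder stages that follow have no such easy mechanical answer:

```python
# The "language" stage as a crude blocklist filter. The blocked terms
# are placeholders, not a real lexicon of offensive language.
BLOCKED = {"slura", "slurb"}

def filter_language(text):
    """Replace any blocked word with a marker."""
    return " ".join(
        "[removed]" if word.lower() in BLOCKED else word
        for word in text.split()
    )

print(filter_language("this sentence contains slurA unfortunately"))
# -> "this sentence contains [removed] unfortunately"
```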

Removing false delusions is harder. Some delusions are clearly false: scientific data exists to dismiss the 19th-century racist claim that black brains are smaller. Black people deserve equal opportunity, but some might question whether they get such equality. “Blacks are taking our jobs” becomes a little harder, because of the term “our jobs”. I would argue that the 1% are taking our jobs, and that there are enough jobs for all. So maybe the delusion is caused by institutional racism.

Avoiding reconditioning might be easy for AI. If a pattern of conditioning has been recognised, it would be easy not to follow such conditioning again. But what is conditioning? And there we have a problem, because what I might describe as conditioning is not the same as how others might describe it.

And as for detachment, how can a robot be detached? What is human detachment? As a human I might be able to remain calm and detached, but how do I then describe what I am doing?

If detachment is achieved through meditation then that is impossible for a robot. Natural detachment is difficult to describe, and if it is difficult to describe how can it be “perceived” as AI?

For me the main issues with AI are political questions. I accept that we live in a 1%-system in which humans function as consumers for the express purpose of increasing 1%-profits. If that is accepted, what impact will robots have on consumerism? The second political question is the Oppenheimer question. Scientists might well define AI limitations, but will those limitations be what the 1% want, and will the 1% accept them?

But those political questions are not the main part of the mandtao path of scientific enquiry, although financial and political awareness are always part of any path. For the mandtao path the issue is: what can AI not do?

In “Science Set Free” Sheldrake discusses his 10 core scientific assumptions/questions (here and AppA) and suggests that one scientific assumption is that science can explain everything given time. Extrapolate that, and we ask: can AI perform all that humans can, given time? That enquiry is the path, part of the path of scientific enquiry.

