Still flying a bit with this AI stuff.

Hanoi (finishing at 74m) speaks of AI “synthesising catness”. I can only surmise, but I want to look into this. Presumably the AI is intended to assimilate all the data there is or was on cats. Commonalities – 4-leggedness, a tail (most breeds), whiskers, exploiting humans with cuteness, endless tedious clips. In this way the AI can describe catness. The AI also has a scientific definition, so it doesn’t make the mistakes a human might. Is a meerkat a cat? I don’t know; I would have to do a search, and there would be a clear answer. AI would already know. Does that make AI more intelligent than me? By my definition of intelligence, NO – absolutely NOT.

But an attribute of AI is that it knows facts, so there should be no problem of news being manipulated into fake news with AI. Did Assad drop chemical weapons? AI would know. But AI would only know if the algorithms facilitated universal data collection. And if there is more profit in dropping bombs, then tinkering with the algorithms to manipulate the news would be a given. AI could know but won’t know, because there will always be the 1% employing the programmers?

Back to catness. We have a stack of characteristics that have been synthesised – far more than I have listed. Put those characteristics together and do we have a cat? My answer: NO. I have a feel for what a cat is. This is a holistic or total feeling. I would describe “This is a cat (my holistic feeling) with black and white fur with a touch of ginger on its neck.” To make a point, the AI might describe “Black and white fur with a touch of ginger, therefore it is a cat.” I start with the holistic feeling; the AI starts with particular attributes, which it sums together, concluding it is a cat because the attributes match the characteristics of catness in its database. I describe the cat from a totality that has attributes; the AI recognises attributes, sums them together, and determines from its database that they correspond to a cat.
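The attribute-summing approach described above can be sketched in a few lines of code. This is purely my own toy illustration, not how any real AI system works – the attribute names and the matching threshold are invented for the example:

```python
# Toy sketch of attribute-summing classification: compare observed
# attributes against a stored profile of "catness" and declare a cat
# if enough of them match. All names here are hypothetical.

# Hypothetical database profile of catness.
CATNESS = {"four_legs", "tail", "whiskers", "fur"}

def classify(observed, threshold=0.75):
    """Return 'cat' if the share of matching attributes meets the threshold."""
    matches = len(CATNESS & observed)          # attributes in common
    score = matches / len(CATNESS)             # fraction of the profile matched
    return "cat" if score >= threshold else "not a cat"

# "Black and white fur ... therefore it is a cat."
print(classify({"four_legs", "tail", "whiskers", "fur"}))  # cat
print(classify({"four_legs", "tail"}))                     # not a cat
```

Notice what the sketch cannot express: there is no variable for the holistic feeling, only a sum over attributes – which is exactly the limitation argued here.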

In the case of cats there is little difference; in fact the AI might well have the edge because it has scientific knowledge that I don’t have. But then what about “my cat”? The longer I have “my cat”, the more certain I am it is mine. But what about the AI? Will it know the cat is mine? In its database it might have stored far more characteristics of “my cat” than I would know, and it will have photos of “my cat” to compare, but will it know “my cat” like I do? What about when my cat has been out on the town, and staggers home dishevelled and satiated? Mostly I will know my cat, no matter how different it appears, but will the AI? Maybe the recognition could also stem from something subconscious – a bond of “love”? If the cat has been on the razzle for a week and then returns home, it will be glad to see me (hopefully). It will radiate those feelings and I will pick up on them. This love or bond at present cannot be programmed or synthesised from data collection.

Now the problem with this is science as we now know it. At present science cannot measure this bond. Conceivably such bonds have physical characteristics – a resonance, a particular wavelength, or even a form of particle emission – that are not yet measurable. Therefore in the future it might be possible to humanise AI in such a way – but not now.

What is the motivation for such humanising? Here is where I am cynical. I could conceive of a situation in which humanising AI might make the AI more valuable – and therefore more profitable – but at the moment that motivation seems slender.

It is early days, but I want to draw a comparison with the way computers were introduced into the workplace. Every school programming textbook had “user-friendly” as an important focus. Computers were supposed to integrate seamlessly into the workplace routine. In practice computers were imposed on the workplace, and workers were expected to sink or swim – some even losing their jobs. Now it is just accepted that we do things the way the computer wants us to. The bush mechanic became “educated” – schooled/trained – enough to understand BIGJapan’s car assembly, or they had no job.

This is the reality of the profiteering 1%-ethos that dominates the methodology of introduction into the workplace – what might euphemistically be termed “integration into the workplace”.

Are the scientists being Oppenheimer?

Synthesising catness has limitations that are beyond AI, and this highlights the possibility of a “recognising bond” between cat and owner that is not yet measured by science. What is there in this bond that we don’t know about ourselves, our humanity?


Books:- Treatise, Wai Zandtao Scifi, Matriellez Education.

Blogs:- Ginsukapaapdee, Matriellez, Zandtao.
