[TriEmbed] Intellectual and technical debt and machine learning cautionary article

John Vaughters jvaughters04 at yahoo.com
Fri Aug 30 15:01:52 CDT 2019


 
Pete,
That is a pretty cool article, worth a read for sure, but it really just confirms my view that decision making and response does not create intelligence, even if that decision making and response is based on learning. I want to start by saying there is no right answer on this topic. In all my studies over many years concerning the history of the Holy Grail of computing, creating an intelligent device remains an unsolved problem. This is a classical argument with two sides. One group of thinkers believes we are just a physical device and that all of its operations can be simulated once the processes are known. The second group believes that consciousness is a state that cannot be simulated by known physical processes and may never even be within our reach to understand. I believe both camps agree that without consciousness we do not have intelligence, or at least most believe this to be true. At this point it is easy to go down the path of defining intelligence. A dog has intelligence, but does it have consciousness? We really go down the rat hole if we cannot agree on the definition of intelligence. Because we have failed to understand what consciousness truly is, the question falls into this classical two-sided argument. Truly it falls into the realm of philosophy, which rarely provides answers but does provide directions for study.
To me there has been a watering down of the term AI. Classically it has always meant a consciously thinking entity. Along the path of our many failures to achieve this, people retreat and say, "Well, it can learn and respond, therefore it is a form of AI." Why do they take this stance? Because they failed and need to show they are making progress. So I fall in camp two. We have no idea what consciousness is, and yet we try to create it? This is also the main argument against camp one. However, camp one is very aggressive in its belief that we are merely a physical creature and can be simulated: the proof is in the neural nets, look at the progress we have made. Hubris is the death knell of achievement; the biggest achievers in history are usually dissatisfied with their achievements. So what progress has been made, and how many times in history have we been told AI is just one software problem away? We just need more processing power, and then it will happen... Here is your processing power, now you have it, so where is the AI? Well, we have a new software technique and this will solve it once and for all... Thanks for the object-oriented programming technique, but where is the AI? Well, we passed the Turing test, does that count?... No, you merely used cold-reading techniques in your responses. Well, we have this new software idea where we can make a computer learn by adjusting weights against a loss function, and we are even developing specialized chips for this concept, and then it will happen... OK, so we are waiting, and the results are impressive and scary at the same time. We are creating learning-response machines that are as ruthless as everything in nature except the creatures with a conscience.
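As a quick aside for anyone who has not seen it spelled out: "learning by weights and loss functions" boils down to nudging numbers until an error measure shrinks. A minimal, purely illustrative Python sketch, with made-up toy data (not anyone's real system, just the bare idea):

    # Toy illustration of "learning" as weight adjustment against a loss.
    # Fit a single weight w so that w * x approximates y, by stepping w
    # against the gradient of a squared-error loss. The data is invented.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # (x, y) pairs, roughly y = 2x
    w = 0.0                                       # the weight being "learned"
    lr = 0.05                                     # learning rate

    for epoch in range(200):
        for x, y in data:
            error = w * x - y                     # gradient of 0.5*(w*x - y)^2 w.r.t. w*x
            w -= lr * error * x                   # nudge the weight downhill

    print("learned weight:", round(w, 3))         # lands near 2.0

That is the whole trick: no understanding anywhere, just a number pushed around until the error gets small.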
Where is the intelligence? Unless you water down the definition of AI, it does not exist.
How do you create a conscious entity without knowing what consciousness is?
Without writing a book, just know that history is not on the side of those who believe we are on the cusp of AI. It is a long and very interesting history; believe it or not, automata have been argued over classically, possibly as far back as the Greeks, and the subject was definitely covered by Descartes. How appropriate that the person who framed consciousness with the simple statement "I think, therefore I am" would also discuss automata. I surely recommend studying the deep historical roots of these discussions. It was long, long ago that we started thinking about creating intelligence. And yet, this time, by golly, we have it. Hubris? My bets are on the limiting factors of hubris.
So, in yet another diatribe on my side of the argument, I will reinforce that the possibility that I am wrong is certainly on the table. I will also point out that this argument always ends the same way: in short, nowhere! Because that is where we are in our understanding of intelligence, nowhere. Yes, we will see amazing feats from computers in the near future, but is that anything new? Do robots building cars amaze you? They do me; now drop back to the 1980s and that amazement is mind blowing. So get ready for some mind-blowing results from the latest attempt at AI. I personally believe it will alter our course in life-saving ways, but it may be equally amazing at destroying life. Hmmm, the classical double-edged sword? The destruction will come from those willing to create devices that make life-and-death decisions without human input. Sadly, it seems that is actually happening, which is why I changed my attitude toward fearing the latest AI attempt. I do not fear a more intelligent entity; I fear the human stupidity of allowing a learning-response machine to have life-and-death powers. If our goal is to make terminators, I have no trouble believing that can happen. However, the mind behind them will not be a computer intelligence, it will be human.
We do not know the secret sauce of intelligence, and to expect it to just happen is hubris. I applaud all efforts toward creating intelligence; how else will we know what not to try going forward? These efforts also create amazing solutions in the process. Where we do agree is in regard to imitation. The Lego blocks for imitation are certainly falling into place. That is an important point, because conceivably you could make an entity that acts just like a human, that can learn and respond, that can move fluidly like a human, and that can even trick people into believing it is intelligent, but the gap will remain: it will not be intelligent. That was my intention when I called it an important point, and it is the one camp one pushes. If it can perfectly simulate a person, then how is it not intelligent? This is camp one's best argument. After all, the best form of flattery is imitation, right? But camp two just does not believe that TRUE imitation is achievable without consciousness. And here we are back at the beginning, which is where I will end, because once again, we are nowhere. `,~)
Two Cents with half a Pence on a good day!
John Vaughters

On Friday, August 30, 2019, 2:18:23 PM EDT, Pete Soper via TriEmbed <triembed at triembed.org> wrote:
 
  Hi John,
     I'm still digesting and cogitating about this posting of yours. I've been trying to catch up on psychology and neuroscience for the past year or two, mostly for personal reasons, and am struck by the fact that if you take Eagleman's book Incognito and Kahneman's book Thinking Fast and Slow and boil them down it's possible to get a perspective about just how little magic is left in the way people actually behave (vs the way we think we behave). That's not to say artificial abstract reasoning is around the corner or to minimize what a vast gap there is between any existing (i.e. domain-specific) AI and a person. Just saying the nature of the lego blocks and simple collections of blocks seems to be coming into focus.
     And here's another submission for my guess that AI is sneaking up on us in plain sight, albeit with more and more intellectual debt being accrued:
 
      https://www.newscientist.com/article/2214731-robot-pilot-that-can-grab-the-flight-controls-gets-its-plane-licence/
 
 -Pete
 
 On 7/29/19 11:08 AM, John Vaughters via TriEmbed wrote:
  
 
 What we (society) call Machine Learning, I call weighted big data with Artificial Stupidity. Yes, the machine is learning; no, it is not intelligent. Quite the opposite. In fact, it reminds me of the old Sesame Street game, "One of these things is not like the others": smart comparisons based on big-data inputs with massive processing. The current machine learning will be useful as a tool, the way a ratchet wrench is to a car. It will help us with specific tasks, but not all tasks. Ever try to use a ratchet wrench as a hammer? Right, well, it does work, but it can give very bad results. That is experience talking :)
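 To make that "one of these does not belong" quip concrete, here is a tiny, purely hypothetical Python sketch of the kind of comparison-by-distance that sits under a lot of this (the numbers are invented):

    # Flag the item that sits farthest, on average, from the others.
    # This is weighted comparison over data, not understanding.
    items = [2.0, 2.1, 1.9, 2.05, 7.5]        # one value is clearly off

    def avg_distance(i, xs):
        others = [x for j, x in enumerate(xs) if j != i]
        return sum(abs(xs[i] - x) for x in others) / len(others)

    scores = [avg_distance(i, items) for i in range(len(items))]
    odd = max(range(len(items)), key=lambda i: scores[i])
    print("item", odd, "does not belong:", items[odd])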
 I've seen at least two articles from people in the industry stating we are headed in the wrong direction on AI, one about software and one about hardware. The software person used the same term I always use, Artificial Stupidity. He felt that we had to rethink the entire approach, but did not offer one; the point being, he just flat out believed smart weighted comparisons are not the answer. A tool, yes, but it will not lead to intelligence. The hardware guy was somehow connected to Intel and believed heavy processing was not the answer: high electrical power combined with high processing power is not the solution. They were looking at low-power processing with fast, small calculations in massive parallel. Think video card cores. These articles were pie-in-the-sky thoughts, so I have no idea if they went anywhere. All this tells me is what I have been saying for a long time: we have no clue what intelligence is or how to create it. What we keep doing is taking shots in the dark and extracting a little light to take new aim with another shot. Each shot provides great, amazing tools. Object-oriented programming came from one of those shots. I don't know about you, but that was a pretty amazing concept that led to incredible advances in usable software. More tools are coming that will blow our minds, but they still will not be intelligent.
 Machine learning is very complex and very unreliable, because when it fails, it can fail quite spectacularly. The worst part is that the creators have no idea why it failed, because they cannot evaluate the neural network. This is a real Frankenstein: enough knowledge to build it, but not enough to understand or control it. The phase we are in right now is building software to help evaluate what the neural nets are doing, and it is a massive task.
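 For what it's worth, a lot of that evaluation work amounts to poking the black box from the outside. A crude, purely hypothetical Python sketch of the idea (the "model" here is a hand-wired stand-in, not a real trained network):

    # Probe an opaque model by perturbing each input and watching the output.
    # The fixed weights below are arbitrary; a real network's would be learned.
    def model(features):
        w = [0.8, -0.1, 2.5]                  # stand-in for learned weights
        return sum(wi * fi for wi, fi in zip(w, features))

    baseline = [1.0, 1.0, 1.0]
    base_out = model(baseline)

    for i in range(len(baseline)):
        bumped = list(baseline)
        bumped[i] += 0.01                     # small nudge to one input
        sensitivity = (model(bumped) - base_out) / 0.01
        print("input", i, "sensitivity:", round(sensitivity, 2))

 Even that only tells you how the output moves, not why the weights ended up where they did.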
 I have always criticized Elon Musk for being afraid of AI, but I have backed down a bit, because if we allow some of this technology to run our world, the fear is not that it will take over, but that it will fail, and fail big. I have no idea why Elon Musk is afraid of AI, but I do now see a very real issue where people think their software is great and apply it in situations that can cause massive problems. For instance, imagine AI implemented in an electric grid. SCARY! Ummm, weapons decisions? YIKES! Sadly, I have found out both of these are being looked at, hence my fear level has risen, but not due to Skynet domination; due to human stupidity allowing Artificial Stupidity to be misused.
  2 cents worth a half pence on a good day 
  John Vaughters 
      On Monday, July 29, 2019, 10:08:42 AM EDT, Brian via TriEmbed <triembed at triembed.org> wrote:  
  
   On 7/27/19 5:07 PM, Mark Sidell via TriEmbed wrote:
 > Favorite pick-up line: You look like a thing and I love you.
 
 Best.  Pick-up.  Line.  EVAR.
 
 I may have to try this one.
 
 -B 
 
 
 _______________________________________________
Triangle, NC Embedded Computing mailing list

To post message: TriEmbed at triembed.org
List info: http://mail.triembed.org/mailman/listinfo/triembed_triembed.org
TriEmbed web site: http://TriEmbed.org
To unsubscribe, click link and send a blank message: mailto:unsubscribe-TriEmbed at bitser.net?subject=unsubscribe

  