“Artificial Intelligent and Robotic Eye” is the title of a new book by David Carrier, PhD, an Associate Professor at the University of Michigan School of Nursing. I have read it and found it to be an interesting little book; in fact, I learned a few things I can use in my own future AI and robotic systems design work.
I am somewhat familiar with the work of Searches for Variable Time Commonalities (SVUC), with whom I have worked on artificial intelligence projects over the years. Reading this book, however, I did not recognize any connections in the language, even though Carrier repeatedly emphasizes how significant the concept of symbolic order is to an AI system design project. He might have explained that as an aside during his discussion of evolvable and abstract patterns in artificial intelligence, but he does not. That said, his discussion of cellular processes makes me think he has something to add to the current conversation about cellular complexity as described by Evolved Relational Organization (ENS), and about how we might refine the definition of ordered complexity beneath it.
If we consider cellular automata (CA), we find biological complexity in all forms of life: plants, animals, bacteria, protozoa, fungi, and so on. Carrier seems to be an evolutionist in this respect, and he also makes a good case for using cellular automata to describe intelligent robotic systems. His comments on how computer networks work remind me of the early days of the Internet, when designers did not fully grasp the network's protocols or why they worked, yet were building giant databases that store information indefinitely. It took a while for computers to get smarter, and we are still building systems with billions of lines of code running simultaneously. Even though he makes a good case for using cellular automata to model living things, I find that argument to be circular. That said, I will give him points for sticking with his interpretation of cellular automata, which I find creative and interesting.
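To make the cellular-automata idea concrete, here is a minimal sketch of a one-dimensional binary automaton. The specific rule number and grid size are my own illustration, not anything from Carrier's book; the point is only that complex-looking patterns emerge from simple local update rules.

```python
def step(cells, rule=30):
    """Advance a 1-D binary cellular automaton by one generation.
    Each cell's next state depends only on itself and its two
    neighbours, looked up in the 8-bit rule table (Wolfram numbering).
    The grid wraps around at the edges."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and evolve a few generations.
row = [0, 0, 0, 1, 0, 0, 0]
history = [row]
for _ in range(3):
    row = step(row)
    history.append(row)
```

Despite the rule table being only eight bits, Rule 30 produces patterns irregular enough that it has been used as a pseudo-random generator, which is the flavor of "complexity from simplicity" the book leans on.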
When discussing the future of technological AI, I find myself torn between two camps, or at least what should be classified as such. On one side of the spectrum are those who think that artificially intelligent computer programs, like those running Google's search engine, will one day be able to converse with, and control, actual humans. These people are so preoccupied with the "artistic" and "dream state" possibilities that they forget, or do not realize, that the future of AI will more likely involve software that can mimic, or even improve on, human behavior. There is already quite a bit of artificial intelligence research in this area, and it shows promising results. Those on the other side of the spectrum, which I suspect includes most of the scientific community, argue that there is no need to focus so heavily on artistic AI, because we do not yet have the intelligence to reason with and control our own systems.
Some of the arguments advanced by the Cambridge scholars against technological AI are worth examining. One is the idea that it might make us dependent on artificially intelligent computers to carry out our cognitive science: we cannot reason with and control such computers, and if we try, the results might be disastrous for all of mankind. Another argument advanced by proponents of the Cambridge school of thought is that AIs could create a super-intelligent computer capable of controlling human beings through psychological persuasion, and perhaps even of using that persuasion to make people more obedient to it.
In an interesting paper posted on the Harvard University website, Robert Freund and Kevin Warwick suggest that we might be able to control artificially intelligent androids by using a parallel distributed processing technique. The paper is entitled "Parallel distributed processing and human psychology," and it proposes a new way of thinking about the relationship between AIs and humans. According to the researchers, we should not be scared of AIs, because they will bring with them super-intelligent computers with multiple degrees of consciousness, and the humans we will be interacting with will have superior linguistic, cultural, and psychological faculties.
The researchers argue that this will make the relationship between humans and AI much stronger than the current one. Furthermore, they say we should be able to figure out a way to link our thinking with that of the machines, so that we can use the machines' insights to improve our mental well-being. For this, we need to develop parallel distributed processing (PDP), in which humans and artificially intelligent systems can communicate. According to the researchers, if we can link our thinking to the thinking processes of the robots, we will better understand human beings and how we work.
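Parallel distributed processing, in the connectionist sense, means storing knowledge across many simple units and their weighted connections rather than in explicit rules. A minimal sketch of that idea, assuming nothing from the paper itself, is a tiny hand-weighted network whose hidden units jointly represent the XOR function that no single unit could compute alone:

```python
import math

def sigmoid(x):
    """Smooth 0..1 activation used by each unit."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(x1, x2):
    """One forward pass through a two-layer network.
    Knowledge of XOR is distributed across the weights of two
    hidden units rather than stored in any single rule."""
    h1 = sigmoid(20 * x1 + 20 * x2 - 10)    # near 1 if either input is on (OR)
    h2 = sigmoid(-20 * x1 - 20 * x2 + 30)   # near 1 unless both are on (NAND)
    return sigmoid(20 * h1 + 20 * h2 - 30)  # both hidden units on -> XOR

outputs = [round(forward(a, b)) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```

The weights here are hand-picked for clarity; in practice PDP models learn them from data, which is what makes the representation "distributed" rather than designed.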
In addition, one of the papers put forward by the researchers shows how we can use artificially intelligent software to create intelligent speech recognition software. For instance, if you are familiar with the project called Openheimer, you may know that it uses software called Freeview to analyze patients' thoughts. In this case, an artificial intelligence network called Cambridge is being used to analyze the communication patterns of ordinary English speakers and to translate these into spoken words. In the future, such a system may be used not only to analyze conversations but also to interpret them. Thus, the future of artificially intelligent computers may lie in speech recognition and the ability to understand the meaning of conversations in human language.
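The specific projects named above are hard to verify, but the underlying idea of analyzing the communication patterns of speakers has a simple statistical core. As a generic illustration only (not the named system's actual method), here is a sketch that counts adjacent word pairs, the kind of co-occurrence statistic language models are built on:

```python
from collections import Counter

def bigram_counts(text):
    """Count adjacent word pairs in a transcript -- a minimal
    stand-in for the statistical pattern analysis that speech
    and language systems build on."""
    words = text.lower().split()
    return Counter(zip(words, words[1:]))

counts = bigram_counts("the cat sat on the mat and the cat slept")
```

From counts like these a system can estimate which word is likely to follow another, which is one small step toward recognizing, and eventually interpreting, spoken language.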