Analysis of Artificial Intelligence: The Future Mind

It may be argued that the concept of artificial intelligence is not a newly emerging one. Ever since humans became 'self-aware' in a sense (recognizing and interacting with each other as members of the same species) and began to use and manipulate tools some 2.5 million years ago, it was only a matter of time before humanity's ever-expanding consciousness brought those two attributes together in a completely new form.

Essentially, artificial intelligence may be described as a way to duplicate human thought, analysis, and reasoning in a controlled, non-human environment. While A.I. research is certainly not unified, either in its stated goals or in how to reach them, it may generally be said that the purpose is to create an effective, adaptable mimicry of varying aspects of human thought.

The Two Basic Types of A.I.

There are two types of A.I.: 'weak' A.I. and 'strong' A.I. Weak A.I. consists of any level of basic calculation, manipulation of information, or parameter analysis. In short, weak A.I. is a computer doing what it is asked. While there are multiple variants (in terms of processing power and multi-tasking), weak A.I. is information manipulation at its simplest level.

Where weak A.I. is basic calculation, strong A.I. is something closer to what one would recognize as mimicking human thought. However, it is important to note that while those attempting strong A.I. do hope to see something close to 'human-like' results in the future, the main aspects of the human mind under particular focus are adaptability and analysis, which may be argued to be the most difficult parts of the mind to reproduce.

While proximity to human thought is considered the benchmark of how advanced an A.I. is, another metric that may be considered is self-reference. In his seminal work on A.I., Gödel, Escher, Bach: An Eternal Golden Braid, Indiana University professor Douglas R. Hofstadter introduces the concept of what he calls 'strange loops'. While the concept can be difficult to comprehend, it essentially concerns how often and how deeply an intelligence reflects back on and references itself during data analysis.

What Hofstadter's concept introduces is a way to gauge the level of an A.I. beyond the bulky, general classifications of weak and strong. By measuring the constancy and depth of self-reference, one can rank A.I. by 'levels' of achievable consciousness or self-awareness.
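The idea of grading a system by depth of self-reference can be made concrete with a toy sketch. The following is purely illustrative (the `Agent` class and its 'reflection depth' measure are inventions for this example, not anything from Hofstadter): each agent may carry a simplified model of itself, which may in turn carry its own model, and the number of nested self-models serves as a crude 'level'.

```python
# Toy sketch: gauging a system by how deeply it can model itself.
# The Agent class and reflection_depth measure are hypothetical
# illustrations, not an actual A.I. technique from the article.

class Agent:
    def __init__(self, depth):
        # depth = how many nested self-models this agent maintains
        self.self_model = Agent(depth - 1) if depth > 0 else None

    def reflection_depth(self):
        # Count how many times the agent can 'look inward' at a self-model.
        if self.self_model is None:
            return 0
        return 1 + self.self_model.reflection_depth()

print(Agent(0).reflection_depth())  # a 'weak' system: no self-model at all
print(Agent(3).reflection_depth())  # a system with deeper self-reference
```

On this crude scale, a weak A.I. sits at level zero (it never refers back to itself), while anything approaching a 'strange loop' would occupy the higher levels.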

Reasons for Self-Reference in A.I.

Given that A.I. is ultimately geared towards creating an advanced 'thought machine', one may ask: why is self-reference important to A.I. construction? The reason is that if one considers humans the most intelligent creatures on earth (and potentially in the universe), then in trying to create as fast and as adaptable an A.I. as possible, one would want to mimic the core processes of the human mind, one of which is constant self-awareness and self-reference.

With this alternative analysis of A.I. development, the question arises of how exactly self-reference plays into data analysis and adaptability. The fact is that while humans self-reference constantly, it may not be intuitively obvious why or how this helps humanity adapt or analyze. However, when one expands the constraints of self-reference and awareness beyond the obvious parameters, the answers begin to show.

Self-reference and awareness are inextricably tied together. The more self-aware something is, the more it self-references, and the more it self-references, the better it can adapt to new information (analog or digital). By self-referencing, an organism or object is aware that there is a clear line between itself and the greater environment, and any changes in that environment will inevitably affect the ‘self’.

Difficulties of Self-Reference in A.I.

With this understanding of self-reference, it becomes quite obvious how difficult, both in terms of hardware and programming, the development of effective A.I. can be. While labels such as ‘strong’ and ‘weak’ do help give a broader sense of what a certain A.I.’s capacity is, it isn’t until one digs into the logistics of strong or deep A.I. that problems begin to emerge.

Prominent inventor and futurist Ray Kurzweil argued in his book The Singularity Is Near that technological development is increasing at an exponential rate: the time it takes to bring a new technology to fruition is constantly being halved. Whether we can distill millions of years of biological evolution into thousands or hundreds remains to be seen. The advantage of being human is that our species has had millions of years to evolve a strong sense of self-awareness and an extremely complicated intelligence. Without such a substantial amount of time, it becomes vastly more difficult to mimic or duplicate various aspects of the human mind in any substantial way.
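The 'halving' claim has a simple arithmetic consequence worth spelling out. Using purely hypothetical figures (the 32-year starting interval below is an assumption for illustration, not a number from Kurzweil), if each successive technology takes half as long to mature as the previous one, the total time across all generations converges to just twice the first interval:

```python
# Toy arithmetic sketch of the 'halving' claim. The starting figure of
# 32 years is a hypothetical assumption chosen for illustration only.
first = 32.0  # years for the first technology generation (assumed)

# Each generation takes half as long as the one before it.
halved = [first / 2**k for k in range(5)]
print(halved)  # 32, 16, 8, 4, 2 ...

# A geometric series: the total time over many generations converges
# toward 2 * first, i.e. 64 years in this hypothetical example.
total = sum(first / 2**k for k in range(1000))
print(total)
```

This is why exponential schedules compress so dramatically; the same logic, run in reverse, is what makes compressing millions of years of evolution into a human timescale seem so daunting.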
