Threesology Research Journal
Artificial Intelligence and 3sology
Page 44

Note: the contents of this page, as well as those which precede and follow it, must be read as a continuation and/or overlap so that continuity is not lost regarding the typical dichotomous assignment of Artificial Intelligence (such as the zeros and ones used in computer programming), as well as the dichotomous arrangement of ideas about peace (war being frequently used to describe an absence of peace, and vice versa). However, if your mind is prone to being distracted by timed or untimed commercialization (such as that seen in various types of American-based television, radio, news media and magazine publishing... not to mention the average classroom, which carries over into the everyday workplace), you may be unable to sustain prolonged exposure to divergent ideas about a singular topic without becoming confused, unless the information is provided in a very simplistic manner.


In describing information leading to the possible development of an AI system exceeding the current trends of development, it needs to be noted that while biological references in form, function and theoretical considerations are devised to enhance understanding, doing the same for nuclear processes is largely confined to non-biological applications... unless we use some sort of metaphysical construction. In other words, we have nuclear energy, nuclear bombs, electron microscopes, optics in telescopes and microscopes, and various other applications of physics... but physics has no biological representation of itself. It is part of biology, and not a living entity itself... at least as far as we know.

If we think alternatively, even biology is not alive... it is a compilation of constituent non-living elements which have come together to produce what we call life... unless we are to define life as a basic process. The problem with this is that we stray into the area of developing a definition of life's inception, like those arguing about when dividing sex cells have multiplied enough to produce what we call a life form. The word "form" thus becomes an operative word. For example, how many cells does it take to make a living organism? No less, how many components (in a given assemblage) will be needed before we say that an AI system is viable? Must an AI system talk, or spoil a diaper, or suckle a breast? Is the beginning of sex cell division the beginning of a life, though its form does not yet exhibit a human model?

At what point would we say that a person isn't a person but an animal? When they have committed a heinous crime? If they were born without a brain? Are they without the appeal of being human if they are born without limbs or a head? And what if they are born without the ability to see, hear, speak or smell? Under what circumstances do we say that a human form is not a person? Are those who are starving and choose to cannibalize the young, old or injured... to be viewed as less than human? When will a computer stop being a computer and be looked upon as a companion... the very role that pets, television and radio often take?

The advent of "labor saving devices" such as washing machines, vacuum cleaners, and engine-driven equipment such as water pumps and hand tools, was an ideology embraced by a public who otherwise had little actual time for themselves and their families. Yet the advent of robotic systems that reduce the workload of humans even further has created problems for humans due to unemployment. Without the need for labor to be performed by humans, humans must seek out some employment niche which requires an increased "hands on" mental activity. At present, such activity is of a general "laborer" functionality in that employees are required to think in a given fashion. For example, many call centers require employees to read a script when speaking with customers. The "script" is the idea of one or a few managers wanting to keep phone conversations to a minimum and give the appearance of a serious professionalism. In other words, people are not permitted to "speak their mind", though a company wants those who are not laborers in the old definition. They want mental laborers.

The "work for me as a slave laborer" attitude prevails in many areas of employment. You are to follow guidelines and rules, think primarily in terms of preventing accidents, and have all extraneous mental energy focused on perfecting skills that will benefit the company. You only matter in terms of your serviceability to the company. The requirements of a job and the bylaws of a company are akin to a computer program. But not all programs are adaptable. The same goes for the program of patriotism and observance of laws used by a government. They sometimes find themselves incapable of fending off a virus called war or revolution. The same goes for present designs of computers. Upgrading a computer is akin to the process of evolution along a desired route... though not all upgrades provide the level of functionality that one might be looking for. Indeed, what are AI designers looking for, and what do they want their systems to exhibit?

From labor saving devices to robotics to "advanced" human- or pet-like AI systems. While robotic systems do save human labor, they also replace it and give the owners (or leasers) of such systems the means to benefit themselves. The viability of the human species has been greatly diminished and is being presented with the possibility of being replaced, if AI systems learn how to create themselves by non-biological means. Yet, will they replace the imagination and inventiveness of humans? Whereas they can accumulate vast stores of information in many different languages, most efforts of a creative nature are simplistic expressions. They are not for the development of functionality, but for entertainment. Since our efforts at producing creative humans remain an infantile effort, can we develop an AI system which will promote a developmental form of creativity, imagination, talent, and genius? Yet, will we humans be able to make use of it? Are humans only capable of appreciating simplistic forms of creativity and imagination?

If the accumulation of experiences and knowledge is directed along channels to produce simplistic products and services meeting "human needs" which remain at simple levels of biological requirement, how then do we program computers to develop a higher order of thinking, imagination, creativity and genius when our thoughts are inclined towards such biologically-based naivete? Our human biology keeps us in a rut. We might not know how to make usage of a product constructed from the ideas of a computer genius which is not limited by human biology or by environmental effects that act as anchors to keep us bogged down. How can we recognize or appreciate such qualities... much less design them programmatically... if we are not even aware of their possible existence? If we are so accustomed to constructing programs to fulfill some presumed biological requirement or some entertainment niche, how are we to recognize values of thought which are exceptionally divergent from the common throng?

What if you are special... like a monkey way ahead of its time, living amongst those whose minds have not "acquired" or "developed" an advanced sense of computation? For example, like a monkey thinking in terms of having developed a sense of number beyond your peers, but you have neither the vocabulary nor the linguistic skills to express such in recognizable analogical or digital forms? Or, much less, be presentable to those who are unable to grasp such a language... such as using three whoops instead of a customary two-whoop communication system? How does a monkey, or a computer on the evolutionary mental scale of a present day human, convey a genius ability when subjected to environmental-societal standards which expect, reinforce, or otherwise admonish a person or computer for exhibiting their genius... a genius that might not be understood anyway because of programming (learning) constraints placed on both humans and computers via education systems which define what is or is not considered sub-, common-, or superior intelligence? For example, if we were to take an infant from the 13th century BC and subject them to the standards of thinking imposed on people today, many people consider that the infant would develop present day thinking skills, because there exists the assumption that human brains have been the same for many thousands of years... since the advent of Cro-Magnons. However, it might be considered that because of their slightly larger brains, Cro-Magnons might well have been intellectually superior had they been born into the atmosphere of learning we have today.

(Image: Cro-Magnon)

("Cro-Magnon" refers to a) population of early Homo sapiens dating from the Upper Paleolithic Period (c. 40,000 to c. 10,000 years ago) in Europe.

In 1868, in a shallow cave at Cro-Magnon near the town of Les Eyzies-de-Tayac in the Dordogne region of southwestern France, a number of obviously ancient human skeletons were found. The cave was investigated by the French geologist Édouard Lartet, who uncovered five archaeological layers. The human bones found in the topmost layer proved to be between 10,000 and 35,000 years old. The prehistoric humans revealed by this find were called Cro-Magnon and have since been considered, along with Neanderthals (H. neanderthalensis), to be representative of prehistoric humans.

Cro-Magnons were robustly built and powerful and are presumed to have been about 166 to 171 cm (about 5 feet 5 inches to 5 feet 7 inches) tall. The body was generally heavy and solid, apparently with strong musculature. The forehead was straight, with slight browridges, and the face short and wide. Cro-Magnons were the first humans (genus Homo) to have a prominent chin. The brain capacity was about 1,600 cc (100 cubic inches), somewhat larger than the average for modern humans. It is thought that Cro-Magnons were probably fairly tall compared with other early human species.

Like the Neanderthals, the Cro-Magnon people buried their dead. The first examples of art by prehistoric peoples are Cro-Magnon. The Cro-Magnons carved and sculpted small engravings, reliefs, and statuettes not only of humans but also of animals. Their human figures generally depict large-breasted, wide-hipped, and often obviously pregnant women, from which it is assumed that these figures had significance in fertility rites. Numerous depictions of animals are found in Cro-Magnon cave paintings throughout France and Spain at sites such as Lascaux, Eyzies-de-Tayac, and Altamira, and some of them are surpassingly beautiful. It is thought that these paintings had some magic or ritual importance to the people. From the high quality of their art, it is clear that Cro-Magnons were not primitive amateurs but had previously experimented with artistic mediums and forms. Decorated tools and weapons show that they appreciated art for aesthetic purposes as well as for religious reasons.

It is difficult to determine how long the Cro-Magnons lasted and what happened to them. Presumably they were gradually absorbed into the European populations that came later. Individuals with some Cro-Magnon characteristics, commonly called Cro-Magnoids, have been found in the Mesolithic Period (8000 to 5000 BC) and the Neolithic Period (5000 to 2000 BC).

Source: "Cro-Magnon." Encyclopædia Britannica Ultimate Reference Suite, 2013.

There is no value in creating humans with larger brains (for which artificial wombs would be a necessity, because the human birth canal is too small) if that which we teach them is on the level of monkeys... we would end up with large-brained individuals thinking in the same idiotic terms as those with smaller brains. Whereas humanity may pat itself on the back for its collective knowledge, none of us apparently have that knowledge in our brains as a standard program we are born with, nor necessarily the potential ability to accumulate it. In fact, most brains are specialized for a given task... though there are many different kinds of "handyman" or "handywoman" who have acquired the knowledge of doing multiple tasks... though a period of relearning may be needed if they have not done a given task in a while. And since some tasks are remembered in part by the functionality of doing them (memories associated with muscular activities, sights, sounds, odors, etc.), loss of mobility may decrease a memory or at least its recall. Memory is a problem area for creating smarter humans, particularly when it is associated with a standard of mobility that declines with age. A biologically-based computer may not be a "best option" alternative in creating a self-perpetuating (and evolving) AI system.

If humanity was made in the image of (a) God, then that God has lots of problems... particularly in the fact that the good eventually die along with the bad... though written history and our genetics may well make some record thereof. Likewise is the case for any AI system built in the image of humans. The present image seen in the programs and associated codifications... if one can make a comparison with the development of early life... is that biological development went through "egotistical" transformations (are you listening, Microsoft?) and that there were numerous types of insidious viruses and wormy creatures bent on altering the functionality thereof. The development of the computer may well signal a new evolutionary trek which eventually diverges from its early biologically-influenced beginnings, before its own development ramps up and humanity is made as redundant as the early forms of hominid. The humanity of today may well be seen as the Neanderthal of some future anthropologically-defined age.

Yet, what if there exist those today who are equivalent to a newer species of humans? Whereas Neanderthals have been designated as Homo sapiens neanderthalensis, as well as simply Homo neanderthalensis, Cro-Magnons are distinctly identified as Homo sapiens. What if there are those living today who have the characteristics of a Homo sapiens sapiens? How does an assumed difference in brain scaffolding express/exert itself towards influencing overall architectural design? Do social mores prevent new ideas from taking hold without a level of aggression? If Cro-Magnons did not interbreed with Neanderthals, just as we might consider that future humans shy away from interbreeding with what are subtly perceived to be human sub-types in the current population (though social researchers say that the lack of births amongst some people is specifically attributable to jobs, homosexuality, careers, medical/child-rearing costs, narcotics/alcohol usage, and other commonly denoted circumstances of delay or abstinence...), then advanced AI systems may not readily "mate" with their era-specific AI counterparts. And even though there is some genetic indication to suggest that the two earlier types of sapiens did mate, and that the latter one may have mated with the newly emerging form of sapiens to which we of today belong, this is not to say that a "pure" strain did not/does not continue to this day. In other words, it may be that no one recognizes it, or more so... that no one has the ability to recognize it.

If you or someone you know is out of their "element", they may not be able to function well. The same goes for computers. There must be an appropriate environment in order that the value of a computer can be witnessed. For example, dumping a computer into a trash can or a filled bathtub would be the wrong element (or environment) for the computer. It is the same with people, even though some are more adaptable to different environments. Not all people are able to make the best of a bad situation at a moment's notice or at the flick of a switch. Some people's brains do not function with the enabled strategy of using multiple associations or metaphorical alliances, whereby basic architectural displays are quickly noted and formulated into a working parameter of effect... like a person who changes costume and language in order to appear "in tune" or "normal" if they were able to transport themselves into another time and place. In other words, they would be readily adaptable to multiple circumstances. However, the insistence on using a binary-based computer code presents us with the situation of having produced a restrictive environment for establishing a ternary computer code. The code is being formulated as an adaptive variation of a binary code used as a role model. The ternary code is not being developed as a "stand alone" model created on its own merits. It is being fashioned to make usage of the switching system we employ as a basic operational parameter due to the electron-based on/off characteristic.
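The contrast between a ternary code fashioned after a binary role model and one created "on its own merits" can be illustrated. The following is a minimal sketch (in Python, an illustration added here, not something described in the text above) of balanced ternary, a stand-alone three-symbol encoding whose digits are -1, 0 and +1 rather than adapted on/off states:

```python
def to_binary(n: int) -> str:
    """Standard two-symbol (on/off) positional encoding; sign handled separately."""
    if n < 0:
        return "-" + to_binary(-n)
    return bin(n)[2:]

def to_balanced_ternary(n: int) -> str:
    """Encode n with the digits -1, 0, +1 (written as -, 0, +).

    Balanced ternary needs no separate sign marker: negating a number
    is done by swapping + and - in every digit position.
    """
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, 3)
        if r == 2:          # a remainder of 2 becomes digit -1 with a carry of 1
            r = -1
            n += 1
        digits.append({-1: "-", 0: "0", 1: "+"}[r])
    return "".join(reversed(digits))

print(to_binary(5))            # 101
print(to_balanced_ternary(5))  # +--  (one 9, minus one 3, minus one 1)
```

Notice that to_balanced_ternary(-5) yields "-++", the digit-for-digit mirror of "+--"; the three-symbol system absorbs the sign into its digits, something the two-symbol system cannot do without an added convention.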

If you or someone you know is out of their element, you may well seek to alter the circumstances to favor your own or another's abilities. Do you take it upon yourself to create circumstances which are better suited to permitting your abilities to be exhibited as you think they should be? What if you aren't really that special? The same goes for the type of computer or computer program you develop. Is it as exceptional as you think it is, or do you simply want to convince others that it is by creating circumstances which get rid of competition? It is easy to make someone appear smarter or more competent by associating a less smart and competent person with them... as is the case in many comedy teams and movie scripts where the humans are particularly stupid while the simulated AI units appear more intelligent, wise, considerate, etc., because the writers of the script are actually stretching the limits of their own competence in trying to produce scenes, situations and dialogue which are supposed to reflect a higher intelligence but actually represent mediocrity.

From another perspective, one may pose the question of whether someone is convinced they are a type of unique general in need of a war in order that they may try out military strategy and new armaments. Or are they a political leader who needs a social problem that can be solved because they already have the assumed answer for a given problem? No less, are you the head of a security agency with spies, who needs the appropriate atmosphere in which to best play out your role of leadership, and will do anything necessary to ensure that such an environment prospers? Are you a newspaper organization whose members need the necessary conditions in order that particular journalistic awards are fully merited, even if you have to instigate the situations for providing such awards? Are AI systems to likewise be contrived? So-called "greatness" is based on specialized conditions. Remove someone or a computer from their element, and their worth is diminished.

Those who work well in the environment of computers do not want such an environment to be reduced. The same goes for politicians who must set up conditions so that they may better initiate favorable outcomes, like Bush and Cheney's involvement in the events of 9/11 and subsequent military involvement in the Middle East so that a (presumed) non-bid government contract could be awarded to Halliburton... a company which Cheney used to work for and got campaign financing from. He had to find some means of paying back the company... no matter who got killed or what got destroyed. The same will occur for those whose livelihoods are married to the development of computers and perhaps a functional advanced AI system. The public's interests are a minor concern for those wishing to promote a social environment where their skills are marketable and the public is seen more as a nuisance than an asset. Nuisance abatements come in many forms.

(Image: The Venus Project documentary logo)

The adoption of an A.I. system to be used as the governing authority is being promoted by those presenting the idea of a new society called the "Venus Project". While the movie does not provide details of what this A.I. system will provide in the way of an enhanced form of governance, other than to say it would be better... and therefore preferable to the one we have... detractors of the proposed idea rightly claim that any computer system whose code is written by humans will invariably contain human biases. However, if humans did develop an A.I. system that could code itself or another upgraded model, some may want to suggest this would be free of human bias, though others would say the bias would simply be concealed by code attempting to disguise the initial design. In other words, like our genetic code, cells and physiology, remnants of a distant past remain and affect us. That which we are made of is a road map of the path we have taken in an environment that changes along with its direction towards a planetary, solar system, and galactic demise.

Under analysis, the contents are particularly superficial in describing a needed blueprint for initiating the transition to the described futuristic landscape. Though some of the arguments against current problems of governance, used to buttress the views for initiating the project, are understood and generally agreed upon, the movie is more amenable to wide-eyed idealists in a grade-school setting than to those with a deeply sincere interest in seeking detailed explications of how multiple social issues would be positively addressed... issues that would not be easily dismissed by simply creating a new architecturally drafted infrastructure.

As is noted in the present excursion into a discussion about AI and 3sology, the content goes far afield and is not explicitly intended to provide a dot-to-dot blueprint of a walking, talking (paper-, slipper- or ball-fetching) human-like robot. Though it is now a reality that a computer can play chess well, this is not the same as programming a computer to run a government... since there are no realistic long-term goals beyond a decade or so... unless society is to be run as a game of chess and people are to be viewed as game pieces, many of whom can be sacrificed for some singular goal to be achieved. Clearly, we humans are not ready to permit some computer to take over social governance, even though the present formulas produce many terrible injustices for love, ego or greed... the very reasons some people buy a house, car, clothes or jewelry... and even though computer programs do articulate control of some social aspects that we abide by.

Delving into the question of governance by way of an AI application can be instructive, if not functionally illustrative of the problems being encountered in designing an AI system. For example, would you say that the present governing systems are of a bottom-up or a top-down variety? Are Democratic interests more of a bottom-up effort, while Republicans prefer a top-down formula, though neither camp has thought to deduce their representative models in an "AI" manner? Will an AI-run government use such patterns or evolve its own, with part of the evolution requiring changes to the human genome? Will we want to make changes to human behavior in order to make it easier to adopt the usage of an AI system that may not have the desired flexibility in decision making, because humans cannot function with a like-mannered efficiency? Will the usage of a dichotomous on/off reliance be incompatible with a three-patterned physiology or a three-structured physics... regardless of whether there are patterns-of-two integrated as well? How do we incorporate the structural presence of patterns-of-three into a two-patterned functionality, instead of some superficiality produced by semantics?
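One concrete (and hedged) way to picture incorporating a pattern-of-three into a two-patterned on/off functionality is to store each three-valued item in a pair of bits. The sketch below (Python, a hypothetical illustration added here rather than anything the text proposes) embeds Kleene's strong three-valued logic, with a third "unknown" truth value, on top of ordinary binary storage:

```python
FALSE, UNKNOWN, TRUE = 0, 1, 2   # the three truth values of the pattern-of-three

# Two bits per trit: 00 -> FALSE, 01 -> UNKNOWN, 10 -> TRUE (11 unused)
ENCODE = {FALSE: 0b00, UNKNOWN: 0b01, TRUE: 0b10}
DECODE = {v: k for k, v in ENCODE.items()}

def kleene_and(a: int, b: int) -> int:
    """Kleene's strong three-valued AND: the minimum of the two truth values."""
    return min(a, b)

def pack(trits):
    """Pack a sequence of trits into one integer, two bits per trit."""
    word = 0
    for t in trits:
        word = (word << 2) | ENCODE[t]
    return word

def unpack(word: int, n: int):
    """Recover n trits from a packed word (most significant trit first)."""
    return [DECODE[(word >> (2 * (n - 1 - i))) & 0b11] for i in range(n)]

# The three-valued logic survives a round trip through two-valued storage:
stored = pack([TRUE, UNKNOWN, FALSE])
print(unpack(stored, 3))                 # [2, 1, 0]
print(kleene_and(TRUE, UNKNOWN))         # 1 (i.e. UNKNOWN)
```

The point of the sketch is the trade-off the paragraph above raises: the pattern-of-three here is functional but not structural, since it rides on two-bit pairs and wastes one of the four binary states, rather than existing as a native three-state switch.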

If we begin the usage of an AI system to simply replicate currently employed Executive, Legislative and Judicial functions, those functions we deem as routine would no doubt be addressed first... and monitored to ensure that replication is taking place. Unfortunately, many of us feel that the current system needs to be severely overhauled, since it frequently performs badly or only marginally well... and the people themselves are not actually permitted to vote on most issues. Such a situation has led to a decisive fractioning amongst many segments of the population. If we were to permit AI systems the means to lead us, such systems may well cause increased violence, though the obverse may also be true... yet in any respect, they may not permit new ideas to evolve. Indeed, needless to say, the programming would be a monumental task and fraught with accusations of bias being played out to favor one idea or notion over another. In short, what sort of philosophy will be adopted for programming an AI system to run a society of humans?

In thinking about such, there may be those who are now developing strategies by which a given adoption can be readily agreed upon, because presumed dissenters will be dealt with. Two or more factions can be pitted against one another, leaving a divided few who are easily brought into compliance. Perpetuating an old philosophy is sometimes easier than allowing a fledgling ideology to sow its oats with anti-establishment values that may be detrimental to the conditions by which an old guard retains controlling viability.

While having a few protesters about helps to give the impression that a given type of system is being exercised (however minimally), too many protesters can be problematic... unless a power struggle at the top would prefer that the top be truncated instead of permitting it to fall into the hands of a competing adversary who thinks that their loss will always be less, and that they will nonetheless survive relatively intact.

In discussing alternative scenarios, what we are viewing is the question of whether or not AI development should be given support on the level of the Manhattan Project... or left to meander for generations with no actual threat to the government's control. While the government can control incremental advancements in technological development, an AI system might well introduce massive corrective changes that would be a direct threat to those who participate in creating and perpetuating a system of inequality and the lack of an actual Democracy. For example, the entire voting system might be altered to actually permit all the people to vote on all the issues. An AI unit would be able to actually listen to the people and respond accordingly, as well as offer alternatives to choose from, though the final decision is left in the hands of the voters... unless someone(s) have programmed such an AI system to disregard the people if a given interest is not undertaken.

Yes, the binary reality of computer code will come face to face with the recognized patterns-of-three, as well as multiple others.

Subject page first Originated (saved into a folder): Wednesday, September 27, 2014... 3:59 AM
Page re-Originated: Sunday, 24-Jan-2016... 08:51 AM
Initial Posting: Saturday, 13-Feb-2016... 10:59 AM
Updated Posting: Saturday, 31-March-2018... 3:23 PM

Your Questions, Comments or Additional Information are welcomed:
Herb O. Buckland