A friend of mine sent me a link to a book called "The Politics of Jesus," wanting to know what I thought (I think - he may have just been poking at me to see my reaction - several of my friends seem to enjoy this "sport"). I found the information available on Amazon.com to be interesting, and it did provoke a fairly visceral reaction. So, I gave him my impressions:
From reading the excerpt, table of contents, a little research on his other writings, and my cumulative life experience, I can deduce the following about John Howard Yoder:
First, he has never had an original idea in his life. He does not think for himself; he recycles the thoughts and ideas of others and presents them as his own. He is of average intelligence, and hides behind a smokescreen vocabulary consisting of 64-dollar words strung together confusingly enough to fool anyone else of average or below-average intelligence, or anyone who participates in the most current version of that age-old “Emperor’s New Clothes” game, in which shallow people attain respect from other shallow people for fun and profit by pretending that they are sophisticated, and validating the drivel of anyone who agrees with them. In other words, he is a typical “scholar.” Note that his work is at least 30% footnotes, and he is most likely to restate an opinion of some other author or “scholar” rather than to attempt to support his (?) ideas with evidence of any real kind. In the entire excerpt, for example, there is not one single quote attributed to Jesus himself, or from any biblical writings. I’m left with the impression that he thinks so little of his own thoughts that he must somehow justify them by pointing to the writings of others, who, because they are well-known or “published,” are somehow authoritative. But by that token, we have a circular reference that would cause a stack overflow in any application written for computers.
He knows nothing about Jesus, nor does he care. He is too busy playing his social game, and invokes the name of Jesus as a part of his strategy for success (as he defines success, which I'm not sure I understand at all). He is a sophist, a man who could easily argue contradictory opinions convincingly enough to fool the weak-minded, and others like himself, and who would do so if it seemed "profitable" according to his perverted idea of "profit." He is a person of no conviction, with no actual personal philosophy, other than his nihilistic view of life as a pointless exercise in which the only possible benefit is that to be had immediately and devoured. He is a walking stomach, ever-hungry, and never satisfied, gorging himself upon humanity with no thought towards any possible consequence. And yet, according to Newton, everything we do has consequences of some sort or another. The very act of batting an eye sends ripples of energy into the ocean of existence, energy that can neither be created nor destroyed, launched (or perhaps a better term, "directed") to who knows where.
In other words, he is a charlatan, a con-artist, a civilized witch doctor, practicing his own version of that skill commonly attributed to so-called mediums and psychics. He has probably been at it so long that he believes his own propaganda, as do so many of his ilk. In truth, the lineage of such has most likely been responsible for most if not all of the ills in this world, and no doubt the crucifixion of his subject.
To provide evidence of his gross ignorance concerning his topic (and just about everything else), let me quote just one small passage:
“Jesus and his early followers lived in a world over which they had no control. It was therefore quite fitting that they could not conceive of the exercise of social responsibility in any form other than that of simply being a faithful witnessing minority.”
Now, being a cunning linguist, I hope you'll indulge me if I carve this up and analyze it piecemeal. The first statement implies that, unlike these poor unsophisticated yokels from the first century, "modern man" has control over his world. It is a presupposition for the argument to follow, delivered with the authority of an axiomatic statement which is self-evident to any reasonable person. The sheer arrogance of the idea that anyone has control over his or her own life, much less the entire world, over which we humans are scattered like a series of microscopic patches of bacteria on the surface of the skin on a basketball, is laughable to the point of utter hilarity. It is a postulate that flies in the face of all evidence, wishful thinking at the very least, dangerously presumptive. Yet it is delivered with all the weight of the Law of Gravity, and without apparent levity.
This first proclamation of the superiority of "modern man" (hmm, hasn't every generation thought of itself as "modern?"), particularly when compared with the poor unfortunate and ignorant forebears, is followed by the conclusion that "therefore… they could not conceive of the exercise of social responsibility in any form other than that of simply being a faithful witnessing minority." This conclusion first presupposes that these ignorant savages had no concept of "social responsibility," by which the author apparently means "participation in the political process of government." I find it gallingly ironic to note that those who seem to be the most politically active have perpetually been the greatest hypocrites, exercising little if any true "social responsibility" in their everyday personal affairs, continuously attempting to rearrange the structure of bureaucracies that accomplish little if anything of any real worth. "The end justifies the means" is their rallying cry, but never do they notice that there is no end to a continuity, and therefore, the means is all that is ever achieved. And these underprivileged minority members are apparently so ignorant that they are incapable of even conceiving "the exercise of social responsibility" beyond their own tiny realm of impoverished inexperience.
This is capped with the characterization of Jesus and his early followers as “a faithful witnessing minority.” I am reminded of Arthur Conan Doyle’s story “The Redheaded League.” What constitutes a “minority?” Is it the color of one’s skin (or hair), what flavor of which branch of what religion one practices, one’s sexual preference, whether one is right- or left-handed, or perhaps, which end of a soft-boiled egg one prefers to crack? In a political sense, it is any of these, or any other arbitrary way which one may choose to carve up the human race, an easy enough task considering that in fact, like snowflakes, we are each and all unique. In a political sense, it always boils down to “create the divisions where they will be of the greatest political advantage to me.” Divide and conquer. To the spoiled goes the victory.
Jesus was hardly a yokel. He was anything but unsophisticated. In fact, people are still arguing over what he meant by just about everything he said, over 2,000 years after his exit from the stage of this human tragedy of ours. He was the single most influential human being in the history of man’s brief tenure on this ancient planet. Oddly enough, what he said was painfully simple, so painfully simple that most people have chosen not to hear it, in order to spare themselves the pain. I believe it was Frank Webb who first coined the saying “if the Truth hurts, wear it.” This was later embroidered upon by the enigmatic Uncle Chutney, with the acronymous aphorism “What You Seek Is What You Get.” Of course, neither of these ideas was new; Jesus himself had expressed these very ideas in his own words, thousands of years ago. We humans have the remarkable capacity to deceive ourselves, with our own permission of course. Unfortunately, once deceived, how is one to undo the deception, as one is no longer aware of its deceptive nature? But there it is, and here we are.
Jesus lived at a point in history remarkably like our own. Rome was the greatest civilization on earth, with a representative government having checks and balances, though its provision of a Caesar for life was definitely a chink in the political architecture. That, combined with the imperial fashion of the day, was an occident waiting to happen. The fall of Rome was followed by the Dark Ages, when science and witchcraft were mistakenly linked, leading to the destruction of anything having a scientific patina; this has obscured much of our modern knowledge of Rome's incredible sophistication and technology. Judea, as it was known at the time, was under the forced servitude of Rome, a nation under subjection to an oppressive imperial power. In fact, there were political activists of all sorts in Judea, including the Zealots, a quasi-terrorist revolutionary organization devoted to throwing off the chains of Rome. One of Jesus' disciples came from this organization.
Yet, in all of the collected quotations of Jesus, not one could be called “overtly” political. In fact, he acted for all the world as if politics were irrelevant to his mission. He did not speak out against the heavy-handed governance of Rome. He did speak out about the Jewish Sadducees and Pharisees, but not politically. It seems that he was more concerned with the individual, as if all that really mattered in a real sense is the individual. And yes, he did speak in rather apocalyptic terms about the end of “the world.” But what exactly did he mean? After all, “the world” that any individual experiences only lasts for a single lifetime. Taking relativity into account, when one is separated from the world, the world is also separated from the one. The world ends every day for somebody.
Is it possible that someone without the benefit of a college education, without television, radio, newspapers, without anything except free time and the world to ponder, could possibly think up anything worthwhile? Take a common shepherd, for example, doing absolutely nothing for 16 hours a day, 7 days a week, 365 days a year, for 50 years, with nothing to distract him, only the earth, sky, and everything in between to observe and think about. What could such a person think of without the influence of that cacophony of human thought we are surrounded by in our modern, sophisticated society? Why, he could never know the benefits of the Ginsu Knife, the latest fashions from Paris, what Rosie wrote in her blog yesterday. He would never realize how empty his life was without an SUV, be able to see a football game, go to the movies, surf the net, and most importantly, find out the prevailing opinions of thousands of his peers about anything and everything under the sun. He wouldn’t have any news programs to watch or listen to, to tell him what he should be concerned about, what to think about, and what he ought to think about those things.
When Isaac Newton was similarly disadvantaged, due to a quarantine that lasted the better part of a year, he invented physics. Pythagoras, Euclid, and Aristotle lived thousands of years earlier. Without the benefit of even a slide rule, they managed to come up with mathematical and logical principles that boggle the modern mind, ideas which most people are still confounded by, and upon which all of modern mathematics is based.
Sure, Jesus was all about politics. He just didn’t have the sophistication to understand politics, or perhaps to elucidate his ideas about politics. He did the best he could, for a poor disadvantaged minority. We shouldn’t be too hard on him from our superior modern perspective.
Friday, June 29, 2007
Monday, June 25, 2007
You may have noticed that I have a new portrait. The original was done 10 years ago, at the end of a long struggle which resulted in my unique hair style. It was done using a rather simple tool set, consisting mostly of some fairly primitive bitmap editors. It was a 128×128-pixel bitmap which was edited pixel by pixel, by hand, and I must admit having a long-standing fondness for it. Not only did it symbolize my personality, but some of my favorite concepts as well. It took about 30 hours to create, and all things considered, was a smashing success.
However, time marches on. While I have not actually changed in appearance, the new portrait was done in an effort to keep up with the current state of the art in terms of computer-aided painting. The original, an Adobe Photoshop .psd file, is 1024×1024 pixels in size, and was painted using all of the tools available in Photoshop, along with a healthy dose of perseverance and attention to detail. As for the technique, don't ask. I couldn't possibly recall most of it. Again, it took about 30 hours of work.
I believe this new version captures the essential Chutney even more successfully than the original. For one thing, it is state of the art, which I would like to think I remain at all times. For another, it captures all aspects of my personality in many ways. It is full of color. It has a mildly psychedelic aspect to it, an effect of my early life in the 60's and 70's I'm afraid. It is intimate, but at the same time, somewhat distant. There is a great deal of attention to detail, most of which is lost with the shrinkage of the image displayed here, as well as the 256-color GIF format, but I think it bleeds through somehow. There is a hint of sadness, a hint of kindness, hopefully a hint of wisdom, a smattering of peacefulness, and a healthy dose of stress present in my appearance. It is hard to see what the expression on my face is, which conveys my love of ambiguity. Finally, it is nearly photo-realistic, but takes a detour into hyper-realism, which I think is a bit more appropriate to my personality. I am, after all, not real in the way that most people imagine reality. I have no corporeal existence, although I most certainly do exist (I am writing this, after all!).
At any rate, I am pleased with the new portrait, and I hope it brings you pleasure as well.
Sunday, June 17, 2007
I'm a law-breaker. I break the law on a daily basis. I do it knowingly, willfully, and without remorse. In fact, I often break the law in full view of the police with impunity. They know I'm breaking the law, and they do nothing about it. The law that I break on a daily basis is one that is commonly broken by almost everyone. I break the speed limit.
I have good reason to break this law, actually. I have given a great deal of thought to my behavior in this matter, as I drive over 20 miles to and from work 5 days a week. Being a person of good conscience, and a habitual analyst, I have endeavored to apply my problem-solving skills to determine the best, most optimal methodology to apply in the performance of this task. My findings may be of interest. In fact, this subject matter, that of "the law," is likely to span several entries. There are a lot of aspects I would like to cover. I believe that there is a great deal of misunderstanding with regards to law and government, and as part of my service to humanity, I would like to offer my observations.
Here is my reasoning regarding the observance of this particular law, the speed limit. This reasoning can be applied broadly to any law, and in fact, leads to a number of general observations and conclusions that may be derived from those observations. The application of these observations and conclusions may produce many beneficial effects on the quality of life.
First, let me begin by saying that the vast majority of my drive to and from work is via interstate highways. Interstate highways are unique among roads for a number of reasons. They are limited-access roads, without traffic lights, very well constructed, and at least one out of every six miles of every interstate highway is completely straight. This is all due to the original reason for the construction of the interstate highway system, which was originally commissioned for the benefit of the U.S. military. That's a very interesting story, but not the subject of this post.
These conditions, however, make interstate highways safer than ordinary roads, and of course facilitate faster travel on them. Still, this has nothing to do with my law-breaking, but only with the conditions of travel that I used in my calculations.
In Hampton Roads (southeast Virginia), the speed limit on most interstate highways is 55 mph. In some places it is 60, and in a few places it is 65. The majority of the road I travel has a 55 mph speed limit. Yet, I have observed that the average speed of traffic on these particular roads is anywhere from 65-70 mph.
My analysis regarding the optimum method of travel is based upon 2 logical priorities:
- Travel safely. This is the prime directive.
- Optimize use of time by making the trip in the shortest possible time.
A number of rules and methods can be derived from these 2 principles, taking into account environmental factors, such as the laws of physics and human behavior/psychology.
First, applying the laws of physics with regards to the first priority, speed is always relative. We do not often think of it in this way. We think that all vehicles travelling 55 mph are travelling at the same speed. Yet, speed is a measure of distance over time, and what we think of as 55 mph is actually a "default" measure that is relative to the surface of the earth. The earth itself, however, is not stationary. It is rotating on its axis, and revolving around the sun, which is also in motion. In fact, the entire universe is in a constant state of motion.
More importantly, almost all traffic on a highway is in motion. 2 cars travelling in the same direction at 55 mph are moving at a rate of 0 mph relative to one another. That is, relative to one another, they are stationary. Since the first priority of travel is safety, avoiding collisions is of paramount importance. Objects that are stationary relative to one another never collide. Therefore, 2 cars travelling at the same speed in the same direction on the same road will never collide. The rate or direction of one of the cars must change in order for that to happen.
On the other hand, a car that is stationary on the same highway is "travelling" at a rate of 55 mph relative to a vehicle that is travelling at 55 mph. If the vehicle travelling at 55 mph relative to the surface of the earth is travelling towards the stationary vehicle, a collision is inevitable, again, unless one or the other of the vehicles changes its rate of speed or direction.
According to the laws of physics, objects in motion will continue to move in the same direction at the same speed unless force is exerted upon them. This is the principle of inertia, and the motion a vehicle carries is termed its "momentum." What causes vehicles to slow down is the force of friction (rolling resistance and air drag) being constantly applied to them. Hence, we must apply force via the engine to keep them moving at the same rate of speed, unless they are moving downhill, in which case gravity exerts force upon them.
At any rate, momentum is a factor to be reckoned with in driving. 2 vehicles on a collision course will require force to be exerted on one or the other in order to avoid a collision. So, the vehicle travelling at 55 mph relative to the surface of the earth will have to apply brakes or change direction to avoid colliding with the stationary vehicle. However, 2 vehicles travelling at exactly the same rate of speed relative to the surface of the earth, and travelling in the same direction, require no force to avoid a collision. In fact, it would require force to create a collision between them.
Using these 2 scenarios as extreme examples, a rule can be created: To avoid a collision, 2 vehicles travelling in the same direction on the same road should travel at the same rate of speed.
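The two extreme scenarios above boil down to a one-line subtraction. Here is a minimal sketch in Python (the function name and the sample speeds are mine, purely for illustration):

```python
# Relative-speed sketch: speeds in mph, both vehicles on the same road,
# positive numbers meaning the same direction of travel.
def relative_speed(speed_a, speed_b):
    """Speed of vehicle B as observed from vehicle A."""
    return speed_b - speed_a

# Two cars at 55 mph are stationary relative to one another:
print(relative_speed(55, 55))  # 0 -- no force needed to avoid collision

# A car doing 55 mph bearing down on a stopped vehicle:
print(relative_speed(55, 0))   # -55 -- a closing speed; a collision course
                               # unless speed or direction changes
```

A relative speed of zero means no collision is possible without some new force; any nonzero value means a collision is possible unless something changes.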
However, there are generally many more than 2 vehicles travelling in the same direction on an interstate highway in the same area at the same time, especially here in Hampton Roads. Of all the vehicles, the one I am driving is the only one I can control. And all of the other vehicles are travelling at varying rates of speed. Because of the speed limit, and similar goals in the minds of the other drivers, the rate of speed will generally cluster around an average, forming a statistical bell curve.
For anyone not familiar with a bell curve, I think of a bell curve as a sort of hat. It has a hump in the middle, denoting the majority of the data clustered around the average, and thins out towards each end, or the "brim" of the "hat." It is derived by taking a large set of statistical data and, rather than averaging it all together, averaging segments of it over a graph, and then smoothing the resulting curve.
The speed bell curve can be used to calculate the optimum rate of speed, because it is impossible to match exactly the speed of all vehicles travelling in the same direction on a highway. In other words, while the probability of a collision between 2 vehicles travelling at the same rate of speed is 0, and the probability of a collision between 2 vehicles travelling at differing rates of speed on the same path (given enough road, and no change in speed or direction) is 100 percent, in any group of vehicles travelling at different rates of speed, the probability is lowest at the center of the bell curve, or the total statistical average rate of speed derived from the entire set.
Thus, a general rule may be created: The safest possible speed to travel on any road is the average speed of all of the traffic. It turns out that this rule must be further refined, but I will cover that topic at another time.
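The rule above amounts to nothing more than taking the mean of the observed traffic. A rough sketch, with entirely hypothetical sample speeds (the numbers are invented for illustration, not measurements):

```python
# Hypothetical observed speeds (mph) of nearby traffic on the highway.
observed_speeds = [62, 65, 66, 68, 64, 67, 70, 65, 63, 66]

def safest_speed(speeds):
    """Per the rule above, the safest speed is the average of the traffic,
    since it minimizes your speed relative to the most vehicles."""
    return sum(speeds) / len(speeds)

print(safest_speed(observed_speeds))  # 65.6
```

Note that in this invented sample the average is about 65 mph, well above a 55 mph limit, which is the whole point of the argument that follows.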
In this case, my point is this: If the speed limit on a road is 55 mph, and the average speed of the traffic is 65 mph, the safest possible speed to travel on that road is 65 mph, not 55 mph. Therefore, in order to travel as safely as possible on the interstate roads in Hampton Roads, I break the law. I never get a speeding ticket either, because the police are aware of my good reasons for doing so. At least once a week I will pass by a traffic police car, breaking the speed limit law, and be completely ignored.
It may therefore be observed that by breaking the law, I am doing the right thing morally and ethically. If I were to obey the law, I would be putting other drivers and myself at greater risk.
So, what good is the law? Well, it turns out that it actually does serve a purpose, but that the purpose of the law is not the purpose that is generally assumed. In the case of the speed limit, as an introductory example, the law empowers the police to take corrective action in the interest of protecting the public. The police have the authority to ticket anyone travelling in excess of the speed limit. This means that, regardless of my moral responsibility to exceed the speed limit, if I do so, a police officer has the authority to pull me over and write me a ticket.
However, having the authority to write a ticket does not dictate that the officer do so at any time a person is observed exceeding the speed limit. It is simply an authority, an empowerment. The officer has the option to exercise his/her judgment to decide when to exert that authority. That is, the officer may ignore the law as well, when it seems right to do so. The vast majority of police officers will not ticket people travelling at the average speed of the traffic, as they are well aware of the safety issues I have discussed. To ticket a driver for speeding creates a disincentive for that person to exceed the speed limit. If the average speed of traffic is 65 mph, encouraging an individual driver to drive 10 mph slower would actually increase the probability of collisions on that road. And it is important to keep in mind that the police officer cannot ticket everyone speeding, but only one person at a time. Like me, the police officer has no control over the rest of the traffic.
In concluding today's discussion of the law, what I'm getting at is this: Law does not control. It empowers. Creating a law does not prevent people from breaking it. It is the empowerment of the enforcers of the law which has any effect at all, and that effect is not the prevention of a behavior; it is a method for influencing behavior statistically. Also, the existence of a law should not dictate our behavior. Regardless of the reasons for its existence, our behavior should be governed by morality, ethics, and logic.
It is wise to respect the power that the law grants to the enforcers of the law, just as it is wise to respect the power of electricity, and to avoid sticking one's finger in an electrical outlet. However, it is foolish to make law the dictator of one's behavior, or to put the adherence to law above the responsibility to behave in a moral and ethical manner. And it is important to understand the difference.
Again, I have much more to say about the topic, but I think that is enough for one post.
Saturday, June 09, 2007
My wife has a cell phone, and like so many people, she seems to love it. I do not. In fact, I do not plan to own one, at least until a user-friendly cell phone is designed, and oddly enough, it doesn't seem to be coming any time soon.
Admittedly, I am not the most social person in the world, at least in terms that most people would call "social." I do participate in the community of mankind, but I prefer to have some form of insulation, such as a computer, or at least a telephone, to hide behind. But I often have difficulty in what most people would call "ordinary conversation," which I take to mean "the relatively undisciplined exchange of more or less random thoughts, ideas, and opinions." I love to learn. I love to think. Anything else is boring to me, or at least seems relatively useless. I realize that this makes me something of a social cripple, but hey, you can't be and do everything. One must make choices, and accept the consequences of those choices. At any rate, that's who I am.
On the other hand, I can certainly accept and even appreciate to a certain extent that desire for connection that most, if not all, of us have. We are networking entities. Each of us has a brain that is a neural network, and which is, by design, constantly looking for new nodes to link with. So, I can well appreciate the desire for such tools as writing, mail, telephone, radio, television, and the Internet. Cell phones are a step in the evolution of communication technology that allows people to connect and communicate independent of location.
And perhaps I would own one, except for one particular and disturbing flaw in the design of every cell phone that I've ever seen or heard of. Cell phones are tiny, and the ear piece is generally less than 2 inches from the microphone. This is a source of great bewilderment to me, and I continue to try and understand the social phenomenon that drives this design. Every time I use a cell phone (usually because my wife hands me hers) I feel like saying "Kirk to Enterprise." They look a lot like the communicators in the original Star Trek series. Unfortunately, however, the volume and sound quality prohibit us from using them by holding them in front of us and talking, as they did in the original Star Trek series.
Generally speaking, the tools we design for ourselves are built around our physical characteristics. A chair, for example, has legs which are usually less than 2 feet long, because of the length of the human leg. Chairs with longer legs generally have some form of foot rest built into them, to accommodate the length of the human leg. Beds are about 6 feet or longer in length, due to the average size of the human body. Automotive vehicles have driver compartments that are shaped and sized according to the average shape and size of the human body, and mechanisms for adjusting the dimensions on an individual basis. Most buildings have ceilings that are at least 6 if not 7 feet tall, again, to accommodate the size of the human body.
But cell phones, apparently all of them, are made as small as possible, almost all without any means of extending the distance between the ear and mouth pieces. This results in the uncomfortable practice of constantly readjusting the position of the cell phone to either hear better or to be heard. And this bewilders me to no end.
Certainly, size and weight are an issue. I am old enough to remember the first "wireless" telephones, which were essentially 2-way radios, and generally used in cars. They were about the size and shape of a walkie-talkie. This was due to the state of technology at the time, but I note that the distance between the ear and mouth pieces was about the average distance between people's ears and mouths. As time went by, we became better at putting more technology into smaller areas, and "wireless" telephones began to shrink.
However, at some point, this "requirement" of smallness seems to have taken a life of its own, without regard for its original purpose, which was to make "wireless" phones easier to carry, and less tiresome to hold for long periods of time. Instead, the concept became "smallness is a virtue, and the greater the smallness, the greater the virtue."
Now, I can certainly understand the desire to make the size of a cell phone small enough to carry in one's pocket, and perhaps even as thin as a credit card eventually. But this does not imply that when in use, it should not be extensible to fit between the ear and mouth comfortably. After all, umbrellas have employed such technology for at least 100 years. Even the communicator in the original Star Trek series opened up and became about 7 inches long (long enough to have been held with the earpiece and mouthpiece congruent to the locations of the human ear and mouth). They didn't hold it that way, but that was because they didn't have to. Apparently, they (the fictional society of the future) had the technology to make their communicators audible and able to hear the human voice at a distance. But we don't have that technology yet. We must hold the ear and mouthpieces within a small distance from our ears and mouth to be able to communicate. But that is not the case with cell phones. Why?
Is it because the world is full of fools who imitate each other imitating each other like monkeys imitating themselves in a mirror? Is it because innovation is only paid lip service by industry, because to truly step outside the "box" of social convention is dangerous, and most people are full of fear? These are some of the possible reasons I can think of. Unfortunately, I can't think of any good ones. After all, how difficult would it be to make a cell phone that extends like an umbrella, or a pair of headphones? Surely, if we have the technology to make cell phones the size of Star Trek communicators, we have the ability to make them telescope.
It puzzles me, because honestly, I can't figure it out. If this were the case with some cell phones, but not with others, I would understand. But it is so pervasive. I don't like puzzles I cannot solve. In the meantime, though, I must admit I prefer having a space and time in which nobody can bother me. It takes me 30 minutes to an hour to drive to and from work. And that is my "me" time; it is my time to think and ponder. But this question is really bothering me. Hopefully, someone will answer it, or at least I will eventually forget about it.
Ah well. So it goes...
Tuesday, May 15, 2007
Recently, the demise of radio talk show host Don Imus for using the phrase "nappy-headed ho's" caught my attention. Of course, there was quite a bit of talk about the "incident," including talk about the appropriateness or inappropriateness of using the phrase on a radio talk show, and following his firing by CBS, a lot of talk about whether or not CBS should have fired him. Lately, I hear that he is suing CBS for their action, which it seems, violated a contract agreement.
Most recently, stories about other "inappropriate" language have been proliferating, and this is not only typical, but saddening to me. The purpose of this message is not to discuss what is "appropriate" or "inappropriate" to say on the public airwaves or elsewhere. The purpose of this message is to discuss the implications of the public reaction to such speech. I am concerned that the real problem here is one which is not being discussed.
What concerns me first and foremost is the increasingly popular notion that we should not only take offense at the usage of certain phrases and words in others, but that we should also meddle in the affairs of others to prevent such speech.
It seems that the United States is becoming a country of busy-bodies, people who attempt to control the behavior of others by means of coercion or force. This is far worse than being a nation of gossips. Gossiping is a common enough sin, and sin it is, but we are all prone to it. Gossip is harmful, as it is hurtful to the object of gossip. It is easy enough to test this idea. If one would be hurt to be the object of gossip, one must assume that others would be hurt as well. Since there is no benefit to the practice of idle gossip, and it is hurtful, it is wrong.
However, it is one thing to gossip, and entirely another (and far more harmful) to attempt to control the behavior of others through coercion or force. Again, the test here of the behavior is to ask whether one would desire to be controlled through coercion or force. I doubt that anyone would find this desirable.
Of course, there are times when coercion and/or force becomes necessary. It is necessary to use coercion or force to prevent murder or robbery, for example. I use these extreme examples as unquestionable examples of the justification of coercion or force. There are many forms of behavior that might be candidates for the just use of coercion or force. These cover a spectrum, with behavior that obviously justifies coercion at one end, behavior that obviously does not at the other, and many behaviors lying somewhere in-between. And that is the rub.
Speech falls into this middle category, and we have debated for centuries where we may draw the line between speech which may be offensive but is harmless enough to ignore, and speech which justifies the use of coercion to inhibit. Speech is powerful, and can be a means of great good or great evil. People have been encouraged, inspired, and even saved from death as a result of speech. People have also been discouraged, harmed, and even brought to death as a result of speech.
We know that, for example, gossip is harmful, as I mentioned before. It is harmful because it is idle, having no redeeming social value, and because it causes the object of the gossip to feel pain. However, we also agree that gossip does not necessarily fall into the category of speech which justifies coercion. If we were to employ coercion to prevent all pain, we would cause more pain than we would prevent as a result of it. A certain amount of pain can be beneficial to a person. It may build character, or strengthen the person's ability to resist pain. Like bacteria, a certain amount of it is actually beneficial. Therefore, as a society which values freedom as beneficial to mankind, we tolerate a certain amount of gossip.
We also know that other forms of speech can be harmful, such as sedition, speech which encourages discontent and rebellion against the order of society. We know that advocating murder or other forms of criminal behavior is harmful, and though we might tolerate a certain amount of this, it might well fall into the category of speech which might justify coercion to attenuate.
However, we also believe in the freedom of speech, as the free exchange of ideas and information is beneficial to everyone. In fact, the Constitution of the United States guarantees freedom of speech. But this guarantee has limits, for the reasons mentioned above.
On the other hand, name-calling and mockery, satire and parody, which often fall into the category of "comedy," are tolerated by our system of government. This sort of speech may cause a certain amount of pain, but it does not fall into the category of speech which justifies the use of coercion. Historically, this sort of speech has been employed for a variety of purposes, such as illustrating a point, entertainment, or pure silliness. In this country, characters like José Jiménez, a fictional character portrayed by comedian Bill Dana, Father Guido Sarducci, portrayed by Don Novello in the early years of Saturday Night Live, and Handi-Man, portrayed by Damon Wayans in the TV series "In Living Color," were just a few examples.
Yes, some of these comedic characters were offensive to some people, but they were all highly popular, and the real question is, was there a redeeming social value to such portrayals? I would say "yes." These sorts of buffoons fall into the category of speech which benefits its object by strengthening character, as a certain amount of bacteria is beneficial to our health. Not only that, but laughter is healthy, and there was no implied cruelty in these portrayals. That is an important point.
The intent of speech is a critical factor in the determination of its harmfulness. Ridicule which is intended to harm is not easily mistaken, and quite often finds its mark. We are far better at determining the intent of speech than we may admit. And indeed, there are some who are so deluded that they can no longer differentiate between good-natured jibes and cruelty. This trend towards delusion seems to be increasing these days, and this is the harm that I see in recent events.
Don Imus, in his employment of the phrase "nappy-headed ho's," was clearly not malicious. In fact, he may well have been satirizing two expressions which came out of the black community in the first place. Is there a place for such satire? I would say, certainly. I find it amusing to observe the plethora of deprecating expressions that have emerged from the black community towards other blacks (and whites, Latinos, etc.). Am I offended? No. And neither is anyone else offended when blacks themselves employ such expressions. The problem in this case was the employment of these expressions by a white man. And the offense taken, not by the objects of the speech, but by political ambulance-chasers, is certainly a problem.
What exactly is the problem? The problem is that a climate of fear is emerging in this country. People are becoming afraid to speak freely, and for good reason. We all saw the way that CBS caved to the political hammer employed by "Reverend" Al Sharpton et al. We saw how Don Imus lost his job. The consequences of this can be seen in the flurry of similar incidents which transpired with regards to other comedic radio talk show personalities. It is not the likes of Don Imus that we are afraid of; it is the likes of "Reverend" Al Sharpton, who employ speech to inflame, to hurt, to destroy careers and incite fear in anyone who might somehow stand in the way of their political ambitions for personal power.
Coercion is not desirable. In the case of Don Imus' "gaffe," presuming that anyone was genuinely offended (which I doubt), a public apology should well have sufficed, particularly because the remark was not made with evil intention. A public apology was made. But that was not enough to satisfy the "Reverend." Why? Obviously, the "Reverend" was not as offended as he pretended to be. Therefore, there was another motive. The "Reverend" saw an opportunity to strengthen his political power base by portraying himself as a defender of "black rights."
Of course, even the phrase "black rights" is rife with racism. If all men are created equal, why should some men have rights specifically assigned to them and not to others? I am reminded of the book Animal Farm, by George Orwell. This book, an allegorical account of Soviet totalitarianism in which animals on a farm rise up in rebellion against the humans, describes the creation of "Seven Commandments," the last of which is the statement "All animals are equal." Eventually, as the leaders of the revolution acquire more and more power, the commandments are reduced to a single revised one: "All animals are equal, but some animals are more equal than others." In the end, the pigs who lead the revolution become indistinguishable from the humans whom they conquered.
At any rate, the climate of fear which is growing in this country is causing people of weaker constitutions to succumb to ideas that suppress free speech, and thus portend an evil end. If we allow such suppression to continue, we will all become enslaved to the likes of Al Sharpton and other megalomaniacs, dictators and self-serving political hacks. The result of such oppressive regimes is instability, unrest, and violence.
The only solution is at the individual level. To employ coercion to solve the problem would simply exacerbate the problem. To solve the problem, we must as individuals decide that we will not be intimidated by those who would suppress free speech, and we must employ speech to encourage others to do the same.
Of course, you are free to decide for yourself!
Sunday, May 06, 2007
As a cunning linguist, one of my favorite web sites is the Online Etymology Dictionary. This web site is a perfect example of the incredible potential of the Internet for benefiting mankind as a whole, and any human being as an individual. Humans are networking organisms. Our brains are networks. And our brains store and seek information using networking. But that is a topic for another discussion, I reckon.
What I figured I would discuss today is abstraction. As a programmer, I am keenly aware of the perceptions of people with regards to computers, what they think they are, and what they think they do. Misperceptions about computers extend even to the family of those who call themselves "developers," or even "programmers," and this is a subject of no small concern to me. While abstraction is an incredibly useful tool, one that our brains employ naturally, it can become a source of confusion, and as much of an impediment to progress as it is an aid.
The problem is exemplified by the increasingly common phenomenon in the programming business of the ignorant developer. Technical schools, tools, and high-level programming languages enable people to employ the tools of programming without understanding what they are, or why they work. While this perhaps fills a business need, providing a less-expensive pool of "professional developers" for certain types of tasks, I think that ultimately it may produce more problems than it solves. An ignorant developer is much more likely to build unstable software. Unstable software is not necessarily immediately apparent. A short-term savings can develop into a long-term albatross, in the business world.
Getting back to the Online Etymology Dictionary, there is a correlation between the parsing of language and the understanding of it. We often think we understand the meaning of words when in fact, we only have a vague and partial sense of what they mean. In fact, we sometimes think we understand the meaning of words because we employ them successfully, when in fact we don't understand them at all. This too is a long-term problem. And with the nearly-instant availability of information on the Internet today, there is no excuse for ignorance.
The word "computer" comes from the Latin "computare," which means literally "to count." There is a reason for this. When most of us think of computers, we envision a box with a monitor, a mouse, a keyboard, and perhaps some other peripherals attached to it, either inside or outside of it. This is a fallacy. In fact, a computer is nothing more than the processor inside the box. The rest of the machine is an interface to the processor, and a set of tools for doing such things as storing and organizing data produced by the processor.
The processor of a computer, or more accurately, the computer itself, does only one thing, and it does it very well. It counts. A computer is roughly the same thing as the ancient Chinese abacus. The ancient Chinese abacus was the first known computer, which was in fact an extension of the earliest computer, which was the human hand. We have a base 10 numbering system because we have 10 fingers on our 2 hands. Before the abacus, people used their fingers (and possibly toes) for counting.
All of mathematics is based on counting, even Calculus. Addition is counting up. Subtraction is counting down. Multiplication is addition (counting up). Division is subtraction (counting down). And all mathematical operations are composed of various combinations of addition, subtraction, multiplication, and division.
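The reduction of arithmetic to counting can be sketched in a few lines of Python (a toy illustration, not an efficient implementation; the function names are my own):

```python
# Every operation below uses nothing but repeated increment/decrement:
# addition is counting up, subtraction counting down, and the rest
# are built from those two.

def add(a, b):
    """Count up from a, b times."""
    for _ in range(b):
        a += 1
    return a

def subtract(a, b):
    """Count down from a, b times."""
    for _ in range(b):
        a -= 1
    return a

def multiply(a, b):
    """Repeated addition: count up by a, b times."""
    total = 0
    for _ in range(b):
        total = add(total, a)
    return total

def divide(a, b):
    """Repeated subtraction: count how many times b fits into a."""
    count = 0
    while a >= b:
        a = subtract(a, b)
        count += 1
    return count  # integer quotient

print(add(2, 3), subtract(10, 4), multiply(3, 4), divide(14, 3))
```

Of course, a real processor does this far more cleverly, but the principle is the same: it counts.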
But like mathematics, computing has evolved abstractions which enable long operations to be expressed more concisely. Mathematical abstractions have proven to be extremely useful, enabling us to create an ever-increasing box of mathematical tools which we employ in performing practical calculations used in nearly every aspect of our lives. Anything in the universe can be expressed using mathematics. And this is because everything we perceive we perceive by mathematical means.
We identify a thing by defining it. And the word "define" is derived from the Latin "finire," meaning "to bound, to limit." The bounds of anything are determined by measuring it, or by expressing what is that, and what is not that. Measuring involves using some form of mathematical expression. The simplest form of mathematical expression is binary: "that" is 0, and "not that" is not zero. In other words, similarity is determined by measuring difference. When the difference between 2 things is measured as 0, they are identical. In a binary number system, these 2 ideas are expressed completely, and may be used to express any mathematical idea.
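A minimal Python sketch of this idea, with illustrative names of my own:

```python
# Toy sketch: identity measured as a difference of zero.
# "That" is 0; "not that" is nonzero.

def difference(a, b):
    return abs(a - b)

def is_identical(a, b):
    # A measured difference of 0 means the two things are "that"
    return difference(a, b) == 0

print(is_identical(4, 4), is_identical(4, 9))
```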
I have always been grateful that the first programming language I learned was C. C is a procedural language, and a low-level language which is structured to resemble mathematics. Much of the C language looks like algebra, while some of it resembles trigonometry more closely, such as the definition of functions.
Like mathematics, the seeds of more abstract languages are in the semantics of C. And because of the demand for software that performs increasingly complex operations, abstract programming concepts such as Object-Oriented programming have been built on this foundation, which is itself an abstraction.
Object-Oriented programming is actually an abstraction of procedural programming, as all programming is indeed procedural. A processor performs exactly 1 mathematical operation at a time. However, like a function, which is an abstract concept encapsulating many single procedural operations as a single atomic unit, an object is a similar encapsulation of processes, an encapsulation of encapsulations as it were, which is a convenience we use for the sake of expediency.
It is this very abstraction which provides the power of object-oriented programming. By encapsulating groups of operations within other groups of operations (ad infinitum) which perform the same or similar tasks, we can omit the details, which do not change from one use to another, and accomplish much more with much less physical work (writing code, that is). In addition, because our brains employ abstraction "to distraction," we find the abstraction of object-oriented programming more "intuitive," when used to deal with concepts which seem less mathematical to our perception, due to our advanced abstraction of those concepts in our own minds.
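This layering of encapsulations might be sketched in Python like so (a toy example; the class and method names are hypothetical, chosen only to illustrate the layering):

```python
# A function encapsulates single procedural operations as one atomic
# unit; an object encapsulates functions: an encapsulation of
# encapsulations.

class Counter:
    """An object: its methods hide the procedural detail of counting."""
    def __init__(self):
        self.value = 0

    def increment(self):              # a function: one named unit of work
        self.value = self.value + 1   # the single underlying operation

    def add(self, n):                 # a function built from other functions
        for _ in range(n):
            self.increment()

c = Counter()
c.add(5)                # the caller never sees the individual increments
print(c.value)
```

The caller of `add` omits all the details, which do not change from one use to another, which is exactly the convenience described above.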
However, this also exposes a danger, a great danger in fact. It is now possible to employ these abstractions without fully understanding the underlying mathematical principles that they encapsulate. A real-world example of this can be seen in nearly any convenience store or fast-food restaurant, when the clerk makes change for you. I am old enough to remember when such people would "count your change" to you. If you paid for something that costs $2.50, and you produced a $5.00 bill, the clerk would count into your hand, starting from $2.50, and adding each amount as he/she counted: "$2.75 (quarter), $3.00 (quarter), "$4.00 (dollar bill), $5.00 (dollar bill)." When the clerk reached $5.00, you had your change, and it was correct. In addition, you knew it was correct, because it had been counted out as you watched. Today, a clerk punches in a price (or reads a bar code), types in the amount received from the customer, and the cash register does a subtraction to reveal the change required. Unfortunately, most of these clerks couldn't make change if their lives depended on it.
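The clerk's counting-up method can itself be sketched in Python (a toy model, with amounts in cents and names of my own choosing):

```python
# Making change by counting up from the price to the amount tendered,
# the way a clerk once did it. Denominations are in cents.

DENOMINATIONS = [2000, 1000, 500, 100, 25, 10, 5, 1]

def count_change(price, tendered):
    """Return the coins/bills handed back, smallest first."""
    change = []
    remaining = tendered - price
    for d in DENOMINATIONS:            # standard greedy, largest first
        while remaining >= d:
            remaining -= d
            change.append(d)
    return list(reversed(change))      # clerk hands smallest first

def narrate(price, change):
    """The running totals the clerk calls out: '...$2.75, $3.00, $4.00, $5.00.'"""
    running = price
    calls = []
    for d in change:
        running += d
        calls.append(running)
    return calls

coins = narrate(250, count_change(250, 500))
print(coins)   # the running count from $2.50 up to $5.00
```

The cash register's subtraction gives the same answer, but the counting-up version makes the underlying arithmetic visible, which is the whole point.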
The same danger exists in the world of programming. Most developers have little or no education in computer science. Instead, they have gone to some technical school (perhaps) where they were taught "how to do" various types of things, and not taught "why to do" them. The end result is a developer who has a limit to their ability. Once a problem falls outside the limited realm of what they have been taught, and requires a more low-level approach, or would be better solved with a low-level understanding, they are lost.
The difference here is that, unlike convenience store and fast-food restaurant clerks, these are supposedly "professional" people who should not be stopped at any point in the development process. And because of the demands imposed by an ever-increasingly-complex set of software requirements, problems that require a low-level understanding of the mathematical principles of computing are almost inevitable in the career of any developer. A convenience store clerk is not expected to be able to solve complex problems. Their job is to collect money, to keep the store shelves stocked with merchandise, and to perform similarly simple tasks, and they are paid in accordance with the skill level required. But a developer faces a much higher expectation of skill and knowledge, and above all, an ability to solve complex problems.
So, we find ourselves on the horns of a dilemma here. The solution, as I see it, can only be applied on a personal level. It is important to understand the basics before attempting to enter the realm of abstraction. If one does this, one will be successful. If one does not, there is likely to be a point at which one will have to learn the basics remedially. The former is more desirable, as the point at which one is forced to learn the basics remedially is likely to be an inconvenient one.
At least, that's how I figure it, I reckon.
Sunday, April 22, 2007
You've got to love it. Or hate it. Cilantro is like Jesus in that way. People are generally at one extreme or the other about this herb, spice, whatever you want to call it. Personally, although I love Jesus, I hate Cilantro. So does my wife. So do a lot of other people.
Thus far I have been unable to find any reliable statistics on the subject. I did find one rather unscientific and informal survey, but my own inquiries on the subject have yielded similar results. It seems that most people actually like Cilantro, and I've got no problem with that whatsoever. I do, however, have a problem with the growing infiltration of this herb into popular culinary culture. There is a significant minority of people who don't just dislike it; we find it repulsive.
What other herb has a web site devoted to those who hate it?
Last month I went to Seattle to catch up with Microsoft, and was horrified to discover that almost everywhere I went, Cilantro was added to the food. Last week, I bought some Sam's Choice Chicken Enchiladas, and discovered that they too were infused with Cilantro. Yesterday I took my wife out to dinner at Applebee's, and sure enough, I had to pick through the menu and ask specifically in order to avoid the rank stuff, which tastes like ass to me, and leaves a lingering ass taste in my mouth for hours after consumption.
I used to like Mexican food. Now I have to be very careful. The disgusting weed is proliferating throughout popular culture, for whatever reason (influx of Mexicans, perhaps?), and some of us are reeling from the effects.
It seems that some of us, due largely to our genetic makeup, are not just turned off by the taste; we find it absolutely awful. This is nobody's fault; we are what we are. But what on earth would possess the purveyors of popular food to infuse this herb into an increasing palette of culinary creations?
Getting back to Jesus, who has always inspired controversy: For those of you who are offended by Jesus, imagine walking down the street and finding "Jesus Saves" signs everywhere you looked. That is what eating out (and increasingly, eating prepared foods bought from grocery stores) is like for us Cilantro haters.
My research reveals that my experience is not as uncommon as you Cilantro-lovers might think. Here are a few items I found this morning in my research:
That's quite a bit of controversy. Try it for yourself. Just type the words "hate Cilantro" into Google, and see what you get back.
So, considering the number of people to whom Cilantro tastes like ass, burnt rubber, soap, and the rest of the multitude of descriptions that I've found worldwide, why would it be so popular?
I conjecture that it is popular because of the "Emperor's New Clothes" syndrome. That story was not written about an Emperor, but about people. We have a tendency to "follow the crowd," due to the social nature of our species, perhaps. So, when the hoi polloi, for whatever reasons they may have (and most of their motivations are suspect), proclaim that something is great, the Lemmings flock to the slaughter. This is, of course, why I always say "Neither a Follower Nor a Lender Be."
I suspect that the motivation behind this particular piece of nonsense is political, meaning that it is most likely evil. But trends produced by purely political motives, and which are not productive of anything good, seem to eventually die a natural death. We can only hope.
Hey if you like Cilantro, use it! I have no problem with restaurants making it available by request, even. But for something which is so offensive to so many of us to be included in an increasing number of foods without even warning us, well, it's just going to hurt the business of those who practice it.
I remember when I was a kid, discovering that I absolutely loved garlic, and wondering why there was so little of it in foods produced for popular consumption. I found out that although most people like Garlic, most people also didn't want a whole lot of it in their food, for social reasons, which I don't need to describe. The purveyors of food would therefore tone it down, and you could of course add more if you liked. I had a similar experience with hot and spicy foods, which some people don't like at all, or are averse to for medicinal reasons. Sure, I wanted the hot stuff, but as long as it was available for me to add, I accepted it.
But those days, it seems, are being replaced (temporarily) by a culture that watches the Elite, and seems to like to follow them, regardless of how bad the food tastes.
Fortunately, this too, shall pass.
Sunday, February 25, 2007
We sink zis is important. One of my favorite films of all time is Close Encounters of the Third Kind. Directed by Steven Spielberg in 1977, it is not only recognized almost universally as a great film, having won dozens of awards for filmmaking, but it illustrates some incredible philosophical ideas. In fact, this post is about (at least) one of them.
In my last post, I discussed the idea that intuition may be much more reliable than cognitive deliberation. Intuition is not exactly a cognitive process, in the sense that we think consciously about it. It is cognitive in that we perceive it, but often without words or thoughts of any kind.
Close Encounters of the Third Kind is about this intuitive experience. The main character, Roy Neary, played by Richard Dreyfuss, along with a number of other folks, encounter UFOs (alien space ships) at around the same time, most of them on the same night. While they are not abducted (although some are), they are all marked with unconscious impressions of something they cannot explain to anyone. They know they know something; they just don't know what it is that they know.
As time goes by, they become obsessive about this unconscious impression, and several of them begin to draw pictures of it, or create models of it. Neary goes from seeing it in various common objects, to modelling it in his mashed potatoes at dinner, and finally driving his family out of the house in an attempt to build a model of it with dirt, grass, and shrubs he has carried into the house from the front yard.
In the meantime, people from some unknown international governmental agency are seen investigating the people and events associated with the UFOs. They are aware of what has happened, but very secretive about it. François Truffaut plays the central character among these covert officials, and at one point he makes the remark in his French accent, "We sink zis means somesing. We sink zis is important." It seems that even the top secret organization doesn't know quite what to make of the visitors or their intention. It does, however, understand that what is happening is real, and that it is probably somehow important.
All of this illustrates a kind of thinking that I think is quite useful at times, in terms of problem-solving. Perception and thought are 2 entirely different things. Perception is pure; it is the conscious (or unconscious) reception of pure data by the mind. Thought is the cognitive process which we use to analyze the data. Thought may or may not be reliable. And it is the cognitive process of thought which creates the conscious model of that which we perceive. Therefore, what we consciously model from our perception may or may not be reliable.
How the mind works is still a subject of much conjecture. We are continually able to gather more data about the activities of the brain, and the behavior and communications of individual human beings. But the mechanism of the brain is still beyond the ability of science to understand. We have recognized and identified a number of different processes which we have names for, but little else. Among these are personal identity, attention, and cognitive control, all of which I want to discuss here.
The human mind (and I refer to the mind rather than the "brain" deliberately, because I don't necessarily want to limit the mind to the organ which we call the "brain") is a multi-tasking operation, behaving in many ways similarly to a multi-tasking computer. Regardless of how many operations may be occurring simultaneously, our minds are constantly performing a wide variety of tasks "at one time." We know that a computer processor is capable of only one operation at any given time, and that it simulates multi-tasking by switching from one task to another at an incredible rate of speed, performing small "slices" of each operation in a large loop process. We don't know whether the human brain does this, however. We do know that the brain exhibits simultaneous activity, which would tend to indicate many simultaneous processes, as if we had many processors in our brains. But exactly what that activity is, we do not know yet.
Still, among those processes, there is one which we call "attention," and it seems to behave as if it is a single thread, which is capable of "time-sharing" like a computer processor. That is, it can jump among many different foci (points of focus) at a high rate of speed. It does seem, however, to only be able to focus on one "thing" at a time. Attention is somehow associated with personal identity, and it may be that our sense of identity comes from this (apparently) single-threaded process; we may identify "self" as this process. I don't know. But I "sink zis means somesing."
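The time-sharing behavior described above can be simulated in a few lines of Python (a toy sketch of a single-threaded scheduler, not a claim about how attention actually works):

```python
# A single "attention" thread simulating multitasking by time-slicing:
# it focuses on one task at a time, performing one small slice of each
# in a loop, switching rapidly among them.

def spell(word):
    for letter in word:
        yield letter           # each yield is one "slice" of the task

def attention(tasks):
    """Round-robin over tasks, one slice of focus at a time."""
    trace = []
    while tasks:
        task = tasks.pop(0)            # the single point of focus
        try:
            trace.append(next(task))   # perform one slice
            tasks.append(task)         # then move to the back of the queue
        except StopIteration:
            pass                       # task complete; drop it
    return trace

print("".join(attention([spell("abc"), spell("xyz")])))
```

The interleaved output shows both "tasks" making progress, even though only one slice is ever executed at a time.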
However, apparently simultaneously, there are other mental processes at work. There are routines that have been stored, such as those that cause the heart to beat continuously, as well as the operation of the lungs and other organs of the body. We are capable of performing multiple physical tasks simultaneously, such as walking and talking at the same time. These processes are not conscious. We do not consciously control them. At one point we may have consciously directed their development, such as learning how to walk, or how to talk. But at some point we cease to control them consciously. They are stored as complete routines and executed automatically.
We also know that decision-making is manifested as both a cognitive and an unconscious process. Well, perhaps it is not agreed upon whether it is always a cognitive process, but I will elaborate further to clarify. In any case, there is cognitive control of at least some decision-making, and possibly unconscious control of other decision-making.
As an analogy for unconscious decision-making, consider a software routine, standing in for an unconsciously-controlled stored mental routine, such as walking. A software routine is a set of instructions which contains selective processes. "If" statements and "switch" statements are such selective processes, which constitute a form of software decision-making. If one condition is true, one set of instructions is followed. If another condition is true, another set of instructions is followed. Thus, software makes decisions, however unconsciously. Mental routines such as walking must necessarily include such decision-making, albeit unconscious. When we are walking, and we encounter a dip in the ground, our walking process "automatically" accounts for the change in orientation, and the correct combination of muscular adjustments is made, enabling us to continue walking, without any conscious control, within certain limits. If those limits are exceeded, such as a sudden change, a hole in the ground, for example, our conscious deliberative process is notified, and we swiftly get consciously involved in the corrective process.
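That escalation from automatic adjustment to conscious intervention might be sketched like this (a toy Python model; the threshold and names are purely illustrative):

```python
# An unconscious routine with built-in decision-making: small
# variations in the ground are handled automatically, but beyond a
# certain limit, control escalates to the conscious process.

AUTOMATIC_LIMIT = 10  # largest dip (cm) the stored routine absorbs unaided

def take_step(ground_drop_cm):
    if ground_drop_cm == 0:
        return "normal step"             # the usual stored routine
    elif ground_drop_cm <= AUTOMATIC_LIMIT:
        return "automatic adjustment"    # unconscious correction
    else:
        return "conscious intervention"  # limits exceeded: notify attention

print([take_step(d) for d in (0, 4, 30)])
```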
So, it is at least possible that we make decisions both consciously and unconsciously, and that those decision-making processes which are unconscious are generally more reliable, because they have been constructed over a long period of time, involving a lot of experience.
The conscious deliberative process, while less reliable, probably due to its apparently single-threaded nature, is that which controls the creation of the unconscious routines that we store and use. How it does this is, of course, not known. And it is entirely possible that poor unconscious decision-making processes are the product of long-term input of bad data. That is, a person who is trained at a young age to distrust authority, by means of a bad parent, for example, may exhibit poor decision-making habits (routines) with regards to other authorities in adulthood. These can be corrected by long-term input of corrective data. But that is the subject of another discussion.
The point which I am getting at here is that problem-solving is a cognitive process as well. As such, it involves the attention, or conscious involvement, of the person doing the problem-solving. Because the modelling process of the conscious mind is not necessarily accurate, as direct perception is, our problem-solving ability may actually be hampered by conscious thought.
We know, for example, that when we are struggling to solve a problem it often helps to "sleep on it." The process of removing the conscious attention from the problem, even sleeping, about which little is yet known, seems to allow the data which constitutes the parameters of the problem to be organized better, perhaps associated with other information that may be related, and of which we are not (yet) consciously aware. It is this act of allowing the (possibly superior) unconscious processes to work on the problem that seems to "inspire" us with new ideas that help to solve the problem. This is sometimes also referred to as "letting go."
I believe that "zis means somesing." It is not pure will that is most capable of solving problems, of coming up with creative solutions; it is "intuition." This speaks directly to the subject matter of my previous post, which is concerned with intuition versus deliberation. That process which is termed "cognitive control," and which may be what we identify with "self," has a strong impulse to exert control over our other processes. We feel uncomfortable when we cannot trace the logic of a solution. Yet, we are constantly creating solutions to certain types of problems without any conscious understanding of them. How do you walk? Can you enumerate the muscles and components of the nervous system that you employ in order to do it? No, at least not without a great deal of scientific study. Yet, it is an ability which almost everyone has.
And so, it is my thought that perhaps we often take conscious control of problem-solving when we would be better off not to. Sometimes it is better to loosen our focus, to "let go" of a problem, to allow ourselves to float freely in a stream of consciousness, in order to most effectively come up with a solution to a dilemma. The more complex a problem is, the less likely our conscious cognitive process is to be able to solve it in any reasonable period of time. It's a simple matter of resource use. If the conscious process is indeed single-threaded, at a certain point it can only switch between so many sub-threads before it runs out of resources. The unconscious mind is apparently not limited in the same way.
In practical terms, when I begin a project, I often wait several days after being given the requirements and parameters before actually doing anything about it. That is, I don't give it much conscious thought at all. I will allow my mind to freely wander to and away from it. I will sometimes "play with it" in my thoughts, deliberately "blurring" my thoughts about it, concentrating on feelings and impressions rather than concrete ideas and thoughts. Then, when I begin the actual planning process, it seems that much of the structure is already present in my mind, having been created by my unconscious thought processes. Like the elves that helped the shoemaker in the old fairy tale, much of the creative work has already been done for me, as if by magic. And the quality of the work is much better than it would be if I had struggled over it consciously.
This is not to negate the function of the deliberative process. It certainly has its place, and comes into play at just such a point, filling in the details and creating all of the actual end product. The product cannot be produced without it. But the design, the inspiration, comes from the unconscious.
At any rate, while I know that this concept is not yet fully fleshed out, "I sink zis means somesing. I sink zis is important." It is my hope that perhaps this might stimulate others to do the grunt work.
Saturday, February 17, 2007
For Christmas, my Princess gave me a new subscription to Scientific American Mind magazine, which I had previously bought at airports on occasion. At one time I had a subscription to Scientific American magazine, but somehow I let it lapse. I had always enjoyed reading Scientific American. Its articles are written not by journalists, but by scientists and researchers. They deal with cutting-edge science, and I am a cutting-edge type of person.
When I discovered Scientific American Mind magazine, which is published by Scientific American, I immediately fell in love with it. The subject matter is fascinating, as it joins together research from several fields that until recently have remained largely separate: Psychology/Psychiatry, and Neuroscience. Only recently have we had the tools to undertake a serious study of the mechanisms of the brain, which, like most of our body, is composed of billions of nearly-identical cells, neurons for the most part, but which is capable of incredible computational skill, such that it will be a long time before computers begin to catch up with it. There is an inner simplicity to its structure which yields enormous complexity and power.
At any rate, while perusing the web site and the magazine recently, I came across a series of articles that provoked thought in me, which I would like to share. In fact, most of what I read in Scientific American/Mind provokes thought in me, but this line of thought in particular has pervaded my mind quite a bit recently. This leads me to believe that there is something important (at least to me) lurking underneath it somewhere.
In this case, I was poking around on the Scientific American web site, and came across a series of blog posts, which were all centered around the concept of Intuition versus Deliberation, and related to several articles that deal with the concepts in various ways. It seems that there is now scientific evidence that "intuition" is more reliable than "deliberation" in the decision-making process. I believe (intuitively?) that this is likely to be confirmed, and that the consequences of these discoveries are likely to bring a great deal of benefit to the human race.
Our conscious mind is, at least from shortly after birth, almost entirely consumed with that process we call "Thought." Thought, Cognition, and Consciousness are all closely related, and all related to the process of pattern-recognition, abstraction, modelling, and organization which is constantly occurring in our mind, at least when we are awake (or "conscious"), and perhaps even when we are not.
Because we are social beings, we have also developed languages that enable us to communicate thoughts as abstractions to one another, and because we use that language pervasively throughout our lifetime, it is also a large component of our thought process. We often think in "words," as if we were having a conversation with ourselves. This thought process is enormously complex, and must consume a great deal of mental resources, as evidenced by the sheer size of the areas of the brain devoted to it.
However, there is another process at work in our brains as well, one that precedes thought. Our brain is, after all, a computer of sorts. It is capable of performing incredibly complex calculations far faster than any computer we have yet created. It is also capable of learning, responding "intuitively" to positive and negative stimuli, and creating various subroutines that govern the decision-making process.
A perfect example is that of walking. We are not born with the capacity to walk. It is learned when we are infants, and it takes several years to learn it. We learn it by a combination of factors, including observation and motivation. We are motivated by desire. A baby wants to move from one place to another. It begins by squirming, then rolling, followed by crawling, and finally walking. Walking involves the coordination of hundreds of muscles, combined with the perception of very fine differences in balance. It is not an easy trick to master. This is why robots do not yet have legs (at least like ours). Yet, once we have learned how to do it, we perform it without any cognitive thought involved. Each step involves a complex sequence of perceptions, both internal (balance) and external (environment), followed by a sequence of decisions (which leg to move, how much force to apply to which muscles, etc.).
Thus, it is provable that we are capable of making decisions accurately without conscious thought.
Therefore, it is logical to presume that we might be able to apply the same sort of process to our other decision-making. We are constantly making conscious decisions as well. We decide what to eat, what clothes we should wear, whom we should marry, whether and when to use force as a means of accomplishing our goals. However, our conscious thought process is somewhat hampered by our language. Our language is inexact, and at times ambiguous.
The language of integral mathematics is rigorous and exact. It is this very exactness which makes computers so reliable (not software, but computers - that is a different topic altogether). Computers deal exclusively with integral numbers, and apply exact mathematical rules to them. 1 + 1 always equals 2. A computer processor is, at its core, simply a counter. It adds by counting up, and subtracts by counting down. Multiplication is a derivative of addition, and division is a derivative of subtraction. Addition, subtraction, multiplication, and division are the basis of all mathematics. Mathematics is the basis of all computer programming.
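The "processor as counter" idea above can be sketched as a toy illustration. This is purely pedagogical (real processors use binary adder circuits, not loops), and every name here is my own invention:

```python
# Toy illustration of arithmetic built from counting: addition by counting up,
# subtraction by counting down, multiplication as repeated addition.
# Works only for non-negative counts; a deliberate simplification.

def add(a, b):
    for _ in range(b):
        a += 1          # count up b times
    return a

def subtract(a, b):
    for _ in range(b):
        a -= 1          # count down b times
    return a

def multiply(a, b):
    total = 0
    for _ in range(b):
        total = add(total, a)   # multiplication as repeated addition
    return total

print(add(1, 1))       # 2, always
print(subtract(10, 3)) # 7
print(multiply(3, 4))  # 12
```

The exactness is the point: given the same inputs, the counter yields the same answer every time, with no nuance and no interpretation.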
Human language, on the other hand, is much less exact. We employ a limited set of linguistic symbols to represent a nearly (or perhaps entirely) limitless set of ideas. We accomplish this by employing the same symbols in different ways, in a highly-complex system of context and association, which is not integral in nature at all. When we parse human speech, we must interpret it. Each individual word is evaluated in the context of the words with which it is combined. Even more ambiguously, words are evaluated in the context of a set of ideas which forms an environmental influence upon their meaning. In other words, language is highly nuanced. Hence, we may at times have difficulty understanding one another. And because we use language internally in our cognitive thought process, we may even have difficulty understanding our own thoughts.
This thought process is further influenced by desire, which forms an environmental influence upon our internal language. We hear what we want to hear. We believe what we want to believe. We are capable of deliberately, and even unconsciously ignoring "unpleasant" thoughts and ideas. What You Seek Is What You Get.
Quite often, this leads to an internal conflict of thought, a debate of sorts which is constantly being conducted in our cognitive thought process. This is the essence of deliberation. The resulting decisions may or may not be helpful. Therefore, we are prone to "error." This can be seen in such things as "criminal" activity, immorality, and psychological disturbances of various sorts.
However, it is important to remember that there is still a part of our brain which remains incredibly reliable. This is the part of our brain which makes decisions without thought, without deliberation, the part of our mind which we use to walk, to run, to dance, to throw a ball, and so on. Is it possible to employ that reliability in the cognitive decision-making process? I believe this has already been demonstrated.
Albert Einstein spoke of the "leap of intuition." He often proposed ideas that did not arrive via any deliberative or logical process. They just "seemed right." It took a great deal of logical and scientific work to confirm these ideas, and many of them were confirmed long after his death. Yet, they were confirmed.
As children, before we are capable of conceiving complex abstract ideas, we seem to have a similar intuitive ability to "know" what is the right decision to make. Certainly, we do not always make the right decision as children, but when we are confronted with the consequences, we generally own up to our mistakes. It is simply desire which causes children to make wrong decisions, just as it is desire which causes adults to make them. The difference is that the child will "know" that they are doing wrong. An adult will often rationalize wrong-doing.
The word "intuition" is derived from two Latin roots, "in" (at, on) and "tueri" (to look at, watch over), and in that sense, essentially means "direct perception." This is a revealing definition, as it implies that one can perceive things without cognitive thought, which is of course true, as exemplified by infants.
It would seem that if it were possible to employ such intuition in the decision-making process, as we do when walking, and filter out the internal dialog/debate, the consequences of our decisions would improve dramatically. Various scientific studies I have been reading about, in Scientific American Mind and elsewhere, seem to confirm this idea.
In fact, various forms of meditation seem to focus on such a process, that of emptying the cognitive mind of all abstract thought. While meditation seems to be associated with religion in many cases, I believe there is some aspect of the process which is not religious in nature, but purely a form of mental discipline, one that allows the intuitive, unconscious, and highly-accurate mechanism in our brains to be used in the decision-making process.
I look forward to hearing about continued study in this area.
Saturday, January 27, 2007
Windows Vista and Personal/Social Responsibility

In case there are any Kurdish people reading this, the title is a phonemagram. It has nothing to do with Kurds or Kurdistan.
I haven't been exactly faithful in keeping my blog up to date lately. I have been busy. Very busy. However, as I have no idea if anyone reads this blog, and no indication that anyone does, I suppose it doesn't really matter. At any rate, there has been little I could do about it, because I have been busy.
Along with the usual herculean work load, I have been getting Windows Vista installed and configured on my machine here at home. It took a while. Three or four weekends, as I recall. My first attempt involved upgrading from my previous XP Professional operating system. While educational, I eventually realized that recovering from the upgrade would take longer than a clean install of the operating system and a reinstall of all the software. So, last weekend, that is what I did. Now everything is beautiful, and I feel much better.
Don't get me wrong; most users are not going to have the difficulty that I had. I am a software developer, and the hardware and software that I require are somewhat outside the bell curve, particularly with regard to the software. In fact, most users aren't likely to be installing the Vista operating system for themselves, and are really not likely to be able to, much less to install software on it, at least for the time being. And there are good reasons for this, having to do with security and support cost.
The chief security improvements in Windows Vista (at least those that are the most visible) fall into three categories: User Account Control, Windows Defender, and Service Hardening.
User Account Control

This, along with Windows Defender, is the chief reason why most users will no longer be able to successfully add and use new software, at least in many cases, at least for a while (until current software becomes obsolete). And this is a (mostly) good thing. For one thing, it speaks to the "support" issue I mentioned earlier. Please allow me to elaborate.
For many years, computer operating systems have evolved much like automobiles. Like early automobiles, the inner workings of the "engine" have been fairly simple and exposed, enabling the owner of the computer (or the car) to tinker rather easily. This was, for both automobiles and computers, an advantage and a disadvantage. On the plus side, one could easily save money by doing one's own "tune-ups," minor repairs, and adding accessories, without knowing too much about what one was doing. On the minus side, one could easily get into trouble with the changes that one made, if one didn't know what one was doing.
Like automobiles, computers have become more sophisticated as time has gone by. Automobiles have abandoned things like spark plugs and distributors in favor of electronic ignition, and have had computers in them for various reasons, including aiding in maintenance. Computers have evolved even more, and more dangerously, with the advent of the Internet and distributed computing.
Unlike automobiles, computers are in a much less secure environment than they used to be. But like automobiles, the inner workings of computers are increasingly complex and difficult to get into, and in both cases, chiefly for the purpose of disabling the user's ability to damage the machine by tinkering with it.
Unlike automobiles, computer users do not require a license to operate one, even though, like automobiles, computer users now "drive" on "public roads," like the "Information SuperHighway" (an older term that was coined for the Internet). I have often joked that users should be required to have a minimum of understanding about computers and obtain a license by passing a fairly simple test in order to operate one. While I disdain the idea of government interference on the Internet (or almost anywhere else for that matter), the public nature of the Internet makes the idea less offensive to me, although I would still not advocate such. The day that the governments get involved in the Internet is the last day of freedom on earth. Governments are never satisfied with a little control; they thirst for absolute control. While government is a necessary evil, it is both necessary, and evil. But that is a topic for another discussion.
We are left with a dilemma. Computer users are increasingly dangerous to one another when they interact via the Internet. The world is not a nice place; it is full of evil-doers. Hackers and other socially-irresponsible people fill the Internet with SPAM, malware, viruses, trojan horses, network attacks, and the like. The average user is not only ignorant about what to do with regards to such evil; the average user is willfully ignorant about such things. "I don't want to know how it works; I just want it to do what I want it to do" is the slogan of the day.
People are willfully ignorant of the stupidity of computers. The old saying "Garbage in, Garbage out" (GIGO) remains true, even though most people are dazzled by what they perceive as the intelligence of computers and software. That impression stems from the fact that computers can perform a huge number of instructions in a blazingly short amount of time, from the hiding of their inner mechanisms, and from the natural laziness inherent in the human psyche.
Therefore, people expect their computers to protect them, rather than the other way around. Perhaps it is the influence of creeping socialism that has led to this impression. Socialism has always relied on the inherent laziness of people to enable the empowerment of larger, more Machiavellian government. But that again, is a topic for another discussion.
In any case, software vendors, such as Microsoft, are left to wrestle with the dilemma. To remain competitive, they must create software that satisfies the desire of users to be able to accomplish more, while protecting them from themselves. To keep support cost down, they must make it increasingly difficult for users to do things to their computers that will enable them to break them, as well as breaking other computers by proxy, via network attacks, spyware, malware, etc.
So, like automobiles, it has become necessary to make the engine more difficult to tinker with. The alternative would be to empower government to handle the protection of users from one another, an alternative that only a government could find attractive.
Enter User Account Control. This feature addresses some of the issues that have been passed down from one generation of operating system to another, issues which have simplified the operation of the computer in the past, but now make it much more dangerous. Users have traditionally run their computers with Administrator privileges for the local machine, which gives them essentially carte blanche permission to do anything to the computer via any application they run. Users, even administrators, will now run under a Standard User Account, and when an application needs permissions beyond those allowed for that account, it will prompt them for the necessary credentials.
While this might seem problematic with regard to the day-to-day operation of the computer, there are things which can be manually configured to prevent the constant interruption of the user for such things as registry permission. But they must be manually configured, with User Account Control firmly involved in the process. And this requires a more sophisticated understanding of security than your average user is likely to have.
This is going to be a tragedy for malware, which typically assumes the identity of the logged-on user, and attempts to run processes without the user's being aware of it. It also makes it more difficult for users to perform certain types of tinkering, at least without a minimum of knowledge about the inner mechanisms of the computer.
Thus, not only is the computer better protected from the evil outside world, but also from the willfully-ignorant average user. This means that software companies can continue to provide software that does more without being overwhelmed by support incidents that stem from user error.
Windows Defender

Windows Defender has been available as a free add-on for the XP operating system, and marketed as a tool for protection against spyware (while Windows OneCare Live has been marketed as the Microsoft anti-virus solution). While this is certainly true with regard to spyware, it is not the whole truth with regard to the Vista operating system.
Windows Defender is used by Vista to support other services that monitor the health of the system. It also allows the user to remove or disable any software running on the system that may be suspicious. In a sense, it is the "Software GateKeeper" for the operating system.
Service Hardening

Windows Services have traditionally been a point of vulnerability in the system, mostly due to the fact that they run without any visible user interface, performing tasks in the background, without the user's knowledge. Again, traditionally, a number of factors have enabled Services to perform necessary maintenance tasks in the context of the local System or Administrator account. This is no longer the case.
Most Services have traditionally run under the System or LocalSystem account, which has granted them carte blanche access to almost everything in the local system. Vista runs most services under the LocalService or NetworkService accounts, accounts which are much more restricted with regard to the changes they may make to the operating system.
Services now run with individual security identifiers (SIDs). This gives each Service a unique identity, enabling each Service to be individually configured with regards to what specific permissions it has. Each Service may have its own Access Control List (ACL), which enables it to allow or deny access to its services on a user-by-user basis.
Services are write-restricted on an individual basis, meaning that each service can be explicitly granted or denied write permission to files and registry entries.
Services by default are not allowed to interact with the user's desktop, preventing cross-session interaction, and such things as Shatter attacks.
Services are configured with individual Firewall policies, meaning that each Service has specific Firewall privileges, rather than carte blanche access to network ports and addresses.
As a software developer, I need to run as an administrator, I need to be able to grant applications such as Microsoft Visual Studio permission to do low-level debugging, and I need to run a plethora of diagnostic and development applications that perform operations on the local operating system and the network. I need to set up Internet Information Services on the local machine, to run a Microsoft SQL Server on the local machine, and so on.
So, I had a bit of difficulty installing all of my software. It wasn't really difficult; it just required some research and time. Still, I was able to set up my system and software in a few days. And if I had to do it again, I could probably do it all in a single day.
But as a software developer, I can understand and appreciate the enhanced security in the Vista operating system. I can accept the apparent intrusiveness of User Account Control (which I have turned off on my local machine, though I would not recommend that to anyone other than a developer). I can also accept the fact that I will have to perform additional tasks with regard to writing software to run on Windows Vista. This will require more time in the short run, but save much support time in the long run. I'm sure every software developer has experienced the headache of hearing from users who have done something completely unrelated to the software in question, which has had an effect on their software, and had to straighten out a user's self-made mess. At least every developer who has been in this business long enough has experienced this, and no doubt Microsoft has had an earful of it.
Support is by far the most expensive aspect of software development, contrary to what most people may believe. While development itself is costly, the cost is short-term, while the cost for support is on-going, and may go on for years.
So, Mrs. Lincoln, other than that, how did you enjoy the play? I have to say that I am highly impressed with Windows Vista. There is far more there than meets the eye, and a lot more than you will hear about in advertisements and commercials for the operating system. Eye candy sells software, but it is power and potential that gives it legs. Vista is well-supplied with both power and potential, more than anyone not working at Microsoft will know about for years to come.
I look forward to the continuing learning experience of working with it.
Thursday, January 11, 2007
Anyway, as I was at some point beyond the visceral amplitude required for some form of ordinality, insofar as many a hungry and iterative condition would permit me to bewail the pandemonium of a phantom asparagus, I betook me to assail in some small fashion, as it were, absent from any fall plankton, or juice in a flesh gibbon.
The means of such, having befallen upon accomplishments both vicarious and educational, so it was as it should have been perhaps, a glancing backwater host of effervescent personification. No mere impediment would inscribe this invective. Only a shoeshine in a vacuum, and sometimes not at all, of course.
Altruism aside, let us happen upon a grotesque, for only in the event of a vacant idiocy would this imaginative edification find any form of ornamental vice with which to whiten your wick. Sick? Boy, a bowl of basins, not officially so, but more in the vein of a scar victoriously painted inside the phantom asparagus, handling its own fruit in a stockyard cannibal. It is in such a suit, me a fortunate empty (of course), and only a king with a kind of a certain wild animal.
Assiduously, the vale investing afar of a formal opulence, in the form of a fairy, fell from an apple ordinary and able, belatedly pasted itself by the bare and bony flail. Until such time as would in all honesty, hitherto and frowardly skating over the silken tumbler, such and if as much as might, the candy became a kind of cane, angling about on a single wet and wobbly wheel. But then you might have known from the mangled overture, the vicious wink, the thought of which I stank.
And so upon the leaving of this painted cowboy, let not your best be less than blessed, in case of a crazed and lazy crayon, long and lost in the caverns upon which to play no fluid icon. A smatter of actual happenstance would in all victory be spoiled by the gradual violence in a kettle of kitty litter left behind by a blonde in obvious bob.
Eeyore, about seventh.
Thursday, December 28, 2006
I had this idea for a short film, something like you'd see on YouTube (which I never look at, unless someone sends me something). It was what I thought was a really esoteric joke, and funny because it was so inappropriate. But Der Weiderschlaussen didn't seem to think much of it. So, let me know if you get it.
OK, it's about Gerald Ford and James Brown, who are currently touring separately, coming together for a special viewing for folks who are fans of both of them. A sort of "Together for the Last Time - Gerry and Jim!" show.
The opening shot is a large theater stage with a red curtain drawn, sort of like that curtain in that room in Twin Peaks, you know, the room where everybody talks backwards? Anyway, it's a stationary shot of the curtain, perhaps with "I Feel Good" playing in the background.
The curtain opens to 2 coffins standing side by side, one with Gerald Ford, and one with James Brown, facing the audience. There is silence for about 10 seconds.
Finally, the body of Gerald Ford falls forward out of his coffin, face down on the stage. Laugh track. The curtain closes.
So, is that too esoteric? Too obtuse? Too inappropriate? Too sick?
There's something about it that I like. But I do have an odd sense of humor.
Tuesday, December 19, 2006
Programming is all about problem-solving, but so is most everything else in life, depending on how you look at it. I will always be grateful that I became a programmer, as it taught me to see almost every aspect of my life as a task, a set of requirements, and a set of problems to be solved. From my experience as a programmer, I have learned the science of problem-solving, and practice it at every opportunity.
Part of that process is the elimination of factors that do not contribute to the solution of a given problem or task. Anything which does not contribute to the solution of a problem is a waste of time and resources, both of which are finite. To spend time on a problem that is never solved is also a waste of time. Therefore, it is not logical to waste time and resources on thoughts or activities that do not contribute to the solution of a problem, as the very failure to solve the problem would become a waste of time and resources in and of itself.
In other words, take anything you desire, and consider the attainment of that thing a requirement. Consider the cost of solving the problem, and before you begin, make a determination whether you are able to solve it, and whether it is worth the time and resources necessary to solve it. If the answer is yes, commit yourself to the solution of the problem. Otherwise, you will be wasting time and resources that could well be spent on other requirements. Once that decision is reached, eliminate factors that impede the solution of the problem, and begin the task.
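The go/no-go analysis described above can be reduced to a sketch. The function and its inputs are my own invention for illustration; "value" and "cost" stand in for whatever measures you care to apply:

```python
# Illustrative sketch of the commit/don't-commit decision: determine whether
# the problem can be solved, and whether it is worth the cost, before starting.

def commit_to_problem(estimated_value, estimated_cost, solvable):
    """Decide whether to commit time and resources to solving a problem."""
    if not solvable:
        return False                         # unsolvable: any effort is waste
    return estimated_value > estimated_cost  # commit only if it is worth it

print(commit_to_problem(10, 3, True))   # True: solvable and worth the cost
print(commit_to_problem(10, 30, True))  # False: too costly
print(commit_to_problem(10, 3, False))  # False: cannot be solved
```

The order of the checks mirrors the text: solvability is evaluated first, because cost is irrelevant for a problem that cannot be solved at all.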
So, how does this all fit in with Personal Responsibility? Well, the process of analysis involves the factoring of resources and time with other "environmental factors." These are situations, circumstances, and events over which you have no control, and yet which affect the process. Notice that I mentioned this is part of the process of analysis, which is the preparation phase of any process, and not the execution phase. It is the phase in which all factors are taken into account, and a roadmap or plan is created for the execution of the tasks necessary to achieve the requirement(s).
Now, we usually think about the concept of Personal Responsibility as relating to ideas which have nothing to do with programming or problem-solving, such as the concepts of blame, fault, and success or failure. But that is simply not the case. In fact, the concepts of blame, fault, success, failure, and similar concepts are general enough to fit into the model of programming and the model of problem-solving (which are in fact the same model).
If we look at our daily struggle in life, our struggle to "succeed" in life, to overcome the various trials and tests we encounter, our ambition to succeed at whatever it is we want to do, as a series of problems to solve, we can apply problem-solving principles to these types of things with equally-useful results.
When we apply the process of analysis to our course of action in day-to-day life, certain common human traits and behaviors emerge as helpful or detrimental to that struggle. For example, we are all subject to the activity of looking backwards at our past life. In fact, this can be a useful aspect of analysis, in the same way that a military After-Action Review (AAR) is useful after a battle or other operation takes place. The purpose of this activity is to prepare for future similar activities by analyzing what went right, what went wrong, and why. It enables the individual or organization to review and/or modify plans.
It is important to note that an AAR is not a process of finger-pointing or blaming. It is purely analytical. The problem with finger-pointing and blaming is that they are not useful to the process of planning the next operation. In other words, finger-pointing, blaming, regret, and so on are emotional reactions to something perceived. If, for example, I were creating a game which involved sprites moving on a surface, and the surface was black, and I had created some sprites that were black, a test of the game would reveal that the black sprites were difficult to see against the black background. I could, on one hand, look at the sprites and make the observation "Those sprites are black. They are hard to see." If I were to stop there, I would have accomplished nothing. It would be more useful to observe that, because the sprites are hard to see, they should be changed to a different color that would contrast with the background. At that point, I have formulated a plan to correct the problem, and any further time spent thinking about how black the sprites currently are would be of no profit whatsoever.
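The sprite example can be expressed as a minimal sketch in Python. No real game framework is assumed here; colors are plain RGB tuples, and the contrast threshold is an arbitrary placeholder value:

```python
def luminance(rgb):
    # Relative luminance of an RGB color (0-255 channels), using the
    # standard perceptual weights for red, green, and blue.
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def hard_to_see(sprite_color, background_color, threshold=50):
    # A sprite is hard to see when its luminance is close to the
    # background's. The threshold of 50 is a placeholder, not a standard.
    return abs(luminance(sprite_color) - luminance(background_color)) < threshold

def fix_sprite(sprite_color, background_color):
    # The useful observation is not "the sprite is black" but "change it
    # to a contrasting color": white on dark backgrounds, black on light.
    if not hard_to_see(sprite_color, background_color):
        return sprite_color
    return (255, 255, 255) if luminance(background_color) < 128 else (0, 0, 0)
```

The observation ("hard to see") exists only to drive the plan (`fix_sprite`); nothing in the code dwells on how black the sprite currently is.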
In life, we often waste mental and emotional resources by obsessing over things that are past. We hold grudges, and have poor opinions of people who have caused us suffering. However, this is not useful in determining what course of action we must take in order to succeed. It is more useful to think of people who have caused us pain in terms of how they may fit in with or affect our plan to succeed. Revenge, for example, is a useless endeavor. It embroils one in a task or set of tasks that satisfy no useful requirements. It may satisfy some emotional desire, but the question is, does the satisfaction of an emotional desire bring me closer to my life's goals? And so, looking at others with some sort of qualitative evaluation, and dwelling on that evaluation, is pointless. It wastes resources that could be used to achieve personal goals or requirements.
In this sense, blame, regret, and similar backwards-looking activities constitute an attention to environmental conditions over which we have no control. It is not possible to change the past. It is not possible to force another person to change their behavior. It is only possible to make decisions about what we as individuals will do in the present and future. Recognizing that is a part of Personal Responsibility. I have no control over anything except the decisions that I make now and in the future.
I may or may not be able to achieve my goals, yet I only have control over my own decisions. Therefore, my primary focus should always be on the decisions that I make now, and what decisions I will make in the future. When I conduct a personal AAR, I should be concerned only with what decisions I should make and how I might want to change my plan of action, based upon a review of what has happened, how I behaved in the situation, how environmental factors affected the success or failure of that plan, and how I should modify the plan for the future accordingly.
Similarly, this concept applies to our dependence upon other people or groups of people for our personal welfare. Humanity as a whole is a society. We are a vast network of individual human beings who have a variety of unique combinations of characteristics, properties, and personalities. We interact with the Human Race by interacting with those in each of our personal "subnets" of friends, associates, and acquaintances. We exchange resources and support one another to one degree or another. And to a varying extent, each of us is dependent upon various individuals and groups for support in the achievement of our individual requirements. There are precious few isolated individuals in the world who are not at all dependent upon one or more other human beings or groups of human beings.
Some of us seem to be more dependent for certain requirements than others, and this may in fact be the case. This is why there are entities such as charitable organizations and governmental organizations that attempt to meet these special needs. However, it is a mistake to think that any of us is entirely independent.
On the other hand, it is a serious mistake for any of us, regardless of our condition, regardless of our dependence upon others, to leap to the assumption that we cannot achieve our goals/requirements without them. Note that I am not saying that we all can achieve our goals/requirements without the aid of others. I am saying that to assume we cannot is a mistake. In fact, a good plan of execution factors in contingencies: changes in the environmental conditions in which we exist, which may require a change in the decisions that we make.
We have seat belts and airbags in our cars. This is not because we expect at some point to be involved in an accident. These things are built into cars because of the possibility that an accident may occur. When and if an accident occurs, we have a recourse, which enables us to avoid serious injury.
Similarly, in developing plans for life, it is useful to plan for contingencies: situations in which certain resources upon which we seem to depend may change or cease to exist. It is wise to plan for the eventuality that such things will happen, because they often do, to each of us, in different ways at different times.
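In code, this kind of contingency planning often looks like a fallback chain: try the primary resource first, and move on to alternatives when it fails. A minimal sketch, with placeholder provider functions standing in for real resources:

```python
def first_available(providers):
    # Try each resource provider in order. The plan does not assume
    # that any single resource will always be there.
    for provider in providers:
        try:
            return provider()
        except Exception:
            continue  # this resource ceased to exist; try the next
    raise RuntimeError("no contingency remained")

# Placeholder providers for illustration only:
def primary():
    raise IOError("primary resource unavailable")

def backup():
    return "backup resource"
```

Like the seat belt, the fallback is built in before the failure occurs; when the primary resource disappears, the plan continues instead of collapsing.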
When I was in my late 30s, I was very poor. At one time I had no job, no car, no money, and no place to live. But for a number of years I had been working on a plan to change my circumstances. I had discovered that I had an above-average ability to solve problems and a proclivity for analysis, and realized that I was very good with logic and enjoyed solving puzzles of various kinds. I also realized that I was both fascinated with computers and good at figuring out how to use them. So, in my spare time, after work in the evenings, and on weekends, I had been teaching myself programming.
By the time I had reached the bottom-most point of my neediness, I had also acquired a skill. I had an opportunity to pursue this new line of work, and took the opportunity. Before a year had gone by, I had started my own consulting business. A dozen years later, I lack for nothing.
In other words, I took Personal Responsibility for my situation. It was not a matter of blame, of making critical observations about myself, the government, my friends, or anyone at all. It turned out to be a matter of analyzing my own strengths and weaknesses, the environmental conditions of my life, and formulating a plan to achieve something better. After that, it was a simple matter of doing those things that I could do under the circumstances.
Note that I am not taking credit for this achievement. I am simply pointing out that I made the decisions that I was able to make, took the actions that I was able to take, and took responsibility for those decisions and actions. Logically, that was the only choice I could make which would have any effect on the outcome. In other words, to have wasted my time and resources considering anything else would have been counter-productive, and diminished the probability of the desired outcome.
To take Personal Responsibility is not a guarantee of success. Nothing in life is guaranteed. However, in the spectrum of probability, the best course of action to take is one which increases the probability of success. And because the only things that each of us has any control over are the decisions we make, and the actions we take, Personal Responsibility is logically useful. To dwell on anything over which we have no control is logically useless.