In this post I want to develop the concept of what it is to reason. I think that to do this we must examine some paradigm examples of the mind using reason, and distinguish the act of reason from other acts.
I think we can first recognize that in going about this question we are reasoning. Reason thus describes not only a noun, as in “There is a reason,” meant to describe an explanatory cause for some condition or state, but also an activity. Reasoning about reasoning is “meta-reasoning,” and all along in this activity we are assuming the validity and meaningfulness of our activity (for otherwise it would be invalid and meaningless). No argument can be presented against the fact that I am reasoning without using reason; ergo, by reductio ad absurdum, reason exists and is real.
So far, we know at least these things:
1) Reason can be a concept and an activity
2) Reason facilitates through meaning
3) Reason is valid
These we know basically, because in describing what I am doing, my description can only be refuted by using exactly that which is being described. Reason proves reason, essentially. On a relevant rabbit trail, consider this argument:
1) Circular reasoning is fallacious
2) Only reason could prove reason
3) Ergo, reason is fallacious
Now of course, reason is being invoked in this argument, so it can be refuted through reductio ad absurdum. In the case of reason, the negation requires its affirmation. But the argument is valid, so which premise is unsound?
We couldn’t disagree with the first, so it must be the second. What, then, proves reason? Because its negation requires its affirmation, reason is proved through unreason: any attempt at unreason must ground itself in reason, and this gives us proof of the basicity of reason.
Now what does it mean that reason facilitates through meaning? This means that reason is a calculus that deals with things that have meaning; no meaning, no reason is possible (hence we cannot reason about the essence of a square circle, because it has no meaning, i.e. logical referent).
But what has meaning? That’s the tricky part. Random composites of matter do not form meaning; the pixels that form the words I am typing have no more intrinsic value than does the peculiarly shaped object of springy metal laying beside my computer. If I, or any mind, were not present, then while the pixels might exist, they would have no meaning, since there would be nothing they communicate to anyone.
So the meaning that is required for reason to operate is dependent upon a mind.
Now the operation of a mind is especially in its intent. We do not attribute a mind to the earth, because there is no activity or event that occurs in or on it that is due to some end the world intends. We do, however, attribute a mind to man, because he intends things, i.e. to write an argument that will persuade. So where there is a mind, there is intentionality, because, more or less, to have a mind is to possess the ability to intend.
The pixels on my screen have meaning because they were intended to do so. It isn’t a matter of probability that makes things meaningful, because while monkeys on a typewriter will eventually write a Shakespearean play, it would only be a play if there were a mind who could recognize it as such. Even the idea of monkeys on typewriters only exists because minds exist; no minds, no ideas. There might be monkeys on typewriters, but without minds to give meaning to words, no combination of words will ever suddenly gain meaning. Therefore, symbols (i.e. letters) have a derived meaning. They do not possess the meaning in and of themselves, but contingently to the minds that apprehend them.
To understand how a symbol is intrinsically meaningless, consider learning a new language. A new language is, in comparison to a language you already know, essentially a combination of guttural sounds and rules that define how those sounds together make a certain meaning. But what gives meaning to the sounds? The mind that apprehends them. The sound existing somewhere in our dimensional space, if there is no utterer and no hearer addressed, is just a sound, as devoid of meaning as the plip! of a water droplet or the krak! of falling rocks. It is these sounds between minds that have meaning; you take away the minds, you take away the meaning.
This should demonstrate that, because reason facilitates through meaning, there is no reason without a mind, and there is no reason without the intentionality that the mind alone produces. But if intentionality is required for there to be meaning, then reason occurs by the act of minds, because the mind intends to reflect on a phenomenon or idea in order to give it meaning to the mind, to deconstruct it in a meaningful way so as to determine its order and essence; there is, then, in every act of reason the intent to reason. I call this reason-for-reason’s-sake; to reason is to intend to reason. For short, rather than type out “reason-for-reason’s-sake,” I will refer to it as “reason-proper,” as opposed to events which might be mistaken for reason but are really mechanically determined acts, which have their meaning via their author (i.e. a mechanic or computer programmer), not of themselves.
With this development, I wish to answer the argument of a commenter known as “/facepalm” (just full of goodwill) on my positive argument from reason post. He said:
A square-circle is just nonsense, because the definition of the square entails that it cannot be a circle. There is nothing contradictory in the concept of vision-proper. Vision-proper, something that would result had evolution given us vision-for-vision’s sake, would bestow upon the viewer the ability to see whatever it is that is in front of the viewer, as long as there is nothing obstructing the line-of-sight, in the highest resolution possible. Distance of object from viewer should not be an issue.
The notion of “vision-proper” only seems analogous without considering the development given to the idea of reason-proper. As I developed, reason is intrinsically an act of intention, and it is intention that gives meaning to the act of reason. Vision, or sight, on the other hand, requires no intention, because there is no essence of sight that involves its being intended to occur for it to occur. Sight, or any other material sense, can be merely the interaction of matter with other matter. (I say “can be” because, in humans, by our possession of a mind, sight is given meaning; sight is still sight, however, without any meaning given it by a mind.) Since no meaning is required for the senses to be senses, the senses do not hold any such analogy to reason. Thus considerations of “vision-proper” are devoid of meaning; that is, they are logically incoherent, like a square circle. Anyone who can find meaning in it isn’t understanding the attention given to meaning, minds, and reason as discussed here.
Computers currently cannot reason as well as humans do. That’s because most computers today are mere calculators.
Here is demonstrated my commenter’s ignorance of the notion of meaning and intentionality. Since computers do not possess a mind, they cannot reason because they cannot attribute meaning to anything. Any output they give that has meaning to us is because such output was designed by a mind to give such an output to certain input.
While they are god-like in things like performing arithmetic calculations or playing chess, they suck at many things a typical human can do.
More obvious ignorance. Computers are excellent at performing these things because they have been excellently programmed to do so. A boat motor is excellent at giving locomotion to a boat, but not at giving locomotion to a car; this is because of the design given the artifact by a mind. It is also worth considering that there are better and worse boat motors; but what gives it the meaning of better and worse is the mind that evaluates their ability to most proficiently fulfill some criterion that the mind has determined the motor’s purpose to be. A spoon functions better than a fork at ladling soup up to my mouth, but this ability it has is contingent on the mind that evaluates its ability to perform such a function.
So, while indeed now humans are better than computers at some things, and computers are better at some things than humans, this is for several reasons. First, we must remember that it is the human mind which is judging the utility of a computer in comparison to a human performing the same activity. If a computer were to be making such judgments, it would be because it was designed by a human to make such judgments, who would also be determining which criterion the computer evaluates and measures as input to give its output. Second, there is the ontology of the computer itself; it is a composition of wires, electrical impulses, and symbols. And who gives the symbols meaning? Minds.
To have a machine that’s going to be capable of reasoning, we need a device that is designed like the neural network of a human brain: neural network-based computing.
This demonstrates my commenter’s commitment to materialism. To the materialist, the mind is no more than the brain, because the mind must be material because there is nothing immaterial. Ergo, it is reduced to the brain; but the brain is wholly material, and so it functions by the same laws as any other machine, albeit in a complex way.
But here’s the problem; if our mind is just the brain, then whence meaning? All throughout it has been discussed why meaning must originate in minds, so let us consider the possible materiality of intention.
Here are some material events: one atom bumps into another; a series of electrical impulses goes from my brain to my finger, which causes the muscles to contract to pull the trigger, which causes the hammer to fall, which causes the bullet to fire; water molecules heat up and evaporate. None of these events, considered in their purely material components, has meaning or intentionality. One might intuit meaning into my presumed firing of a gun, but this is a component not present in the material act.
All that happened, materially, is an electrical impulse, the contraction of muscles, the pulling of a trigger, the dropping of a hammer, the explosion of gunpowder, the motion of a bullet. This is the full material description of the event; questions like why?, asked to determine my intent, cannot be answered by what the material event is. So if we are to answer such a question, we must refer to something that a material description cannot describe: intentionality; namely, in this case, what my intent was in firing the gun.
So, is a composition of matter and electrical impulses what intentionality is? No. It is just an assemblage of matter and energy; and if it follows the same laws as any other composition of matter and energy, then like all other compositions, the brain is nothing more than a complex network of matter and energy that doesn’t reason. But we do reason. Ergo there is more to the mind than the brain.
Therefore, unless we discover some way to import connections of immaterial objects to material objects, then computers, because they are merely material, cannot, and will not ever reason. No matter how impressive they might be at performing calculations, the impression is only because there is a mind to give any meaning to the operation of the machine, i.e. to evaluate its speed and ability to handle complex equations/data.
What’s interesting, though, is these devices are very similar to the human brain in that they are good at what people are typically good at (facial recognition and other pattern-mapping tasks)
This last quote is not for refutation, but just to analyze the statement based on what was just said. As we remember, it is minds that know meaning, and so can analyze events based on their ability to fulfill some criterion (i.e. what is the meaning of some event in contingency to myself?). Computers have the ability to “recognize” faces because they have been designed to do so, and we evaluate their proficiency at doing so in comparison to what we know to be the act of “recognizing faces,” and the same sort of reasoning holds for computer programs that apply an equation designed by a mind to a set of data.
This is a demonstration that reason-proper is an act only of (immaterial) minds, and cannot be replicated in any composition of matter and energy (aka a machine).
Right after your post, in the “possible related posts” section, it reads, “wtf dude”. According to WordPress, these related posts have been “automatically generated”. But is it really “automatically generated”? Clicking on that post, I see no relation between your post and that post. I can, therefore, think of no other reason why the WordPress post-matching process worked the way it did other than to taunt me into changing my username to /wtfdude.
“But here’s the problem; if our mind is just the brain, then whence meaning?”
Stare at the next quoted sentence. Stare at it for one good decade or two, if you have to. Because, believe me, the answer is staring right back at you.
“The pixels on my screen have meaning because they were INTENDED to do so.”
After further thought, I realized that I couldn’t trust you to see the answer for yourself. So, I will have to lead the horse to the water and ladle water into a spoon, pry open the horse’s mouth and tilt the spoon such that water comes trickling down its throat.
“A spoon functions better than a fork at ladling soup up to my mouth, but this ability it has is contingent on the mind that evaluates its ability to perform such a function.”
But what is the mind, really? The mind is only a set of beliefs. An over-simplification, no doubt. But I think this over-simplified definition of the mind is sufficient for this case. So, the mind is a set of beliefs; and what are beliefs in this case? A belief, in this case, is a goal. A goal is a task that has to be accomplished. In conclusion, the mind is a desire to achieve goals. (This is what we mean when we say the mind has intentions.)
Your goal/intention, in this case, is to eat soup, and this is going to be accomplished by putting soup into your mouth. You can experiment with various ways to do this, but you ultimately find that the best way to achieve your goal is to use a ladling device, e.g. a spoon, to put soup into your mouth. Now, a robot that has been programmed to eat soup will ultimately arrive at the same conclusion as you, i.e. a spoon is better than a fork when it comes to ladling up liquid.
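The soup-robot in this paragraph can be sketched as a tiny goal-driven selection procedure. This is a minimal sketch, not anyone's actual program; the utensil names, the liquid-retention scores, and the function name are all hypothetical values chosen purely for illustration:

```python
# Sketch of goal-directed selection per the soup-robot example.
# The scores below (fraction of liquid each utensil retains per scoop)
# are assumed values, invented for illustration, not measurements.
UTENSILS = {
    "spoon": 0.95,  # concave bowl holds liquid well
    "fork": 0.05,   # tines let liquid drain away
    "knife": 0.01,  # flat blade holds almost nothing
}

def best_utensil_for_liquid(utensils):
    """Return the utensil that best fulfills the goal 'move liquid to mouth'."""
    return max(utensils, key=utensils.get)

print(best_utensil_for_liquid(UTENSILS))  # -> spoon
```

The point being illustrated is that, given an encoded goal and a scoring of options against it, the selection itself is a mechanical maximization; whether that mechanical step amounts to intention is exactly what the two sides here dispute.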
I realized that it might be hard to make you understand whatever I have typed, so I am going to make it as simple as possible.
“Here are some material events: one atom bumps into another; a series of electrical impulses goes from my brain to my finger, which causes the muscles to contract to pull the trigger, which causes the hammer to fall, which causes the bullet to fire; water molecules heat up and evaporate. None of these events, considered in their purely material components, has meaning or intentionality. One might intuit meaning into my presumed firing of a gun, but this is a component not present in the material act.”
Firing a gun can only have meaning if it somehow fulfills your goal, e.g. to kill someone, or perhaps you enjoy firing guns and want to feel the thrill of firing one. What I am saying is, things can only have meaning if they fulfill your goal.
ADDENDUM:
A gun is meaningful to an assassin who wants to kill someone, just as it is as meaningful to T-1000 who has been programmed to kill Sarah Connor.
But you missed the point: if you consider only the material phenomena, then you cannot have an answer to the question of intentionality. If you analyze only the material composites that formed the action of my firing the gun (an electrical impulse from my brain […] to the motion of the bullet), you do not have enough information to answer “Why did I fire the gun?” Ergo, if there is any reason for my firing the gun that is separate from the material causal series of events, there are immaterial components that form explanations, which give us intentionality, meaning, reason, etc.
Intentionality is just following a directive. What is so immaterial about a robot trying to do what it has been programmed to do? If I want to know why a robot does what it does, I could access its “brain” (processor) and inspect all the programming lines that are running through it. Your thoughts are no different, as every single thought is “encoded” in the electrical impulses firing around inside your head. The only reason I can’t access your brain and understand what’s going on is that I don’t understand how to interpret the data. It’s like trying to read programming lines written in an unfamiliar programming language.
To say that thoughts are mere “electrical impulses” is like saying that programming lines are just a string of 0s and 1s.
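The analogy both sides are trading on here — that a string of 0s and 1s underdetermines its content — can be made concrete. In the sketch below (purely illustrative; the two bytes are arbitrarily chosen), the very same raw bytes yield three different “readings” depending on the interpretive scheme applied to them:

```python
# The same raw bytes, under three different interpretations.
# The bytes carry no reading by themselves; each one below is
# imposed by the scheme the interpreter chooses to apply.
raw = bytes([0x48, 0x69])

as_text = raw.decode("ascii")                  # read as ASCII characters
as_int = int.from_bytes(raw, byteorder="big")  # read as an unsigned integer
as_bits = format(as_int, "016b")               # read as a bare bit pattern

print(as_text)  # -> Hi
print(as_int)   # -> 18537
print(as_bits)  # -> 0100100001101001
```

Whether the interpretive scheme itself must come from a mind (the author's claim) or can be just another layer of mechanism (the commenter's claim) is precisely what the rest of the exchange argues over.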
Intentionality is the mind’s being focused on, or about, some external thing, especially so as to bring about some end. It is not merely following a directive, but actually evaluating, determining, and giving directives.
So while you might be able to analyze the program of a computer and see that it performs complex equations, you would have to understand the end those equations were intended, by a mind, to serve in order to determine their meaning, i.e. whether the program is a flight simulator, a video game, or a text editor.
Analogously, this is why you couldn’t determine the content of my thoughts merely by analyzing the physical content of my successive brain states: an intent is not a “such-and-such composition of matter-energy” but a meaning. It isn’t a matter of being able to interpret data, but a question of “Whence meaning?” Just as any material event outside the brain possesses no meaning, the same follows for material events within the brain, because it is the mind which gives meaning to events.
“Intentionality is the mind’s being focused on, or about, some external thing, especially so as to bring about some end. It is not merely following a directive, but actually evaluating, determining, and giving directives.”
Why couldn’t this occur in a robot’s mind? A robot that is too rigidly structured to follow orders would be terrible at problem-solving. Problem-solving robots would be able to evaluate their directives and set new directives as long as the job gets done, i.e. the main directive is fulfilled. In Terminator 1, Arnie has been programmed to kill Sarah Connor, but Sarah Connor is protected by another terminator. In order to accomplish his main directive (killing Sarah), he sets himself a new goal/directive: destroying the robot that is protecting Sarah. In Terminator 2, Arnie is ordered by the young John Connor not to kill innocent people, and he obeys, realizing that lowering his body count of innocents would not jeopardize his mission (main directive) in any way. Later on, John Connor orders him not to destroy himself; Terminator Arnie evaluates this directive and realizes that he has to ignore it, because the success of his mission (helping humans against Skynet) requires his complete and utter destruction.
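The directive-evaluating robot described here can be sketched as a simple compatibility check against a main directive. Everything in this sketch — the directive strings, the compatibility table, and the function name — is hypothetical, made up to illustrate the commenter's Terminator 2 example:

```python
# Sketch of a "problem-solving robot": incoming orders are accepted
# only if they do not jeopardize the main directive. The directives
# and compatibility rulings are invented for illustration.
MAIN_DIRECTIVE = "protect John Connor"

# Assumed rulings on whether each order is compatible with the mission.
COMPATIBLE_WITH_MISSION = {
    "do not kill innocents": True,    # lowering the body count doesn't hurt it
    "do not destroy yourself": False, # the mission's success requires it
}

def evaluate_order(order):
    """Accept an order only if it does not jeopardize the main directive."""
    if COMPATIBLE_WITH_MISSION.get(order, False):
        return f"accepted: {order}"
    return f"rejected: {order} (jeopardizes '{MAIN_DIRECTIVE}')"

print(evaluate_order("do not kill innocents"))
print(evaluate_order("do not destroy yourself"))
```

Note that the "evaluation" here reduces to a table lookup fixed in advance; whether a richer planner would ever be more than that is the point the author disputes in the next reply.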
“So while you might be able to analyze the program of a computer and see that it performs complex equations, you would have to understand the end those equations were intended, by a mind, to serve in order to determine their meaning, i.e. whether the program is a flight simulator, a video game, or a text editor.”
And this understanding of the intentions of the computer programmer who programmed the computer would come to me automatically. As soon as I read all the programming lines, I would instantaneously understand what the programmer had intended his software to do.
“Analogously, this is why you couldn’t determine the content of my thoughts merely by analyzing the physical content of my successive brain states: an intent is not a “such-and-such composition of matter-energy” but a meaning.”
If I can gain access to every mental state you have and understand them, why would I be unable to automatically understand your intentions?
If a computer program sets itself goals, it does so based on the program itself; so this still isn’t different from any other mechanical operation, and it is not an example of thought.
In going beyond the mere physical data to the assumption that there is a mind who programmed these lines of code in order to complete some meaningful task, you are analyzing more than the material data. The same goes for analyzing my brain states: if you postulated nothing more than my brain to try to explain my intention, you wouldn’t understand it; you would only understand the meaning of my brain states by postulating more than the brain, namely my mind, which has intention. Your linking of physical data to immaterial meaning is so ingrained that you aren’t seeing the vast difference between the physical thing to which meaning has been imputed by a mind, and the meaning itself given by the mind.
So I ask this; where is the meaning of a word?
“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean—neither more nor less.”
The meaning of a word is in a mind. So? It just means that words (and thereby the meanings imbued on them) are just devices people use to communicate with one another. Again, I ask you, what is so immaterial in all of this? After all, a computer system can use “language” to communicate with another computer system. I don’t deny the existence of the mind; I just don’t think that there is anything remotely immaterial about it. That is, if you are using “immaterial” to mean a soul or some other non-physical object, as opposed to some abstract concept (concept = non-object) without any tangible existence. If you are using “immaterial” to refer to a concept, then it’s a whole different argument altogether. Anyway, back to the main argument: vision is different from reason in the sense that one is passive (merely receiving sensory input), while reasoning requires an active thought process. But both vision-proper (the ability to see properly) and reason-proper (the ability to reason properly) have never been bestowed upon us by evolution.
Can’t win this argument, so you shift to another one, eh? I would agree, I suppose, that evolution hasn’t given us vision-proper (because there is no such thing) or reason-proper (because to have reason requires intent, of which evolution has none).
I didn’t shift. You never explained what’s so immaterial about intention. I am left guessing what you meant by the term “immaterial”: is it an object (albeit a non-physical one) or a concept? If it’s a concept, then there’s nothing much to argue about. Also, there is such a thing as vision-proper, but evolution never bothered to give it to us, just as it didn’t bother to give us humans an eagle’s vision. In the same vein, I don’t suppose evolution would bother to give us reason-proper (the ability to reason perfectly, just as God, the most perfect being ever, would reason).
You cannot determine the content of what I intend by analyzing mere matter; this is the immaterial content of intention. That’s your explanation for how intention is immaterial.
Immaterial things are just that: non-matter. Abstract things are immaterial, and there are some objects which are immaterial. Not all immaterial things are immaterial in the same way; to call them immaterial is only to say they aren’t made of matter.
“The notion of “vision-proper” only seems analogous without considering the development given to the idea of reason-proper. As I developed, reason is intrinsically an act of intention, and it is intention that gives meaning to the act of reason. Vision, or sight, on the other hand, requires no intention, because there is no essence of sight that involves its being intended to occur for it to occur. Sight, or any other material sense, can be merely the interaction of matter with other matter. (I say “can be” because, in humans, by our possession of a mind, sight is given meaning; sight is still sight, however, without any meaning given it by a mind.) Since no meaning is required for the senses to be senses, the senses do not hold any such analogy to reason. Thus considerations of “vision-proper” are devoid of meaning; that is, they are logically incoherent, like a square circle. Anyone who can find meaning in it isn’t understanding the attention given to meaning, minds, and reason as discussed here.”
“Reason-proper” does not mean “to reason infallibly,” but only “the ability to reason.”
You have no beef with materialism, because materialism is opposed to immaterialism only insofar as the immaterial is an object, while your usage of “immaterial” describes a concept. The difference between object and concept is fairly simple: the former has a tangible existence, while the latter is more arbitrary. Think of it like money. The physical parts of money (metal coins and paper notes) have a tangible existence regardless of the economic system that recognizes the value of the money, while the value of money varies according to supply and demand and other economic variables.
I do have a beef with materialism, because I do believe there are immaterial objects, i.e. mind. I was just simply pointing out that not all immaterial things are immaterial in the same way.
I think your definition of the difference between concept and object isn’t a very good one, however. An object is not necessarily “tangible,” though we could say it has intelligible properties (i.e. being immaterial); and a concept is not arbitrary (perhaps you meant “abstract”?), but this also isn’t a good description. There is a difference between concrete conceptions (the thing-in-itself) and abstract conceptions (the property of a thing that, apart from it, has no existence in-itself). Roughly, a concept is the propositional content related to some thing or part of it.
Anyway, this isn’t the main discussion, so I’ll let that be.
There is a reason why I use “object” to denote what you would call “concrete conceptions” and reserve the use of “concept” to what you’d call “abstract conceptions”. The word “concept” has connotation with the mind, after all, it is usually defined as “an idea”.
Conception, be it concrete or abstract, is still not the same as an object, which has an in-itself, while conceptions are our ideas of them, which isn’t identical to their in-itself.
Yes, yes, that’s what I said.
I subscribe to a deterministic viewpoint. Given that, intent is completely material and your argument for why computers cannot reason is invalid. Is that true, or did I miss something important? I’m not trained in philosophy, so it’s very probable. I just reason that the concept you describe as intent can only exist in an indeterministic framework.