The Subtle Art of Not Giving a Fuck

Key ideas:

Values

You do stuff for a reason. The values one chooses to pursue should not be external and uncontrollable. As a rule of thumb, if one can't control a value (external acceptance or appreciation, a change in someone else's behavior), that value is not worth pursuing. The primary reason is that feedback on your progress or success is at best highly subjective and at worst objectively unknowable in principle. Values one can control directly are good. For example, "I'll learn to draw because I want to express certain visual ideas" is a good value, while "I'll learn to draw to become a popular artist" is a bad one. The former is controllable and can be judged objectively: I have now gained the ability to express my ideas in a visually coherent way. The latter is highly subjective: it is impossible to control how others value your work, or to control popularity. Values that lack direct control cause confusion and frustration, and eventually kill motivation. By choosing a controllable value, other, less controllable and external values come as a bonus, for free: I've learned to express my ideas in a visually coherent and understandable way, and others may appreciate it and express their affection.

Accepting Responsibility

When a situation is caused or provoked by one's own actions, the responsibility of dealing with the consequences is a given. On the opposite side of the spectrum are situations that happen without our intention. No matter what happened, no matter whether one was in control of the situation, one must accept responsibility for dealing with the consequences. There is no way around it. Even if one chooses to do nothing, one accepts responsibility for the consequences of doing nothing. Say someone leaves a child on one's doorstep. One can take the child home, take it to an orphanage, call the police, or do nothing. Even the "do nothing" choice has consequences that one will have to face, though the child on the doorstep was not the result of one's own actions. Just deal with it!

Just Do Stuff

There is a loop: inspiration -> motivation -> process. The order is not fixed. You can start with the process, and the results will bring inspiration, which will bring motivation. Think of this triad as a loop in which one can choose an arbitrary starting point. While inspiration and motivation are subjectively harder to control, the process can be started at any moment.

Chinese Room

Over years of reading literature on topics ranging from the philosophy of mind to artificial intelligence, I have encountered the Chinese room example in most of it. It provides very good intuition on the problem of thinking about thinking. My latest encounter with the Chinese room problem was in Steven Pinker's "How the Mind Works", the book I'm reading at the moment.

The Chinese room is a thought experiment presented by the philosopher John Searle to challenge the claim that it is possible for a computer running a program to have a "mind" and "consciousness" in the same sense that people do, simply by virtue of running the right program. The experiment is intended to help refute a philosophical position that Searle named "strong AI":

The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.

To contest this view, Searle writes in his first description of the argument: "Suppose that I'm locked in a room and that I know no Chinese, either written or spoken." He further supposes that he has a set of rules in English that "enable me to correlate one set of formal symbols with another set of formal symbols", that is, the Chinese characters. These rules allow him to respond, in written Chinese, to questions, also written in Chinese, in such a way that the posers of the questions – who do understand Chinese – are convinced that Searle can actually understand the Chinese conversation too, even though he cannot. Similarly, he argues that if there is a computer program that allows a computer to carry on an intelligent conversation in a written language, the computer executing the program would not understand the conversation either.
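The rule-following Searle describes can be caricatured as pure table lookup. Here is a minimal sketch in Python; the rule table and phrases are invented for illustration and are not part of Searle's original argument:

```python
# A hypothetical "rule book": incoming Chinese symbols mapped to
# prescribed outgoing symbols. The entries are made up for this sketch.
RULES = {
    "你好吗?": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "会的。",        # "Do you speak Chinese?" -> "Yes."
}

def room_operator(question: str) -> str:
    """Match the incoming symbols against the rule book and return the
    prescribed symbols. No step here requires understanding Chinese;
    the operator only compares and copies shapes."""
    return RULES.get(question, "对不起，我不明白。")  # fallback reply

print(room_operator("你好吗?"))  # the room "answers" without understanding
```

The point of the sketch is that every operation is purely formal: string matching and copying. Whatever understanding the replies seem to display was put into the table by whoever wrote it, which is the thread the discussion below picks up.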

The experiment is the centerpiece of Searle's Chinese room argument which holds that a program cannot give a computer a "mind", "understanding" or "consciousness", regardless of how intelligently it may make it behave.

In Pinker's book, as in many others I've read, the Chinese room problem hangs in the air, dividing people into believers and non-believers: some say that the room is thinking and is conscious, while others completely reject that conclusion. The ultimate question has ties to "the little man in our head", the so-called homunculus, and asks: "who does the thinking?"

Somehow, I always thought that the answer was apparent, though I never noticed that it had gone unspoken until just recently. While the thinking process is reduced to the man reading rules in English, which by no means accounts for complex thinking but rather represents simple memory recall at best, the right question should have been: "Who wrote those rules?" Since we're interested in the answer to the thinking question, we should examine all of the processes, even those that are not presented in the description but are assumed by definition.

The Chinese room has the following parts that interact with each other: the room, the man, the rules, and the window (or slot) through which the man communicates with the outside world.

Clearly the room that holds the rules and the man does no thinking at all. It may provide a convenient arrangement and storage of the rules. It may speed up the lookup and may even enable the thinking, but nothing more. To the external observer the room does appear to think and even to possess the source of intelligence, which we'll discuss at the end.

The man, though the most intelligent agent in the whole story, does no thinking either. He may do thinking of his own, after all, he is a man, but not in relation to the Chinese room problem. As I mentioned above, he enables rule execution, or memory recall if he has memorized the rules, but nothing else.

The rules don't do much thinking either, as they are fairly static objects, or memories in the man's head. The man has no way of interacting with the rules in terms of appreciation or understanding, because they are expressed symbolically in Chinese, a language unfamiliar to him. He can only execute the rules without understanding why. Hence, the rules don't do the thinking either.

At this point the problem usually stalls, ignoring one last part of the Chinese room: someone who speaks Chinese and spent many years writing the rules. I won't go into the argument of whether or not it is possible to write those rules, given the exponential complexity of languages, but someone clearly did the thinking and wrote the rules in this hypothetical situation. Playing out every possible input scenario, someone had to think it through, consider many different contexts, and finally come up with a finite, albeit very large, set of rules. That hypothetical person who wrote the rules is the one who did all the thinking in advance, and is thus the answer to the question: "who is thinking?"

It does, of course, create more questions than it provides answers. Does it mean that the thinking machines we'll create will have no thinking powers of their own, since we did all the thinking for them? Does it mean that there must be a creator for every thinker? And so on. I could probably come up with a ton of other nonsensical questions to exploit my conclusion; however, it would do no good. One thing we must remember is that this answer is the solution to this particular and purely hypothetical problem, which describes a nonsensical apparatus that has no real purpose or utility and is static by definition. This excludes any possibility of learning and experience, and it is nothing else but a good brain teaser.