There is another problem with
Friedman’s defense, which is that even experts are unable to optimize when the
problems are difficult. To illustrate, let’s return to the game of chess. Since
chess has no stochastic (relating to a randomly determined process, https://en.wikipedia.org/wiki/Stochastic_process) elements, it has long been
known that if both players optimize, then one of the players (either the one who
goes first or the one who goes second) must have a winning strategy, or neither of them does and
the game will lead to a draw (if two teams or opponents draw, they both have
the same score so neither wins). However, unlike checkers, which has been “solved”
(if both players optimize the game is a draw), chess does not yield (produce) predictable outcomes even in matches between
grandmasters. Sometimes white (first player) wins, less often black wins, and
there are many draws. This proves that even the best chess players in the world
do not maximize. Of course one can argue that chess is a hard game, which is
true. But many economic decisions are difficult as well. A second line of
defense is to concede (to admit that something is true) that we
don’t all do everything like experts but argue that, if our errors are randomly
distributed with mean zero (the mean is the average of a set of numbers, calculated by
dividing the sum of the values by the number of values; errors with mean zero average out to zero),
then they will wash out in the aggregate, leaving the predictions of the model unbiased (free of systematic error, with no tendency to be too high or too low;
https://www.youtube.com/watch?v=b21FtgqQMj8
https://www.youtube.com/watch?v=gIsMiV_ow-U) on average. This was often the
reaction to Simon’s (1955) suggestion that people “satisfice” (meaning they grope, that is,
search uncertainly, for a satisfactory solution rather than solve for an optimal one). If the choices of
a satisficer are not systematically different from an optimizer, then the
models lead to identical average predictions (though satisficers will have more
noise). This line of argument was refuted (proved to be wrong) by the seminal
(highly original and influential on the work that comes after it) work of Daniel Kahneman and
Amos Tversky in the 1970s. In a brilliant series of experiments on what
psychologists refer to as “judgment” and what economists might call
“expectations” or “beliefs,” Tversky and Kahneman (1974) showed that humans
make judgments that are systematically biased. Furthermore, these errors were
predictable based on a theory of human cognition
(in psychology, the process by which you recognize and understand things).
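To see why this matters for the “wash out in the aggregate” defense described above, the short Python sketch below (an illustration added to this post, not part of the article; the true value of 100, the noise spread of 20, and the bias of 30 are made-up numbers) contrasts mean-zero errors, which do average out, with systematic errors, which do not:

import random

random.seed(0)
true_value = 100.0   # the quantity people are trying to judge (hypothetical)
n = 10_000           # number of individual judgments

# Mean-zero errors: each judgment is noisy, but the average is unbiased.
unbiased = [true_value + random.gauss(0, 20) for _ in range(n)]

# Systematic errors: every judgment is pushed in the same direction,
# so averaging over many people does not remove the bias.
biased = [true_value + 30 + random.gauss(0, 20) for _ in range(n)]

print(sum(unbiased) / n)  # close to 100: the noise washes out
print(sum(biased) / n)    # close to 130: the bias does not wash out
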
Kahneman and Tversky’s hypothesis was that people often make judgments using
some kind of rule of thumb (a broadly accurate guide or principle, based on practice rather than theory; https://www.youtube.com/watch?v=tfN2ChZLIrw)
or heuristic (here, a mental shortcut or simple rule used to form a judgment). An example is the “availability heuristic” in which
people estimate the frequency of some event by the ease with which they can
recall instances (examples of something happening)
of that event. Using this heuristic is perfectly sensible since frequency and
ease of recall are generally positively correlated. However, use of the
heuristic will lead to predictable errors in those situations where frequency
and ease of recall diverge. For example, when asked to estimate the ratio of
gun deaths by homicide to gun deaths by suicide in the United States, most
people think homicide gun deaths are more common, whereas there are in fact
nearly twice as many gun-inflicted suicides as homicides. These are
expectations that are not close to being “as if” rational—they are predictably biased. Kahneman and
Tversky’s second influential line of research was on decision making. In
particular, in 1979 they published their paper on prospect theory, which was
proposed as a “descriptive” (or what Milton Friedman would have called
“positive”) model of decision making under uncertainty. Prospect theory was
intended to be a descriptive alternative to von Neumann and Morgenstern’s
(1947) expected utility (utility is a measure of the satisfaction a consumer obtains from a good, such as how much one enjoys a movie or a favorite food; expected utility weights the utility of each possible outcome by its probability; https://study.com/academy/lesson/utility-theory-definition-examples-economics.html) theory, which is rightly considered
by most economists to characterize how a rational agent should make risky
choices. Kahneman and Tversky’s research documented numerous choices that
violate any sensible definition of rational. This pair of problems posed to
different groups of subjects offers a good illustration.
Problem 1.—Imagine that you face the following pair of concurrent (happening or done at the same time) decisions. First examine both decisions, and then indicate the options you prefer.

Decision (i). Choose between:
A. A sure gain of $240 [84%]
B. 25% chance to gain $1,000 and 75% chance to gain or lose nothing [16%]

Decision (ii). Choose between:
C. A sure loss of $750 [13%]
D. A 75% chance to lose $1,000 and a 25% chance to lose nothing [87%]

The numbers in brackets indicate the percentage of subjects that chose that option.
We observe a pattern that was frequently displayed: subjects were risk averse (opposed to taking risks, or only willing to take small risks) in the domain (area of activity) of gains but risk seeking in the domain of losses. This is the crux (the most important aspect) of prospect theory: its S-shaped value curve predicts that the domain, gains or losses, affects risk propensity, a person's willingness to engage in behaviors that carry some potential for danger or harm but also provide an opportunity for some benefit.
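One way to see this pattern concretely is to compute the expected value of each option in Problem 1. The short Python sketch below is an illustration added to this post, not part of the article; it uses only the payoffs and probabilities stated above.

# Expected values of the four options in Problem 1 (illustrative only).
options = {
    "A": [(1.00, 240)],                  # a sure gain of $240
    "B": [(0.25, 1000), (0.75, 0)],      # 25% gain $1,000, 75% nothing
    "C": [(1.00, -750)],                 # a sure loss of $750
    "D": [(0.75, -1000), (0.25, 0)],     # 75% lose $1,000, 25% lose nothing
}

for name, lottery in options.items():
    ev = sum(p * x for p, x in lottery)
    print(f"Option {name}: expected value = {ev:+.0f}")

# A = +240, B = +250, C = -750, D = -750.
# Most subjects take A over B, giving up $10 of expected value to avoid risk
# (risk aversion in gains), yet take D over C, accepting risk for the same
# expected value (risk seeking in losses).
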
It is not immediately obvious
that there is anything particularly disturbing about these choices; that is,
until one studies the following problem.
Problem 2.—Choose between:
E. 25% chance to win $240 and 75% chance to lose $760 [0%]
F. 25% chance to win $250 and 75% chance to lose $750 [100%]
Inspection reveals that
although Problem 2 is worded
differently, its choices are formally identical to those in Problem 1. The difference is that some
simple arithmetic has been performed for the subjects. Once these calculations
are made, it becomes clear to every subject that option F dominates option E,
and everyone chooses accordingly. The difficulty, of course, is that option E,
which no one selects, is made up of (composed of) options A and D, both of which
were chosen by a large majority of subjects, while option F, which everyone
selects, is a combination of B and C, options that were highly unpopular in
Problem 1. Thus this pair of problems illustrates two findings that are
embarrassing to rational choice adherents (supporters of a set of ideas, an
organization, or a person).
First, subjects’ answers depend on the way a problem is worded or “framed,”
behavior that is inconsistent (not compatible, not in agreement) with almost any
formal model. Second, by utilizing clever framing, a majority of subjects can be
induced (caused or persuaded) to select a pair of options that are dominated by
another pair. Once again, this behavior does not seem consistent (compatible, in
agreement; https://www.youtube.com/watch?v=ZJvKTXDp5wQ) with the idea that
people are choosing as if they are rational.
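To make the “simple arithmetic” behind Problem 2 explicit, the following Python sketch (an illustration added to this post, not part of the article) combines the popular pair of choices, A and D, and the unpopular pair, B and C, into single lotteries:

# Combining the Problem 1 choices into single lotteries (illustrative only).
# Option A is a sure +$240; option D loses $1,000 with probability 0.75.
a_and_d = [(0.25, 240 + 0), (0.75, 240 - 1000)]   # 25% +$240, 75% -$760  -> option E
# Option B wins $1,000 with probability 0.25; option C is a sure -$750.
b_and_c = [(0.25, 1000 - 750), (0.75, 0 - 750)]   # 25% +$250, 75% -$750  -> option F

print("A + D:", a_and_d)
print("B + C:", b_and_c)
# With the same probabilities, B + C pays $250 rather than $240 in the good state
# and loses $750 rather than $760 in the bad state, so the popular pair (A, D)
# is dominated by the unpopular pair (B, C), exactly as option E is dominated by option F.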


