IFSM 300 – Class 6

February 13, 2002 

Preliminaries

·        Class Web site reminder: http://www.gl.umbc.edu/~rrobinso/

·        Homework 2 (in assignment 5): due Monday, February 18.

·        Assignment 6: read chapter 7. 

Decision Trees With Revising of Probabilities (Part 2)

Introduced in Lecture 5, Completed in This Lecture

·        Refer to Marketing Research, situation 5.2, page 173.

·        Complete notes at the end. Summary here.

·        Continue the toothpaste example to consider whether or not to purchase information (summarized in situation 5.2, page 173, and in Figure 5.10, page 187).

·        Calculate the revised probabilities in Figure 5.10.

·        Review the probability tree of Figure 5.3 (page 175).

·        Develop a probability table that shows the joint and marginal probabilities.

·        From the probability table, calculate the revised probabilities.

·        Review how this is the same as using Bayes’ theorem.

·        Find the revised EV*.

·        Calculate and interpret EVS, EVSI, E, and ENGS. 

Software for Decision Analysis

·        The rise of spreadsheet add-ins notwithstanding, serious DA requires industrial-strength software.

·        Examples are given on the Web site of the Decision Analysis Society of INFORMS (software list at http://faculty.fuqua.duke.edu/daweb/dasw.htm).

·        Our class Web site has a link to the Decision Analysis Society’s homepage. 

Millionaire Q1 (overhead)

In a Bayesian probability revision, the data we need to get started are:

A.     P(s|I) and P(I).

B.     P(I|s) and P(s). (!!)

C.     P(s∩I) and P(I).

D.    P(I∩s) and P(s). 

Millionaire Q2 (overhead)

In a full decision tree with Bayesian revision, the order (starting at the left side) of the tree section that involves purchasing information is:

A.     Make management decision, learn management outcome s, decide to purchase information, learn information outcome I|s.

B.     Decide to purchase information, learn management outcome s, make management decision, learn information outcome I|s.

C.     Decide to purchase information, learn information outcome I, make management decision, learn management outcome s|I. (!!) 

Millionaire Q3 (overhead)

Step 1 of the 4 steps to prepare a decision tree for Bayesian analysis consists of obtaining joint probabilities from:

A.     A 2-stage probability tree with s in the first stage and I|s in the second stage. (!!)

B.     A 2-stage probability tree with I in the first stage and s|I in the second stage.

C.     A table of joint and marginal probabilities.

D.    Bayes’ Theorem. 

Millionaire Q4 (overhead)

In the table of joint and marginal probabilities (prepared in step 2 of the 4 steps), besides displaying data we already were given or we already calculated, we determine one type of quantity not previously known:

A.     P(s∩I).

B.     P(I). (!!)

C.     P(s).

D.    P(s|I). 

Millionaire Q5 (overhead)

In step 3, we calculate the one type of data needed in the decision tree but not yet determined:

A.     P(I|s).

B.     P(I∩s).

C.     P(s|I). (!!)

D.    P(s∩I). 

Millionaire Q6 (overhead)

Among the various summary measures, one that INCLUDES the cost of information is:

A.     EVS.

B.     EVSI.

C.     E.

D.    ENGS. (!!) 

Complete Notes

·        On the pages that follow

Notes: Decision Trees with Revising of Probabilities (Part 2)

Marketing Research

 

Conditional and Joint Probabilities. Before continuing to discuss the question of whether or not to purchase market research in the Brite-and-Kist example, we must review more about probability. 

Suppose you have two different wheels of fortune. One wheel contains 2 red spokes and 3 blue spokes, for a total of 5 equally likely spokes. The second wheel contains 4 red spokes and 1 blue spoke, for a total of 5 equally likely spokes.  

Imagine an experiment in which you first toss a fair coin and then select a wheel of fortune to spin: 

If the coin-toss outcome is    You spin the wheel with
heads                          2 red spokes, 3 blue spokes
tails                          4 red spokes, 1 blue spoke


A complete probability tree for this experiment is:

[Probability tree figure: stage 1 is the coin toss, H (1/2) or T (1/2); stage 2 is the spin, with P(R|H) = 2/5 and P(B|H) = 3/5 on the heads branch, and P(R|T) = 4/5 and P(B|T) = 1/5 on the tails branch.]
Note that the probability 2/5 for red after heads is not the overall probability of red, because red also may occur after tails. Rather, 2/5 is the probability of red given that heads happened. The notation for this conditional probability is P(R|H), where the vertical line means given or assuming.

The symbol ∩ (cap or intersection) represents together. So H∩R means we got heads and then red, a so-called joint outcome. The probability P(H∩R) is called a joint probability.

How do you calculate a joint probability? Answer: Multiply together the probabilities on the associated tree branches. For instance,  

P(H∩R) = P(H) P(R|H) = (1/2) (2/5) = 2/10 

P(H∩B) = P(H) P(B|H) = (1/2) (3/5) = 3/10. 

Why does this make sense? What you are doing is starting with the probability of heads and then spreading it out over the subsequent possible red-blue outcomes in proportion to the red-blue probabilities: 

P(H∩R) = 2/5 of 1/2 = (2/5) (1/2) = 2/10 

P(H∩B) = 3/5 of 1/2 = (3/5) (1/2) = 3/10. 
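
As a quick check, the branch multiplication is easy to reproduce in a few lines of Python. This is a minimal sketch of the coin-and-wheel experiment (the variable names are mine, purely illustrative):

# Joint probabilities for the coin-and-wheel experiment.
# Stage 1: fair coin. Stage 2: wheel chosen by the coin outcome.
p_coin = {"H": 1/2, "T": 1/2}
p_color_given_coin = {
    "H": {"R": 2/5, "B": 3/5},   # wheel with 2 red, 3 blue spokes
    "T": {"R": 4/5, "B": 1/5},   # wheel with 4 red, 1 blue spoke
}

# Multiply along the branches: P(coin ∩ color) = P(coin) P(color|coin).
joint = {(c, k): p_coin[c] * p
         for c, row in p_color_given_coin.items()
         for k, p in row.items()}

print(joint[("H", "R")])   # 0.2 (= 2/10)
print(joint[("H", "B")])   # 0.3 (= 3/10)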

Next we learn one more important fact: 

P(H∩R) = P(R∩H) – if and only if H is exactly the same event in both expressions, and R is exactly the same event in both expressions. 

In other words, suppose we reverse the process and spin a wheel first, then toss the coin. If we still are working with the first of the two wheels, multiplying the probabilities on the branches of the reversed tree would give us: 

            P(R) P(H|R) = (2/5) (1/2) = 2/10. 

This is P(R∩H). We see that it gives the same result as the P(H∩R) we calculated before.   

Equipped with this last-discovered fact, we now derive a cornerstone of probability. Suppose A and B are different uncertain events where A happens and then B. We learned earlier that  

P(A∩B) = P(A) P(B|A).  

We also learned that if we keep A the same and B the same,  

P(A∩B) = P(B∩A). 

Substituting from the second expression into the first, we can write 

P(B∩A) = P(A) P(B|A). 

Solve this for P(B|A): 

P(B|A) ≡ P(B∩A) / P(A). 

This is the famous definition of conditional probability. It is used in probability theory as a definition (thus the symbol "≡"), or fundamental building block, rather than something derived from other facts. 
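
To tie the definition back to the wheel example: P(R|H) = P(R∩H) / P(H) = (2/10) / (1/2) = 2/5, which recovers the branch probability we started with.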

Marketing Research Example. We now continue the Pucter and Simple example with an extension called marketing research, situation 5.2, page 173:

[Figure: statement of situation 5.2 (marketing research), reproduced from page 173 of the text.]
The main question now raised is whether or not to purchase the market research offered. If we do purchase it, we would plan to use the information it provides to revise our probability estimates (assignments), thereby, we would hope, improving the quality of those estimates. 

To purchase or not to purchase is a decision. You know how to analyze decisions – in a decision tree. We can readily extend the tree already prepared to cover this new decision.  

Data Concerning Market Research. What will the market research tell us? In situation 5.2, it will give us one of two possible reports: the market is favorable for Brite (symbol I1) or the market is unfavorable for Brite (symbol I2).  

If we buy that research, the sequence of events will be: (1) we get the market research report (I1 or I2); (2) we decide whether to market Brite or Kist (a1 or a2); and (3) sales turn out to be low or high (s1 or s2).  

In other words, we’ll have an uncertain outcome I1 or I2 in the first chance stage and an uncertain outcome s1 or s2 in the second chance stage. To complete our decision tree, therefore, we require P(I1) and P(I2) for the first chance stage. And we require for the second chance stage conditional probabilities such as P(s1|I1) and P(s2|I2) because s1 or s2 happens after I1 or I2 happens.  

The pertinent data in the problem concern the anticipated quality of the research. They come in the form P(research report will be right) and P(research report will be wrong).  

Note that a report which says I1 (favorable to Brite) is saying s2 (high sales) will occur if we market Brite (choose a1). Thus, 

P(favorable report will be right) = P(report says I1 when in fact s2 will occur) = P(I1 given s2) = P(I1|s2).  

P(favorable report will be wrong) = P(report says I1 when s1 will occur) = P(I1 given s1) = P(I1|s1). 

Wording in the problem description of 5.2 may be confusing. But now you realize that it must be giving us probabilities in the form P(I|s). What isn’t stated in the problem, we calculate:

 

Given to us                              We calculate
P(I1|s1) = 0.14   favorable, wrong       P(I2|s1) = 1 – 0.14 = 0.86   unfavorable, right
P(I2|s2) = 0.06   unfavorable, wrong     P(I1|s2) = 1 – 0.06 = 0.94   favorable, right

 

In addition to this information about the anticipated quality of the market research, we still have, of course, our original probability estimates (also called probability assignments):  P(s1) = 0.45 and P(s2) = 0.55. 

Four-Step Procedure.  The job ahead is to convert the data we have (in the form P(s) and P(I|s)) into the decision-tree inputs we require (in the form P(I) and P(s|I)). I recommend the following four-step procedure. Note well: This procedure differs a bit from the procedure in the book. 


Step 1. Draw a probability tree to obtain the joint probabilities P(s∩I). Since the data we have are P(s) and P(I|s), in the tree we’ll place s first and then I|s.

[Probability tree figure: stage 1 is s1 (0.45) or s2 (0.55); stage 2 is I1 or I2 given s; multiplying along the branches gives the joint probabilities P(s1∩I1) = 0.063, P(s1∩I2) = 0.387, P(s2∩I1) = 0.517, P(s2∩I2) = 0.033.]
Step 2.  Prepare this table of joint and marginal probabilities:

                    I1 (report says       I2 (report says
                    favorable)            unfavorable)          Totals
s1 (low sales)      P(s1∩I1) = 0.063      P(s1∩I2) = 0.387      P(s1) = 0.450
s2 (high sales)     P(s2∩I1) = 0.517      P(s2∩I2) = 0.033      P(s2) = 0.550
Totals              P(I1) = 0.580         P(I2) = 0.420         1.000


In this table, we calculate a marginal probability P(s) or P(I) by adding the joint probabilities across the same row or down the same column. We knew the P(s)’s already, so these just give us a check. The P(I)’s are new. And we’ll be using other numbers from the table in step 3. 

Step 3.  The table enables us to easily determine the inputs we require for the decision tree: 

P(I1) = 0.580 and P(I2) = 0.420, from the bottom row of the table.

P(s1|I1) = P(s1∩I1)/P(I1) = 0.063/0.580 = 0.109 ≈ 0.11
  (definition of conditional probability, with the joint probabilities taken from the table)

P(s2|I1) = P(s2∩I1)/P(I1) = 0.517/0.580 = 0.891 ≈ 0.89
  [Check: 1 – P(s1|I1) = 1 – 0.11 = 0.89] 

P(s1|I2) = P(s1∩I2)/P(I2) = 0.387/0.420 = 0.921 ≈ 0.92 

P(s2|I2) = P(s2∩I2)/P(I2) = 0.033/0.420 = 0.079 ≈ 0.08
  [Check: 1 – P(s1|I2) = 1 – 0.92 = 0.08]. 
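
Because steps 1 through 3 are purely mechanical, they are easy to script. Here is a minimal Python sketch of the whole revision (the variable names are mine, not the text’s): it takes the priors P(s) and the report reliabilities P(I|s), and produces the marginals P(I) and the revised probabilities P(s|I).

# Steps 1-3 as a script: from P(s) and P(I|s) to P(I) and P(s|I).
prior = {"s1": 0.45, "s2": 0.55}          # original estimates P(s)
p_report_given_s = {                      # anticipated report quality P(I|s)
    "s1": {"I1": 0.14, "I2": 0.86},
    "s2": {"I1": 0.94, "I2": 0.06},
}

# Step 1: joint probabilities P(s ∩ I) = P(s) P(I|s).
joint = {(s, i): prior[s] * p
         for s, row in p_report_given_s.items()
         for i, p in row.items()}

# Step 2: marginal probabilities P(I), summing joints down each column.
marginal = {i: sum(joint[(s, i)] for s in prior) for i in ("I1", "I2")}

# Step 3: revised probabilities P(s|I) = P(s ∩ I) / P(I).
revised = {(s, i): joint[(s, i)] / marginal[i]
           for s in prior for i in ("I1", "I2")}

print({i: round(p, 3) for i, p in marginal.items()})   # {'I1': 0.58, 'I2': 0.42}
print(round(revised[("s1", "I1")], 2))                 # 0.11
print(round(revised[("s2", "I2")], 2))                 # 0.08

The printed values match the table in step 2 and the conditional probabilities in step 3.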

Step 4. Enter the inputs found in step 3 into the decision tree. To optimize in the tree, we proceed in the usual way. Observe that the first decision node now has two branches – a3 (purchase the market research) and a4 (don’t purchase).  

[Decision tree figure: the first decision node offers a3 (purchase the research) and a4 (don’t purchase); the a3 branch continues through the report outcome I1 or I2, then the Brite-or-Kist decision, then the sales outcome s1 or s2 with the revised probabilities from step 3.]

If we select a4, we are back with the same analysis we did originally before considering market research. That begins with a decision node having two branches, a1 (market Brite) and a2 (market Kist), and an EV of $325,000. That’s the bottom section of the decision tree. 

The top section of the decision tree traces the sequence if we select a3 (purchase the research).  That section is where we input the numbers we derived in step 3, above. 

Remember that the cost of the research must be subtracted from the expected profit, if we select a3. It could be done anywhere along the line. In the tree shown, it is subtracted from the EV over the chance node labeled 2. 

In summary, the decision-tree analysis shows that the best policy a* and corresponding best expected profit EV* are:

a* = a3,  a1|I1,  a2|I2  

EV* = $516,380. 
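
Where does the $516,380 come from? Using numbers that appear in the performance-measure tables below: the EV of the a3 branch before paying for the research is EVS = $536,380, and subtracting the $20,000 research cost gives 536,380 – 20,000 = $516,380.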

Bayes’ Formula. This is an alternative to steps 1, 2, and 3 above. It permits us to plug the data that we have straight into a formula to obtain a conditional probability we want. 

To illustrate, let’s derive a Bayes’ formula for P(s1|I1): 

P(s1|I1) = P(s1∩I1) / P(I1)
    (from the definition of conditional probability)

         = P(s1∩I1) / [P(s1∩I1) + P(s2∩I1)]
    (substituting for P(I1) using column 1 of the table in step 2 of the four-step procedure)

         = P(I1∩s1) / [P(I1∩s1) + P(I1∩s2)]
    (using the P(A∩B) = P(B∩A) rule, which really is using a “commutative law” A∩B = B∩A of set algebra)

P(s1|I1) = P(s1) P(I1|s1) / [P(s1) P(I1|s1) + P(s2) P(I1|s2)]
    (applying the definition of conditional probability to each term)

This result is a Bayes’ formula, or Bayes’ theorem, written for the P(s1|I1) in our example. With this formula, we can confirm our previous calculation: 

P(s1|I1) = (0.45)(0.14) / [(0.45)(0.14) + (0.55)(0.94)] = 0.063 / [0.063 + 0.517] = 0.063 / 0.580 = 0.109 ≈ 0.11.  

To write a Bayes’ formula for any other conditional probability, in this or any other problem, we easily generalize the foregoing. The denominator contains terms of the form P(s) P(I|s), summed over all possible s’s with I always the same. The numerator contains the particular term from the denominator that has the same s as in the conditional probability we are evaluating. 

For example, suppose there are three s’s (s1, s2, s3). And suppose we want the Bayes’ formula for P(s3|I3). Then 

            P(s3|I3) = P(s3) P(I3|s3) / [P(s1) P(I3|s1) + P(s2) P(I3|s2) + P(s3) P(I3|s3)]. 
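
The general formula translates directly into a small function. Here is a minimal sketch (the function name and arguments are mine, purely illustrative) that works for any number of s’s, holding the report I fixed:

# General Bayes' formula: P(state | report) from priors and likelihoods.
def bayes(prior, likelihood, state):
    # prior: {s: P(s)}; likelihood: {s: P(I|s)} for one fixed report I.
    denom = sum(prior[s] * likelihood[s] for s in prior)
    return prior[state] * likelihood[state] / denom

# Confirm P(s1|I1) = 0.11 for the toothpaste example.
prior = {"s1": 0.45, "s2": 0.55}
p_I1_given_s = {"s1": 0.14, "s2": 0.94}
print(round(bayes(prior, p_I1_given_s, "s1"), 2))   # 0.11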

Related Performance Measures. The optimization analysis in the full decision tree showed that deciding to purchase market research increases EV* from $325,000 to $516,380. How might we further summarize the benefit of information from market research? 

First, we may establish summary measures similar to those introduced before to evaluate perfect information:

 

Perfect Research and Information:
  EVC – expected value under certainty ($). (EV* given perfect information. No cost.)
  EVPI – expected value of perfect information ($). (Improvement in EV*. No cost.)
  EVPI ≡ |EVC – EV*| = |595,000 – 325,000| = $270,000

Actual Research and Information:
  EVS – expected value under sampling ($). (EV* given actual information. No cost.)
  EVSI – expected value of sample information ($). (Improvement in EV*. No cost.)
  EVSI ≡ |EVS – EV*| = |536,380 – 325,000| = $211,380

Second, we can compare actual to perfect, and we can adjust EVSI to recognize cost:

 

Compare Actual to Perfect:
  E – efficiency rating (%). (Ratio of EVSI to EVPI. No cost.)
  E ≡ (EVSI/EVPI)(100) = (211,380/270,000)(100) = 78.3%

Adjust Actual to Recognize Cost:
  CS – cost of information ($)
  ENGS – expected net gain from sampling ($). (EVSI minus cost of information.)
  ENGS ≡ EVSI – CS = 211,380 – 20,000 = $191,380

Note: ENGS applies only when EVSI is based on profit, revenue, or cost.
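
Since every measure above is simple arithmetic on quantities already computed, a few lines of Python can verify the whole set (a sketch using the example’s figures; the variable names are mine):

# Value-of-information summary measures for the example.
ev_star = 325_000    # best expected profit with no information purchased
evc     = 595_000    # expected value under certainty (perfect information)
evs     = 536_380    # expected value under sampling (actual information)
cs      =  20_000    # cost of the market research (CS)

evpi = abs(evc - ev_star)     # 270000: expected value of perfect information
evsi = abs(evs - ev_star)     # 211380: expected value of sample information
e    = 100 * evsi / evpi      # 78.3: efficiency rating (%)
engs = evsi - cs              # 191380: expected net gain from sampling

print(evpi, evsi, round(e, 1), engs)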