This is a very simple exercise designed to clarify confusion about the roles of past, present, and future in stopping rules. Let {Xn; n ≥ 1} be a sequence of IID binary rvs, each with the PMF pX(1) = 1/2, pX(0) = 1/2. Let J be a positive integer-valued rv that takes on the sample value n of the first trial for which Xn = 1. That is, for each n ≥ 1,
{J = n} = {X1 = 0, X2 = 0, ..., Xn-1 = 0, Xn = 1}.
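(For concreteness, here is a minimal Python sketch, not part of the exercise, that simulates one run of the sequence and the stopping trial J. The function name simulate_J and the use of Python's random module are illustrative assumptions.)

```python
import random

def simulate_J(rng=random):
    """Simulate one run of X1, X2, ... (fair binary rvs) and return
    (J, [X1, ..., XJ]), where J is the first trial with Xn = 1."""
    xs = []
    while True:
        x = rng.randint(0, 1)   # each Xn is 0 or 1, each with probability 1/2
        xs.append(x)
        if x == 1:              # whether to stop at trial n depends only on X1, ..., Xn
            return len(xs), xs

# One simulated run; J has the geometric PMF P(J = n) = 2^(-n).
print(simulate_J())
```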
(a) Use the definition of stopping trial, Definition 5.5.1 in the text, to show that J is a stopping trial for {Xn; n ≥ 1}.
(b) Show that for any given n, the rvs Xn and I{J=n} are statistically dependent.
(c) Show that for every m > n, Xn and I{J=m} are statistically dependent.
(d) Show that for every m < n, Xn and I{J=m} are statistically independent.
(e) Show that Xn and I{J≥n} are statistically independent. Give the simplest characterization you can of the event {J ≥ n}.
(f) Show that Xn and I{J>n} are statistically dependent.
Note: The results here are characteristic of most sequences of IID rvs. For most people, this requires some realignment of intuition, since {J ≥ n} is the union of {J = m} for all m ≥ n, and all of these events are highly dependent on Xn. The right way to think of this is that {J ≥ n} is the complement of {J < n}, which is determined by X1, ..., Xn-1. Thus {J ≥ n} is also determined by X1, ..., Xn-1 and is therefore independent of Xn. The moral of the story is that thinking of stopping rules as rvs independent of the future is very tricky, even in totally obvious cases such as this.
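The note can also be checked numerically. Below is a rough Monte Carlo sketch (the function name estimate_conditionals, the choice n = 3, and the trial count are assumptions made for illustration): it estimates P(Xn = 1), P(Xn = 1 | J ≥ n), and P(Xn = 1 | J > n), and the first two estimates should come out near 1/2 while the third is 0.

```python
import random

def estimate_conditionals(n=3, trials=200_000, seed=1):
    """Estimate P(Xn=1), P(Xn=1 | J>=n), and P(Xn=1 | J>n) by simulation."""
    rng = random.Random(seed)
    count_xn1 = 0
    count_ge, count_ge_and_xn1 = 0, 0
    count_gt, count_gt_and_xn1 = 0, 0
    for _ in range(trials):
        xs = [rng.randint(0, 1) for _ in range(n)]   # X1, ..., Xn
        xn = xs[n - 1]
        j_ge_n = all(x == 0 for x in xs[: n - 1])    # {J >= n}: X1 = ... = Xn-1 = 0
        j_gt_n = all(x == 0 for x in xs)             # {J > n}:  X1 = ... = Xn = 0
        count_xn1 += xn
        if j_ge_n:
            count_ge += 1
            count_ge_and_xn1 += xn
        if j_gt_n:
            count_gt += 1
            count_gt_and_xn1 += xn
    print(f"P(Xn=1)        ~ {count_xn1 / trials:.3f}")
    print(f"P(Xn=1 | J>=n) ~ {count_ge_and_xn1 / count_ge:.3f}   (near 1/2: independent)")
    print(f"P(Xn=1 | J>n)  ~ {count_gt_and_xn1 / max(count_gt, 1):.3f}   (equals 0: dependent)")

estimate_conditionals()
```

Conditioning on {J ≥ n} leaves Xn unconstrained, while conditioning on {J > n} forces Xn = 0, which is exactly the distinction the note draws.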
Textbook: Stochastic Processes: Theory for Applications by Robert G. Gallager.