Exercises

4.1 Consider the EMS sequence \(\mathrm{ \{ Z_{k} \} }\) . Obtain the expression of \(\mathrm{ Prob \{ min \left( Z_{1},\dots,Z_{k} \right) \geq z \} =Q_{k} \left( z,z \vert \theta \right) }\)
where
\(\mathrm{ Q_{k} \left( x,y \vert \theta \right) =Prob \{ min \left( Z_{1},\dots,Z_{k-1} \right) \geq x,\,Z_{k} \geq y \} }\)
\(\mathrm{ =Q_{k-1} \left( x,max \left( x,y-log\, \theta \right) \right) \Lambda \left( y-log \left( 1- \theta \right) \right) }\).
Obtain its expression when \(\mathrm{ x<y-log\, \theta }\) and \(\mathrm{ x>y-log\, \theta }\).
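A minimal numerical sketch of the recursion, purely mechanical: it iterates the relation above with the assumed base case \(\mathrm{ Q_{1} \left( x,y \vert \theta \right) =1- \Lambda \left( y \right) }\) (only \(\mathrm{ Z_{1} \geq y }\) is required when there is no earlier minimum); the function names are ours.

```python
import math

def Lam(z):
    """Reduced Gumbel distribution function: Lambda(z) = exp(-exp(-z))."""
    return math.exp(-math.exp(-z))

def Q(k, x, y, theta):
    """Evaluate Q_k(x, y | theta) by the recursion of Exercise 4.1.
    Assumed base case: Q_1(x, y | theta) = 1 - Lambda(y)."""
    if k == 1:
        return 1.0 - Lam(y)
    return Q(k - 1, x, max(x, y - math.log(theta)), theta) * \
        Lam(y - math.log(1.0 - theta))

# Prob{min(Z_1, ..., Z_k) >= z} = Q_k(z, z | theta)
for z in (-1.0, 0.0, 1.0):
    print(z, Q(5, z, z, theta=0.5))
```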
4.2 Consider the general EMS sequence \(\mathrm{ X_{k}= \lambda + \delta ~Z_{k} }\) and seek estimators of \(\mathrm{ \left( \lambda , \delta , \theta \right) }\). We have \(\mathrm{ \min_{2 \leq i \leq k} \left( X_{i}-X_{i-1} \right) \stackrel{P}\rightarrow \delta\, log \,\theta }\), which gives an (over-)estimator of \(\mathrm{ \Delta = \delta \,log \,\theta }\). Using this result obtain, supposing \(\mathrm{ \Delta ^{*}= \Delta }\), an estimator of \(\mathrm{ \left( \lambda , \delta \right) }\).
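A simulation sketch of this estimator. It assumes the max-autoregressive representation \(\mathrm{ Z_{i}=max \left( Z_{i-1}+log\, \theta ,W_{i}+log \left( 1- \theta \right) \right) }\) with \(\mathrm{ W_{i} }\) i.i.d. reduced Gumbel variables (an assumption here, consistent with the stated convergence); the moment estimators of \(\mathrm{ \left( \lambda , \delta \right) }\) at the end are only one possible route, not necessarily the intended one.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ems(k, lam, delta, theta):
    """General EMS sample X_i = lam + delta * Z_i, assuming the max-autoregressive
    form Z_i = max(Z_{i-1} + log(theta), W_i + log(1 - theta)),
    with W_i i.i.d. reduced Gumbel and a stationary start Z_1 ~ Lambda."""
    z = np.empty(k)
    z[0] = rng.gumbel()
    for i in range(1, k):
        z[i] = max(z[i - 1] + np.log(theta),
                   rng.gumbel() + np.log(1.0 - theta))
    return lam + delta * z

lam, delta, theta = 10.0, 2.0, 0.3
x = simulate_ems(20_000, lam, delta, theta)

delta_star = np.min(np.diff(x))           # (over-)estimator of Delta = delta*log(theta)
print(delta_star, delta * np.log(theta))  # delta_star >= delta*log(theta), close for large k

# One possible route (not necessarily the intended one): moment estimators of
# (lam, delta) from the Gumbel margins, then theta from Delta* = delta*log(theta).
euler_gamma = 0.5772156649
delta_hat = np.std(x) * np.sqrt(6.0) / np.pi
lam_hat = np.mean(x) - euler_gamma * delta_hat
theta_hat = np.exp(delta_star / delta_hat)
print(lam_hat, delta_hat, theta_hat)
```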
4.3 Translate maxima results to minima results and vice-versa; use, in particular, the relation between the distribution function and the survival function.
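For instance, for i.i.d. variables with distribution function \(\mathrm{ F }\) (generic notation, not from the text), \(\mathrm{ min \left( X_{1},\dots,X_{n} \right) =-max \left( -X_{1},\dots,-X_{n} \right) }\), \(\mathrm{ Prob \{ min \left( X_{1},\dots,X_{n} \right) \geq x \} = \left( 1-F \left( x \right) \right) ^{n} }\), and \(\mathrm{ Prob \{ max \left( X_{1},\dots,X_{n} \right) \leq x \} =F^{n} \left( x \right) }\); thus, if suitably normalized maxima of \(\mathrm{ \{ -X_{i} \} }\) have the limiting distribution function \(\mathrm{ L \left( z \right) }\), the correspondingly normalized minima of \(\mathrm{ \{ X_{i} \} }\) have the limiting distribution function \(\mathrm{ 1-L \left( -z \right) }\).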
4.4 Study the processes of oldest ages at death for men and women in Sweden (Table 1 and Tables 1 + 2), trying to fit any of the random sequences that may have trend and/or oscillations. In particular, they may be considered as sliding processes of maxima with Gumbel margins.
4.5 Consider a reduced extremal process \(\mathrm{ Z \left( t \right) ,t>0 }\), observed at (non-random) instants \(\mathrm{ \left( 0< \right) }\) \(\mathrm{ t_{1}<t_{2}< \cdots <t_{n} < \cdots}\). The best (least squares) predictor of \(\mathrm{ Z \left( t_{n+1} \right) \left( t_{n}<t_{n+1} \right) }\) of the form \(\mathrm{ Z^{*} \left( t_{n+1} \right) =Z \left( t_{n} \right) +a }\) has \(\mathrm{ a=log \left( t_{n+1}/t_{n} \right) }\). The best (i.e., least squares) linear predictor of \(\mathrm{ Z \left( t_{n+1} \right) }\) based only on \(\mathrm{ Z \left( t_{n} \right) }\) is of the form \(\mathrm{ Z^{**} \left( t_{n+1} \right) = \alpha + \beta ~Z \left( t_{n} \right) }\); the minimization of \(\mathrm{ MSE=M \left( Z^{**} \left( t_{n+1} \right) -Z \left( t_{n+1} \right) \right) ^{2} }\) leads to
\(\mathrm{ Z^{**} \left( t_{n+1} \right) = \gamma +log\,t_{n+1}+ \rho \left( t_{n}/t_{n+1} \right) \left( Z \left( t_{n} \right) - \gamma -log\,t_{n} \right) }\).
They are both unbiased (i.e., with the same mean value) and
\(\mathrm{ MSE \left( Z^{*} \left( t_{n+1} \right) \right) =\frac{ \pi ^{2}}{3} \left( 1- \rho \left( t_{n}/t_{n+1} \right) \right) }\) and \(\mathrm{ MSE \left( Z^{**} \left( t_{n+1} \right) \right) =\frac{ \pi ^{2}}{6} \left( 1- \rho ^{2} \left( t_{n}/t_{n+1} \right) \right) }\).
Note that \(\mathrm{ Z^{**} \left( t_{n+1} \right) }\) can be compared with the simpler predictor \(\mathrm{ Z^{*} \left( t_{n+1} \right) =Z \left( t_{n} \right) +log\frac{t_{n+1}}{t_{n}} }\) above. The efficiency of \(\mathrm{ Z^{*} }\) relative to \(\mathrm{ Z^{**} }\) is
\(\mathrm{ \frac{MSE \left( Z^{**} \left( t_{n+1} \right) \right) }{MSE \left( Z^{*} \left( t_{n+1} \right) \right) }=\frac{1+ \rho \left( t_{n}/t_{n+1} \right) }{2}<1 }\).
Neither predictor is invariant under linear transformations, and so they cannot be used to predict the general extremal process \(\mathrm{ X \left( t \right) = \lambda + \delta ~Z \left( t \right) }\).
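A Monte Carlo sketch comparing the two predictors. It assumes the standard independent max-increments construction of the reduced extremal process (over \(\mathrm{ \left( t_{n},t_{n+1} \right] }\) the increment is an independent maximum with distribution function \(\mathrm{ \Lambda \left( z-log \left( t_{n+1}-t_{n} \right) \right) }\)); \(\mathrm{ \rho }\) and the linear coefficients are estimated from the simulated pairs rather than taken from closed forms.

```python
import numpy as np

rng = np.random.default_rng(1)
tn, tn1, n_sim = 2.0, 3.0, 200_000

# Reduced extremal process: Prob{Z(t) <= z} = Lambda(z - log t),
# with an independent max-increment over (t_n, t_{n+1}].
z_n = rng.gumbel(loc=np.log(tn), size=n_sim)
z_n1 = np.maximum(z_n, rng.gumbel(loc=np.log(tn1 - tn), size=n_sim))

# Simple predictor Z*(t_{n+1}) = Z(t_n) + log(t_{n+1}/t_n)
mse_star = np.mean((z_n + np.log(tn1 / tn) - z_n1) ** 2)

# Best linear predictor Z**(t_{n+1}) = alpha + beta*Z(t_n),
# coefficients estimated by least squares on the simulated pairs
c = np.cov(z_n, z_n1)
beta = c[0, 1] / c[0, 0]
alpha = np.mean(z_n1) - beta * np.mean(z_n)
mse_2star = np.mean((alpha + beta * z_n - z_n1) ** 2)

rho = np.corrcoef(z_n, z_n1)[0, 1]
print(mse_star, np.pi ** 2 / 3 * (1 - rho))         # should agree approximately
print(mse_2star, np.pi ** 2 / 6 * (1 - rho ** 2))   # should agree approximately
print(mse_2star / mse_star, (1 + rho) / 2)          # efficiency of Z* relative to Z**
```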
4.6 Consider the extremal process \(\mathrm{ X \left( t \right) = \lambda + \delta ~Z \left( t \right) ,t \geq 0 }\), where \(\mathrm{ Z \left( t \right) }\) is the reduced extremal process. Suppose observations are made at (non-random) instants \(\mathrm{ \left( 0< \right) t_{1}<t_{2}<\cdots<t_{n}<\cdots }\).
Obtain the expression of the quasi-linear predictor of \(\mathrm{ X \left( t_{n+1} \right) }\) when \(\mathrm{ X \left( t_{i} \right) ,i=1,\dots,n }\), are known. The quasi-linear predictor of \(\mathrm{ X \left( t_{n+1} \right) }\) based on the last two observations \(\mathrm{ X \left( t_{n-1} \right) }\) and \(\mathrm{ X \left( t_{n} \right) \left( \geq X \left( t_{n-1} \right) \right) }\) is, obviously,
\(\mathrm{ X^{*} \left( t_{n+1} \right) =X \left( t_{n} \right) + \beta \left( X \left( t_{n} \right) -X \left( t_{n-1} \right) \right) }\).
The best (least squares) choice of \(\mathrm{ \beta }\), minimizing
\(\mathrm{ MSE \left( X^{*} \left( t_{n+1} \right) \right) =M \left( X^{*} \left( t_{n+1} \right) -X \left( t_{n+1} \right) \right) ^{2} }\) ,
is given by
\(\mathrm{ \beta =\frac{M \left( \left( X \left( t_{n+1} \right) -X \left( t_{n} \right) \right) \left( X \left( t_{n} \right) -X \left( t_{n-1} \right) \right) \right) }{M ( \left( X \left( t_{n} \right) -X \left( t_{n-1} \right) \right) ^{2} ) } }\)
and
\(\mathrm{ MSE\, ( X^{*} \left( t_{n+1} \right)) =M( \left( X \left( t_{n+1} \right) -X \left( t_{n} \right) \right) ^{2} ) - \beta ^{2}~M( \left( X \left( t_{n} \right) -X \left( t_{n-1} \right) \right) ^{2} ) }\),
which can be expressed in terms of the mean values and covariances of the process.
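A sketch along the same lines (same assumed max-increments construction, now for \(\mathrm{ X \left( t \right) = \lambda + \delta ~Z \left( t \right) }\)): it computes \(\mathrm{ \beta }\) and the MSE from the formulas above, with sample moments standing in for the mean values \(\mathrm{ M \left( \cdot \right) }\).

```python
import numpy as np

rng = np.random.default_rng(2)
lam, delta = 5.0, 1.5
t_nm1, t_n, t_np1, n_sim = 1.0, 2.0, 3.5, 200_000

# Simulated triples (X(t_{n-1}), X(t_n), X(t_{n+1})) of the general extremal
# process X(t) = lam + delta*Z(t), using independent max-increments for Z.
z1 = rng.gumbel(loc=np.log(t_nm1), size=n_sim)
z2 = np.maximum(z1, rng.gumbel(loc=np.log(t_n - t_nm1), size=n_sim))
z3 = np.maximum(z2, rng.gumbel(loc=np.log(t_np1 - t_n), size=n_sim))
x1, x2, x3 = (lam + delta * z for z in (z1, z2, z3))

d_fwd = x3 - x2          # X(t_{n+1}) - X(t_n)
d_back = x2 - x1         # X(t_n)   - X(t_{n-1})

beta = np.mean(d_fwd * d_back) / np.mean(d_back ** 2)    # beta of the text
x_pred = x2 + beta * d_back                              # X*(t_{n+1})
mse = np.mean((x_pred - x3) ** 2)
mse_formula = np.mean(d_fwd ** 2) - beta ** 2 * np.mean(d_back ** 2)
print(beta, mse, mse_formula)                            # mse and mse_formula agree
```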
4.7 Define an extremal process of Weibull minima, using the conversion between the Weibull distribution of minima and the Gumbel distribution of maxima.
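One standard conversion (the notation is ours): if \(\mathrm{ Z }\) has the reduced Gumbel distribution of maxima \(\mathrm{ \Lambda }\), then, for \(\mathrm{ \alpha >0, \delta >0 }\), \(\mathrm{ X= \lambda + \delta \,exp \left( -Z/ \alpha \right) }\) has \(\mathrm{ Prob \{ X \leq x \} =1-exp \{ - \left( \left( x- \lambda \right) / \delta \right) ^{ \alpha } \} }\) for \(\mathrm{ x \geq \lambda }\), i.e., the Weibull distribution of minima; applying the same transformation pointwise to a reduced extremal process \(\mathrm{ Z \left( t \right) }\) gives a non-increasing process whose margins are Weibull distributions of minima (with a scale depending on \(\mathrm{ t }\)).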
4.8 Consider a max-compound Poisson stochastic process, \(\mathrm{ X \left( t \right) = \max_{0 \leq i \leq N \left( t \right) } Y_{i} }\), where \(\mathrm{ N \left( t \right) }\) is a Poisson process with intensity \(\mathrm{ \nu }\) and \(\mathrm{ Y_{0},Y_{1},\dots,Y_{k},\dots }\) is a sequence of independent random variables such that \(\mathrm{ Y_{0} }\) has the distribution function \(\mathrm{ G_{0} \left( x \right) }\) and the \(\mathrm{ Y_{j} \left( j \geq 1 \right) }\) have the distribution function \(\mathrm{ G \left( x \right) }\). Show that \(\mathrm{ Prob \{ X \left( t \right) \leq x \} =G_{0} \left( x \right) exp \{ - \nu \,t \left( 1-G \left( x \right) \right) \} }\) and that for no choice of \(\mathrm{ \left( G_{0},G \right) }\) can we have \(\mathrm{ Prob \{ X \left( t \right) \leq x \} = \Lambda \left( \alpha \left( t \right) + \beta \left( t \right) x \right) }\).
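A Monte Carlo check of the stated distribution function, with illustrative (assumed) choices for \(\mathrm{ G_{0} }\) and \(\mathrm{ G }\) — standard Gumbel and standard exponential here; any pair would serve.

```python
import numpy as np

rng = np.random.default_rng(3)
nu, t, n_sim = 2.0, 1.5, 100_000

def G0(x):   # illustrative choice: standard Gumbel for Y_0
    return np.exp(-np.exp(-x))

def G(x):    # illustrative choice: standard exponential for Y_j, j >= 1
    return 1.0 - np.exp(-x) if x >= 0 else 0.0

# X(t) = max(Y_0, Y_1, ..., Y_{N(t)}) with N(t) ~ Poisson(nu * t)
n_jumps = rng.poisson(nu * t, size=n_sim)
y0 = rng.gumbel(size=n_sim)
x_t = np.array([max(y, rng.exponential(size=k).max()) if k > 0 else y
                for y, k in zip(y0, n_jumps)])

for x in (0.5, 1.0, 2.0):
    empirical = np.mean(x_t <= x)
    formula = G0(x) * np.exp(-nu * t * (1.0 - G(x)))
    print(x, empirical, formula)   # should agree within Monte Carlo error
```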
4.9 Analyse the same question for max-filtered Poisson processes and max-renewal point processes.