Exercises

2.1 Let \(\mathrm{ Z=max ( X-a,Y-b ) }\), where \(\mathrm{ X }\) and \(\mathrm{ Y }\) are independent with the reduced Gumbel distribution and \(\mathrm{ Z }\) also has the reduced Gumbel distribution. Show that \(\mathrm{ Prob \{ X-a \leq Z \leq z \} =e^{-b}~ \Lambda ( z ) }\).
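A hedged sketch of one route to the result (our reading: the event means that the maximum is attained by \(\mathrm{ Y-b }\) and does not exceed \(\mathrm{ z }\)). Since \(\mathrm{ Prob \{ Z \leq z \} = \Lambda ( z+a ) \Lambda ( z+b ) =exp ( - ( e^{-a}+e^{-b} ) e^{-z} ) }\), the requirement that \(\mathrm{ Z }\) be reduced Gumbel forces \(\mathrm{ e^{-a}+e^{-b}=1 }\), and then
\(\mathrm{ Prob \{ X-a \leq Y-b \leq z \} = \int _{- \infty }^{z} \Lambda ( w+a ) \,e^{-b}e^{-w}e^{-e^{-b}e^{-w}}\,dw=e^{-b} \int _{- \infty }^{z}e^{-w}e^{- ( e^{-a}+e^{-b} ) e^{-w}}\,dw=e^{-b}\, \Lambda ( z ) }\).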
2.2 Consider one of the sets of data given in the book (or elsewhere); verify using the probability paper if they can be fitted to the Gumbel, Weibull or Fréchet distributions; compare with the use of the choice statistic; estimate the parameters; take logarithms and exponentials of the same set of data and analyse the fit. Is there any apparent contradiction? Explain it.
2.3 Consider the Gumbel distribution \(\mathrm{ \Lambda ( ( x- \lambda ) / \delta ) }\). Obtain the estimators, confidence interval for \(\mathrm{ \lambda }\), and tests corresponding to \(\mathrm{ \lambda = \lambda _{0} }\) vs. \(\mathrm{ \lambda > \lambda _{0}, \lambda < \lambda _{0} }\) and \(\mathrm{ \lambda \neq \lambda _{0} }\). Study the statistics used.
2.4 Consider the prediction problem for \(\mathrm{ \Lambda ( x- \lambda ) }\) from a sample \(\mathrm{ ( x_{1}, \dots,x_{n} ) }\) to a second sample (to be observed) \(\mathrm{ ( Y_{1},\dots,Y_{m} ) }\). Show that the least squares predictor of \(\mathrm{ W=max ( Y_{1},\dots,Y_{m} ) }\) is \(\mathrm{ p ( \hat{\lambda} ) =\hat{\lambda}+k }\) with \(\mathrm{ k= \gamma +log\frac{m}{n}+ \Gamma ' ( n ) / \Gamma ( n ) }\) and a mean square error of prediction of \(\mathrm{ E^{2}=\frac{ \pi ^{2}}{6}+ \Gamma '' ( n ) / \Gamma ( n ) - ( \Gamma ' ( n ) / \Gamma ( n ) ) ^{2} }\).
The shortest prediction interval for \(\mathrm{ W }\), with level \(\mathrm{ \omega }\), is given by
\(\mathrm{ c_{ \omega } \leq W- \hat{\lambda} \leq d_{ \omega } }\)
where \(\mathrm{ c_{ \omega } }\) and \(\mathrm{ d_{ \omega } }\) are given by the prediction level and the relation
\(\mathrm{ \frac{c_{ \omega }}{n+1}+log ( 1+\frac{m}{n}\,e^{-c_{ \omega }} ) =\frac{d_{ \omega }}{n+1}+log ( 1+\frac{m}{n}\,e^{-d_{ \omega }} ) }\)
or, approximately, \(\mathrm{ c_{ \omega }+m~e^{-c_{ \omega }}=d_{ \omega }+m~e^{-d_{ \omega }} }\).
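A minimal numerical sketch of the point predictor and its mean square error, assuming the identities \(\mathrm{ \Gamma ' ( n ) / \Gamma ( n ) = \psi ( n ) }\) and \(\mathrm{ \Gamma '' ( n ) / \Gamma ( n ) - ( \Gamma ' ( n ) / \Gamma ( n ) ) ^{2}= \psi ' ( n ) }\); the sample sizes and the value of \(\mathrm{ \hat{\lambda} }\) below are illustrative, not taken from any data set.

    # Hedged sketch for Exercise 2.4 (n, m, lam_hat are our illustrative choices).
    import numpy as np
    from scipy.special import digamma, polygamma

    gamma_e = 0.5772156649015329   # Euler's constant
    n, m = 50, 20                  # observed sample size, future sample size

    # predictor constant k and mean square error of prediction E^2
    k = gamma_e + np.log(m / n) + digamma(n)
    E2 = np.pi**2 / 6 + polygamma(1, n)   # Gamma''/Gamma - (Gamma'/Gamma)^2 = psi'(n)

    lam_hat = 0.0                  # suppose lambda has been estimated from the data
    print("predictor of W:", lam_hat + k, " MSE of prediction:", E2)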
2.5 Consider two populations \(\mathrm{ P }\) and \(\mathrm{ P' }\) whose distribution functions are \(\mathrm{ \Lambda ( x- \lambda ) }\) and \(\mathrm{ \Lambda ( x- \lambda ' ) }\), with the parameters \(\mathrm{ \lambda }\) and \(\mathrm{ \lambda ' }\) estimable from samples \(\mathrm{ ( x_{1}, \dots,x_{n} ) }\) and \(\mathrm{ ( x_{1}',\dots,x_{n'}' ) }\).
By the likelihood ratio criterion, show that if \(\mathrm{ V_{n}=\frac{1}{n} \sum _{1}^{n}e^{-x_{i}} }\) and \(\mathrm{ V’_{n’}=\frac{1}{n’} \sum _{1}^{n’}e^{-x’_{i}} }\) we should discriminate for \(\mathrm{ P }\) or \(\mathrm{ P’ }\) a new observation \(\mathrm{ z }\) by the rule
\(\mathrm{ V_{n}^{n} ( V'_{n'}-\frac{V'_{n'}-e^{-z}}{n'+1} ) ^{n'+1}≶c ( \lambda , \lambda ' ) \,{V'_{n'}}^{n'} ( V_{n}-\frac{V_{n}-e^{-z}}{n+1} ) ^{n+1} }\),
\(\mathrm{ c }\) being computed to lead to equal chances of misclassification; \(\mathrm{ c }\) depends on \(\mathrm{ \lambda '- \lambda }\). For large values of \(\mathrm{ n }\) and \(\mathrm{ n' }\) we have, using the notation \(\mathrm{ \hat{\lambda} =-log\,V_{n} }\) and \(\mathrm{ \hat{\lambda} '=-log\,V'_{n'} }\), the approximation \(\mathrm{ \hat{\lambda} -e^{ \hat{\lambda} -z}≶log\,c+ \hat{\lambda} '-e^{ \hat{\lambda} '-z} }\).
2.6 Develop sequential probability ratio tests for \(\mathrm{ \Lambda ( x- \lambda ) }\). Show that to test \(\mathrm{ \lambda =0~vs.~ \lambda = \lambda _{1} }\), as the logarithm of the sequential probability ratio is \(\mathrm{ Z_{n}=n [ \lambda _{1}+ ( 1-e^{ \lambda _{1}} ) V_{n} ] }\) \(\mathrm{ ( V_{n}=\frac{1}{n} \sum _{1}^{n}e^{-x_{i}} ) }\), then, \(\mathrm{ \alpha }\) and \(\mathrm{ \beta }\) being the approximate values of the errors of first and second kinds, the test rule is to accept \(\mathrm{ \lambda =0 }\) if \(\mathrm{ V_{n} \leq b_{n} ( \lambda _{1} ) }\), to continue sampling if \(\mathrm{ b_{n} ( \lambda _{1} ) <V_{n}<a_{n} ( \lambda _{1} ) }\), and to reject \(\mathrm{ \lambda =0 }\) if \(\mathrm{ a_{n} ( \lambda _{1} ) \leq V_{n} }\), when \(\mathrm{ \lambda _{1}<0 }\), and with the inequalities reversed when \(\mathrm{ \lambda _{1}>0 }\); we have denoted \(\mathrm{ a_{n} ( \lambda ) = ( log~A-n~ \lambda ) / ( 1-e^{ \lambda } ) ,~b_{n} ( \lambda ) = ( log~B-n~ \lambda ) / ( 1-e^{ \lambda } ) }\), where \(\mathrm{ A=\frac{1- \beta }{ \alpha } }\) and \(\mathrm{ B=\frac{ \beta }{1- \alpha } }\). Also obtain the operating characteristic and the average sample number of the sequential test.
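A minimal simulation sketch of this sequential rule, tracking \(\mathrm{ Z_{n} }\) directly against \(\mathrm{ log\,B }\) and \(\mathrm{ log\,A }\) (Wald's approximate constants); the error levels, the value of \(\mathrm{ \lambda _{1} }\) and the simulated data are our choices.

    # Hedged sketch of the SPRT of Exercise 2.6 for lambda = 0 vs lambda = lambda1 < 0.
    import numpy as np

    alpha, beta = 0.05, 0.10
    A, B = (1 - beta) / alpha, beta / (1 - alpha)   # Wald's approximate constants
    lam1 = -0.5                                     # alternative location

    rng = np.random.default_rng(0)
    x = rng.gumbel(loc=0.0, size=200)               # data generated under lambda = 0

    logLR = 0.0
    for n, xi in enumerate(x, start=1):
        logLR += lam1 + (1 - np.exp(lam1)) * np.exp(-xi)   # increment of Z_n
        if logLR <= np.log(B):
            print("accept lambda = 0 at n =", n); break
        if logLR >= np.log(A):
            print("reject lambda = 0 at n =", n); break
    else:
        print("no decision within the sample")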
2.7 Form a two-sided sequential test of \(\mathrm{ \lambda =0~vs.~ \lambda = \bar{ \lambda }<0 }\) and \(\mathrm{ \lambda =\bar{\bar{\lambda}}>0 }\) by “intersection” of two one-sided tests as before with \(\mathrm{ \lambda _{1}= \bar{ \lambda } }\) and \(\mathrm{ \lambda _{1}= \bar{\bar{\lambda}} }\), “intersection” meaning: accept \(\mathrm{ \lambda =0 }\) if both one-sided tests accept it, reject it if either one-sided test rejects it, and continue sampling otherwise. Obtain the special feature of the graph of the sequential probability ratio test and evaluate approximately the operating characteristic and the average sample number.
2.8 Develop a sequential estimation procedure for the parameter \(\mathrm{ \lambda }\) for \(\mathrm{ \Lambda ( x- \lambda ) }\).
2.9 Consider the last six exercises and solve the analogous questions for the exponential distribution, which is equivalent to taking \(\mathrm{ Y=e^{-X} }\); as \(\mathrm{ X }\) is \(\mathrm{ \Lambda ( x- \lambda ) }\) then \(\mathrm{ Y }\) has the exponential distribution with zero location and \(\mathrm{ \delta =e^{- \lambda } }\). The reverse method could be followed: to deduce from the results for the exponential distribution the results for \(\mathrm{ \Lambda ( x- \lambda ) }\).
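A one-line verification of the stated equivalence (our computation, using only the definition of \(\mathrm{ \Lambda }\)): if \(\mathrm{ X }\) has distribution function \(\mathrm{ \Lambda ( x- \lambda ) }\) and \(\mathrm{ Y=e^{-X} }\), then for \(\mathrm{ y>0 }\)
\(\mathrm{ Prob \{ Y \leq y \} =Prob \{ X \geq -log\,y \} =1- \Lambda ( -log\,y- \lambda ) =1-e^{-y\,e^{ \lambda }} }\),
so \(\mathrm{ Y }\) is exponential with zero location and dispersion parameter \(\mathrm{ \delta =e^{- \lambda } }\), as stated.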
2.10 Obtain the locally most powerful one-sided tests for \(\mathrm{ \lambda = \lambda _{0}~vs.~ \lambda > \lambda _{0}~ ( or~ \lambda < \lambda _{0} ) }\) with fixed \(\mathrm{ \delta ~ ( \delta =1,~for~simplicity ) }\), and for \(\mathrm{ \delta = \delta _{0}~vs.~ \delta > \delta _{0}~ ( or~ \delta < \delta _{0} ) }\) with fixed \(\mathrm{ \lambda ~ ( \lambda =0,~for~simplicity ) }\); obtain the corresponding locally most powerful unbiased two-sided tests in both cases and study their asymptotic behaviour.
2.11 For the Gumbel distribution consider a pair of estimators \(\mathrm{ ( \lambda ^{*}, \delta ^{*} ) }\) of \(\mathrm{ ( \lambda , \delta ) }\) and assume that \(\mathrm{ V ( \lambda ^{*} ) \sim A~ \delta ^{2}/n,~V ( \delta ^{*} ) \sim B~ \delta ^{2}/n,~C ( \lambda ^{*}, \delta ^{*} ) \sim C~ \delta ^{2}/n }\).
a) Compute the asymptotic efficiency for the estimation of any quantile and obtain its minimum (worst asymptotic efficiencies);
b) Choose a pair of central quantiles (of probabilities \(\mathrm{ 0<p<q<1 }\)); evaluate the worst asymptotic efficiency; seek the best pair \(\mathrm{ ( p,q ) }\), i.e., the one giving the largest value to the smallest efficiency.
2.12 For the Gumbel distribution consider an estimator \(\mathrm{ ( \lambda ^{*}, \delta ^{*} ) }\) of \(\mathrm{ ( \lambda , \delta ) }\), asymptotically binormal, with mean values \(\mathrm{ \lambda }\) and \(\mathrm{ \delta }\), and \(\mathrm{ V ( \lambda ^{*} ) \sim A~ \delta ^{2}/n,~V ( \delta ^{*} ) \sim B~ \delta ^{2}/n }\) and \(\mathrm{ C ( \lambda ^{*}, \delta ^{*} ) \sim C~ \delta ^{2}/n }\). Compute the asymptotic variance of the estimator (or predictor) \(\mathrm{ \lambda ^{*}+ \chi \, \delta ^{*} }\) of the quantile \(\mathrm{ \lambda + \chi ~ \delta }\) and obtain its asymptotic efficiency with respect to the estimator (or predictor) \(\mathrm{ \hat{\lambda} + \chi ~ \hat{\delta} }\) based on maximum likelihood. In particular, compute this efficiency when \(\mathrm{ ( \lambda ^{*}, \delta ^{*} ) }\) are based on two quantiles, and analyse the choice of the quantiles with different criteria (optimization for \(\mathrm{ \chi =a }\), best asymptotic efficiency for \(\mathrm{ \chi \rightarrow \pm \infty }\), etc.).
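A worked variance step that both 2.11 and 2.12 lean on (the standard expansion for a linear combination, not specific to this book):
\(\mathrm{ V ( \lambda ^{*}+ \chi \, \delta ^{*} ) = V ( \lambda ^{*} ) +2 \chi \,C ( \lambda ^{*}, \delta ^{*} ) + \chi ^{2}V ( \delta ^{*} ) \sim ( A+2 \chi \,C+ \chi ^{2}B ) \, \delta ^{2}/n }\),
so the asymptotic efficiency against the maximum likelihood pair is the ratio of the corresponding quadratic forms in \(\mathrm{ \chi }\).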
2.13 Supposing that the samples \(\mathrm{ ( x_{1}, \dots,x_{n} ) }\) and \(\mathrm{ ( x_{1}',\dots,x_{n}' ) }\) have Gumbel distributions with parameters \(\mathrm{ ( \lambda , \delta ) }\) and \(\mathrm{ ( \lambda ', \delta ' ) }\), determine the likelihood ratio test for \(\mathrm{ \lambda = \lambda ',~ \delta = \delta ' }\), and obtain its asymptotic behaviour. Using the approximate relation \(\mathrm{ ( \bar{x}- \hat{\lambda} ) /\hat{\delta} \approx \gamma }\), simplify the expression and determine its asymptotic behaviour (use the \(\mathrm{ \delta }\)-method).
2.14 Suppose that \(\mathrm{ X_{1},\dots,X_{n} }\) have Gumbel distributions with parameters \(\mathrm{ \lambda + \beta \,i~ ( i=1,\dots,n ) }\) and \(\mathrm{ \delta }\). Obtain the least squares estimators of the parameters \(\mathrm{ ( \lambda , \beta , \delta ) }\) of this linear regression with additive Gumbel errors (\(\mathrm{ X_{i}= \lambda + \beta \,i+ \delta ~Z_{i} }\), where the \(\mathrm{ Z_{i} }\) are reduced Gumbel random variables) and their asymptotic behaviour, and formulate a test of \(\mathrm{ \beta =0 }\) (no regression). Generalize, substituting the term \(\mathrm{ \beta \,i }\) by \(\mathrm{ \beta ~c_{i} }\), with the \(\mathrm{ c_{i} }\) known.
2.15 Determine the equations defining the values \(\mathrm{ ( k_{1},k_{2} ) }\) that minimize the mean length of the \(\mathrm{ ( \beta , \omega ) }\)-tolerance intervals and of the \(\mathrm{ \alpha }\)-average tolerance intervals for the Gumbel distribution.
2.16 In the search for one-sided and two-sided tolerance intervals for the Gumbel distribution, obtain the first-order development of the coefficients \(\mathrm{ k_{i} }\) (i.e., the \(\mathrm{ a }\) such that \(\mathrm{ k_{i} ( n ) =k_{i}+a\,n^{-1/2}+o ( n^{-1/2} ) }\)), using the \(\mathrm{ \delta }\)-method.
2.17 Generate 50 plus 20 random numbers from the Gumbel (or Fréchet) distribution for maxima and study, in the way used for the Weibull distribution of minima, possible predictors of the maximum of the next 20 observations based on the initial sample of 50 observations.
2.18 Using the method of moments or of the two quantiles or of Downton or of blocks, obtain the test of \(\mathrm{ ( \lambda, \delta ) = ( \lambda _{0}, \delta _{0} ) ~vs. ( \lambda , \delta ) \neq ( \lambda _{0}, \delta _{0} ) }\).
2.19 Consider the following situation: the random variables \(\mathrm{ X_{ij} ( i=1,\dots,m,~j=1,\dots,p ) }\) have a Gumbel distribution with location parameters \(\mathrm{ \lambda _{ij} }\) and unit dispersion parameter \(\mathrm{ ( \delta _{ij}=1 ) }\). For the sample of size \(\mathrm{ n=m~p }\), test the periodicity of the \(\mathrm{ \lambda _{ij} }\), i.e., \(\mathrm{ \lambda _{ij}= \lambda _{j} }\), using the likelihood ratio criterion. Show that the statistic obtained is translation invariant under the hypothesis tested (i.e., for the transformation \(\mathrm{ x_{ij} \rightarrow x_{ij}+a_{j} }\)), obtain its asymptotic behaviour, and obtain the corresponding formulation after transformation to exponential margins.
2.20 Let \(\mathrm{ \{ X_{k} \} }\) be i.i.d. with distribution function \(\mathrm{ \Lambda ( ( x- \lambda ) / \delta ) }\). Now consider the dependent sequence \(\mathrm{ M_{1}=X_{1},~M_{k}=max ( M_{k-1},X_{k} ) =max ( X_{1},\dots,X_{k} ) ~for~k>1 }\).
a) Show that the sequence \(\mathrm{ \{ M_{1},\dots,M_{n} \} }\) is increasing, i.e., \(\mathrm{ M_{1}<M_{n} }\), with probability \(\mathrm{ 1-1/n }\);
b) Show that
\(\mathrm{ M ( M_{k} ) = \lambda + \delta ( \gamma +log~k ) ,~V ( M_{k} ) =\frac{ \pi ^{2}}{6}\, \delta ^{2} }\) and for \(\mathrm{ p \geq 1 }\)
\(\mathrm{ C ( M_{k},M_{k+p} ) = [ \frac{ \pi ^{2}}{6}+ \iint _{x \leq y+log ( p/k ) } ( x- \gamma ) ( y+log ( p/k ) -x ) \,d \Lambda ( x ) \,d \Lambda ( y ) ] \, \delta ^{2} }\)
\(\mathrm{ = [ \iint _{x \leq y+log ( p/k ) } ( x- \gamma ) ( y+log ( p/k ) ) \,d \Lambda ( x ) \,d \Lambda ( y ) +\frac{k}{p+k}\,log\frac{p+k}{k} ] \, \delta ^{2} }\);
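A Monte Carlo check (ours) of the mean and variance of \(\mathrm{ M_{k} }\) given above; the parameter values and the number of replications are arbitrary.

    # Hedged Monte Carlo check of the moments in Exercise 2.20 (lam, delta, k are ours).
    import numpy as np

    rng = np.random.default_rng(1)
    lam, delta, k, reps = 1.0, 2.0, 10, 200_000
    gamma_e = 0.5772156649015329

    x = rng.gumbel(loc=lam, scale=delta, size=(reps, k))
    M_k = x.max(axis=1)                       # running maximum at step k

    print("mean:", M_k.mean(), "theory:", lam + delta * (gamma_e + np.log(k)))
    print("var :", M_k.var(),  "theory:", np.pi**2 / 6 * delta**2)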
2.21 Obtain the least squares estimators of \(\mathrm{ ( \lambda , \delta ) }\) in the previous exercise, if \(\mathrm{ M_{1}<M_{n} }\).
2.22 Obtain maximum likelihood estimators of \(\mathrm{ \delta }\) for the Fréchet and Weibull distributions, supposing \(\mathrm{ \lambda =0 }\) and \(\mathrm{ \alpha }\) known. Compare these results with the estimators of \(\mathrm{ \lambda }\) for the Gumbel distribution with \(\mathrm{ \delta =1 }\). Reduce all the results to the study of the dispersion parameter of the exponential distribution.
2.23 Compute \(\mathrm{ \beta _{1} }\) and \(\mathrm{ \beta _{2} }\) for the Fréchet distribution.
2.24 Draw the graphs of the densities \(\mathrm{ \alpha ^{-1} \Phi _{ \alpha }' ( 1+y/ \alpha ) }\) and verify that for \(\mathrm{ \alpha >6 }\) they are very close to \(\mathrm{ \Lambda ' ( y ) }\).
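A plotting sketch for this comparison, assuming the reduced Fréchet distribution \(\mathrm{ \Phi _{ \alpha } ( x ) =e^{-x^{- \alpha }}~ ( x>0 ) }\); the grid of \(\mathrm{ \alpha }\) values is ours.

    # Hedged plotting sketch for Exercise 2.24 (the grid and alpha values are ours).
    import numpy as np
    import matplotlib.pyplot as plt

    y = np.linspace(-3, 6, 400)

    def frechet_scaled_density(y, a):
        """Density of alpha*(X - 1) for X reduced Frechet(alpha), i.e. a^{-1} Phi'_a(1 + y/a)."""
        t = 1 + y / a
        out = np.zeros_like(y)
        pos = t > 0
        out[pos] = t[pos] ** (-a - 1) * np.exp(-t[pos] ** (-a))
        return out

    gumbel_density = np.exp(-y - np.exp(-y))        # Lambda'(y)

    for a in (2, 4, 6, 10):
        plt.plot(y, frechet_scaled_density(y, a), label=f"alpha = {a}")
    plt.plot(y, gumbel_density, "k--", label="Gumbel")
    plt.legend(); plt.show()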
2.25 Compute \(\mathrm{ \beta _{1} }\) and \(\mathrm{ \beta _{2} }\) for the Weibull distribution of minima.
2.26 Show that if \(\mathrm{ Z }\) has the distribution function \(\mathrm{ W_{ \alpha } ( z ) }\) then \(\mathrm{ Prob \{ \alpha ( 1-Z ) \leq y \} }\) converges to \(\mathrm{ \Lambda ( y ) }\) as \(\mathrm{ \alpha \rightarrow \infty }\). Draw the corresponding graphs.
2.27 Compute tables, similar to the one of Chapter 5, for the shortest interval for the same probabilities and different values of the shape parameter \(\mathrm{ \left( \alpha \right) }\) for the Fréchet and Weibull distributions.
2.28 Obtain predictors of the maximum of the next \(\mathrm{ m }\) observations for a Weibull distribution of maxima.
2.29 Obtain tests and estimators using two and three central quantiles for the Gumbel and Fréchet distributions of maxima and the Weibull distribution of minima.
See Tiago de Oliveira and Littauer (1976), Kubat and Epstein (1980), and Neves (1986).
2.30 Determine the expression of the efficiency for the estimation of quantiles when using the method of moments.
2.31 For a Fréchet distribution (for maxima) with location parameter known (zero for convenience) and shape parameter known, obtain maximum likelihood and least squares (and approximations), homogeneous estimators (i.e., such that \(\mathrm{ \delta ^{*} ( \alpha ~x_{i} ) = \alpha ~ \delta ^{*} ( x_{i} ) , \alpha >0 }\)) of the quantiles, homogeneous predictors, and their asymptotic distributions, as well as the behaviour of overpassing probability.
2.32 For a Weibull distribution (for minima) with location parameter known (zero for convenience) and shape parameter known, obtain maximum likelihood and least squares (and approximations), homogeneous estimators (i.e., such that \(\mathrm{ \delta ^{*} ( \alpha ~x_{i} ) = \alpha ~ \delta ^{*} ( x_{i} ) , \alpha >0 }\)) of the quantiles, homogeneous predictors, and their asymptotic distributions, as well as the behaviour of underpassing probability.
2.33 Compute the variance-covariance matrix of maximum likelihood estimators for the Fréchet distribution (for maxima) with \(\mathrm{ \alpha >0 }\).
2.34 Compute the variance-covariance matrix of maximum likelihood estimators for the Weibull distribution (for minima) with \(\mathrm{ \alpha >2 }\).
2.35 Seek the probability \(\mathrm{ p~ ( 0<p<1 ) }\) such that \(\mathrm{ \lambda ^{*}=X_{1}'-a ( Q_{n} ( p ) -X_{1}' ) ,~ \delta ^{*}=b ( Q_{n} ( p ) -X_{1}' ) }\), where \(\mathrm{ Q_{n} ( p ) }\) is the sample quantile (with \(\mathrm{ [ np ] >0 }\)), \(\mathrm{ X_{1}' }\) is the sample minimum, and \(\mathrm{ a,b>0 }\), are the best estimators of \(\mathrm{ ( \lambda , \delta ) }\) for the Fréchet distribution with \(\mathrm{ \alpha = \alpha _{0} }\) known, 'best' being taken either in the sense of minimizing the generalized asymptotic variance or of minimizing the asymptotic variance of some quantile; note that the estimators are quasi-linearly invariant. Compute the efficiency of \(\mathrm{ ( \lambda ^{*}, \delta ^{*} ) }\).
2.36 Do the same exercise for the Weibull distribution (for minima).
2.37 Seek the probabilities \(\mathrm{ p,q~ ( 0<p<q<1 ) }\) such that, for the Fréchet distribution, \(\mathrm{ \lambda ^{*}=X_{1}'-a_{1} ( Q_{n} ( p ) -X_{1}' ) -a_{2} ( Q_{n} ( q ) -X_{1}' ) }\), \(\mathrm{ \delta ^{*}=b_{1} ( Q_{n} ( p ) -X_{1}' ) +b_{2} ( Q_{n} ( q ) -X_{1}' ) ,~ \alpha ^{*}=g^{-1} ( \frac{Q_{n} ( q ) -X_{1}'}{Q_{n} ( p ) -X_{1}'} ) }\), where \(\mathrm{ X_{1}',Q_{n} ( p ) ,Q_{n} ( q ) }\) have the usual meaning, \(\mathrm{ 0< [ np ] < [ nq ] }\), and \(\mathrm{ a_{i},b_{i}>0 }\), minimize either the generalized asymptotic variance or the asymptotic variance of some quantile; note that \(\mathrm{ ( \lambda ^{*}, \delta ^{*}, \alpha ^{*} ) }\) are quasi-linearly invariant. Compute the efficiency of \(\mathrm{ ( \lambda ^{*}, \delta ^{*}, \alpha ^{*} ) }\). Here \(\mathrm{ g ( \alpha ) }\) can be defined by \(\mathrm{ \frac{Q_{n} ( q ) -X_{1}'}{Q_{n} ( p ) -X_{1}'}\stackrel{\mathrm{P}}{\rightarrow}g ( \alpha ) =\frac{ \chi _{q}}{ \chi _{p}} }\).
2.38 Do the same exercise for the Weibull distribution (for minima).
2.39 Consider the Weibull distribution for minima with \(\mathrm{ \alpha =1/2 }\) , i.e.,
\(\mathrm{ W_{1/2} ( x \vert \lambda , \delta ) =0 }\) \(\mathrm{ if~x \leq \lambda }\)
\(\mathrm{ W_{{1}/{2}}( x \vert \lambda , \delta ) =1-exp ( - ( \frac{x- \lambda }{ \delta } ) ^{{1}/{2}} ) }\) \(\mathrm{ if~x \geq \lambda }\)
Show that if \(\mathrm{ \lambda = \lambda _{0} }\) is known then \(\mathrm{ \hat{\delta} = ( \frac{1}{n} \sum _{1}^{n}\sqrt{x_{i}- \lambda _{0}} ) ^{2} }\) is asymptotically normal. If \(\mathrm{ \lambda }\) and \(\mathrm{ \delta }\) are unknown parameters, consider \(\mathrm{ \hat{\lambda} =min ( x_{1}, \dots,x_{n} ) }\) and \(\mathrm{ \hat{\delta} = ( \frac{1}{n} \sum _{1}^{n}\sqrt{x_{i}- \hat{\lambda} } ) ^{2} }\); \(\mathrm{ \hat{\delta} }\) is a modified estimator of \(\mathrm{ \delta }\) in which \(\mathrm{ \lambda _{0} }\) in the previous expression is replaced by \(\mathrm{ \hat{\lambda} }\), which converges quickly to \(\mathrm{ \lambda }\). Show that \(\mathrm{ ( \hat{\lambda} , \hat{\delta} ) }\) is an estimator of \(\mathrm{ ( \lambda , \delta ) }\) with independent Weibull and normal margins. Use the \(\mathrm{ \delta }\)-method.
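A small simulation sketch (ours) of these estimators, generating the Weibull-of-minima data through \(\mathrm{ X= \lambda + \delta E^{2} }\) with \(\mathrm{ E }\) standard exponential, which reproduces \(\mathrm{ W_{1/2} ( x \vert \lambda , \delta ) }\); the sample size and parameter values are arbitrary.

    # Hedged simulation sketch for Exercise 2.39 (sample size and parameters are ours).
    import numpy as np

    rng = np.random.default_rng(2)
    lam, delta, n = 3.0, 2.0, 5000

    # Weibull for minima with alpha = 1/2: X = lam + delta * E^2, E standard exponential,
    # since P(X > x) = exp(-((x - lam)/delta)^(1/2)).
    x = lam + delta * rng.exponential(size=n) ** 2

    lam_hat = x.min()                                        # estimator of the location
    delta_hat = (np.sqrt(x - lam_hat).mean()) ** 2           # modified estimator of delta

    print(lam_hat, delta_hat)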
2.40 Consider \(\mathrm{ 0<p<q<r<1 }\) such that \(\mathrm{ \beta =\frac{log\,p}{log\,q}=\frac{log\,q}{log\,r}>1 }\), and define the statistic \(\mathrm{ Q^{*}=\frac{X_{ [ nr ] }'-X_{ [ nq ] }'}{X_{ [ nq ] }'-X_{ [ np ] }'}\stackrel{\mathrm{P}}{\rightarrow} \beta ^{ \theta } }\), where \(\mathrm{ \theta }\) is the shape parameter in the von Mises-Jenkinson formula; note that \(\mathrm{ Q^{*} }\) is positive iff \(\mathrm{ [ np ] < [ nq ] < [ nr ] }\). Then \(\mathrm{ \theta ^{*}=\frac{log\,Q^{*}}{log\, \beta } }\) is an estimator of \(\mathrm{ \theta }\) which is asymptotically normal. Use \(\mathrm{ \theta ^{*} }\) for statistical choice between the Weibull, Gumbel and Fréchet distributions, as done in the text, and compare it with \(\mathrm{ \hat{V}_{n} ( x ) }\).
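A quick numerical illustration (ours) of \(\mathrm{ Q^{*} }\) and \(\mathrm{ \theta ^{*} }\), taking \(\mathrm{ q=0.5 }\), \(\mathrm{ \beta =2 }\) and hence \(\mathrm{ p=q^{ \beta } }\), \(\mathrm{ r=q^{1/ \beta } }\), on a simulated Gumbel sample, for which \(\mathrm{ \theta ^{*} }\) should be near zero.

    # Hedged sketch of the quantile-ratio statistic of Exercise 2.40 (q and beta are our choices).
    import numpy as np

    rng = np.random.default_rng(3)
    n = 20_000
    x = np.sort(rng.gumbel(size=n))          # try also a Frechet or Weibull sample

    beta, q = 2.0, 0.5
    p, r = q**beta, q**(1 / beta)            # so that log p / log q = log q / log r = beta

    Qp, Qq, Qr = (x[int(n * t)] for t in (p, q, r))   # sample quantiles X'_[np], X'_[nq], X'_[nr]
    Q_star = (Qr - Qq) / (Qq - Qp)
    theta_star = np.log(Q_star) / np.log(beta)

    print(theta_star)                         # near 0 for Gumbel data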
2.41 Consider the choice statistic
\(\mathrm{ \hat{V}_{n} ( x ) =\frac{1}{n} \{ \sum _{1}^{n} ( x_{i}-\bar{x} ) ^{2}/ \hat{\delta} ^{2}-n\,\frac{ \sum _{1}^{n} ( ( x_{i}-\bar{x} ) / \hat{\delta} ) ^{2}e^{- ( x_{i}-\bar{x} ) / \hat{\delta} }}{ \sum _{1}^{n}e^{- ( x_{i}-\bar{x} ) / \hat{\delta} }} \} }\),
with
\(\mathrm{ \hat{ \delta} =\bar{x}-\frac{ \sum _{1}^{n}x_{i}e^{-x_{i}/ \hat{ \delta} }}{ \sum _{1}^{n}e^{-x_{i}/\hat{ \delta} }} }\).
It has been shown that if \(\mathrm{ \theta =0 }\) (Gumbel distribution) \(\mathrm{ \hat{V}_{n} ( x ) /\sqrt{n}\, \hat{\delta} }\) is asymptotically standard normal. Consider then the following modified approximate statistics (a numerical sketch follows this exercise):
a) as \(\mathrm{ \frac{1}{n} \sum _{1}^{n}e^{- ( x_{i}-\bar{x} ) / \hat{\delta} } \approx M ( e^{- ( x- ( \lambda + \gamma \, \delta ) ) / \delta } ) =e^{ \gamma } }\), study
\(\mathrm{ \hat{V}_{n,1} ( x ) =\frac{1}{n} \{ \sum _{1}^{n}\frac{ ( x_{i}-\bar{x} ) ^{2}}{ \hat{\delta} ^{2}}-e^{- \gamma } \sum _{1}^{n}\frac{ ( x_{i}-\bar{x} ) ^{2}}{ \hat{\delta} ^{2}}\,e^{- ( x_{i}-\bar{x} ) / \hat{\delta} } \} }\)
and show its asymptotic normality;
b) as \(\mathrm{ \frac{1}{n} \sum _{1}^{n}\frac{ ( x_{i}-\bar{x} ) ^{2}}{ \hat{\delta} ^{2}}e^{- ( x_{i}-\bar{x} ) / \hat{\delta} } \approx M \{ ( \frac{x- ( \lambda + \gamma ~ \delta ) }{ \delta } ) ^{2}~e^{- ( x- ( \lambda + \gamma ~ \delta ) ) / \delta } \} =\frac{ \pi ^{2}}{6} }\), study the statistic \(\mathrm{ \hat{V}_{n,2}=\frac{n}{2} ( s^{2}/ \hat{\delta} ^{2}- \pi ^{2}/6 ) }\) and show its asymptotic normality; note that this is a comparison between the estimates \(\mathrm{ \hat{\delta} ^{2} }\) and \(\mathrm{ \delta ^{*2}=\frac{6}{ \pi ^{2}}s^{2} }\), the first approximation to \(\mathrm{ \hat{\delta} }\).
c) Study the statistic \(\mathrm{ \hat{V}_{n,3} }\), obtained by substituting \(\mathrm{ \hat{\delta} }\) by \(\mathrm{ \delta ^{*} }\) in \(\mathrm{ \hat{V}_{n,1} }\), and show its asymptotic normality;
d) Study the statistic \(\mathrm{ \hat{V}_{n,4} }\) when \(\mathrm{ ( \lambda , \delta ) }\) have been estimated by the method of moments, i.e., \(\mathrm{ \lambda ^{*}=\bar{x}- \gamma ~ \delta ^{*},~ \delta ^{*2}=\frac{6}{ \pi ^{2}}s^{2} }\), and show its asymptotic normality;
e) Compare the efficiencies of these statistics with respect to \(\mathrm{ \hat{V}_{n} }\).
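The numerical sketch referred to above: a fixed-point iteration (our implementation choice) for \(\mathrm{ \hat{\delta} }\), followed by the evaluation of \(\mathrm{ \hat{V}_{n} ( x ) }\) on simulated Gumbel data.

    # Hedged numerical sketch for Exercise 2.41: fixed-point iteration for delta_hat
    # and evaluation of the choice statistic V_n; the data below are simulated by us.
    import numpy as np

    rng = np.random.default_rng(4)
    x = rng.gumbel(loc=1.0, scale=2.0, size=500)
    xbar = x.mean()

    # delta_hat solves  delta = xbar - sum(x_i e^{-x_i/delta}) / sum(e^{-x_i/delta})
    delta = x.std()
    for _ in range(200):
        w = np.exp(-x / delta)
        delta = xbar - (x * w).sum() / w.sum()

    u = (x - xbar) / delta
    w = np.exp(-u)
    V_n = ((u**2).sum() - len(x) * (u**2 * w).sum() / w.sum()) / len(x)
    print(delta, V_n)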
2.42 Compare the \(\mathrm{ Prob~ \{ accept~k=0 \vert k \} }\) for the statistic \(\mathrm{ Q_{n} }\) and for the statistic \(\mathrm{ \hat{V}_{n} ( x ) }\) for the same significance levels \(\mathrm{ ( \alpha =.05,.02,.01 ) }\) and different values of \(\mathrm{ k }\).
2.43 Analyse also the same question for the statistics given in exercise 2.41.
2.44 Compute \(\mathrm{ \Delta _{n} }\) and \(\mathrm{ \tilde{P}_{n}' ( 0 ) }\), or their orders, for the approximation of order \(\mathrm{ n^{-2} }\) to the solution of the statistical trilemma of the locally optimal approach.
2.45 Show that the efficiency of the largest-observations method vs. the subsamples method changes with the value of the shape parameter \(\mathrm{ k }\) of the von Mises-Jenkinson formula, which shows some lack of robustness.
2.46 For the techniques with partial information, seek the optimal quantiles (for quantile estimation and for block estimation) that minimize the asymptotic variance of the estimators (\(\mathrm{ \lambda ^{*}, \delta ^{*} }\) or \(\mathrm{ \alpha ^{*} }\), if it exists) and compare with those with maximal (global) asymptotic variance.
2.47 Owing to the importance of \(\mathrm{ \lambda }\) in \(\mathrm{ \Phi _{ \alpha } ( \frac{x- \lambda }{ \delta } ) }\) and \(\mathrm{ W_{ \alpha } ( \frac{x- \lambda }{ \delta } ) }\), which can be estimated by \(\mathrm{ X_{1}'=\min_{1 \leq i \leq n}X_{i} }\), consider the (new) random variables \(\mathrm{ Y_{i}=X_{i}-X_{1}' }\) (so that \(\mathrm{ \min_{1 \leq i \leq n}Y_{i}=0 }\)). Suppose that they have distributions \(\mathrm{ \Phi _{ \alpha } ( y/ \delta ) }\) and \(\mathrm{ W_{ \alpha } ( y/ \delta ) }\) (which they have approximately), and study the estimators \(\mathrm{ ( \alpha ^{**}, \delta ^{**} ) }\) of \(\mathrm{ ( \alpha , \delta ) }\) based on the positive \(\mathrm{ Y_{i} }\) (\(\mathrm{ n-1 }\) of them, with probability one).
Obtain sufficient conditions for the statistics \(\mathrm{ ( \alpha ^{**}, \delta ^{**} ) }\) to be estimators of \(\mathrm{ ( \alpha , \delta ) }\). Recall that \(\mathrm{ \Phi _{ \alpha } ( y/ \delta ) }\) and \(\mathrm{ W_{ \alpha } ( y/ \delta ) }\) can easily be transformed to have a Gumbel distribution.
2.48 Develop the theory of “excesses over thresholds” for the limiting distributions of maxima both for the P.O.T. and A.E.O.T. methods.
References
Mexia, J. T., 1967. Studies on the extreme double exponential distribution, II - Sequential estimation and testing for the location parameter of the Gumbel distribution. Rev. Fac. Ciências de Lisboa, A, 12, 5-14.
Neves, M., 1986. Estimação de quantis das distribuições de Gumbel e Fréchet com parâmetros de escala e de localização desconhecidos, baseados em duas estatísticas ordinais. Centro de Estatística e Aplicações, 25/86, Lisboa.
Neves, M., 1987. Predição homogénea para a distribuição de Fréchet. Jorn. Matem. Luso-Espanholas, Braga.
Passos Coelho, D. and Gil, T. P., 1963. Studies on the extreme double exponential distribution, I - The location parameter. Rev. Fac. Ciências de Lisboa, A, 9, 37-46.
Themido, T., 1985. Intervalos de tolerância equivariantes para a distribuição de Gumbel. Tese de Mestrado, Universidade de Lisboa.
Tiago de Oliveira, J., 1982. The δ-method for obtention of asymptotic distributions; applications. Publ. Inst. Statist. Univ. Paris, 27, 49-70.