
Statistical Theory of Extremes

Annex-5: On the Quadrants Method

José Tiago de Fonseca Oliveira 1

1.Academia das Ciências de Lisboa (Lisbon Academy of Sciences), Lisbon, Portugal.

23-06-2017
28-12-2016


1. On the Quadrants Method

Consider a sample of \(\mathrm n\) i.i.d. random pairs \(\mathrm {\{(x_{1}, y_{1}),\dots, (x_n, y_n) \}}\) whose distribution function is \(\mathrm {F ( x,y \vert \theta ) }\), where \(\mathrm { \theta }\) is a (single) dependence parameter and the margins \(\mathrm { F ( x, +\infty \vert \theta) }\) and \(\mathrm { F ( +\infty, y \vert \theta) }\) are parameter-free.

Let us denote by \(\mathrm { \xi }\) and \(\mathrm { \eta }\) the medians of the margins \(\mathrm {( F ( \xi , +\infty \vert \theta ) = F ( +\infty, \eta \vert \theta ) =1/2 ) }\); by \(\mathrm { N_{1}, N_{2}, N_{3}, N_{4} \ ( N_{1}+N_{2}+N_{3}+N_{4}=n ) }\) the sample counts \(\mathrm { N_{1}=\# ( x_{i}> \xi , y_{i}> \eta ) }\), \(\mathrm { N_{2}=\# ( x_{i} \leq \xi, y_{i}> \eta) }\), \(\mathrm { N_{3}=\# ( x_{i} \leq \xi , y_{i} \leq \eta ) }\), and \(\mathrm { N_{4}=\# ( x_{i}> \xi , y_{i} \leq \eta ) }\); and by \(\mathrm { p_{1} ( \theta ) ,p_{2} ( \theta ) , p_{3} ( \theta ) , p_{4} ( \theta ) }\) the corresponding probabilities \(\mathrm { p_{1} ( \theta ) =Prob ( X> \xi ,Y> \eta ) =p ( \theta ) }\), \(\mathrm { p_{2} ( \theta ) =Prob ( X \leq \xi ,Y> \eta ) =1/2-p ( \theta ) }\), \(\mathrm { p_{3} ( \theta ) =F ( \xi , \eta \vert \theta ) =p( \theta ) }\), and \(\mathrm { p_{4} ( \theta ) =Prob ( X> \xi , Y \leq \eta ) =1/2-p ( \theta ) }\).
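As a minimal sketch, the quadrant counts above can be computed directly from a sample. Standard Gumbel margins are assumed here purely for illustration, so that \(\mathrm {\xi = \eta = -\log \log 2}\) as in the extreme-value case treated below; the function name is not from the text.

```python
import numpy as np

def quadrant_counts(x, y):
    """Count the sample points falling in each median quadrant.

    Assumes standard Gumbel margins for illustration, so the (known,
    parameter-free) margin medians are xi = eta = -log(log 2); for
    other margins substitute the appropriate medians.
    """
    xi = eta = -np.log(np.log(2.0))
    n1 = np.sum((x > xi) & (y > eta))    # upper-right: x > xi, y > eta
    n2 = np.sum((x <= xi) & (y > eta))   # upper-left
    n3 = np.sum((x <= xi) & (y <= eta))  # lower-left
    n4 = np.sum((x > xi) & (y <= eta))   # lower-right
    return n1, n2, n3, n4
```

By construction the four counts always sum to \(\mathrm n\).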

The maximum likelihood estimator of \(\mathrm { \theta }\) based on \(\mathrm { (N_{1}, N_{2}, N_{3},N_{4} )}\) is given by the equation \(\mathrm {p (\theta ^{\ast\ast} ) =\frac{ N_{1}+ N_{3} }{2n} }\); writing \(\mathrm {N=N_{1}+N_{3} }\) we have \(\mathrm {p ( \theta ^{\ast\ast} ) =N/2n }\).
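The inversion of \(\mathrm {p ( \theta ^{\ast\ast} ) =N/2n }\) is model-specific. As a purely illustrative example outside the extreme-value setting (the model choice is an assumption, not from the text), for a bivariate normal pair with correlation \(\mathrm {\theta}\) the classical median-quadrant probability is \(\mathrm {p ( \theta ) =1/4+\arcsin ( \theta ) /2 \pi }\), which inverts in closed form:

```python
import math

def theta_hat_normal(N, n):
    """Quadrants estimator under an assumed bivariate normal model.

    p(theta) = 1/4 + arcsin(theta)/(2*pi) is the upper-right median
    quadrant probability, so p(theta**) = N/(2n) inverts to
    theta** = sin(2*pi*(N/(2n) - 1/4)).
    """
    phat = N / (2.0 * n)
    # clamp to the attainable range of p(theta) for theta in [-1, 1]
    phat = min(max(phat, 0.0), 0.5)
    return math.sin(2.0 * math.pi * (phat - 0.25))
```

When \(\mathrm {N=n/2}\) the estimate is \(\mathrm {0}\) (independence), and it increases with \(\mathrm {N}\), as expected.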

As \(\mathrm {\sqrt{n}\,\frac{N/n-2p ( \theta ) }{\sqrt{2p ( \theta ) ( 1-2p ( \theta )  ) }} }\) is asymptotically standard normal we know, by the \(\mathrm {\delta }\)-method (see Tiago de Oliveira, 1982), that \(\mathrm {\sqrt{2n}\,\frac{p^{'} ( \theta ) }{\sqrt{p ( \theta ) ( 1-2p ( \theta ) ) }} ( \theta ^{\ast\ast}- \theta ) }\) is also asymptotically standard normal, as is \(\mathrm {\frac{2n^{3/2} p^{'} ( \theta ^{\ast\ast} ) ( \theta ^{\ast\ast}- \theta ) }{\sqrt{N ( n-N ) }} }\).
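The last pivot yields the usable standard error \(\mathrm {\sqrt{N ( n-N ) }/ ( 2n^{3/2} \vert p^{'} ( \theta ^{\ast\ast} ) \vert ) }\) once the model supplies \(\mathrm {p^{'}}\). A sketch of the resulting asymptotic confidence interval (the function name and interface are illustrative; the caller supplies the model-specific derivative):

```python
import math

def theta_ci(theta_hat, dp_dtheta, N, n, z=1.96):
    """Asymptotic confidence interval for theta from the quadrants estimator.

    dp_dtheta is p'(theta) evaluated at theta** (model-specific).
    The standard error comes from the asymptotically standard normal
    pivot 2 n^{3/2} p'(theta**) (theta** - theta) / sqrt(N (n - N)).
    """
    se = math.sqrt(N * (n - N)) / (2.0 * n ** 1.5 * abs(dp_dtheta))
    return theta_hat - z * se, theta_hat + z * se
```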

If we consider \(\mathrm {F ( x, y \vert \theta ) = \Lambda ( x, y \vert \theta ) =exp \{ - ( e^{-x}+e^{-y} )\, k ( y-x \vert \theta ) \} }\) we have \(\mathrm {\xi = \eta =-\log \log 2 }\) and we get \(\mathrm {p ( \theta ) =\Lambda ( \xi , \eta \vert \theta ) =exp \{ - ( e^{- \xi }+e^{- \eta } )\, k ( \eta - \xi \vert \theta ) \} =exp ( -2 \log 2 \cdot k ( 0 \vert \theta ) )}\). Thus the estimator \(\mathrm {\theta ^{\ast\ast}}\) is given by the equation \(\mathrm {k ( 0 \vert \theta ^{\ast\ast} ) =\frac{ \log ( 2n/N ) }{\log 4}}\). But, for any (one-parameter) model, we have \(\mathrm {\bar {k}=\sup_{ \theta } k ( 0 \vert \theta )}\) and \(\mathrm {\underline{k} =\inf_{ \theta } k( 0 \vert \theta )}\); suppose that \(\mathrm {\bar k=k ( 0 \vert \bar{\theta} )}\) and \(\mathrm {\underline k=k ( 0 \vert \underline {\theta } )}\) have unique solutions \(\mathrm {\bar{\theta}}\) and \(\mathrm {\underline{\theta}}\): then we shall truncate the estimator and take \(\mathrm {\theta ^{\ast\ast}= \bar \theta}\) if \(\mathrm {\log ( 2n/N ) / \log 4>\bar k}\) and \(\mathrm {\theta ^{\ast\ast}= \underline \theta }\) if \(\mathrm {\log ( 2n/N ) / \log 4<\underline k}\).
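As a concrete instance, assume Gumbel's mixed model, for which \(\mathrm {k ( 0 \vert \theta ) =1- \theta /4}\) with \(\mathrm {\theta \in [ 0,1 ] }\), so \(\mathrm {\bar k=1}\) (at \(\mathrm {\theta =0}\)) and \(\mathrm {\underline k=3/4}\) (at \(\mathrm {\theta =1}\)); the estimating equation then inverts in closed form. A sketch under that assumed model:

```python
import math

def theta_hat_mixed(N, n):
    """Quadrants estimator for an assumed mixed model.

    For the mixed model k(0|theta) = 1 - theta/4, theta in [0, 1],
    so k(0|theta**) = log(2n/N)/log 4 inverts to
    theta** = 4*(1 - log(2n/N)/log 4), truncated to [0, 1] when the
    statistic falls outside [3/4, 1].
    """
    t = math.log(2.0 * n / N) / math.log(4.0)  # = log(2n/N)/log 4
    theta = 4.0 * (1.0 - t)
    return min(max(theta, 0.0), 1.0)  # truncation at the parameter bounds
```

Under independence \(\mathrm {N \approx n/2}\), giving a statistic near \(\mathrm {1=\bar k}\) and hence \(\mathrm {\theta ^{\ast\ast} \approx 0}\).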

A test of independence vs. dependence (i.e., \(\mathrm {p ( 0 ) =1/4 }\) vs. \(\mathrm {p ( \theta ) >1/4 }\)) is given by the rejection region \(\mathrm {\sqrt{n}\, ( 2N/n-1 ) \geq \lambda _{ \alpha }}\), where \(\mathrm {\lambda _{ \alpha }= N^{-1} ( 1- \alpha )}\), \(\mathrm {N(\cdot)}\) being the standard normal distribution function. This test is independent of the model, being based only on \(\mathrm { N_{1}, N_{2}, N_{3},N_{4} }\): essentially it can confirm dependence.
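The test is immediate to implement: under independence \(\mathrm {N}\) is binomial \(\mathrm {( n,1/2 ) }\), so \(\mathrm {\sqrt{n}\, ( 2N/n-1 ) }\) is asymptotically standard normal. A minimal sketch (the function name is illustrative):

```python
import math

def independence_test(N, n, alpha=0.05):
    """One-sided quadrant test of independence: p(0) = 1/4 vs p > 1/4.

    Under independence N = N1 + N3 is Binomial(n, 1/2), so the
    statistic sqrt(n)*(2*N/n - 1) is asymptotically standard normal;
    large values reject independence.
    """
    stat = math.sqrt(n) * (2.0 * N / n - 1.0)
    # upper-tail p-value via the complementary error function:
    # 1 - Phi(z) = erfc(z / sqrt(2)) / 2
    p_value = 0.5 * math.erfc(stat / math.sqrt(2.0))
    return stat, p_value, p_value <= alpha
```

For example, with \(\mathrm {n=1000}\) and \(\mathrm {N=600}\) the statistic is about \(\mathrm {6.32}\), decisively rejecting independence.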

We can, evidently, extend the method to other quantiles of the margins, or to margins with estimated parameters.

This method is due to Gumbel and Mustafi (1967), but presented here with modifications.

References


Tiago de Oliveira, J., 1982. A definition of estimator efficiency in the k-parameter case. Ann. Inst. Statist. Math., 34 A, 411-421.