Discussion: [R-sig-ME] LMM reduction following marginality taking out "item" before "subject:item" grouping factor
Maarten Jung
2018-11-28 18:17:32 UTC
Dear list,

In a 2 x 2 fully crossed design, in which every participant responds to
every stimulus multiple times in each cell of the factorial design, the
maximal linear mixed model justified by the design (using lme4 syntax)
should be:
y ~ A * B + (1 + A * B | subject) + (1 + A * B | item) + (1 + A * B | subject:item)

Within a model reduction process (be it because the estimation algorithm
doesn't converge, the model is overparameterized, or one wants to balance
Type I error rate and power), I follow the principle of marginality, taking
out higher-order interactions before lower-order terms (i.e. lower-order
interactions and main effects) nested under them, and random slopes before
random intercepts.
However, it turns out that the variance components of the grouping factor
"item" are not significant while those of the grouping factor
"subject:item" are.

Does it make sense to remove the whole grouping factor "item" before taking
out the variance components of the grouping factor "subject:item"?

A reduced model might, for instance, look like this:
y ~ A * B + (1 + A | subject) + (1 | subject:item)

I'm not sure whether this contradicts the principle of marginality and, in
general, whether this is a sound approach.
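
For concreteness, a minimal lme4 sketch of the maximal and reduced fits might
look like this (assuming a data frame d with columns y, A, B, subject, and
item; the object names are just placeholders):

library(lme4)

# Maximal model justified by the design
# (with real data this fit may well fail to converge, which is the issue here)
m_max <- lmer(
  y ~ A * B + (1 + A * B | subject) + (1 + A * B | item) + (1 + A * B | subject:item),
  data = d
)

# Candidate reduced model: "item" dropped entirely, slopes simplified elsewhere
m_red <- lmer(
  y ~ A * B + (1 + A | subject) + (1 | subject:item),
  data = d
)

# Likelihood-ratio comparison; anova() refits both models with ML
anova(m_max, m_red)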

Any help is highly appreciated.

Best regards,
Maarten

Jake Westfall
2018-11-28 18:24:30 UTC
Maarten,

I think it's fine. I can't think of any reason to respect a principle of
marginality for the random variance components. I agree with the feeling
that it's better to remove higher-order interactions before lower-order
interactions and so on, but that's just because of hierarchical ordering
(higher-order interactions tend to explain less variance than lower-order
interactions), not because of any consideration of marginality. If in your
data you find that hierarchical ordering is not quite true and instead the
highest-order interaction is important while a lower-order one is not, then
it makes sense to me to let your model reflect that finding.

Jake

Maarten Jung
2018-11-28 18:52:56 UTC
Hi Jake,

Thanks for your thoughts on this.

I thought that Bates et al. (2015; [1]) were referring to this principle
when they stated:
"[...] we can eliminate variance components from the LMM, following the
standard statistical principle with respect to interactions and main
effects: variance components of higher-order interactions should generally
be taken out of the model before lower-order terms nested under them.
Frequently, in the end, this leads also to the elimination of variance
components of main effects." (p. 6)

Would you agree with me that this is referring to the principle of
marginality? And if so, can you think of a reason why they suggest following
this principle other than "higher-order interactions tend to explain less
variance than lower-order interactions"?

Best regards,
Maarten

[1] https://arxiv.org/pdf/1506.04967v1.pdf
Jake Westfall
2018-11-28 19:03:39 UTC
Maarten,

No, I would not agree that the Bates quote is referring to the principle of
marginality in the sense of e.g.:
https://en.wikipedia.org/wiki/Principle_of_marginality

Bates can chip in if he wants, but as I see it, the quote doesn't hint at
anything like this. It simply says that "variance components of
higher-order interactions should generally be taken out of the model before
lower-order terms nested under them" -- which I agree with. The reason this
is _generally_ true is because hierarchical ordering is _generally_ true.
But it looks like it's not true in your particular case.

> can you think of a reason why they suggest following this principle other
> than "higher-order interactions tend to explain less variance than
> lower-order interactions"?

No.

Jake

Maarten Jung
2018-11-28 20:33:28 UTC
Jake,

Thanks for this insight.
So, regarding this issue, there is no difference between taking out
variance components for main effects before interactions within the same
grouping factor, e.g. reducing (1 + A*B | subject) to (1 + A:B | subject),
and taking out the whole grouping factor "item" (i.e. all variance
components of it) before "subject:item"?

And I would be glad if you could answer this related question:
Do all variances of the random slopes (for interactions and main effects)
of a single grouping factor contribute to the standard errors of the fixed
main effects and interactions in the same way?

Regards,
Maarten
Jake Westfall
2018-11-28 21:23:13 UTC
Maarten,

> So, regarding this issue, there is no difference between taking out
> variance components for main effects before interactions within the same
> grouping factor, e.g. reducing (1 + A*B | subject) to (1 + A:B | subject),
> and taking out the whole grouping factor "item" (i.e. all variance
> components of it) before "subject:item"?

I think that if you have strong evidence that this is the appropriate
random effects structure, then it makes sense to modify your model
accordingly, yes.
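
One way to look for such evidence within lme4 itself, roughly in the spirit
of the Bates et al. paper cited above, is a sketch along these lines
(m_max being the hypothetical maximal fit from earlier in the thread):

library(lme4)

# Estimated variance components and correlations for each grouping factor
VarCorr(m_max)

# Principal components analysis of the random-effects covariance matrices;
# components with near-zero variance hint at an overparameterized structure
summary(rePCA(m_max))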

> Do all variances of the random slopes (for interactions and main effects)
> of a single grouping factor contribute to the standard errors of the fixed
> main effects and interactions in the same way?

No -- in general, with unbalanced datasets and continuous predictors, it's
hard to say much for sure other than "no." But it can be informative to
think of simpler, approximately balanced ANOVA-like designs where it's much
easier to say much more about which variance components enter which
standard errors and how.

I have a Shiny power analysis app, PANGEA (power analysis for general anova
designs) <http://jakewestfall.org/pangea/>, which as a side feature you can
also use to compute the expected mean square equations for arbitrary
balanced designs w/ categorical predictors. Near the bottom of "step 1"
there is a checkbox for "show expected mean square equations." So you can
specify your design, check the box, then hit the "submit design" button to
view a table representing the equations, with rows = mean squares and
columns = variance components. (A little while ago Shiny changed how it
renders tables and now the row labels no longer appear, which is really
annoying, but they are given in the reverse order of the column labels, so
that the diagonal from bottom-left to top-right is where the mean squares
and variance components correspond.) The standard error for a particular
fixed effect is proportional to the (square root of the) corresponding mean
square divided by the total sample size, that is, by the product of all the
factor sample sizes. So examining the mean square for an effect will tell
you which variance components enter its standard error and which sample
sizes they are divided by in the expression. I find this useful for getting
a sense of how the variance components affect the standard errors, even
though the results from this app are only simplified approximations to
those from more realistic and complicated designs.

Jake

Maarten Jung
2018-11-29 13:36:35 UTC
Hi Jake,

>> So, regarding this issue, there is no difference between taking out
>> variance components for main effects before interactions within the same
>> grouping factor, e.g. reducing (1 + A*B | subject) to (1 + A:B | subject),
>> and taking out the whole grouping factor "item" (i.e. all variance
>> components of it) before "subject:item"?
>
> I think that if you have strong evidence that this is the appropriate
> random effects structure, then it makes sense to modify your model
> accordingly, yes.

This makes sense to me.

>> Do all variances of the random slopes (for interactions and main effects)
>> of a single grouping factor contribute to the standard errors of the fixed
>> main effects and interactions in the same way?
>
> No -- in general, with unbalanced datasets and continuous predictors, it's
> hard to say much for sure other than "no." But it can be informative to
> think of simpler, approximately balanced ANOVA-like designs where it's much
> easier to say much more about which variance components enter which
> standard errors and how.
>
> The standard error for a particular fixed effect is proportional to the
> (square root of the) corresponding mean square divided by the total sample
> size, that is, by the product of all the factor sample sizes. So examining
> the mean square for an effect will tell you which variance components enter
> its standard error and which sample sizes they are divided by in the
> expression.

Your app is very useful, too. Just to double-check if I get this right: the
entries in each cell of the table are the numbers by which the variance
components are divided in the equation of the noncentrality parameter. Is
this correct?


Regards,
Maarten

Jake Westfall
2018-11-29 14:54:18 UTC
Maarten,

> Just to double-check if I get this right: the entries in each cell of the
> table are the numbers by which the variance components are divided in the
> equation of the noncentrality parameter. Is this correct?

Almost. They multiply the variance components, not divide them. Essentially
each row gives the weights of a weighted sum of variance components. Then
to translate that to what appears in the denominator of the noncentrality
parameter, the entire thing is divided by the total sample size *and we
remove the variance component for the effect in question* (I forgot to
mention that part in my last email).

For example, consider the simple design with random participants (P) nested
in fixed groups (G). So g is the number of groups, p is the number of
participants per group, and # is the number of replicates. (This is design
2 in the dropdown menu of examples.) The EMS table shows that, for the
between-group effect, the coefficients for the error, participant, and
group variance components are, respectively, 1, #, and #p. So the expected
mean square is var_error + # * var_participants + # * p * var_groups. The
total sample size is pg#, so in the noncentrality parameter expression this
becomes sqrt(var_error / pg# + var_participants / pg). Note that this only
gives most of the denominator of the noncentrality parameter
expression -- it ignores the variance of the contrast weights -- you can
see more in the PANGEA working paper, linked in the app.
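
As a rough numerical illustration of that expression (the numbers are made
up, just to show the arithmetic):

# Hypothetical sizes and variance components for the nested design above
g <- 4        # number of groups (fixed)
p <- 10       # participants per group
n_rep <- 5    # replicates per participant ("#")

var_error <- 1.0
var_participants <- 0.5
var_groups <- 0.2   # dropped from the denominator (it is the effect being tested)

# Expected mean square for the between-group effect:
ems_group <- var_error + n_rep * var_participants + n_rep * p * var_groups

# Core of the noncentrality-parameter denominator: own variance component
# removed, the rest divided by the total sample size p * g * n_rep
denom <- sqrt(var_error / (p * g * n_rep) + var_participants / (p * g))
denom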

Jake

Maarten Jung
2018-12-01 16:58:11 UTC
Hi Jake,

This clears things up for me, thank you.

Regards,
Maarten