Maarten,
Post by Maarten Jung
So, regarding this issue, there is no difference between taking out
variance components for main effects before interactions within the same
grouping factor, e.g. reducing (1 + A*B | subject) to (1 + A:B | subject),
and taking out the whole grouping factor "item" (i.e. all variance
components of it) before "subject:item"?
I think that if you have strong evidence that this is the appropriate
random effects structure, then it makes sense to modify your model
accordingly, yes.
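To make the two reductions concrete, a minimal lme4 sketch (assuming a data
frame `dat` with columns y, A, B, subject, and item; all names are
placeholders, not from this thread):

library(lme4)

# (a) dropping main-effect slopes before the interaction slope,
#     within the same grouping factor:
m_full <- lmer(y ~ A * B + (1 + A * B | subject), data = dat)
m_red  <- lmer(y ~ A * B + (1 + A:B | subject), data = dat)

# (b) dropping a whole grouping factor ("item") before its interaction
#     with another grouping factor ("subject:item"):
m_item   <- lmer(y ~ A * B + (1 | item) + (1 | subject:item), data = dat)
m_noitem <- lmer(y ~ A * B + (1 | subject:item), data = dat)

# nested random-effects structures fit by REML can be compared
# without refitting under ML:
anova(m_red, m_full, refit = FALSE)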
Post by Maarten Jung
Do all variances of the random slopes (for interactions and main effects)
of a single grouping factor contribute to the standard errors of the fixed
main effects and interactions in the same way?
No -- in general, with unbalanced datasets and continuous predictors, it's
hard to say much for sure other than "no." But it can be informative to
think of simpler, approximately balanced ANOVA-like designs, where it is
much easier to say which variance components enter which
standard errors and how.
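Here is a small simulation sketch of that point (my own illustration, not
from the thread; assumes lme4, and all names and numbers are made up): a
by-item random slope for A inflates the standard error of the fixed A
effect, and that part of the SE shrinks with the number of items rather
than the number of trials.

library(lme4)
set.seed(1)
s <- 30; i <- 20                        # subjects and items
dat <- expand.grid(subject = factor(1:s), item = factor(1:i),
                   A = c(-0.5, 0.5))    # A varies within subjects and items
slope_sd <- 1                           # rerun with slope_sd <- 0 to compare
u_item <- rnorm(i, sd = slope_sd)       # by-item slopes for A
dat$y <- 0.4 * dat$A + u_item[as.integer(dat$item)] * dat$A +
  rnorm(nrow(dat))
m <- lmer(y ~ A + (0 + A | item), data = dat)
summary(m)$coefficients                 # SE of A is larger with slope_sd = 1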
I have a Shiny power analysis app, PANGEA (power analysis for general ANOVA
designs) <http://jakewestfall.org/pangea/>, which, as a side feature, you
can also use to compute the expected mean square equations for arbitrary
balanced designs with categorical predictors. Near the bottom of "step 1"
there is a checkbox for "show expected mean square equations." So you can
specify your design, check the box, then hit the "submit design" button to
view a table representing the equations, with rows = mean squares and
columns = variance components. (A little while ago Shiny changed how it
renders tables and now the row labels no longer appear, which is really
annoying, but they are given in the reverse order of the column labels, so
that the diagonal from bottom-left to top-right is where the mean squares
and variance components correspond.) The standard error for a particular
fixed effect is proportional to the (square root of the) corresponding mean
square divided by the total sample size, that is, by the product of all the
factor sample sizes. So examining the mean square for an effect will tell
you which variance components enter its standard error and which sample
sizes they are divided by in the expression. I find this useful for getting
a sense of how the variance components affect the standard errors, even
though the results from this app are only simplified approximations to
those from more realistic and complicated designs.
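As a hand-worked illustration of the kind of equation the app reports (this
is the standard textbook EMS result for a balanced design, not output
copied from PANGEA): with s subjects and i items, both random and fully
crossed with a two-level fixed factor A, and r replicates per
subject-by-item-by-condition cell, the expected mean square for A is

E(MS_A) = sigma^2_error
          + r     * sigma^2_(A:subject:item)
          + r*i   * sigma^2_(A:subject)
          + r*s   * sigma^2_(A:item)
          + r*s*i * theta^2_A

Dividing by the total sample size r*s*i, the squared standard error of the
A effect picks up

sigma^2_error/(r*s*i) + sigma^2_(A:subject:item)/(s*i)
  + sigma^2_(A:subject)/s + sigma^2_(A:item)/i

so, for example, the A-by-item slope variance is divided only by the number
of items: adding subjects or replicates cannot shrink that part of the
standard error.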
Jake
On Wed, Nov 28, 2018 at 2:33 PM Maarten Jung <
Post by Maarten Jung
Jake,
Thanks for this insight.
So, regarding this issue, there is no difference between taking out
variance components for main effects before interactions within the same
grouping factor, e.g. reducing (1 + A*B | subject) to (1 + A:B | subject),
and taking out the whole grouping factor "item" (i.e. all variance
components of it) before "subject:item"?
Do all variances of the random slopes (for interactions and main effects)
of a single grouping factor contribute to the standard errors of the fixed
main effects and interactions in the same way?
Regards,
Maarten
Post by Jake Westfall
Maarten,
No, I would not agree that the Bates quote is referring to the principle of
marginality (https://en.wikipedia.org/wiki/Principle_of_marginality).
Bates can chip in if he wants, but as I see it, the quote doesn't hint at
anything like this. It simply says that "variance components of
higher-order interactions should generally be taken out of the model before
lower-order terms nested under them" -- which I agree with. The reason this
is _generally_ true is because hierarchical ordering is _generally_ true.
But it looks like it's not true in your particular case.
Post by Maarten Jung
can you think of a reason why they suggest following this principle other
than "higher-order interactions tend to explain less variance than
lower-order interactions"?
No.
Jake
On Wed, Nov 28, 2018 at 12:53 PM Maarten Jung <
Hi Jake,
Thanks for your thoughts on this.
I thought that Bates et al. (2015; [1]) were referring to this principle:
"[...] we can eliminate variance components from the LMM, following the
standard statistical principle with respect to interactions and main
effects: variance components of higher-order interactions should generally
be taken out of the model before lower-order terms nested under them.
Frequently, in the end, this leads also to the elimination of variance
components of main effects." (p. 6)
Would you agree with me that this is referring to the principle of
marginality? And if so, can you think of a reason why they suggest
following this principle other than "higher-order interactions tend to
explain less variance than lower-order interactions"?
Best regards,
Maarten
[1] https://arxiv.org/pdf/1506.04967v1.pdf
Post by Jake Westfall
Maarten,
I think it's fine. I can't think of any reason to respect a principle
of marginality for the random variance components. I agree with the feeling
that it's better to remove higher-order interactions before lower-order
interactions and so on, but that's just because of hierarchical ordering
(higher-order interactions tend to explain less variance than lower-order
interactions), not because of any consideration of marginality. If in your
data you find that hierarchical ordering is not quite true and instead the
highest-order interaction is important while a lower-order one is not, then
it makes sense to me to let your model reflect that finding.
Jake
On Wed, Nov 28, 2018 at 12:18 PM Maarten Jung <
Post by Maarten Jung
Dear list,
In a 2 x 2 fully crossed design in which every participant responds to
every stimulus multiple times in each cell of the factorial design, the
maximal linear mixed model justified by the design is (using lme4 syntax):
y ~ A * B + (1 + A * B | subject) + (1 + A * B | item) + (1 + A * B |
subject:item)
During model reduction, be it because the estimation algorithm doesn't
converge, the model is overparameterized, or one wants to balance Type-1
error rate and power, I follow the principle of marginality, taking out
higher-order interactions before the lower-order terms (i.e. lower-order
interactions and main effects) nested under them, and random slopes before
random intercepts.
However, it turns out that the variance components of the grouping factor
"item" are not significant while those of the grouping factor
"subject:item" are.
Does it make sense to remove the whole grouping factor "item" before taking
out the variance components of the grouping factor "subject:item"?
y ~ A * B + (1 + A | subject) + (1 | subject:item)
I'm not sure whether this contradicts the principle of marginality and, in
general, whether this is a sound approach.
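For reference, a sketch of how the maximal model can be fit and its
variance components inspected before removing terms (assuming lmerTest and
a data frame `dat`; all names are placeholders):

library(lmerTest)       # loads lme4 and provides ranova()
m_max <- lmer(y ~ A * B + (1 + A * B | subject) + (1 + A * B | item) +
                (1 + A * B | subject:item), data = dat)
summary(rePCA(m_max))   # lme4::rePCA flags overparameterized structures
ranova(m_max)           # likelihood-ratio tests for random-effect terms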
Any help is highly appreciated.
Best regards,
Maarten