Discussion:
[R-sig-ME] Error means squares in GLMER and LMER
Kornbrot, Diana
2018-11-21 16:50:09 UTC
How canine obtain mean squares for errors used to compute F values in GLMER and LMER?
Would also like to be able to obtain marginal means and standard errors in addition to the coefficients.
Must be possible but couldn’t find how
Is it possible to directly specify error terms rather than declaring which slopes and intercepts are random?
All help gratefully received
Best
Diana
_____________________________________
Professor Diana Kornbrot
Mobile
+44 (0) 7403 18 16 12
Work
University of Hertfordshire
College Lane, Hatfield, Hertfordshire AL10 9AB, UK
+44 (0) 170 728 4626
***@herts.ac.uk<mailto:***@herts.ac.uk>
http://dianakornbrot.wordpress.com/
http://go.herts.ac.uk/Diana_Kornbrot
skype: kornbrotme
Home
19 Elmhurst Avenue
London N2 0LT, UK
+44 (0) 208 444 2081
------------------------------------------------------------




Rolf Turner
2018-11-22 00:21:14 UTC
Post by Kornbrot, Diana
How canine obtain mean squares for errors used to compute F
values in GLMER and LMER?
<SNIP>

Were you trying to be funny, or did you get bitten by some damned
predictive text program?

cheers,

Rolf Turner
--
Technical Editor ANZJS
Department of Statistics
University of Auckland
Phone: +64-9-373-7599 ext. 88276
Rolf Turner
2018-11-22 10:58:48 UTC
Post by Kornbrot, Diana
My supposedly helpful predictive text programme has a warped sense of humour, unlike the R documentation, which is tedious, verbose, always sends me on wild goose chases and never seems to tell me what I need to know.
Please, please
How do I get those mean square error terms from lmer and glmer?
I think your accusations against the R documentation are unfair.

Be that as it may, let me respond a little bit to your substantive
question. I am no expert, so take everything I say with a grain of
salt. Perhaps someone from the R-sig-ME list, who is more knowledgeable
than I, will give you better advice.

My conjecture is that you are having trouble getting hold of mean
squared error terms because there *aren't* any. Mixed models are not
based on sums of squares; they are *likelihood* based. Inference is
based on likelihood ratio tests, not on F-tests. The covariance matrix
for the coefficient estimates is formed as the inverse of the Fisher
Information matrix. It does not have the simple form that it has in the
context of ("un-mixed"; ordinary garden-variety) linear models.

Consequently you will need to readjust your thinking. The learning
curve for mixed models is steep. I have barely got my own toes onto the
bottom of the lowermost slopes.

I hope that I haven't misunderstood either your question or the
underlying structure of mixed models.

cheers,

Rolf
roee maor
2018-11-22 11:39:20 UTC
Dear Diana,
If indeed what you're looking for is what Rolf mentioned, you might find
Nakagawa & Schielzeth (2013) helpful.
It's titled "A general and simple method for obtaining R2 from generalized
linear mixed-effects models". There is no dispute about the method's
generality, but simple is a relative term...
Here's the link:
https://besjournals.onlinelibrary.wiley.com/doi/epdf/10.1111/j.2041-210x.2012.00261.x
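
If you want those R2 values in R, here is a minimal sketch assuming the MuMIn package, whose r.squaredGLMM() implements the Nakagawa & Schielzeth estimators (the model and variable names below are placeholders):

library(lme4)
library(MuMIn)   # provides r.squaredGLMM()
fit <- glmer(cbind(successes, failures) ~ treatment + (1 | subject),
             family = binomial, data = mydata)
r.squaredGLMM(fit)   # marginal (fixed effects only) and conditional (fixed + random) R2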

Hope this helps,
--
Roi Maor
PhD candidate
School of Zoology, Tel Aviv University
Centre for Biodiversity and Environment Research, UCL

Kornbrot, Diana
2018-11-22 14:40:12 UTC
Dear Roi
Thanks, a very useful reference
BUT
It assumes a variance components model, i.e. correlations between random effects are assumed to be zero [not mentioned in the paper]



On 22 Nov 2018, at 11:39, roee maor wrote:
<SNIP>

Kornbrot, Diana
2018-11-22 17:09:56 UTC
So I am comparing standard ANOVA on raw frequencies (or equivalently probabilities) with GLMM for binomial proportions with both logit and probit links.
ALL analyses have been completed in SPSS using MIXED for: (response = identity, link = normal; response = proportion = freq/Nmax, link = probit; response = proportion, link = logit).
I want to show how to do identical analyses in R, using lmer for raw freq and glmer for proportions.
So I want the SAME results from R and SPSS (and a diamond necklace for Christmas, celebrated as an EU citizen in the UK - I am a demanding woman).
Results are NOT quite the same.
I am checking by using the raw probabilities as the response with lmer before moving on to glmer for proportions.
Check 1: in SPSS, for raw freq or probability, REPEATED gives the same result as MIXED (response = identity, link = normal). Where there are differences, they are in the REPEATED WITHIN comparisons, not MULTIVARIATE.
Check 2: compare R lmer with SPSS MIXED.
If the repeated (within-subject) factors are w1, w2, etc. and the between-subject factors are b1, b2, etc., I use:
# random slope for each within-subject factor, varying by subject
result <- lmer(freq ~ b1*b2*w1*w2 + (w1|subject) + (w2|subject), data = test)
anova(result)
F and df from R and SPSS do not always match, even when they do match on sums of squares.
I am trying to work out WHY there is a mismatch.
Thought that knowing what is in the DENOMINATOR of the F values - which I perhaps wrongly termed error sums of squares - might help.

I want F for usual reasons: to test significance and estimate effect size.
I also want all my packages to give me the SAME F and df2 and to UNDERSTAND what is happening

Sorry this is so long, but hope it is now clearer

best

Diana

and as an extra treat I would like marginal means from an object of type lmer
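
For the later glmer stage mentioned above, I have in mind something like the following sketch (freq successes out of Nmax trials; the random-effects structure is simplified to a random intercept here purely for illustration):

library(lme4)
# logit link, response given as successes and failures per cell
glmm_logit  <- glmer(cbind(freq, Nmax - freq) ~ b1*b2*w1*w2 + (1 | subject),
                     family = binomial(link = "logit"), data = test)
# probit link, otherwise the same model
glmm_probit <- glmer(cbind(freq, Nmax - freq) ~ b1*b2*w1*w2 + (1 | subject),
                     family = binomial(link = "probit"), data = test)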

Ben Bolker
2018-11-22 19:44:27 UTC
For marginal means, use the emmeans package.
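
For example (a minimal sketch, using the lmer fit and factor names from your earlier post):

library(emmeans)
emm <- emmeans(result, ~ b1 * b2)   # estimated marginal means and standard errors
summary(emm)
pairs(emm)                          # pairwise comparisons, if you want them
# for a glmer fit, add type = "response" to get means back on the probability scale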

If you use the lmerTest package, you can get Satterthwaite or
Kenward-Roger df; alternatively, you can use lme (from the nlme package), or
<https://github.com/bbolker/mixedmodels-misc/blob/master/R/calcDenDF.R>,
to get df via a simple "parameter-counting" exercise.
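
A sketch of the lmerTest route, again using your model from the earlier post:

library(lmerTest)   # masks lme4::lmer so that anova() reports denominator df
result <- lmer(freq ~ b1*b2*w1*w2 + (w1 | subject) + (w2 | subject), data = test)
anova(result)                        # F tests with Satterthwaite df (the default)
anova(result, ddf = "Kenward-Roger") # Kenward-Roger df (needs the pbkrtest package)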

The problem is that the "F statistics" are quite poorly defined for
GLMMs. Can you show us the contrasting results you're getting for SPSS
and glmer? Do you know how SPSS is computing the F statistics? (This
<https://www.ibm.com/support/knowledgecenter/en/SS3RA7_15.0.0/com.ibm.spss.modeler.help/idh_glmm_build_options.htm>
makes it seem like it might be using Satterthwaite approximations ...)
On Thu, Nov 22, 2018 at 12:09 PM Kornbrot, Diana
Post by Kornbrot, Diana
<SNIP>