To show that this is a summary describing a large sample, we dig a little deeper and retrieve the posterior samples for all the parameters composing this model.
The data frame consists of the expected 8000 samples for a large number of variables.
The variables starting with b_ are the fixed effects: one for the intercept (= the reference) and, for each treatment, one effect relative to that reference. The variable starting with sd_ is the standard deviation of the random effects, and the ones starting with r_ are the BLUPs (if you don’t know what a BLUP is, reread the text on LMMs). The remaining variables are of no importance at this moment.
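To make the naming conventions concrete, here is an illustrative mock of what such a posterior data frame looks like. The real one would come from the brms fit (for instance via `as_draws_df(m1.brms)`); the column names follow the brms conventions described above, but the values and grouping names here are made up.

```r
# Mock posterior data frame; values are simulated, names are illustrative.
set.seed(1)
n_draws <- 8000
post <- data.frame(
  b_Intercept         = rnorm(n_draws, -1.0, 0.3),      # fixed effect: reference
  b_treatmentT1       = rnorm(n_draws, -1.6, 1.0),      # fixed effect: T1 vs reference
  sd_block__Intercept = abs(rnorm(n_draws, 0.5, 0.1)),  # random-effect SD
  r_block.1.Intercept = rnorm(n_draws, 0, 0.5)          # one of the BLUPs
)
fixed <- post[, grep("^b_", names(post))]  # keep only the fixed effects
nrow(post)
```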
For instance, the sample for treatment T1 looks like this.
The mean is around -1.6 and the bulk of the sample ranges between -2.7 and +1.4.
We can set aside the fixed-effect variables from this data frame and back-transform them.
However, before we can back-transform the samples, we need to convert the effects of T1 to T5 to absolute values by adding the intercept to the effects for coefficients 2 to 6.
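A minimal sketch of these two steps, assuming a logit link: add the intercept draws to each treatment-effect column, then apply the inverse logit (`plogis`). The data frame `fixed` here mocks the fixed-effect columns of the posterior sample.

```r
# Mock fixed-effect draws (illustrative values).
set.seed(1)
fixed <- data.frame(
  b_Intercept   = rnorm(1000, -1.0, 0.3),
  b_treatmentT1 = rnorm(1000, -1.6, 1.0),
  b_treatmentT2 = rnorm(1000, -0.5, 1.0)
)
absolute <- fixed
absolute[, -1] <- fixed[, -1] + fixed[, 1]           # effects -> absolute values
probs <- as.data.frame(plogis(as.matrix(absolute)))  # logit scale -> probabilities
```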
Finally, we summarize these samples, for instance by calculating the mean, the median and the 0.025 and 0.975 quantiles, followed by some formatting for readability.
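The summary itself is a one-liner per column; a sketch with toy draws standing in for the back-transformed columns:

```r
# Toy back-transformed draws (illustrative values).
set.seed(1)
probs <- data.frame(reference = plogis(rnorm(1000, -1.0, 0.3)),
                    T1        = plogis(rnorm(1000, -2.6, 0.8)))
# Mean, median and 95% credible interval per column.
summ <- t(apply(probs, 2, function(x)
  c(mean = mean(x), median = median(x), quantile(x, c(0.025, 0.975)))))
round(summ, 3)
```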
This approach immediately gives us all we need: estimates of the mean and median and credible intervals.
Compare this to preds.m1.brms as it was obtained via predict.
[To be complete: check also fitted and conditional_effects.]
We can follow the same approach to get any contrast. For instance, if we need the difference between T1 and T2, and between T4 and T5, we can simply do the following:
According to the same strategy we can also calculate p-value-like probabilities, for instance the probability that T1 leads to more cell infection than T2.
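Both the contrast and the probability come straight from the draws; a sketch with toy draws (with the real output, T1 and T2 would be columns of the back-transformed sample):

```r
# Toy posterior draws on the probability scale (illustrative values).
set.seed(1)
T1 <- plogis(rnorm(5000, -2.6, 0.8))
T2 <- plogis(rnorm(5000, -1.5, 0.8))
d12 <- T1 - T2                        # posterior sample of the contrast
quantile(d12, c(0.025, 0.5, 0.975))   # credible interval for T1 - T2
p12 <- mean(T1 > T2)                  # P(T1 gives more infection than T2)
```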
The approach of exploiting the posterior sample is handy if you have such a sample. The two other modelling strategies don’t work with posterior samples, yet we can produce a similar sample that can be used in a similar manner.
4.1.2 Contrasts with inla
INLA allows us to draw samples from the posterior distribution that was derived during modelling, via the function inla.posterior.sample. Below we ask for 5000 samples.
m1.inla.postsamples <- inla.posterior.sample(5000, result = m1.inla)
The object produced by this function is rather complex, so we wrote a function to help extract the fixed effects.
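A hedged sketch of such an extractor: each element of the list returned by `inla.posterior.sample()` holds a one-column matrix `latent` whose row names include the fixed effects (e.g. `"(Intercept):1"`). We mock that structure here so the sketch is self-contained; the names are illustrative.

```r
# Mock one element of the inla.posterior.sample() output.
mock_sample <- function() {
  latent <- matrix(rnorm(3), ncol = 1,
                   dimnames = list(c("Predictor:1", "(Intercept):1",
                                     "treatmentT1:1"), NULL))
  list(latent = latent)
}
postsamples <- replicate(5, mock_sample(), simplify = FALSE)

# Extract the named fixed-effect rows from every sample.
extract_fixed <- function(samples, effects) {
  t(vapply(samples,
           function(s) s$latent[effects, 1],
           numeric(length(effects))))
}
fx <- extract_fixed(postsamples, c("(Intercept):1", "treatmentT1:1"))
dim(fx)  # one row per sample, one column per fixed effect
```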
Again, a very similar sampling-based approach is recommended for the glmer model. In this case we do not have a posterior distribution; this role is taken up by bootstrap samples. See the [companion text](https://hw-appliedlinmixmodinr.netlify.app/extractresult#bootstrapping-for-calculating-confidence-intervals) for more explanation on bootstrapping lmer models.
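A hedged sketch of the bootstrap route, using `bootMer` from lme4 and lme4's built-in cbpp data as a stand-in for the real model (the actual call would use your fitted glmer object instead of `gm`):

```r
library(lme4)

# Stand-in fit on the built-in cbpp data.
gm <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
            data = cbpp, family = binomial)
boot_fx <- bootMer(gm, FUN = fixef, nsim = 20)  # use many more in practice
boot_samples <- as.data.frame(boot_fx$t)        # one row per bootstrap sample
# These rows play the role of the posterior sample and can be
# back-transformed and summarised in exactly the same way.
apply(boot_samples, 2, quantile, probs = c(0.025, 0.975), na.rm = TRUE)
```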
treatment exposure product
1 reference normal standard
2 T1 prolonged standard
3 T2 normal improvementA
4 T3 prolonged improvementA
5 T4 normal improvementB
6 T5 prolonged improvementB
Alternative models can be fitted to see whether changes are caused by exposure, by product, or by an interaction between them. There is the full model with both main factors and their interaction (see m2 below). One simpler alternative is the model without the interaction but with the two main factors (m3). If m3 fits as well as m2, it is unlikely that there is an interaction, or that it is important compared to the main effects.
We can follow the same reasoning to see whether both main factors contribute or only one of them (models m4 and m5).
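The candidate mean structures can be sketched as follows; the response name and the random-effect grouping factor are illustrative (reuse whatever m1 used):

```r
# Candidate mean structures (names are illustrative assumptions).
f2 <- infection ~ exposure * product + (1 | batch)  # full model (m2)
f3 <- infection ~ exposure + product + (1 | batch)  # no interaction (m3)
f4 <- infection ~ product + (1 | batch)             # product only (m4)
f5 <- infection ~ exposure + (1 | batch)            # exposure only (m5)
```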
We have seen that brms provided the most reliable results, but it takes time to compute. If you are not in a hurry, you can try the code below. The models can be compared via leave-one-out cross-validation with loo.
If you do this, you will see that loo ranks the models from most preferred to least preferred: m4 > m3 > m1 > m2 > m5. This indicates that product is the only factor affecting infection in this experiment.
So here the faster inla comes in handy. Note that waic = TRUE has been added to the control option list. This adds the WAIC, or Watanabe-Akaike information criterion, to the output. This criterion penalizes complexity: when two models fit similarly, the simpler one is preferred. The smaller the WAIC, the better.