Cheat sheets for R

9 September 2017

Useful “cheat sheets” for R and RStudio

  • Data Import
  • Data Transformation
  • Sparklyr
  • R Markdown
  • RStudio IDE
  • Shiny
  • Data Visualization
  • Package Development

Click me!

Quote

16 September 2016

Good science is the one which disobeys

xkcd: Linear regression

30 August 2016

[Image linear_regression: the xkcd "Linear Regression" comic]

The 95% confidence interval suggests Rexthor’s dog could also be a cat, or possibly a teapot.

Big data…

29 June 2016

The risks of Big Data – or why I am not worried about brain tumours

http://understandinguncertainty.org/risks-big-data-%E2%80%93-or-why-i-am-not-worried-about-brain-tumours

Link: http://nyti.ms/1XsJPHp

A good and sound idea, or a way to avoid scientific criticism?

Fools…

15 June 2015

“Fools make researches and wise men exploit them.” (H. G. Wells)

About correlation

4 June 2015

An “old” but refreshing paper about the “correlation fallacy”: Anscombe’s Quartet

Anscombe presents four bivariate data sets with the same number of observations, the same means, the same variances, the same correlation and the same regression coefficients, but…

Look at it: http://www.sjsu.edu/faculty/gerstman/StatPrimer/anscombe1973.pdf
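As a side note, R ships with this data set as the built-in data frame anscombe (columns x1 to x4 and y1 to y4), so the point is easy to reproduce yourself. A minimal base-R sketch (it reproduces the paper's summary statistics, not its exact figures):

    # Anscombe's quartet is built into base R as the data frame "anscombe"
    # (columns x1..x4 and y1..y4). The summary statistics are nearly identical
    # across the four sets, yet the scatterplots look completely different.
    data(anscombe)

    for (i in 1:4) {
      x <- anscombe[[paste0("x", i)]]
      y <- anscombe[[paste0("y", i)]]
      cat(sprintf("Set %d: mean(x)=%.2f var(x)=%.2f mean(y)=%.2f var(y)=%.2f cor=%.3f\n",
                  i, mean(x), var(x), mean(y), var(y), cor(x, y)))
      print(coef(lm(y ~ x)))   # intercept close to 3.0 and slope close to 0.5 in all four sets
    }

    # Plotting the four sets side by side makes the fallacy obvious
    op <- par(mfrow = c(2, 2))
    for (i in 1:4) {
      plot(anscombe[[paste0("x", i)]], anscombe[[paste0("y", i)]],
           xlab = paste0("x", i), ylab = paste0("y", i), main = paste("Set", i))
    }
    par(op)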

One of my colleagues is planning to submit a paper to the International Journal of Eating Disorders, and the journal provides very useful and sound guidelines on statistical thinking (indeed, their recommendations go well beyond mere statistical reporting guidelines).

For example:

Misinterpretation of Nonsignificant Hypothesis Tests
A common scientific error is the misinterpretation of a nonsignificant hypothesis test as evidence of no effect. We are taught never to accept a null hypothesis. One can fail to reject a hypothesis for many reasons, other than no effect. Among these, a study can be underpowered, have unexpectedly large variance, fail to recruit the desired number of participants, have a model with two correlated predictors such that in the presence of the other, neither has any significant prediction of the response, and many other reasons. This situation is analogous to the verdict in a criminal trial in the United States: Guilty or Not Guilty. Not Guilty does not mean innocent. One can be found not guilty because there was insufficient evidence, some evidence was ruled inadmissible by the judge, evidence became contaminated, the prosecutor poorly organized or presented what would have been sufficient evidence, etc. Absence of Evidence is not Evidence of Absence.

Guideline: Never interpret a nonsignificant effect as evidence that no effect exists.
Reference: Ioannidis J. “Why most published research findings are false.” PLoS Med 2005; 2: e124. doi:10.1371/journal.pmed.0020124. PMID 16060722.

You can find the document right here.

Recently, during a conference, I was asked about the validity of my statistical analysis, since the three groups I was comparing had (highly) unequal sample sizes. It was an interesting question, and to gain some insight I decided to start with a simple simulation study: comparing two means with a t-test. Here are the results of these simulations.

First, I wanted to assess the Type I error (the incorrect rejection of a true null hypothesis). I simulated 5000 replications of two independent, normally distributed samples, each with mean 0 and variance 1. I varied the sample sizes in two different ways. The first way consisted in keeping the size of the smallest sample (n1) fixed while changing the ratio n2/n1, with different simulations using different values of n1.

The second way consisted in keeping the total sample size (n1 + n2) fixed while changing the ratio n2/(n1 + n2), with different simulations using different total sample sizes.

For each configuration, I computed the proportion of the 5000 replications in which the null hypothesis was rejected at the 0.05 level.
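For the record, a setup like this takes only a few lines of R. The sketch below is a reconstruction of the procedure described above (not the original code); the particular values of n1, of the total sample size and of the ratios are only illustrative:

    # Type I error: both samples come from N(0, 1), so every rejection at the
    # 0.05 level is a false positive. t.test() runs Welch's test by default;
    # add var.equal = TRUE for the classical two-sample t-test.
    set.seed(42)

    type1_rate <- function(n1, n2, n_rep = 5000, alpha = 0.05) {
      rejected <- replicate(n_rep, {
        x <- rnorm(n1, mean = 0, sd = 1)
        y <- rnorm(n2, mean = 0, sd = 1)
        t.test(x, y)$p.value < alpha
      })
      mean(rejected)   # proportion of (false) rejections over the replications
    }

    # Scenario 1: smallest sample size n1 fixed, ratio n2/n1 varies (illustrative values)
    sapply(c(1, 2, 5, 10, 50), function(r) type1_rate(n1 = 20, n2 = 20 * r))

    # Scenario 2: total sample size fixed, ratio n2/(n1 + n2) varies (illustrative values)
    total <- 200
    sapply(c(0.5, 0.7, 0.9, 0.95), function(p) {
      n2 <- round(total * p)
      type1_rate(n1 = total - n2, n2 = n2)
    })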

The results are presented here:

[Figure Type_I_Error: estimated Type I error rate for the two scenarios]

You can see that, as far as the Type I error goes, unequal sample sizes have no impact. Good news!

Now, let’s look at the Type II error (the failure to reject a false null hypothesis), or equivalently at the power, which is its complement.

I ran the exact same scenarios as for the assessment of the Type I error, the only difference being that the first sample had a mean of 0 and the second a mean of 0.5.

Here are the results:

[Figure Type_II_Error: estimated power for the two scenarios]

You can see from the graphs that the power depends on the total sample size, as expected, but also, and very heavily, on the size of the smallest sample. Indeed, even with a total sample size of 1000, which is very large for such a mean difference (0.5), a ratio of 0.01 (i.e. n1 = 10 and n2 = 990) makes the power drop to about 0.3!
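As a quick sanity check of that last figure, the same kind of simulation can be run with a mean shift of 0.5 (again a reconstruction under the assumptions described above, with unit variance in both groups):

    # Power with a true mean difference of 0.5 and unit variance in both groups:
    # compare a heavily unbalanced design with a balanced one of the same total size.
    set.seed(42)

    sim_power <- function(n1, n2, delta = 0.5, n_rep = 5000, alpha = 0.05) {
      mean(replicate(n_rep, {
        x <- rnorm(n1, mean = 0, sd = 1)
        y <- rnorm(n2, mean = delta, sd = 1)
        t.test(x, y)$p.value < alpha
      }))
    }

    sim_power(n1 = 10,  n2 = 990)   # roughly 0.3: the small group limits the information
    sim_power(n1 = 500, n2 = 500)   # close to 1 for the same total of 1000 observations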

Conclusions: as far as power is concerned, equality of sample sizes matters. Therefore, a power analysis would be required whenever nothing significant comes out of an analysis with unbalanced groups.

Scientific method: Statistical errors

P values, the ‘gold standard’ of statistical validity, are not as reliable as many scientists assume.

An insightful article by Regina Nuzzo in Nature (2014; 506: 150–152, doi:10.1038/506150a).