You want to test the hypothesis (H0) that the mean rent in Munich is €16.28 (as the Münchner Merkur once claimed). To that end, you draw a sample of size n = 36. Assume a population SD of €3 (the population being all rental apartments in Munich). Let alpha be 5%. The mean of your sample is €16.79. Take as H1 the hypothesis that the true mean rent is higher.

Questions

  1. What is the z-value of the sample result?
  2. How probable is this result (or a more extreme one) if the H0 holds? In other words: what is the p-value (2 decimals)?
  3. Do you reject the H0?

Solution

We summarize the given information:

mue = 16.28  # mu according to H0
xquer = 16.79
sdpop = 3
n = 36

First, we compute the SE, which we will need in the next step for the z-value:

se = sdpop / sqrt(n)
se
## [1] 0.5

We compute the z-value:

z = (xquer - mue) / se
z
## [1] 1.02

The z-value is 1.02.

And we compute the p-value for this z:

p = 1 - pnorm(z)

… rounded to 2 decimals:

p = round(p, 2)
p
## [1] 0.15

The p-value is 0.15.

If p < 5% (.05), we reject the H0; so, as a last step, we check whether this condition is met.

p < .05
## [1] FALSE

No, we do not reject the H0; we retain it.
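
As a cross-check (not part of the original exercise), the same decision can be reached by comparing z to the critical value of this one-sided test at alpha = 5%:

z_crit = qnorm(0.95)  # critical z-value for a one-sided test at alpha = .05
z_crit
## [1] 1.644854

z < z_crit  # z = 1.02 does not reach the critical value, so we retain the H0
## [1] TRUE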

The R package dplyr has some attractive features; some say this package revolutionized their workflow. At any rate, I like it a lot, and I think it is very helpful.

In this post, I would like to share some useful (I hope) ideas (“tricks”) on filter, one function of dplyr. This function does what the name suggests: it filters rows (i.e., observations such as persons). The addressed rows are kept; the rest are dropped. Note that a data frame (or tibble) is always returned.

Starter example

Load packages:

library(tidyverse)  # get da whole shebang!
## Loading tidyverse: ggplot2
## Loading tidyverse: tibble
## Loading tidyverse: tidyr
## Loading tidyverse: readr
## Loading tidyverse: purrr
## Conflicts with tidy packages ----------------------------------------------
## filter(): dplyr, stats
## lag():    dplyr, stats
# don't forget to have the package(s) installed upfront.

An easy use case would be:

mtcars %>% 
  filter(cyl >= 8)
##     mpg cyl  disp  hp drat    wt  qsec vs am gear carb
## 1  18.7   8 360.0 175 3.15 3.440 17.02  0  0    3    2
## 2  14.3   8 360.0 245 3.21 3.570 15.84  0  0    3    4
## 3  16.4   8 275.8 180 3.07 4.070 17.40  0  0    3    3
## 4  17.3   8 275.8 180 3.07 3.730 17.60  0  0    3    3
## 5  15.2   8 275.8 180 3.07 3.780 18.00  0  0    3    3
## 6  10.4   8 472.0 205 2.93 5.250 17.98  0  0    3    4
## 7  10.4   8 460.0 215 3.00 5.424 17.82  0  0    3    4
## 8  14.7   8 440.0 230 3.23 5.345 17.42  0  0    3    4
## 9  15.5   8 318.0 150 2.76 3.520 16.87  0  0    3    2
## 10 15.2   8 304.0 150 3.15 3.435 17.30  0  0    3    2
## 11 13.3   8 350.0 245 3.73 3.840 15.41  0  0    3    4
## 12 19.2   8 400.0 175 3.08 3.845 17.05  0  0    3    2
## 13 15.8   8 351.0 264 4.22 3.170 14.50  0  1    5    4
## 14 15.0   8 301.0 335 3.54 3.570 14.60  0  1    5    8

We see there are 14 cars with 8 cylinders.

Typical comparison operators to filter rows include (see the quick sketch below):

  • == equality
  • != inequality
  • < or > less than / greater than
  • <= or >= less than or equal / greater than or equal
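
For instance, here is a quick sketch using the inequality operator (any of the operators above can be swapped in):

mtcars %>% 
  filter(carb != 4) %>% 
  nrow
## [1] 22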

Multiple logical comparisons can be combined. Just add ’em up using commas; a comma amounts to a logical AND:

mtcars %>% 
  filter(cyl == 8, hp > 250)
##    mpg cyl disp  hp drat   wt qsec vs am gear carb
## 1 15.8   8  351 264 4.22 3.17 14.5  0  1    5    4
## 2 15.0   8  301 335 3.54 3.57 14.6  0  1    5    8

AND combinations can also be written the standard way, using &:

mtcars %>% 
  filter(cyl == 6 & hp < 260)
##    mpg cyl  disp  hp drat    wt  qsec vs am gear carb
## 1 21.0   6 160.0 110 3.90 2.620 16.46  0  1    4    4
## 2 21.0   6 160.0 110 3.90 2.875 17.02  0  1    4    4
## 3 21.4   6 258.0 110 3.08 3.215 19.44  1  0    3    1
## 4 18.1   6 225.0 105 2.76 3.460 20.22  1  0    3    1
## 5 19.2   6 167.6 123 3.92 3.440 18.30  1  0    4    4
## 6 17.8   6 167.6 123 3.92 3.440 18.90  1  0    4    4
## 7 19.7   6 145.0 175 3.62 2.770 15.50  0  1    5    6

Before we continue, let’s transform the row names (where the cars’ names are stored) into a proper column, so that we can address the car names the usual way:

mtcars <- mtcars %>% 
    rownames_to_column

Picking values from a list

One particularly helpful idiom is to say “I want to keep any of the following items”. In R, the %in% operator comes to help. See:

mtcars %>% 
  filter(cyl %in% c(4, 6))
##           rowname  mpg cyl  disp  hp drat    wt  qsec vs am gear carb
## 1       Mazda RX4 21.0   6 160.0 110 3.90 2.620 16.46  0  1    4    4
## 2   Mazda RX4 Wag 21.0   6 160.0 110 3.90 2.875 17.02  0  1    4    4
## 3      Datsun 710 22.8   4 108.0  93 3.85 2.320 18.61  1  1    4    1
## 4  Hornet 4 Drive 21.4   6 258.0 110 3.08 3.215 19.44  1  0    3    1
## 5         Valiant 18.1   6 225.0 105 2.76 3.460 20.22  1  0    3    1
## 6       Merc 240D 24.4   4 146.7  62 3.69 3.190 20.00  1  0    4    2
## 7        Merc 230 22.8   4 140.8  95 3.92 3.150 22.90  1  0    4    2
## 8        Merc 280 19.2   6 167.6 123 3.92 3.440 18.30  1  0    4    4
## 9       Merc 280C 17.8   6 167.6 123 3.92 3.440 18.90  1  0    4    4
## 10       Fiat 128 32.4   4  78.7  66 4.08 2.200 19.47  1  1    4    1
## 11    Honda Civic 30.4   4  75.7  52 4.93 1.615 18.52  1  1    4    2
## 12 Toyota Corolla 33.9   4  71.1  65 4.22 1.835 19.90  1  1    4    1
## 13  Toyota Corona 21.5   4 120.1  97 3.70 2.465 20.01  1  0    3    1
## 14      Fiat X1-9 27.3   4  79.0  66 4.08 1.935 18.90  1  1    4    1
## 15  Porsche 914-2 26.0   4 120.3  91 4.43 2.140 16.70  0  1    5    2
## 16   Lotus Europa 30.4   4  95.1 113 3.77 1.513 16.90  1  1    5    2
## 17   Ferrari Dino 19.7   6 145.0 175 3.62 2.770 15.50  0  1    5    6
## 18     Volvo 142E 21.4   4 121.0 109 4.11 2.780 18.60  1  1    4    2

Partial matching

Suppose you would like to filter all Mercs; the Mercs include “Merc 240D”, “Merc 280C”, and others. So we cannot filter for “Merc” as an exact search string. We need to tell R: “hey, if ‘Merc’ is part of this string, keep it; otherwise drop it”.

For more flexible string operations, we can make use of the package stringr (again, by Hadley Wickham).

library(stringr)
mtcars %>% 
  filter(str_detect(rowname, "Merc"))
##       rowname  mpg cyl  disp  hp drat   wt qsec vs am gear carb
## 1   Merc 240D 24.4   4 146.7  62 3.69 3.19 20.0  1  0    4    2
## 2    Merc 230 22.8   4 140.8  95 3.92 3.15 22.9  1  0    4    2
## 3    Merc 280 19.2   6 167.6 123 3.92 3.44 18.3  1  0    4    4
## 4   Merc 280C 17.8   6 167.6 123 3.92 3.44 18.9  1  0    4    4
## 5  Merc 450SE 16.4   8 275.8 180 3.07 4.07 17.4  0  0    3    3
## 6  Merc 450SL 17.3   8 275.8 180 3.07 3.73 17.6  0  0    3    3
## 7 Merc 450SLC 15.2   8 275.8 180 3.07 3.78 18.0  0  0    3    3

Of course, we can now go wild, making use of the whole string-manipulation magic called regular expressions (regex). This tool is powerful indeed, but it takes some time to get used to. For example, let’s filter all cars whose names begin with “L”:

mtcars %>% 
  filter(str_detect(rowname, "^L"))
##               rowname  mpg cyl  disp  hp drat    wt  qsec vs am gear carb
## 1 Lincoln Continental 10.4   8 460.0 215 3.00 5.424 17.82  0  0    3    4
## 2        Lotus Europa 30.4   4  95.1 113 3.77 1.513 16.90  1  1    5    2

The circumflex (^) means “the string starts with”; in this example, “the string starts with ‘L’”. To get values ending with, say, “L”, we use $ in regex:

mtcars %>% 
  filter(str_detect(rowname, "L$"))
##          rowname  mpg cyl  disp  hp drat   wt qsec vs am gear carb
## 1     Merc 450SL 17.3   8 275.8 180 3.07 3.73 17.6  0  0    3    3
## 2 Ford Pantera L 15.8   8 351.0 264 4.22 3.17 14.5  0  1    5    4

Another use case could be that we want to pick rows where the names contain digits.

mtcars %>% 
  filter(str_detect(rowname, "\\d"))
##           rowname  mpg cyl  disp  hp drat    wt  qsec vs am gear carb
## 1       Mazda RX4 21.0   6 160.0 110 3.90 2.620 16.46  0  1    4    4
## 2   Mazda RX4 Wag 21.0   6 160.0 110 3.90 2.875 17.02  0  1    4    4
## 3      Datsun 710 22.8   4 108.0  93 3.85 2.320 18.61  1  1    4    1
## 4  Hornet 4 Drive 21.4   6 258.0 110 3.08 3.215 19.44  1  0    3    1
## 5      Duster 360 14.3   8 360.0 245 3.21 3.570 15.84  0  0    3    4
## 6       Merc 240D 24.4   4 146.7  62 3.69 3.190 20.00  1  0    4    2
## 7        Merc 230 22.8   4 140.8  95 3.92 3.150 22.90  1  0    4    2
## 8        Merc 280 19.2   6 167.6 123 3.92 3.440 18.30  1  0    4    4
## 9       Merc 280C 17.8   6 167.6 123 3.92 3.440 18.90  1  0    4    4
## 10     Merc 450SE 16.4   8 275.8 180 3.07 4.070 17.40  0  0    3    3
## 11     Merc 450SL 17.3   8 275.8 180 3.07 3.730 17.60  0  0    3    3
## 12    Merc 450SLC 15.2   8 275.8 180 3.07 3.780 18.00  0  0    3    3
## 13       Fiat 128 32.4   4  78.7  66 4.08 2.200 19.47  1  1    4    1
## 14     Camaro Z28 13.3   8 350.0 245 3.73 3.840 15.41  0  0    3    4
## 15      Fiat X1-9 27.3   4  79.0  66 4.08 1.935 18.90  1  1    4    1
## 16  Porsche 914-2 26.0   4 120.3  91 4.43 2.140 16.70  0  1    5    2
## 17     Volvo 142E 21.4   4 121.0 109 4.11 2.780 18.60  1  1    4    2

In regex, \d means “digit”. As the backslash needs to be escaped (by typing an extra backslash), we type two backslashes, and get what we want.

Similarly, if we want all values except those with digits, we could say:

mtcars %>% 
  filter(!str_detect(rowname, "\\d"))
##                rowname  mpg cyl  disp  hp drat    wt  qsec vs am gear carb
## 1    Hornet Sportabout 18.7   8 360.0 175 3.15 3.440 17.02  0  0    3    2
## 2              Valiant 18.1   6 225.0 105 2.76 3.460 20.22  1  0    3    1
## 3   Cadillac Fleetwood 10.4   8 472.0 205 2.93 5.250 17.98  0  0    3    4
## 4  Lincoln Continental 10.4   8 460.0 215 3.00 5.424 17.82  0  0    3    4
## 5    Chrysler Imperial 14.7   8 440.0 230 3.23 5.345 17.42  0  0    3    4
## 6          Honda Civic 30.4   4  75.7  52 4.93 1.615 18.52  1  1    4    2
## 7       Toyota Corolla 33.9   4  71.1  65 4.22 1.835 19.90  1  1    4    1
## 8        Toyota Corona 21.5   4 120.1  97 3.70 2.465 20.01  1  0    3    1
## 9     Dodge Challenger 15.5   8 318.0 150 2.76 3.520 16.87  0  0    3    2
## 10         AMC Javelin 15.2   8 304.0 150 3.15 3.435 17.30  0  0    3    2
## 11    Pontiac Firebird 19.2   8 400.0 175 3.08 3.845 17.05  0  0    3    2
## 12        Lotus Europa 30.4   4  95.1 113 3.77 1.513 16.90  1  1    5    2
## 13      Ford Pantera L 15.8   8 351.0 264 4.22 3.170 14.50  0  1    5    4
## 14        Ferrari Dino 19.7   6 145.0 175 3.62 2.770 15.50  0  1    5    6
## 15       Maserati Bora 15.0   8 301.0 335 3.54 3.570 14.60  0  1    5    8

As ! is used for logical not, we can invert our str_detect expression above this way.

Finally, say we want to filter all Mercs and all Toyotas. As there are different Mercs and different Toyotas in the data set, we need to tell R something like “take all Mercs and all Toyotas you can find, and drop the rest”.

What does not work is this:

mtcars %>% 
  filter(rowname %in% c("Merc", "Toyota"))
##  [1] rowname mpg     cyl     disp    hp      drat    wt      qsec   
##  [9] vs      am      gear    carb   
## <0 rows> (or 0-length row.names)

The code above does not work, as the %in% operator does not do partial matching; it expects complete matches.
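
To illustrate with a quick sketch: with complete names, %in% does work.

mtcars %>% 
  filter(rowname %in% c("Merc 230", "Toyota Corona"))
# returns exactly these two rows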

Again, regex can help:

mtcars %>% 
  filter(str_detect(rowname, "Merc|Toy"))
##          rowname  mpg cyl  disp  hp drat    wt  qsec vs am gear carb
## 1      Merc 240D 24.4   4 146.7  62 3.69 3.190 20.00  1  0    4    2
## 2       Merc 230 22.8   4 140.8  95 3.92 3.150 22.90  1  0    4    2
## 3       Merc 280 19.2   6 167.6 123 3.92 3.440 18.30  1  0    4    4
## 4      Merc 280C 17.8   6 167.6 123 3.92 3.440 18.90  1  0    4    4
## 5     Merc 450SE 16.4   8 275.8 180 3.07 4.070 17.40  0  0    3    3
## 6     Merc 450SL 17.3   8 275.8 180 3.07 3.730 17.60  0  0    3    3
## 7    Merc 450SLC 15.2   8 275.8 180 3.07 3.780 18.00  0  0    3    3
## 8 Toyota Corolla 33.9   4  71.1  65 4.22 1.835 19.90  1  1    4    1
## 9  Toyota Corona 21.5   4 120.1  97 3.70 2.465 20.01  1  0    3    1

Note that the | (vertical bar) means “or” (both in R and in regex).

The same result can be achieved in a more usual R way:

mtcars %>% 
  filter(str_detect(rowname, "Merc") | str_detect(rowname, "Toy"))
##          rowname  mpg cyl  disp  hp drat    wt  qsec vs am gear carb
## 1      Merc 240D 24.4   4 146.7  62 3.69 3.190 20.00  1  0    4    2
## 2       Merc 230 22.8   4 140.8  95 3.92 3.150 22.90  1  0    4    2
## 3       Merc 280 19.2   6 167.6 123 3.92 3.440 18.30  1  0    4    4
## 4      Merc 280C 17.8   6 167.6 123 3.92 3.440 18.90  1  0    4    4
## 5     Merc 450SE 16.4   8 275.8 180 3.07 4.070 17.40  0  0    3    3
## 6     Merc 450SL 17.3   8 275.8 180 3.07 3.730 17.60  0  0    3    3
## 7    Merc 450SLC 15.2   8 275.8 180 3.07 3.780 18.00  0  0    3    3
## 8 Toyota Corolla 33.9   4  71.1  65 4.22 1.835 19.90  1  1    4    1
## 9  Toyota Corona 21.5   4 120.1  97 3.70 2.465 20.01  1  0    3    1

Filtering NAs

For the sake of illustration, let’s introduce some NAs into mtcars. (As sample() is random, your NA positions will differ unless you set a seed.)

mtcars$mpg[sample(32, 3)] <- NA
mtcars$cyl[sample(32, 3)] <- NA
mtcars$hp[sample(32, 3)] <- NA
mtcars$wt[sample(32, 3)] <- NA

First, we keep all rows with no NA in mpg:

mtcars %>% 
  filter(!is.na(mpg))
##                rowname  mpg cyl  disp  hp drat    wt  qsec vs am gear carb
## 1            Mazda RX4 21.0   6 160.0 110 3.90 2.620 16.46  0  1    4    4
## 2        Mazda RX4 Wag 21.0   6 160.0 110 3.90 2.875 17.02  0  1    4    4
## 3           Datsun 710 22.8  NA 108.0  93 3.85 2.320 18.61  1  1    4    1
## 4       Hornet 4 Drive 21.4   6 258.0 110 3.08 3.215 19.44  1  0    3    1
## 5    Hornet Sportabout 18.7   8 360.0 175 3.15 3.440 17.02  0  0    3    2
## 6              Valiant 18.1   6 225.0 105 2.76 3.460 20.22  1  0    3    1
## 7           Duster 360 14.3   8 360.0 245 3.21    NA 15.84  0  0    3    4
## 8            Merc 240D 24.4  NA 146.7  62 3.69 3.190 20.00  1  0    4    2
## 9             Merc 280 19.2   6 167.6 123 3.92 3.440 18.30  1  0    4    4
## 10           Merc 280C 17.8   6 167.6 123 3.92 3.440 18.90  1  0    4    4
## 11          Merc 450SE 16.4   8 275.8 180 3.07 4.070 17.40  0  0    3    3
## 12          Merc 450SL 17.3   8 275.8 180 3.07 3.730 17.60  0  0    3    3
## 13  Cadillac Fleetwood 10.4   8 472.0 205 2.93 5.250 17.98  0  0    3    4
## 14 Lincoln Continental 10.4   8 460.0 215 3.00 5.424 17.82  0  0    3    4
## 15   Chrysler Imperial 14.7   8 440.0 230 3.23 5.345 17.42  0  0    3    4
## 16            Fiat 128 32.4   4  78.7  66 4.08 2.200 19.47  1  1    4    1
## 17      Toyota Corolla 33.9   4  71.1  65 4.22 1.835 19.90  1  1    4    1
## 18       Toyota Corona 21.5   4 120.1  97 3.70 2.465 20.01  1  0    3    1
## 19    Dodge Challenger 15.5   8 318.0 150 2.76    NA 16.87  0  0    3    2
## 20         AMC Javelin 15.2   8 304.0 150 3.15 3.435 17.30  0  0    3    2
## 21          Camaro Z28 13.3   8 350.0 245 3.73 3.840 15.41  0  0    3    4
## 22    Pontiac Firebird 19.2  NA 400.0 175 3.08 3.845 17.05  0  0    3    2
## 23           Fiat X1-9 27.3   4  79.0  66 4.08 1.935 18.90  1  1    4    1
## 24       Porsche 914-2 26.0   4 120.3  91 4.43 2.140 16.70  0  1    5    2
## 25        Lotus Europa 30.4   4  95.1 113 3.77 1.513 16.90  1  1    5    2
## 26      Ford Pantera L 15.8   8 351.0  NA 4.22 3.170 14.50  0  1    5    4
## 27        Ferrari Dino 19.7   6 145.0 175 3.62 2.770 15.50  0  1    5    6
## 28       Maserati Bora 15.0   8 301.0  NA 3.54 3.570 14.60  0  1    5    8
## 29          Volvo 142E 21.4   4 121.0 109 4.11 2.780 18.60  1  1    4    2

Easy. Next, we keep only complete rows. Wait, there’s a shortcut for that:

mtcars %>% 
  na.omit %>% 
  nrow
## [1] 22
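
As an aside, the same can be sketched with base R’s complete.cases (the dot stands for the data frame piped in):

mtcars %>% 
  filter(complete.cases(.)) %>% 
  nrow
## [1] 22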

But what if we only care about NAs in mpg and hp? Say we want to drop any row with an NA in either of these two columns. Here’s a way:

mtcars %>% 
  filter(!is.na(mpg) & !is.na(hp)) 
##                rowname  mpg cyl  disp  hp drat    wt  qsec vs am gear carb
## 1            Mazda RX4 21.0   6 160.0 110 3.90 2.620 16.46  0  1    4    4
## 2        Mazda RX4 Wag 21.0   6 160.0 110 3.90 2.875 17.02  0  1    4    4
## 3           Datsun 710 22.8  NA 108.0  93 3.85 2.320 18.61  1  1    4    1
## 4       Hornet 4 Drive 21.4   6 258.0 110 3.08 3.215 19.44  1  0    3    1
## 5    Hornet Sportabout 18.7   8 360.0 175 3.15 3.440 17.02  0  0    3    2
## 6              Valiant 18.1   6 225.0 105 2.76 3.460 20.22  1  0    3    1
## 7           Duster 360 14.3   8 360.0 245 3.21    NA 15.84  0  0    3    4
## 8            Merc 240D 24.4  NA 146.7  62 3.69 3.190 20.00  1  0    4    2
## 9             Merc 280 19.2   6 167.6 123 3.92 3.440 18.30  1  0    4    4
## 10           Merc 280C 17.8   6 167.6 123 3.92 3.440 18.90  1  0    4    4
## 11          Merc 450SE 16.4   8 275.8 180 3.07 4.070 17.40  0  0    3    3
## 12          Merc 450SL 17.3   8 275.8 180 3.07 3.730 17.60  0  0    3    3
## 13  Cadillac Fleetwood 10.4   8 472.0 205 2.93 5.250 17.98  0  0    3    4
## 14 Lincoln Continental 10.4   8 460.0 215 3.00 5.424 17.82  0  0    3    4
## 15   Chrysler Imperial 14.7   8 440.0 230 3.23 5.345 17.42  0  0    3    4
## 16            Fiat 128 32.4   4  78.7  66 4.08 2.200 19.47  1  1    4    1
## 17      Toyota Corolla 33.9   4  71.1  65 4.22 1.835 19.90  1  1    4    1
## 18       Toyota Corona 21.5   4 120.1  97 3.70 2.465 20.01  1  0    3    1
## 19    Dodge Challenger 15.5   8 318.0 150 2.76    NA 16.87  0  0    3    2
## 20         AMC Javelin 15.2   8 304.0 150 3.15 3.435 17.30  0  0    3    2
## 21          Camaro Z28 13.3   8 350.0 245 3.73 3.840 15.41  0  0    3    4
## 22    Pontiac Firebird 19.2  NA 400.0 175 3.08 3.845 17.05  0  0    3    2
## 23           Fiat X1-9 27.3   4  79.0  66 4.08 1.935 18.90  1  1    4    1
## 24       Porsche 914-2 26.0   4 120.3  91 4.43 2.140 16.70  0  1    5    2
## 25        Lotus Europa 30.4   4  95.1 113 3.77 1.513 16.90  1  1    5    2
## 26        Ferrari Dino 19.7   6 145.0 175 3.62 2.770 15.50  0  1    5    6
## 27          Volvo 142E 21.4   4 121.0 109 4.11 2.780 18.60  1  1    4    2
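
As another aside, tidyr offers a shortcut for this column-wise case (a sketch; tidyr is loaded with the tidyverse):

mtcars %>% 
  tidyr::drop_na(mpg, hp)
# same rows as the filter() call above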

Finally, assume we want to inspect each row with at least one NA in mpg or in hp.

mtcars %>% 
  filter((is.na(mpg) | is.na(hp))) 
##          rowname  mpg cyl  disp  hp drat    wt  qsec vs am gear carb
## 1       Merc 230   NA   4 140.8  NA 3.92 3.150 22.90  1  0    4    2
## 2    Merc 450SLC   NA   8 275.8 180 3.07    NA 18.00  0  0    3    3
## 3    Honda Civic   NA   4  75.7  52 4.93 1.615 18.52  1  1    4    2
## 4 Ford Pantera L 15.8   8 351.0  NA 4.22 3.170 14.50  0  1    5    4
## 5  Maserati Bora 15.0   8 301.0  NA 3.54 3.570 14.60  0  1    5    8

Happy filtering!

Recently, Andrew Gelman (@StatModeling on Twitter) published a post with the title “‘Dear Major Textbook Publisher’: A Rant”.

In essence, he discussed what a good stats intro textbook should look like, and he complained about the low quality of many textbooks out there.

As I too am in the business of (read: guilty of) coming up with stats curricula for my students (applied courses for business-type students, mostly), I discuss some thoughts for “stats curriculum developers” (like myself).

Of course, the GAISE report provides an authoritative overview of what is considered helpful (by the ASA and by many others). My compilation builds on that but adds/drops some points.

Principles of a stats curriculum

  • Statistical thinking above anything else. What is statistics all about? Get a bunch of data, see the essence (some summary, often). But wait, there is variability; stuff comes in different shades of grey. How certain (or rather uncertain) are we about some hypothesis? Ah, that’s why probability comes into play. My great-grandpa smoked, loved his beer, and see, turned 94 recently, and you keep telling me that would not be enough to “demonstrate”… Ah, OK… What does statistics say about causality? (Not much, to my mind.) This commitment implies that procedural know-how needs to be downgraded (we cannot have our cake and eat it).

  • Problem solving. Give students some dataset and some question, and let them investigate. Discuss, refine, and correct their actions. Explain better ways afterwards (I hear this is called inductive learning).

  • Real data, real problems. Don’t stay with urns and coin tosses; they are fine as a starter, but then go on to real data and real problems. For example, I used extraversion and number of Facebook friends as a running example, based on the data students gave to this survey. So they analyzed their own data. If you want to make use of the data: you are welcome, here it is (free as in free).

  • A lot of ‘guide at your side’ (activation), rather than ‘sage on stage’. However, I have found that surprisingly many students appear overwhelmed if the “activation dose” gets too strong. So I try to balance the time when I speak, when students work on their own, and when they work in groups. Honestly, what’s the advantage of listening to me in person compared to watching a YouTube video? OK, at times I may respond (YouTube not so much), but the real benefit is joint problem solving. So let’s do that.

  • Technology (R). Computers are already ubiquitous and are penetrating everyday life even more. So we as teachers should not dream of the “good old days” when we solved triple integrals with formulas as wide as my arm span (I never did). The future is likely to be dominated by computers when it comes to our working styles and tools. So let’s use them, and let’s show students how to code. A bit, at least (many are frightened initially, though). R is a great environment for that purpose.

  • Ongoing self-assessment. Cognitive and educational psychology points out that frequent assessment helps a lot in learning [citation missing]. So let’s give the students ongoing feedback. For example, I use a Google form with the quiz feature for that purpose. Minimal preparation on my side for tomorrow’s class, because it is all pre-prepared.

  • Focus on/start with intuition. Don’t start by throwing some strange (\LaTeX-typeset, of course) equation in the students’ faces (unless you are teaching in some math-heavy field, but not in business). Start by explaining the essence in simple words and by giving an intuition. For example, variance can be depicted as (something like) “average deviation sticks”; correlation as “average deviation rectangles” (see this post). When the essence is understood, refine. Now come your long equations. This post speaks of a refinement process from “sorta-true” to very true. Quite true.

  • Be nice. I think there is no point in being overly austere; surely you must be joking, Mr. Feynman. Even stats classes can be fun… Personally, I find Andy Field’s tone in his textbook quite refreshing.

Aims of a stats curriculum

If your class belongs to a business-type field, don’t expect that all students will now turn to science as a career. More realistically, they will have to face some stats questions in their working lives. Two things may happen then.

First, they hire someone who really knows what to do (at least, someone who thinks so). Then your students of today need to be prepared to speak to that expert. They need enough understanding to get the gist of the expert’s strategy. They need to be able to ask some good questions. But they clearly do not need to understand many details, if any.

Second scenario: the future ego of your today-student will do some stats problem solving him- or herself. I think even our applied business students should have some working knowledge, maybe of regression plus some Random Forests, based on some general ideas plus visualization and data wrangling.

In sum, critical understanding comes first, and some partial, locally constrained, actionable know-how comes second.

Content

Some thought blobs.

  • Descriptive stats can be paired with some cool visualization. I like qplot from ggplot2 because it is an easy start, yet it allows combining different steps, which fosters creative problem solving (I hope).

  • Data wrangling. dplyr is great because with ~5 functions you can rule them all; see here for an intro. That is, a couple of easy functions, such as filter, summarise, or count, do the bulk of typical data preparation tasks (yes, these functions pretty much do what their names suggest).

  • The wee p. I tend(ed) to focus on this point in class, not because I love it, but because it is still the way papers are built, so students need to understand it. And it is not easy to comprehend. If they do not understand it, how can the new generation overcome the problems of the past? I try to lay bare the problems of the p-value (mainly that it is not the answer we need).

  • Bayes. It is (probably) too strong to state that without understanding Bayes, one cannot understand the p-value. But understanding posterior probabilities is a relief, as the p-value can then be accepted for what it is: some conditional probability (of some data given some hypothesis). Bayes puts p into context. But Bayes, I fear (not being deeply versed in Bayes myself), can necessitate a lot of groundwork and time-consuming fundamentals if you really want some working knowledge. That’s why, for the moment at least, I take a shortcut and present only the basic concept, alongside, maybe, some off-the-shelf software such as JASP.

  • Basic statistical learning ideas. Overfitting is an essential concept; it should be grasped. There are a number of similarly important points (particularly resampling, cross-validation, and the bias-variance trade-off); the book An Introduction to Statistical Learning is of great help here.

  • Algorithmic methods, in the sense of Breiman’s two cultures. Particularly nice candidates here are tree-based methods, including the notorious Random Forests: conceptually easy, yet with high predictive performance in some situations.

  • Text mining. 90% of everything is… something; I mean, 90% of all information is unstructured, mostly image and text, some say. So let’s look at that. It is fun, not yet much looked at, and of some relevance for business students. Julia Silge’s and David Robinson’s book on text mining appears great (and vast, for that matter).

  • Repro crisis. Let’s not be silent about the problems we face in science. True, there are great challenges, but hey, what we need is not less but more science. A good dose of open science can and should be plugged in here.

Bye-bye PowerPoint

In my experience, it can be difficult to bypass PowerPoint, even when you are preparing a curriculum. In my university, several teachers may be (or are) encouraged to make use of your material, and they may (or will) complain loudly if you do not use PowerPoint (Keynote as a surrogate is not a solution).

But for the next curriculum, my plan is to use RMarkdown to write a real script (lecture notes), not only slides. That will be lots of work. But then it will be easy to come up with lean slides without much text, because the script will hold the details. Then slides can be used for what they are intended: conveying one idea per slide, with no or little text, consisting primarily of graphics or schemas. The slides can be prepared using RMarkdown, too. RMarkdown is not particularly fit for that purpose, though; you will not have the full features of PPT, nor its ease. But you will not need them. Simple slides can very well be made using RMarkdown. Once there is a real “mini-book”, aka script, no detailed slides are needed any more.

Git and friends can then be used for version control, collaboration, and so on…

However, a lot of colleagues will complain, I fear. Let’s see. I hope to convince them that this is a far better way. Bye-bye, PowerPoint.

Conclusions

These are some quick thoughts in progress, written more to make up my own mind and to stimulate your thinking. I believe everything written here, but I am sure I missed a lot and made a number of mistakes. Let me know your thoughts!

Acknowledgements

I learned a lot, both statistically and didactically, from my colleague Karsten Luebke, whose thoughts also went into this post.

Teaching or learning stats can be a challenging endeavor. In my experience, starting with concrete (as opposed to abstract) examples helps many a learner. What also helps (me, at least) is visualizing.

As p-values are still part and parcel of probably any given stats curriculum, here is a convenient function to simulate p-values and to plot them.

“Simulating p-values” amounts to drawing many samples from a given, specified population (e.g., µ = 100, s = 15, normally distributed). We could go out ourselves and draw samples (e.g., testing the IQ of strangers). As the computer can do that too, and does not moan about repetitive tasks, let’s leave that job to the machine.

Simulation can be easier to comprehend than working with theoretical distributions. It is more tangible to say “we are drawing many samples” than to speak about some theoretical distribution. That’s why simulation can be of didactic value.
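
For example, here is a minimal sketch of that idea, assuming a normal population with µ = 100, s = 15, samples of size n = 30, and an observed sample mean of 104:

set.seed(42)  # for reproducibility
sample_means <- replicate(1000, mean(rnorm(n = 30, mean = 100, sd = 15)))
mean(sample_means > 104)  # share of sample means above 104, i.e., a simulated "greater than" p-value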

So, here you can source an R function which will do the simulation plus plotting job for you.

source("https://sebastiansauer.github.io/Rcode/clt_1.R")

Note that dplyr and ggplot2 need to be installed.
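
If they are missing, installing them is a one-time step:

install.packages(c("dplyr", "ggplot2"))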

Use the function simu_p(). It can be run without parameters:

s1 <- simu_p()

[Plot: histogram of simulated sample means, as produced by simu_p()]

The plot shows the sampling distribution, the 5% of sample means with the highest values, the p-value, and the mean of the sample actually drawn (dashed line).

However, one is free to adapt the parameters, e.g., experimenting with smaller or larger sample sizes, or with a different sample mean:

s2 <- simu_p(n_samples = 1000, mean = 100, sd = 15, n = 5, distribution = "normal", sample_mean = 104)

[Plot: simulated sampling distribution with adapted parameters]

Note that two types of distribution to draw from are implemented, normal and uniform:

s3 <- simu_p(distribution = "uniform", sample_mean = .6)

[Plot: simulated sampling distribution drawn from a uniform population]

Initially, I wrote the function to demonstrate the central limit theorem. That’s why two different distributions (normal and uniform) are implemented.

Note that, for didactic reasons, the p-value presented here is of the “greater than” type only.

The object returned (e.g., s2) contains the sample means, their percentile ranks, and indicators of whether a given sample mean falls in the top 5% or exceeds the observed sample mean, respectively:

head(s1)
##     samples   perc_num max_5perc greater_than_sample
## 1 101.73581 0.74374374         0                   0
## 2 102.74671 0.84484484         0                   0
## 3  93.98155 0.01501502         0                   0
## 4  98.10223 0.25825826         0                   0
## 5 100.50964 0.56456456         0                   0
## 6 100.17396 0.52152152         0                   0

Here is the source code of the function simu_p():

simu_p <- function(n_samples = 1000, mean = 100, sd = 15, n = 30, distribution = "normal", sample_mean = 107){

  library(ggplot2)
  library(dplyr)



  if (distribution == "normal") {
    samples <- replicate(n_samples, mean(rnorm(n = n, mean = mean, sd = sd)))

    df <- data.frame(samples = samples)

    df %>%
      mutate(perc_num = percent_rank(samples),
             max_5perc = ifelse(perc_num > .95, 1, 0),  # top 5%; perc_num is a rank in [0, 1]
             greater_than_sample = ifelse(samples > sample_mean, 1, 0)) -> df

    p_value <- round(mean(df$greater_than_sample), 3)

    p <- ggplot(df) +
      aes(x = samples) +
      geom_histogram() +
      labs(title = paste("Histogram of ", n_samples, " samples", "\n from a normal distribution", sep = ""),
           caption = paste("mean-pop=", mean, ", sd-pop=",sd, sep = "", ", mean in sample=", sample_mean),
           x = "sample means") +
      geom_histogram(data = dplyr::filter(df, perc_num > .95), fill = "red") +
      theme(plot.title = element_text(hjust = 0.5)) +
      geom_vline(xintercept = sample_mean, linetype = "dashed", color = "grey40") +
      ggplot2::annotate("text", x = Inf, y = Inf, label = paste("p =",p_value), hjust = 1, vjust = 1)
    print(p)
  }


  if (distribution == "uniform") {
    samples <- replicate(n_samples, mean(runif(n = n, min=0, max=1)))
    df <- data.frame(samples = samples)

    if (sample_mean > 1) sample_mean <- .99

    df %>%
      mutate(perc_num = percent_rank(samples),
             max_5perc = ifelse(perc_num > .95, 1, 0),  # top 5%; perc_num is a rank in [0, 1]
             greater_than_sample = ifelse(samples > sample_mean, 1, 0)) -> df

    p_value <- round(mean(df$greater_than_sample), 3)

    p <- ggplot(df) +
      aes(x = samples) +
      geom_histogram() +
      labs(title = paste("Histogram of ", n_samples, " samples", "\n from a uniform distribution", sep = ""),
           caption = paste("sample mean =", sample_mean, ", min-pop = 0, max-pop = 1"),
           x = "sample means") +
      geom_histogram(data = dplyr::filter(df, perc_num > .95), fill = "red") +
      theme(plot.title = element_text(hjust = 0.5)) +
      geom_vline(xintercept = sample_mean, linetype = "dashed", color = "grey40") +
      ggplot2::annotate("text", x = Inf, y = Inf, label = paste("p =",p_value), hjust = 1, vjust = 1)
    print(p)

  }
return(df)

}

One idea of problem solving is, or should be, I think, that one should tackle problems of high, but not too high, complexity. That sounds trivial; a cooler way to put it would be “as hard as possible, as easy as necessary”, which is basically the same thing.

In software development, Rstats included, a similar principle applies. Sounds theoretical, I admit. So here are some lines of code that bit me recently:

obs <-  c(1,2,3)
pred <- c(1,2,4)

monster <- 1 - (sum((obs - pred)^2))/(sum((obs - mean(obs))^2))
monster
## [1] 0.5

The important line is, of course:

 1 - (sum((obs - pred)^2))/(sum((obs - mean(obs))^2))

The bug I incorporated was something like this:

 1 - ((obs - pred)^2)/((obs - mean(obs))^2)

Friendly bugs yield a function that dies trying to do its job. Nasty bugs silently introduce problems. In this case, the result was not a single number but a vector of numbers. Once you know it, sure, it is easy to see. But staring at some 1000 lines of code can make it more difficult…
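
To see the nastiness in action, run the buggy line on its own: the element-wise division silently yields a vector (with a NaN from dividing 0 by 0, even) instead of a single number:

1 - ((obs - pred)^2)/((obs - mean(obs))^2)
## [1]   1 NaN   0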

So, one could argue that the complexity was (too) high… for me, at least. Sigh. To put it differently: can the code be rendered more bug-robust?

I think the pipe, %>%, helps to reduce the complexity. The pipe chops a big package into tiny pieces, to be dealt with step by step, one after the other. And not all in one gobble-dee-gopp.

So, compare the pipe code:

library(dplyr)  # make sure it is installed

obs %>% 
  `-`(pred) %>% 
  `^`(2) %>% 
  sum -> SS_b

obs %>% 
  `-`(mean(obs)) %>% 
  `^`(2) %>% 
  sum -> SS_t
  

R2 <- 1 - (SS_b/SS_t)
R2
## [1] 0.5

In words, this is what the code does:

  1. Take obs, then
  2. from that (obs) subtract pred, then
  3. square each number, then
  4. sum up the resulting numbers, then
  5. save that number as SS_b

That’s really simple, isn’t it?

A similar procedure applies for the second bit; tl;dr.

Note that the pipe comes from the package magrittr, but it is re-exported by dplyr.

Admittedly, I have also allowed one or two intermediate steps, which makes the code easier to follow. But that, too, is a way to reduce unnecessary (I would say) complexity.
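
As a closing aside, here is a sketch of an alternative that avoids the backtick-quoted operators, using magrittr’s curly-brace blocks (the dot stands for the value piped in):

obs %>% 
  { sum((. - pred)^2) } -> SS_b

obs %>% 
  { sum((. - mean(.))^2) } -> SS_t

1 - SS_b/SS_t
## [1] 0.5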