diff --git a/Assignment 2-2020.Rmd b/Assignment 2-2020.Rmd
index 0b235a3..e658652 100644
--- a/Assignment 2-2020.Rmd
+++ b/Assignment 2-2020.Rmd
@@ -29,6 +29,8 @@
 D1 <- read.csv("video-data.csv", header = TRUE)
 
 #Create a data frame that only contains the years 2018
 D2 <- filter(D1, year == 2018)
+#preview the data frame. I like to get a preview of the data this way instead of clicking on the table or using the View command
+D2
 ```
 
 ## Histograms
@@ -90,18 +92,24 @@ pairs(D5)
 
 1. Create a simulated data set containing 100 students, each with a score from 1-100 representing performance in an educational game. The scores should tend to cluster around 75. Also, each student should be given a classification that reflects one of four interest groups: sport, music, nature, literature.
 
-```{r}
+```{r simulate-data}
 #rnorm(100, 75, 15) creates a random sample with a mean of 75 and a standard deviation of 15
-#pmax sets a maximum value, pmin sets a minimum value
 #round rounds numbers to whole number values
 #sample draws random samples from the groups vector according to a uniform distribution
+#I used pmin/pmax to keep the scores within 1-100 and tibble() to stay in the tidyverse
+stid=seq(1,100,1)
+scores=round(pmin(100,pmax(1, rnorm(100,75,15))))
+interest=sample(c("sport","music","nature","literature"), 100, replace=TRUE)
+EG <- tibble(stid,scores,interest)
 ```
 
 2. Using base R commands, draw a histogram of the scores. Change the breaks in your histogram until you think they best represent your data.
 
 ```{r}
+hist(EG$scores, breaks =7,xlab = "Scores", main = "Educational Game - Score Distribution")
+
 ```
@@ -110,6 +118,8 @@ pairs(D5)
 
 ```{r}
 #cut() divides the range of scores into intervals and codes the values in scores according to which interval they fall into. We use a vector called `letters` as the labels; `letters` is a vector made up of the letters of the alphabet.
+label <-letters[1:7]
+EG$breaks<- cut(EG$scores, breaks =7, labels = label)
 ```
@@ -118,12 +128,13 @@ pairs(D5)
 
 ```{r}
 library(RColorBrewer)
 #Let's look at the available palettes in RColorBrewer
-
+display.brewer.all()
 #The top section of palettes are sequential, the middle section are qualitative, and the lower section are diverging.
 
 #Make RColorBrewer palette available to R and assign to your bins
 
 #Use named palette in histogram
+hist(EG$scores, breaks = 7, col = brewer.pal(6,"Blues"))
 ```
@@ -131,20 +142,22 @@ library(RColorBrewer)
 
 ```{r}
 #Make a vector of the colors from RColorBrewer
-
+interest.col=brewer.pal(4,"Pastel2")
+boxplot(EG$scores~EG$interest, col=interest.col, xlab = "Student Interest", ylab = "Scores")
 ```
 
 6. Now simulate a new variable that describes the number of logins that students made to the educational game. They should vary from 1-25.
 
 ```{r}
+EG$logins=sample(1:25,100, replace = TRUE)
 ```
 
 7. Plot the relationships between logins and scores. Give the plot a title and color the dots according to interest group.
 
 ```{r}
-
+plot(EG$logins, EG$scores, col=interest.col[factor(EG$interest)], xlab = "Number of Log-ins", ylab = "Scores", main = "Logins vs. Scores")
 ```
@@ -152,14 +165,19 @@ library(RColorBrewer)
 
 8. R contains several inbuilt data sets, one of these is called AirPassengers. Plot a line graph of the airline passengers over time using this data set.
 
 ```{r}
+data("AirPassengers")
+plot(AirPassengers)
 ```
 
-9. Using another inbuilt data set, iris, plot the relationships between all of the variables in the data set. Which of these relationships is it appropraiet to run a correlation on?
+9. Using another inbuilt data set, iris, plot the relationships between all of the variables in the data set. Which of these relationships is it appropriate to run a correlation on?
 
 ```{r}
-
+data("iris")
+pairs(iris)
+plot(iris$Petal.Length, iris$Petal.Width, main = "Relationship between Petal Length and Petal Width")
+cor(iris$Petal.Length, iris$Petal.Width)
 ```
 
 # Part III - Analyzing Swirl
@@ -171,7 +189,13 @@ In this repository you will find data describing Swirl activity from the class s
 ### Instructions
 
 1. Insert a new code block
+
 2. Create a data frame from the `swirl-data.csv` file called `DF1`
+```{r}
+DF1 <- read.csv("swirl-data.csv", header = T)
+
+```
+
 
 The variables are:
@@ -185,19 +209,42 @@ The variables are:
 
 `hash` - anonymized student ID
 
 3. Create a new data frame that only includes the variables `hash`, `lesson_name` and `attempt` called `DF2`
+```{r}
+DF2<-select(DF1, c(8,2,5))
+```
-4. Use the `group_by` function to create a data frame that sums all the attempts for each `hash` by each `lesson_name` called `DF3`
+4. Use the `group_by` function to create a data frame that sums all the attempts for each `hash` by each `lesson_name` called `DF3`
 
 5. On a scrap piece of paper draw what you think `DF3` would look like if all the lesson names were column names
-
 6. Convert `DF3` to this format
+```{r}
+DF3 <- DF2 %>% group_by(hash,lesson_name) %>% summarise(attempts=sum(attempt), .groups='drop') %>% drop_na(attempts) %>%pivot_wider(names_from = lesson_name, values_from=attempts)
+```
+
 7. Create a new data frame from `DF1` called `DF4` that only includes the variables `hash`, `lesson_name` and `correct`
 
-8. Convert the `correct` variable so that `TRUE` is coded as the **number** `1` and `FALSE` is coded as `0`
+8. Convert the `correct` variable so that `TRUE` is coded as the **number** `1` and `FALSE` is coded as `0`
+```{r}
+DF4 <- select(DF1, c(hash,lesson_name,correct)) %>%mutate(correct= recode(correct, "TRUE"=1, "FALSE"=0))
+```
+
 9. Create a new data frame called `DF5` that provides a mean score for each student on each course
+```{r}
+DF5 <-DF4 %>% group_by(hash,lesson_name) %>% drop_na(correct)%>% summarise(mean_correct=mean(correct, na.rm = TRUE), .groups='drop') %>% pivot_wider(names_from = lesson_name, values_from=mean_correct)
+DF5
+```
+
 10. **Extra credit** Convert the `datetime` variable into month-day-year format and create a new data frame (`DF6`) that shows the average correct for each day
+```{r}
+DF6 <- select(DF1,correct,datetime)
+DF6$correct <- ifelse(DF6$correct== TRUE, 1,0)
+DF6$datetime <- as.POSIXlt(DF6$datetime, origin="1970-01-01 00:00.00 UTC")
+DF6$datetime <- strftime(DF6$datetime, format = "%b:%e")
+DF7 <- DF6 %>% group_by(datetime) %>% summarise(av_correct=mean(correct,na.rm = TRUE))
+```
+
 Finally use the knitr function to generate an html document from your work. Commit, Push and Pull Request your work back to the main branch of the repository. Make sure you include both the .Rmd file and the .html file.
diff --git a/Assignment-2-2020.html b/Assignment-2-2020.html
new file mode 100644
index 0000000..3404fd4
--- /dev/null
+++ b/Assignment-2-2020.html
@@ -0,0 +1,811 @@
+Assignment 2

#Part I

+
+

Data Wrangling

+

In the hackathon a project was proposed to collect data from student video watching, a sample of this data is available in the file video-data.csv.

+

stid = student id
year = year student watched video
participation = whether or not the student opened the video
watch.time = how long the student watched the video for
confusion.points = how many times a student rewatched a section of a video
key.points = how many times a student skipped or increased the speed of a video

+
#Install the 'tidyverse' package or if that does not work, install the 'dplyr' and 'tidyr' packages.
+
+#Load the package(s) you just installed
+
+library(tidyverse)
+
## -- Attaching packages --------------------------------------- tidyverse 1.3.0 --
+
## v ggplot2 3.3.2     v purrr   0.3.4
+## v tibble  3.0.4     v dplyr   1.0.2
+## v tidyr   1.1.2     v stringr 1.4.0
+## v readr   1.4.0     v forcats 0.5.0
+
## Warning: package 'tibble' was built under R version 4.0.3
+
## -- Conflicts ------------------------------------------ tidyverse_conflicts() --
+## x dplyr::filter() masks stats::filter()
+## x dplyr::lag()    masks stats::lag()
+
library(tidyr)
+library(dplyr)
+
+D1 <- read.csv("video-data.csv", header = TRUE)
+
+#Create a data frame that only contains the years 2018
+D2 <- filter(D1, year == 2018)
+#preview the data frame. I like to get a preview of the data this way instead of clicking on the table or using the View command
+D2
+
##     stid year video participation watch.time confusion.points key.points
+## 1      1 2018     A             1      16.50                6          6
+## 2      2 2018     A             0       0.00                0          0
+## 3      3 2018     A             1       9.00                4          6
+## 4      4 2018     A             1      20.00                8          5
+## 5      5 2018     A             1      12.00                8          5
+## 6      6 2018     A             1      15.00                5          4
+## 7      7 2018     A             1      24.75               11          5
+## 8      8 2018     A             1      12.00                8          6
+## 9      9 2018     A             1      15.00                5          2
+## 10    10 2018     A             1       0.00                0          5
+## 11    11 2018     A             1      19.25                7          6
+## 12    12 2018     A             1       5.00                4          2
+## 13    13 2018     A             1      16.00                8          2
+## 14    14 2018     A             1       7.50                5          5
+## 15    15 2018     A             1      10.00                8          4
+## 16    16 2018     A             1      25.00               10          5
+## 17    17 2018     A             0       0.00                0          0
+## 18    18 2018     A             1      20.25                9          4
+## 19    19 2018     A             1      12.00                6          5
+## 20    20 2018     A             1      14.00                7          4
+## 21    21 2018     A             0       0.00                0          0
+## 22    22 2018     A             0       0.00                0          0
+## 23    23 2018     A             1       3.00                3          5
+## 24    24 2018     A             0       0.00                0          0
+## 25    25 2018     A             1       8.75                7          4
+## 26    26 2018     A             1       0.00                0          5
+## 27    27 2018     A             1       8.00                4          5
+## 28    28 2018     A             1      10.00                4          5
+## 29    29 2018     A             1      10.00                5          2
+## 30    30 2018     A             1      19.25                7          3
+## 31     1 2018     B             1       6.00                2          5
+## 32     2 2018     B             1      15.00                5          4
+## 33     3 2018     B             1       5.00                2          4
+## 34     4 2018     B             1       0.00                0          5
+## 35     5 2018     B             1       2.00                1          4
+## 36     6 2018     B             1      13.50                6          4
+## 37     7 2018     B             1       7.00                7          4
+## 38     8 2018     B             1       0.00                0          7
+## 39     9 2018     B             0       0.00                0          0
+## 40    10 2018     B             1       0.00                0          5
+## 41    11 2018     B             1      12.00                8          3
+## 42    12 2018     B             0       0.00                0          0
+## 43    13 2018     B             1      30.00               10          3
+## 44    14 2018     B             0       0.00                0          0
+## 45    15 2018     B             1       2.00                1          4
+## 46    16 2018     B             1       5.50                2          3
+## 47    17 2018     B             1       0.00                0          5
+## 48    18 2018     B             1      10.00                4          2
+## 49    19 2018     B             1       6.00                2          3
+## 50    20 2018     B             1       6.00                6          5
+## 51    21 2018     B             1      10.50                6          5
+## 52    22 2018     B             1       6.00                2          3
+## 53    23 2018     B             1       7.50                6          4
+## 54    24 2018     B             1       6.00                3          1
+## 55    25 2018     B             1      15.75                7          6
+## 56    26 2018     B             0       0.00                0          0
+## 57    27 2018     B             0       0.00                0          0
+## 58    28 2018     B             1       3.00                1          5
+## 59    29 2018     B             1       3.00                2          5
+## 60    30 2018     B             1       5.00                5          4
+## 61     1 2018     C             1      24.00                8          4
+## 62     2 2018     C             1       5.25                3          5
+## 63     3 2018     C             1       4.00                4          4
+## 64     4 2018     C             1      13.50                6          5
+## 65     5 2018     C             1       0.00                0          6
+## 66     6 2018     C             1      10.50                6          3
+## 67     7 2018     C             1       2.50                1          6
+## 68     8 2018     C             0       0.00                0          0
+## 69     9 2018     C             1      20.25                9          5
+## 70    10 2018     C             1      10.00                8          5
+## 71    11 2018     C             0       0.00                0          0
+## 72    12 2018     C             1      24.00                8          4
+## 73    13 2018     C             0       0.00                0          0
+## 74    14 2018     C             1      18.00                9          5
+## 75    15 2018     C             1       4.00                2          4
+## 76    16 2018     C             1      12.00                4          3
+## 77    17 2018     C             1       0.00                0          3
+## 78    18 2018     C             1      12.00                6          4
+## 79    19 2018     C             0       0.00                0          0
+## 80    20 2018     C             1      16.50                6          4
+## 81    21 2018     C             1       2.50                1          4
+## 82    22 2018     C             1      27.00                9          4
+## 83    23 2018     C             0       0.00                0          0
+## 84    24 2018     C             1      11.25                9          5
+## 85    25 2018     C             1       0.00                0          4
+## 86    26 2018     C             0       0.00                0          0
+## 87    27 2018     C             1      25.00               10          3
+## 88    28 2018     C             1       8.00                8          4
+## 89    29 2018     C             1      14.00                8          4
+## 90    30 2018     C             1       0.00                0          4
+## 91     1 2018     D             1       3.00                3          5
+## 92     2 2018     D             1      27.50               10          4
+## 93     3 2018     D             0       0.00                0          0
+## 94     4 2018     D             1      22.00                8          4
+## 95     5 2018     D             1      16.50                6          3
+## 96     6 2018     D             1       0.00                0          4
+## 97     7 2018     D             1       4.00                4          3
+## 98     8 2018     D             1      18.00                6          2
+## 99     9 2018     D             1       8.00                8          3
+## 100   10 2018     D             1       1.00                1          5
+## 101   11 2018     D             1       4.00                2          3
+## 102   12 2018     D             1      15.75                9          3
+## 103   13 2018     D             1       6.00                4          5
+## 104   14 2018     D             1      11.25                9          3
+## 105   15 2018     D             1       8.00                4          4
+## 106   16 2018     D             1      12.50                5          3
+## 107   17 2018     D             1       5.50                2          5
+## 108   18 2018     D             1       0.00                0          5
+## 109   19 2018     D             1       9.00                6          4
+## 110   20 2018     D             1      15.00                6          3
+## 111   21 2018     D             0       0.00                0          0
+## 112   22 2018     D             1      12.00                8          5
+## 113   23 2018     D             1      15.00                5          4
+## 114   24 2018     D             1      15.00                5          4
+## 115   25 2018     D             1      16.50                6          6
+## 116   26 2018     D             1      22.00               11          4
+## 117   27 2018     D             1       2.00                2          2
+## 118   28 2018     D             1      13.50                6          5
+## 119   29 2018     D             1       4.00                2          5
+## 120   30 2018     D             1       2.25                1          5
+## 121    1 2018     E             1      10.00                4          3
+## 122    2 2018     E             1       7.50                5          5
+## 123    3 2018     E             1       7.00                7          5
+## 124    4 2018     E             1      32.50               13          4
+## 125    5 2018     E             0       0.00                0          0
+## 126    6 2018     E             0       0.00                0          0
+## 127    7 2018     E             0       0.00                0          0
+## 128    8 2018     E             0       0.00                0          0
+## 129    9 2018     E             0       0.00                0          0
+## 130   10 2018     E             1      10.00                4          3
+## 131   11 2018     E             1      24.00                8          4
+## 132   12 2018     E             1      10.00                5          5
+## 133   13 2018     E             1       7.00                4          4
+## 134   14 2018     E             1      14.00                7          4
+## 135   15 2018     E             0       0.00                0          0
+## 136   16 2018     E             1      19.25                7          4
+## 137   17 2018     E             1      19.25                7          4
+## 138   18 2018     E             1      19.50               13          4
+## 139   19 2018     E             1      18.00                6          5
+## 140   20 2018     E             1       6.75                3          6
+## 141   21 2018     E             1       6.00                4          4
+## 142   22 2018     E             0       0.00                0          0
+## 143   23 2018     E             0       0.00                0          0
+## 144   24 2018     E             1       4.50                3          1
+## 145   25 2018     E             1      11.00                4          3
+## 146   26 2018     E             1      15.75                7          5
+## 147   27 2018     E             1       0.00                0          3
+## 148   28 2018     E             1       5.00                2          5
+## 149   29 2018     E             0       0.00                0          0
+## 150   30 2018     E             0       0.00                0          0
+
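As a side note (my suggestion, not part of the original output), a more compact way to preview D2 than printing every row is:

```r
# Suggested alternatives to printing the full data frame
head(D2)             # first six rows
dplyr::glimpse(D2)   # one line per column, with its type
```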
+
+

Histograms

+
#Generate a histogram of the watch time for the year 2018
+
+hist(D2$watch.time)
+

+
#Change the number of breaks to 100, do you get the same impression?
+
+hist(D2$watch.time, breaks = 100)
+

+
#Cut the y-axis off at 10
+
+hist(D2$watch.time, breaks = 100, ylim = c(0,10))
+

+
#Restore the y-axis and change the breaks so that they are 0-5, 5-20, 20-25, 25-35
+
+hist(D2$watch.time, breaks = c(0,5,20,25,35))
+
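One detail worth noting about the chunk above (a side note, not from the original): when the breaks have unequal widths, hist() switches the y-axis from counts to densities. Inspecting the histogram object makes both visible:

```r
# Compute the histogram without plotting it, assuming D2 from above
h <- hist(D2$watch.time, breaks = c(0, 5, 20, 25, 35), plot = FALSE)
h$counts   # number of observations in each interval
h$density  # what the unequal-breaks plot shows on the y-axis
```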

+
+
+

Plots

+
#Plot the number of confusion points against the watch time
+
+plot(D1$confusion.points, D1$watch.time)
+

+
#Create two variables x & y
+x <- c(1,3,2,7,6,4,4)
+y <- c(2,4,2,3,2,4,3)
+
+#Create a table from x & y
+table1 <- table(x,y)
+
+#Display the table as a Barplot
+barplot(table1)
+

+
#Create a data frame of the average total key points for each year and plot the two against each other as a line
+
+D3 <- D1 %>% group_by(year) %>% summarise(mean_key = mean(key.points))
+
## `summarise()` ungrouping output (override with `.groups` argument)
+
plot(D3$year, D3$mean_key, type = "l", lty = "dashed")
+

+
#Create a boxplot of total enrollment for three students
+D4 <- filter(D1, stid == 4|stid == 20| stid == 22)
+#The droplevels command will remove all the levels with no data from the variable
+D4 <- droplevels(D4)
+boxplot(D4$watch.time~D4$stid, xlab = "Student", ylab = "Watch Time")
+

## Pairs

+
#Use matrix notation to select columns 2, 5, 6, and 7
+D5 <- D1[,c(2,5,6,7)]
+#Draw a matrix of plots for every combination of variables
+pairs(D5)
+

## Part II

+
  1. Create a simulated data set containing 100 students, each with a score from 1-100 representing performance in an educational game. The scores should tend to cluster around 75. Also, each student should be given a classification that reflects one of four interest groups: sport, music, nature, literature.
+
#rnorm(100, 75, 15) creates a random sample with a mean of 75 and a standard deviation of 15
+#round rounds numbers to whole number values
+#sample draws random samples from the groups vector according to a uniform distribution
+#I used pmin/pmax to keep the scores within 1-100 and tibble() to stay in the tidyverse
+
+stid=seq(1,100,1)
+scores=round(pmin(100,pmax(1, rnorm(100,75,15))))
+interest=sample(c("sport","music","nature","literature"), 100, replace=TRUE)
+
+EG <- tibble(stid,scores,interest)
+
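A quick sanity check on the simulated data (a suggested addition, assuming the EG tibble created above):

```r
summary(EG$scores)   # should centre near 75 and stay within 1-100
table(EG$interest)   # roughly equal counts across the four interest groups
```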
  2. Using base R commands, draw a histogram of the scores. Change the breaks in your histogram until you think they best represent your data.
+
hist(EG$scores, breaks =7,xlab = "Scores", main = "Educational Game - Score Distribution")
+

+
  3. Create a new variable that groups the scores according to the breaks in your histogram.
+
#cut() divides the range of scores into intervals and codes the values in scores according to which interval they fall into. We use a vector called `letters` as the labels; `letters` is a vector made up of the letters of the alphabet.
+label <-letters[1:7]
+EG$breaks<- cut(EG$scores, breaks =7, labels = label)
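To confirm what cut() produced, a small check (my suggestion, not in the original chunk) is to count how many students fall into each letter-coded bin:

```r
table(EG$breaks)
```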
+
  4. Now using the colorbrewer package (RColorBrewer; http://colorbrewer2.org/#type=sequential&scheme=BuGn&n=3) design a palette and assign it to the groups in your data on the histogram.
+
library(RColorBrewer)
+#Let's look at the available palettes in RColorBrewer
+display.brewer.all()
+

+
#The top section of palettes are sequential, the middle section are qualitative, and the lower section are diverging.
+#Make RColorBrewer palette available to R and assign to your bins
+
+#Use named palette in histogram
+
+hist(EG$scores, breaks = 7, col = brewer.pal(6,"Blues"))
+
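Note that brewer.pal(6, "Blues") supplies exactly six colours, while breaks = 7 is only a suggestion to hist(), so the number of bars can differ and the colours are then recycled. A suggested variant that sizes the palette to the actual number of bars, assuming the bar count stays within RColorBrewer's 3-9 colour range:

```r
h <- hist(EG$scores, breaks = 7, plot = FALSE)
n_cols <- min(9, max(3, length(h$counts)))
hist(EG$scores, breaks = 7, col = brewer.pal(n_cols, "Blues"))
```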

+
  5. Create a boxplot that visualizes the scores for each interest group and color each interest group a different color.
+
#Make a vector of the colors from RColorBrewer
+interest.col=brewer.pal(4,"Pastel2")
+boxplot(EG$scores~EG$interest, col=interest.col, xlab = "Student Interest", ylab = "Scores")
+

+
  6. Now simulate a new variable that describes the number of logins that students made to the educational game. They should vary from 1-25.
+
EG$logins=sample(1:25,100, replace = TRUE)
+
  7. Plot the relationships between logins and scores. Give the plot a title and color the dots according to interest group.
+
plot(EG$logins, EG$scores, col=interest.col[factor(EG$interest)], xlab = "Number of Log-ins", ylab = "Scores", main = "Logins vs. Scores")
+
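Because the points are now coloured by interest group, a legend makes the colour mapping readable. A suggested addition, run immediately after the plot call above:

```r
# Interest groups in alphabetical order match the factor levels used for colouring
legend("bottomright", legend = sort(unique(EG$interest)), col = interest.col, pch = 1)
```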

+
  8. R contains several inbuilt data sets, one of these is called AirPassengers. Plot a line graph of the airline passengers over time using this data set.
+
data("AirPassengers")
+plot(AirPassengers)
+

+
  9. Using another inbuilt data set, iris, plot the relationships between all of the variables in the data set. Which of these relationships is it appropriate to run a correlation on?
+
data("iris")
+pairs(iris)
+

+
plot(iris$Petal.Length, iris$Petal.Width, main = "Relationship between Petal Length and Petal Width")
+

+
cor(iris$Petal.Length, iris$Petal.Width)
+
## [1] 0.9628654
+
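Correlation is only meaningful for the pairs of numeric measurements, not for Species, so one way to answer the question (a suggested addition) is to correlate the four numeric columns at once:

```r
cor(iris[, 1:4])   # Species (column 5) is a factor and is excluded
```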
+
+

Part III - Analyzing Swirl

+
+

Data

+

In this repository you will find data describing Swirl activity from the class so far this semester. Please connect RStudio to this repository.

+
+

Instructions

+
  1. Insert a new code block

  2. Create a data frame from the swirl-data.csv file called DF1
+
DF1 <- read.csv("swirl-data.csv", header = T)
+

The variables are:

+

course_name - the name of the R course the student attempted
+lesson_name - the lesson name
+question_number - the question number attempted
+correct - whether the question was answered correctly
+attempt - how many times the student attempted the question
+skipped - whether the student skipped the question
+datetime - the date and time the student attempted the question
+hash - anonymized student ID

+
  3. Create a new data frame that only includes the variables hash, lesson_name and attempt called DF2
+
DF2<-select(DF1, c(8,2,5))
+
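Selecting by position works, but it silently breaks if the column order of swirl-data.csv ever changes. An equivalent, name-based version (a suggested alternative):

```r
DF2 <- select(DF1, hash, lesson_name, attempt)
```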
  4. Use the group_by function to create a data frame that sums all the attempts for each hash by each lesson_name called DF3

  5. On a scrap piece of paper draw what you think DF3 would look like if all the lesson names were column names

  6. Convert DF3 to this format
+
DF3 <- DF2 %>% group_by(hash,lesson_name) %>% summarise(attempts=sum(attempt), .groups='drop') %>% drop_na(attempts) %>%pivot_wider(names_from = lesson_name, values_from=attempts)
+
  7. Create a new data frame from DF1 called DF4 that only includes the variables hash, lesson_name and correct

  8. Convert the correct variable so that TRUE is coded as the number 1 and FALSE is coded as 0
+
DF4 <- select(DF1, c(hash,lesson_name,correct)) %>%mutate(correct= recode(correct, "TRUE"=1, "FALSE"=0))
+
## Warning: Problem with `mutate()` input `correct`.
+## i Unreplaced values treated as NA as .x is not compatible. Please specify replacements exhaustively or supply .default
+## i Input `correct` is `recode(correct, `TRUE` = 1, `FALSE` = 0)`.
+
## Warning: Unreplaced values treated as NA as .x is not compatible. Please specify
+## replacements exhaustively or supply .default
+
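The warnings above come from recode() not matching the values in correct exhaustively. A suggested alternative that avoids them, assuming correct holds TRUE/FALSE values (logical or character):

```r
DF4 <- DF1 %>%
  select(hash, lesson_name, correct) %>%
  mutate(correct = as.numeric(as.logical(correct)))  # TRUE -> 1, FALSE -> 0
```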
  9. Create a new data frame called DF5 that provides a mean score for each student on each course
+
DF5 <-DF4 %>%  group_by(hash,lesson_name) %>% drop_na(correct)%>%  summarise(mean_correct=mean(correct, na.rm = TRUE), .groups='drop') %>% pivot_wider(names_from = lesson_name, values_from=mean_correct)
+DF5
+
## # A tibble: 36 x 22
+##     hash `Basic Building~ `Dates and Time~  Logic `Matrices and D~
+##    <int>            <dbl>            <dbl>  <dbl>            <dbl>
+##  1  2864            0.88            NA     NA               NA    
+##  2  4807            0.667            0.778  0.614            0.742
+##  3  6487            0.957           NA     NA               NA    
+##  4  8766           NA               NA     NA               NA    
+##  5 11801            1                0.941  0.947            0.867
+##  6 12264           NA               NA     NA                0.821
+##  7 14748            0.88             0.966  0.778           NA    
+##  8 16365            0.867            0.882  0.947            0.9  
+##  9 21536           NA               NA     NA               NA    
+## 10 24042            0.815            0.848  1               NA    
+## # ... with 26 more rows, and 17 more variables: `Missing Values` <dbl>,
+## #   `Subsetting Vectors` <dbl>, Vectors <dbl>, `Workspace and Files` <dbl>,
+## #   `Grouping and Chaining with dplyr` <dbl>, `Looking at Data` <dbl>,
+## #   `Manipulating Data with dplyr` <dbl>, `Tidying Data with tidyr` <dbl>,
+## #   Functions <dbl>, Base_Plotting_System <dbl>, Clustering_Example <dbl>,
+## #   Exploratory_Graphs <dbl>, Graphics_Devices_in_R <dbl>,
+## #   Hierarchical_Clustering <dbl>, K_Means_Clustering <dbl>,
+## #   Principles_of_Analytic_Graphs <dbl>, Plotting_Systems <dbl>
+
  10. Extra credit Convert the datetime variable into month-day-year format and create a new data frame (DF6) that shows the average correct for each day
+
DF6 <- select(DF1,correct,datetime)
+DF6$correct <- ifelse(DF6$correct== TRUE, 1,0)
+DF6$datetime <- as.POSIXlt(DF6$datetime, origin="1970-01-01 00:00.00 UTC")
+DF6$datetime <- strftime(DF6$datetime, format = "%b:%e")
+DF7 <- DF6 %>% group_by(datetime) %>% summarise(av_correct=mean(correct,na.rm = TRUE))
+
## `summarise()` ungrouping output (override with `.groups` argument)
+
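The format string "%b:%e" above gives month:day rather than the month-day-year format the task asks for. A suggested variant, assuming datetime is stored as seconds since the Unix epoch (as the origin argument above implies):

```r
DF6 <- select(DF1, correct, datetime) %>%
  mutate(correct = as.numeric(as.logical(correct)),
         day = format(as.POSIXct(datetime, origin = "1970-01-01", tz = "UTC"), "%m-%d-%Y"))
DF7 <- DF6 %>% group_by(day) %>% summarise(av_correct = mean(correct, na.rm = TRUE))
```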

Finally use the knitr function to generate an html document from your work. Commit, Push and Pull Request your work back to the main branch of the repository. Make sure you include both the .Rmd file and the .html file.

+
+
+
diff --git a/assignment2.Rproj b/assignment2.Rproj
new file mode 100644
index 0000000..8e3c2eb
--- /dev/null
+++ b/assignment2.Rproj
@@ -0,0 +1,13 @@
+Version: 1.0
+
+RestoreWorkspace: Default
+SaveWorkspace: Default
+AlwaysSaveHistory: Default
+
+EnableCodeIndexing: Yes
+UseSpacesForTab: Yes
+NumSpacesForTab: 2
+Encoding: UTF-8
+
+RnwWeave: Sweave
+LaTeX: pdfLaTeX