159 changes: 159 additions & 0 deletions assignment5-yenling.Rmd
@@ -0,0 +1,159 @@
---
title: "Principle Component Aanalysis"
output: html_document
---
## Data
The data you will be using comes from the Assistments online intelligent tutoring system (https://www.assistments.org/). It describes students working through online math problems. Each student has the following data associated with them:

- id
- prior_prob_count: How many problems a student has answered in the system prior to this session
- prior_percent_correct: The percentage of problems a student has answered correctly prior to this session
- problems_attempted: The number of problems the student has attempted in the current session
- mean_correct: The average number of correct answers a student made on their first attempt at problems in the current session
- mean_hint: The average number of hints a student asked for in the current session
- mean_attempt: The average number of attempts a student took to answer a problem in the current session
- mean_confidence: The average confidence each student has in their ability to answer the problems in the current session

## Start by uploading the data
```{r}
D1 <- read.csv("Assistments-confidence.csv", header = TRUE)

```
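
Before going further, it is worth confirming that the file loaded with the columns described above (a quick sanity check; it makes no changes to the data):

```{r}
# Inspect the structure and the first few rows of the data
str(D1)
head(D1)
```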

## Create a correlation matrix of the relationships between the variables, including correlation coefficients for each pair of variables/features.

```{r}
#You can install the corrplot package to plot some pretty correlation matrices (sometimes called correlograms)

library(ggplot2)
library(GGally)

ggpairs(D1, 2:8, progress = FALSE)
#ggpairs() draws a correlation plot between all the columns you identify by number (the second argument; you don't need the first column, as it is the student ID); progress = FALSE stops a progress bar from appearing while the plot renders


ggcorr(D1[,-1], method = c("everything", "pearson"))
#ggcorr() doesn't have an explicit option to choose variables, so we use matrix notation to drop the id variable. We then choose a "method", which determines both how to treat missing values (here we keep "everything") and which correlation calculation to use (here Pearson; the other options are "kendall" and "spearman")

#Study your correlogram images and save them; you will need them later. Take note of what is strongly related to the outcome variable of interest, mean_correct.

#I found that mean_correct is highly related to prior_percent_correct and mean_hint.
```
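
The first comment in the chunk above mentions the corrplot package as another way to draw a correlogram; a minimal sketch, assuming corrplot is installed (install.packages("corrplot") if not):

```{r}
# Alternative correlogram with corrplot, again dropping the id column
library(corrplot)
corrplot(cor(D1[,-1], use = "pairwise.complete.obs"), method = "circle", type = "upper")
```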

## Create a new data frame with the mean_correct variable removed; we want to keep that variable intact. The other variables will be included in our PCA.

```{r}

D2 <- subset(D1, select = -(mean_correct))
```

## Now run the PCA on the new data frame

```{r}
pca <- prcomp(D2[ , 2:7], scale. = TRUE)

```

## Although prcomp does not generate the eigenvalues directly for us, we can print a list of the standard deviations of the components, which measure the variance accounted for by each component.

```{r}

pca$sdev

#To convert this into variance accounted for we can square it; these numbers are proportional to the eigenvalues

pca$sdev^2

#A summary of our pca will give us the proportion of variance accounted for by each component

summary(pca)

#We can look at this to get an idea of which components we should keep and which we should drop

plot(pca, type = "lines")


```
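
To make the link between the standard deviations, the eigenvalues, and the proportions reported by summary() explicit, the same quantities can be computed by hand (a sketch using only the pca object created above):

```{r}
# Proportion and cumulative proportion of variance, computed from the standard deviations
eigenvalues <- pca$sdev^2
prop_var <- eigenvalues / sum(eigenvalues)
round(rbind(eigenvalue = eigenvalues, proportion = prop_var, cumulative = cumsum(prop_var)), 3)
```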

## Decide which components you would drop and remove them from your data set.

I would like to drop PC4, PC5, and PC6 because their explained variance (eigenvalue) is less than 1. From the scree plot, we can also see that the slope flattens out from PC4.
```{r}
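# A sketch of this decision using the Kaiser criterion: components whose
# variance (eigenvalue) falls below 1 are candidates to drop.
which(pca$sdev^2 >= 1)  # components to keep
which(pca$sdev^2 < 1)   # components to drop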

```


## Part II

```{r}
#Now, create a data frame of the transformed data from your pca.

D3 <- data.frame(pca$x)

#Drop PC4, PC5, and PC6
D3_2 <- D3[, 1:3]
D3_2 <- data.frame(D3_2, D1$mean_correct)
names(D3_2)[4] <- "mean_correct"

#Attach the variable "mean_correct" from your original data frame to D3.

D3 <- data.frame(D3, D1$mean_correct)
names(D3)[7] <- "mean_correct"

#Now re-run your correlation plots between the transformed data and mean_correct. If you had dropped some components would you have lost important information about mean_correct?

ggpairs(D3, progress = FALSE)
ggpairs(D3_2, progress = FALSE)

#mean_correct is significantly correlated with PC1 (r = .28, p < .001), PC2 (r = .37, p < .001), PC4 (r = .13, p < .05), and PC6 (r = -.39, p < .001). If we drop PC4, PC5, and PC6, we would not be able to spot the associations between mean_correct and PC4 and PC6.

```
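
One way to quantify how much information the dropped components carried is to compare simple regressions of mean_correct on the retained components against the full set (a minimal sketch using the data frames built above):

```{r}
# R-squared with only the retained components (PC1-PC3)
summary(lm(mean_correct ~ PC1 + PC2 + PC3, data = D3_2))$r.squared

# R-squared with all six components
summary(lm(mean_correct ~ ., data = D3))$r.squared
```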
## Now print out the loadings for the components you generated:

```{r}

pca$rotation

#Examine the eigenvectors; notice that they are a little difficult to interpret. It is much easier to make sense of them if we make them proportional within each component

#abs() makes all the loadings positive so their magnitudes can be compared
loadings <- abs(pca$rotation)
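
# To make the loadings proportional within each component, as described above,
# divide each column by its column sum; sweep() applies the division column-wise.
loadings_prop <- sweep(loadings, 2, colSums(loadings), "/")
round(loadings_prop, 2)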

#Now examine your components and try to come up with substantive descriptions of what some might represent.

#PC1 has strong negative loadings for mean_hint and mean_attempt and a positive loading for prior_percent_correct; these variables mainly describe performance in the current session. PC2 is dominated by large positive loadings for prior_percent_correct and prior_prob_count, which may reflect students' experience in the system prior to the current session. PC3 is dominated by a large negative loading for mean_confidence, so it might reflect students' confidence in solving math problems. As for PC4 to PC6, the loadings of these components do not show distinct patterns and the variance they contribute is low, so it is difficult to give them a theoretical interpretation.


#You can generate a biplot to help you, though these can be a bit confusing. They plot the transformed data by the first two components. Therefore, the axes represent the direction of maximum variance accounted for. Then mapped onto this point cloud are the original directions of the variables, depicted as red arrows. It is supposed to provide a visualization of which variables "go together". Variables that possibly represent the same underlying construct point in the same direction.

biplot(pca)


```
# Part III
Also in this repository is a data set collected from TC students (tc-program-combos.csv) that shows how many students thought that a TC program was related to another TC program. Students were shown three program names at a time and were asked which two of the three were most similar. Use PCA to look for components that represent related programs. Explain why you think there are relationships between these programs.

```{r}
TC <- read.csv("tc-program-combos.csv", header = TRUE)
TC <- TC[-c(69),]

#pca
pca_TC <- prcomp(TC[, 2:68], scale. = TRUE)
plot(pca_TC, type = "lines")

#Choose PC1 to PC4 as the major components. Although the eigenvalues of PC1 through PC24 are all larger than 1, the scree plot shows an elbow at PC4, so I decided to keep PC1 to PC4.

pca_TC$rotation[ ,1:4]

#create a data frame for analysis
PCAIII <- data.frame(abs(pca_TC$rotation[ ,1:4]))
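
# A sketch of how the interpretations below were reached: list the programs
# with the largest absolute loadings on each of the four retained components.
top_programs <- lapply(names(PCAIII), function(pc) head(rownames(PCAIII)[order(-PCAIII[[pc]])], 8))
names(top_programs) <- names(PCAIII)
top_programs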

#From the results, we can see that PC1 has larger loadings for change_leadership, Economics.and.Education, Education.Policy, Arts.Administration, School.Principals, and Social.Organizational.Psychology, so PC1 might represent programs related to administration, school policy, and leadership. PC2 has stronger loadings for Clinical.Psychology, Neuroscience, Kinesiology, Physiology, Psychology, Health.Education, Nursing, and Behavior.Analysis, so it might represent health-related programs. PC3 has stronger loadings for Design.and.Development.of.Digital.Games, Cognitive.Science, Learning.Analytics, Mathematics, Education.Technology, Creative.Technologies, Instructional.Technology.and.Media, Measurement.Evaluation.and.Statistics, and Communication.Media.and.Learning.Technologies, so PC3 is more related to math, technology, and other STEM fields. PC4 has higher loadings for Linguistics, English.Education, Teaching.English, Literacy, and Deaf.and.Hard.of.Hearing, so we can infer that PC4 might be more related to language learning.


```





850 changes: 850 additions & 0 deletions assignment5-yenling.html

Large diffs are not rendered by default.

61 changes: 53 additions & 8 deletions assignment5.Rmd
@@ -16,7 +16,7 @@ The data you will be using comes from the Assistments online intelligent tutorin

## Start by uploading the data
```{r}
D1 <-
D1 <- read.csv("Assistments-confidence.csv", header = TRUE)

```

@@ -28,29 +28,36 @@ D1 <-
library(ggplot2)
library(GGally)

ggpairs(D1, 2:8, progress = FALSE) #ggpairs() draws a correlation plot between all the columns you identify by number (second option, you don't need the first column as it is the student ID) and progress = FALSE stops a progress bar appearing as it renders your plot
ggpairs(D1, 2:8, progress = FALSE)
#ggpairs() draws a correlation plot between all the columns you identify by number (the second argument; you don't need the first column, as it is the student ID); progress = FALSE stops a progress bar from appearing while the plot renders

ggcorr(D1[,-1], method = c("everything", "pearson")) #ggcorr() doesn't have an explicit option to choose variables so we need to use matrix notation to drop the id variable. We then need to choose a "method" which determines how to treat missing values (here we choose to keep everything, and then which kind of correlation calculation to use, here we are using Pearson correlation, the other options are "kendall" or "spearman")

ggcorr(D1[,-1], method = c("everything", "pearson"))
#ggcorr() doesn't have an explicit option to choose variables, so we use matrix notation to drop the id variable. We then choose a "method", which determines both how to treat missing values (here we keep "everything") and which correlation calculation to use (here Pearson; the other options are "kendall" and "spearman")

#Study your correlogram images and save them; you will need them later. Take note of what is strongly related to the outcome variable of interest, mean_correct.

#I found that mean_correct is highly related to prior_percent_correct and mean_hint.
```

## Create a new data frame with the mean_correct variable removed; we want to keep that variable intact. The other variables will be included in our PCA.

```{r}
D2 <-

D2 <- subset(D1, select = -(mean_correct))
```

## Now run the PCA on the new data frame

```{r}
pca <- prcomp(D2, scale. = TRUE)
pca <- prcomp(D2[ , 2:7], scale. = TRUE)

```

## Although prcomp does not generate the eigenvalues directly for us, we can print a list of the standard deviations of the components, which measure the variance accounted for by each component.

```{r}

pca$sdev

#To convert this into variance accounted for we can square it; these numbers are proportional to the eigenvalues
@@ -64,37 +71,59 @@ summary(pca)
#We can look at this to get an idea of which components we should keep and which we should drop

plot(pca, type = "lines")


```

## Decide which components you would drop and remove them from your data set.

I would like to drop PC4, PC5, and PC6 because their explained variance (eigenvalue) is less than 1. From the scree plot, we can also see that the slope flattens out from PC4.
```{r}

```


## Part II

```{r}
#Now, create a data frame of the transformed data from your pca.

D3 <-
D3 <- data.frame(pca$x)

#Attach the variable "mean_correct" from your original data frame to D3.
#Drop PC4, PC5, and PC6
D3_2 <- D3[, 1:3]
D3_2 <- data.frame(D3_2, D1$mean_correct)
names(D3_2)[4] <- "mean_correct"

#Attach the variable "mean_correct" from your original data frame to D3.

D3 <- data.frame(D3, D1$mean_correct)
names(D3)[7] <- "mean_correct"

#Now re-run your correlation plots between the transformed data and mean_correct. If you had dropped some components would you have lost important information about mean_correct?

ggpairs(D3, progress = FALSE)
ggpairs(D3_2, progress = FALSE)

#mean_correct is significantly correlated with PC1 (r = .28, p < .001), PC2 (r = .37, p < .001), PC4 (r = .13, p < .05), and PC6 (r = -.39, p < .001). If we drop PC4, PC5, and PC6, we would not be able to spot the associations between mean_correct and PC4 and PC6.

```
## Now print out the loadings for the components you generated:

```{r}

pca$rotation

#Examine the eigenvectors; notice that they are a little difficult to interpret. It is much easier to make sense of them if we make them proportional within each component

loadings <- abs(pca$rotation) #abs() will make all eigenvectors positive
#abs() makes all the loadings positive so their magnitudes can be compared
loadings <- abs(pca$rotation)

#Now examine your components and try to come up with substantive descriptions of what some might represent.

#PC1 has strong negative loadings for mean_hint and mean_attempt and a positive loading for prior_percent_correct; these variables mainly describe performance in the current session. PC2 is dominated by large positive loadings for prior_percent_correct and prior_prob_count, which may reflect students' experience in the system prior to the current session. PC3 is dominated by a large negative loading for mean_confidence, so it might reflect students' confidence in solving math problems. As for PC4 to PC6, the loadings of these components do not show distinct patterns and the variance they contribute is low, so it is difficult to give them a theoretical interpretation.


#You can generate a biplot to help you, though these can be a bit confusing. They plot the transformed data by the first two components. Therefore, the axes represent the direction of maximum variance accounted for. Then mapped onto this point cloud are the original directions of the variables, depicted as red arrows. It is supposed to provide a visualization of which variables "go together". Variables that possibly represent the same underlying construct point in the same direction.

biplot(pca)
@@ -105,6 +134,22 @@ biplot(pca)
Also in this repository is a data set collected from TC students (tc-program-combos.csv) that shows how many students thought that a TC program was related to another TC program. Students were shown three program names at a time and were asked which two of the three were most similar. Use PCA to look for components that represent related programs. Explain why you think there are relationships between these programs.

```{r}
TC <- read.csv("tc-program-combos.csv", header = TRUE)
TC <- TC[-c(69),]

#pca
pca_TC <- prcomp(TC[, 2:68], scale. = TRUE)
plot(pca_TC, type = "lines")

#Choose PC1 to PC4 as the major components. Although the eigenvalues of PC1 through PC24 are all larger than 1, the scree plot shows an elbow at PC4, so I decided to keep PC1 to PC4.

pca_TC$rotation[ ,1:4]

#create a data frame for analysis
PCAIII <- data.frame(abs(pca_TC$rotation[ ,1:4]))

#From the results, we can see that PC1 has larger loadings for change_leadership, Economics.and.Education, Education.Policy, Arts.Administration, School.Principals, and Social.Organizational.Psychology, so PC1 might represent programs related to administration, school policy, and leadership. PC2 has stronger loadings for Clinical.Psychology, Neuroscience, Kinesiology, Physiology, Psychology, Health.Education, Nursing, and Behavior.Analysis, so it might represent health-related programs. PC3 has stronger loadings for Design.and.Development.of.Digital.Games, Cognitive.Science, Learning.Analytics, Mathematics, Education.Technology, Creative.Technologies, Instructional.Technology.and.Media, Measurement.Evaluation.and.Statistics, and Communication.Media.and.Learning.Technologies, so PC3 is more related to math, technology, and other STEM fields. PC4 has higher loadings for Linguistics, English.Education, Teaching.English, Literacy, and Deaf.and.Hard.of.Hearing, so we can infer that PC4 might be more related to language learning.


```

696 changes: 696 additions & 0 deletions assignment5.html

Large diffs are not rendered by default.