38 changes: 33 additions & 5 deletions assignment5.Rmd
The data you will be using comes from the Assistments online intelligent tutoring system.

## Start by uploading the data
```{r}
D1 <- read.csv("Assistments-confidence.csv", header = TRUE)

```
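
Before plotting anything, it can help to confirm the import worked as expected. A minimal sketch using base R, assuming D1 was created by the read.csv() call above:

```{r}
#str() lists the column names and types, and head() previews the first rows,
#so we can confirm the id column and the score/hint variables loaded correctly
str(D1)
head(D1)
```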

```{r}

library(ggplot2)
library(GGally)
library(tidyr)
library(dplyr)
library(tidyverse) #note: tidyverse attaches tidyr and dplyr, so loading it alone would also work

ggpairs(D1, 2:8, progress = FALSE) #ggpairs() draws a correlation plot between all the columns you identify by number (columns 2:8; the first column is skipped because it is the student ID), and progress = FALSE stops a progress bar from appearing while the plot renders

ggcorr(D1[,-1], method = c("everything", "pearson")) #ggcorr() doesn't have an explicit option to select columns, so we drop the student ID with [,-1]; the method argument sets how missing values are handled ("everything") and which correlation coefficient to compute ("pearson")
```
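
The plots above can also be backed up with the numeric correlation matrix itself. A small sketch, assuming the id variable is in the first column of D1:

```{r}
#Pearson correlations between every pair of variables (excluding the id column),
#rounded to two decimals for readability
round(cor(D1[,-1], use = "pairwise.complete.obs", method = "pearson"), 2)
```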
## Create a new data frame with the mean_correct variable removed; we want to keep that variable intact and out of the PCA. The other variables will be included in our PCA.

```{r}
D2 <- select(D1, -id, -mean_correct)

```

```{r}
pca <- prcomp(D2, scale. = TRUE) #run the PCA; scale. = TRUE standardizes each variable before extracting components
plot(pca, type = "lines")
```
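
The scree plot above can also be read numerically: each component's share of the total variance is its squared standard deviation divided by the sum of all the squared standard deviations. A sketch assuming pca was created with prcomp() as above (the name prop_var is illustrative):

```{r}
prop_var <- pca$sdev^2 / sum(pca$sdev^2)
round(prop_var, 3)         #proportion of variance accounted for by each component
round(cumsum(prop_var), 3) #cumulative proportion, useful when choosing a cutoff
```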

## Decide which components you would drop and remove them from your data set.
```{r}
#I will drop PC6 because it accounts for the smallest proportion of the variance; it adds little information relative to the other components.
```
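
One way to act on that decision is to build the transformed scores and keep only the retained components. A minimal sketch; the name D2_reduced is illustrative, and PC6 is dropped here only as an example:

```{r}
#keep the scores for PC1-PC5 and drop PC6
D2_reduced <- data.frame(pca$x)[, 1:5]
head(D2_reduced)
```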

## Part II

```{r}
#Now, create a data frame of the transformed data from your pca.

D3 <- data.frame(pca$x)

#Attach the variable "mean_correct" from your original data frame to D3.

D3$"mean_correct"<-D1$mean_correct


#Now re-run your correlation plots between the transformed data and mean_correct. If you had dropped some components, would you have lost important information about mean_correct?
ggpairs(D3, progress = FALSE)


ggcorr(D3, method = c("everything", "pearson"))
#PC1 and PC2 have strong correlations with mean_correct, so we can't drop them. PC6 is negatively correlated with mean_correct, so dropping PC6 would also lose important information. However, PC3 and PC5 have only weak correlations with mean_correct, so those are the components we could drop.

```
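To quantify how much information about mean_correct each component carries, one option is to look at the numeric correlations and at a regression of mean_correct on all of the components; because principal components are uncorrelated with each other, each term can be read independently. A sketch assuming D3 contains PC1-PC6 plus mean_correct as built above:

```{r}
#correlation of each component with mean_correct
round(cor(D3[, paste0("PC", 1:6)], D3$mean_correct), 2)
#share of the variance in mean_correct explained by the components together
summary(lm(mean_correct ~ ., data = D3))
```
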
## Now print out the loadings for the components you generated:
```{r}
pca$rotation #the loadings (eigenvectors) for each component
biplot(pca)
```
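
To make a single component easier to read, one option is to rank the variables by the absolute size of their loadings on it. An illustrative sketch for PC1; repeat for other components as needed:

```{r}
#variables with larger absolute loadings contribute more to PC1
sort(abs(pca$rotation[, "PC1"]), decreasing = TRUE)
```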
## Part III

Also in this repository is a data set collected from TC students (tc-program-combos.csv) that shows how many students thought that a TC program was related to another TC program. Students were shown three program names at a time and were asked which two of the three were most similar. Use PCA to look for components that represent related programs. Explain why you think there are relationships between these programs.

```{r}
D4 <- read.csv("tc-program-combos.csv", header = TRUE)
D5 <- D4[,-1] #remove the first column before running the PCA
pca2 <- prcomp(D5, scale. = TRUE)
#Although prcomp does not report the eigenvalues directly, we can print the standard deviation of each component.
pca2$sdev
#To convert this into variance accounted for we can square it; these numbers are proportional to the eigenvalues.
pca2$sdev^2
#A summary of our pca will give us the proportion of variance accounted for by each component
summary(pca2)
#We can look at this to get an idea of which components we should keep and which we should drop
plot(pca2, type = "lines")
#Now print out the loadings for the components you generated
pca2$rotation
#Examine the eigenvectors; notice that they are a little difficult to interpret. It is much easier to make sense of them if we make them proportional within each component.
loadings2 <- abs(pca2$rotation) #abs() makes all the loadings positive
sweep(loadings2, 2, colSums(loadings2), "/") #divide each column by its sum so the loadings within a component are proportions
#Now examine your components and try to come up with substantive descriptions of what some might represent.

#You can generate a biplot to help you, though these can be a bit confusing. They plot the transformed data by the first two components. Therefore, the axes represent the direction of maximum variance accounted for. Then mapped onto this point cloud are the original directions of the variables, depicted as red arrows. It is supposed to provide a visualization of which variables "go together". Variables that possibly represent the same underlying construct point in the same direction.

biplot(pca2)
#As seen in loadings2, PC5 has very small loadings across all programs, so we could re-run the PCA after dropping it. change.leadership, Economics.and.Education, and Education.Policy have the three largest loadings on PC1, so we may categorize PC1 as "policy related". Clinical.Psychology, Neuroscience, and Kinesiology have the three largest loadings on PC2, so we may categorize PC2 as "health related". The remaining components can be described in the same way.
```
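
One way to support these groupings is to list the programs with the largest proportional loadings on each of the first few components. An illustrative sketch; the helper name top_programs is not part of the assignment:

```{r}
#return the n programs that load most heavily on a given component
top_programs <- function(loadings, component, n = 5) {
  sort(loadings[, component], decreasing = TRUE)[1:n]
}
top_programs(loadings2, "PC1") #programs behind the "policy related" component
top_programs(loadings2, "PC2") #programs behind the "health related" component
```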

