31 changes: 19 additions & 12 deletions assignment5.Rmd
@@ -1,5 +1,6 @@
---
title: "Principal Component Analysis"
author: 'He Chen'
output: html_document
---
## Data
@@ -16,7 +17,7 @@ The data you will be using comes from the Assistments online intelligent tutoring

## Start by uploading the data
```{r}
D1 <-
D1 <- read.csv('Assistments-confidence.csv')

```
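
A quick structural check after the import can catch problems early; a minimal sketch, assuming the file loaded with the id and mean_* columns used below:
```{r}
# Inspect dimensions, column names, and types before plotting
str(D1)
head(D1)
```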

@@ -27,6 +28,7 @@ D1 <-

library(ggplot2)
library(GGally)
library(tidyverse)

ggpairs(D1, 2:8, progress = FALSE) # ggpairs() draws a correlation plot between all the columns you identify by number (the second argument; the first column is skipped because it is just the student ID), and progress = FALSE stops a progress bar from appearing while the plot renders

@@ -38,7 +40,7 @@ ggcorr(D1[,-1], method = c("everything", "pearson")) #ggcorr() doesn't have an e
## Create a new data frame with the mean_correct variable removed; we want to keep that variable intact. The other variables will be included in our PCA.

```{r}
D2 <- select(D1, -c(id, mean_correct))

```
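
The chunk that actually fits the PCA is collapsed in this diff. Judging from the calls that follow (plot(pca, type = "lines"), pca$x, pca$rotation) and the matching Part III code, it presumably looks something like this sketch:
```{r}
# Fit PCA on the six predictors; scale. = TRUE standardizes each variable
# first, which matters because the variables are on different scales
pca <- prcomp(D2, scale. = TRUE)
summary(pca) # proportion of variance explained by each component
```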

@@ -67,22 +69,22 @@ plot(pca, type = "lines")
```

## Decide which components you would drop and remove them from your data set.

According to the summary, I would drop PC6 because it has the smallest relative variance.
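
A minimal sketch of actually removing that component from the transformed scores (D_reduced is a hypothetical name; pca is the object fitted above):
```{r}
# Keep PC1 through PC5 and drop PC6, the component with the smallest variance
D_reduced <- as.data.frame(pca$x)[, 1:5]
```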
## Part II

```{r}
#Now, create a data frame of the transformed data from your pca.

D3 <- as.data.frame(pca$x)

#Attach the variable "mean_correct" from your original data frame to D3.

D3 <- D3 %>% mutate(mean_correct = D1$mean_correct)


#Now re-run your correlation plots between the transformed data and mean_correct. If you had dropped some components, would you have lost important information about mean_correct?
ggcorr(D3, method = c("everything", "pearson"))



# Yes. The component I chose to drop (PC6) is actually the one most strongly correlated with mean_correct, so removing it would lose important information about the outcome.
```
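
One way to check this directly is to compute the correlation of each component with mean_correct; a sketch, assuming D3 as built above:
```{r}
# Correlation of each principal component with mean_correct; a large value
# for PC6 would confirm that dropping it discards useful information
cor(D3[, paste0("PC", 1:6)], D3$mean_correct)
```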
## Now print out the loadings for the components you generated:

@@ -94,6 +96,9 @@ pca$rotation
loadings <- abs(pca$rotation) #abs() takes the absolute value of each loading so all entries are positive and their magnitudes are easy to compare

#Now examine your components and try to come up with substantive descriptions of what some might represent?
loadings

# Answer: The rotation matrix gives the loading of each original variable on each component, i.e. how strongly that variable contributes to the component. For example, a mean_hint loading of 0.633 on PC1 would mean that PC1 is composed largely of students' hint use.

#You can generate a biplot to help you, though these can be a bit confusing. They plot the transformed data by the first two components. Therefore, the axes represent the direction of maximum variance accounted for. Then mapped onto this point cloud are the original directions of the variables, depicted as red arrows. It is supposed to provide a visualization of which variables "go together". Variables that possibly represent the same underlying construct point in the same direction.
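# The biplot itself -- a sketch, assuming pca is the prcomp object fitted
# above; the red arrows are the loadings of the original variables on PC1/PC2
biplot(pca)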

@@ -106,9 +111,11 @@ Also in this repository is a data set collected from TC students (tc-program-combos.csv)

```{r}

D4 <- read.csv('tc-program-combos.csv')
D5 <- select(D4, -1)
pca2 <- prcomp(D5, scale. = TRUE)
summary(pca2)
ggcorr(D5, method = c("everything", "pearson"))

# According to the plot, a number of cells are red, which means those programs are highly correlated with each other.
```
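
To push the interpretation a step further, one could check which programs load most heavily on the first component; a sketch, assuming the pca2 object above (loadings2 is a hypothetical name):
```{r}
# Programs with the largest absolute loadings on PC1 "go together" most strongly
loadings2 <- abs(pca2$rotation)
head(sort(loadings2[, "PC1"], decreasing = TRUE))
```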
342 changes: 342 additions & 0 deletions assignment5.html

Large diffs are not rendered by default.