generated from laser-institute/laser-getting-started
orientation-case-study-key-R.qmd
---
title: "A Coding Case Study with Quarto"
subtitle: "LASER Orientation Module"
author: "LASER Institute"
date: today
format:
  html:
    toc: true
    toc-depth: 4
    toc-location: right
    theme:
      light: simplex
      dark: cyborg
editor: visual
bibliography: lit/references.bib
---
## 0. INTRODUCTION
![](img/LASER_Hx.png){width="40%"}
Welcome to your first LASER Case Study! The case study activities included in each module demonstrate how key Learning Analytics (LA) techniques featured in exemplary STEM education research studies can be implemented with R or Python. Case studies also provide a holistic setting to explore important foundational topics integral to Learning Analytics such as reproducible research, use of APIs, and ethical use of educational data.
This orientation case study will also introduce you to [Quarto](https://quarto.org), which is heavily integrated into each LASER Module. You may have used Quarto before - or you may not have! Either is fine, as this case study is designed with the assumption that you have not used Quarto before.
### How to use this Quarto document
What you are working in now is a **Q**uarto **m**ark**d**own file, as indicated by the .**qmd** file name extension. Quarto documents are fully reproducible and use a productive notebook interface to combine formatted text and "chunks" of code to produce a range of [static and dynamic output formats](https://quarto.org/docs/guide/) including: HTML, PDF, Word, HTML5 slides, Tufte-style handouts, books, dashboards, shiny applications, scientific articles, websites, and more.
::: callout-tip
Quarto docs can include specially formatted **callout boxes** like this one to draw special attention to notes, tips, cautions, warnings, and important information.
**Pro tip**: Quarto documents also have a handy Outline feature that allows you to easily navigate the entire document. If the outline is not currently visible, click the Outline button located on the right of the toolbar at the top of this document.
:::
#### Source vs. Visual Editor
Following best practices for reproducible research [@gandrud2021], Quarto files store information in plain text [markdown](https://bookdown.org/yihui/rmarkdown/markdown-syntax.html) syntax. You are currently viewing this Quarto document using the visual editor. The visual editor is set as the default view in the [Quarto YAML header](https://quarto.org/docs/get-started/hello/rstudio.html#yaml-header) at the top of this document. Basically, a [YAML header](https://monashdatafluency.github.io/r-rep-res/yaml-header.html#) is:
> a short blob of text that... not only dictates the final file format, but a style and feel for our final document.
The visual editor allows you to view formatted headers, text, and code chunks and is a bit more "human readable" than markdown syntax, but there will be many occasions when you will want to take a look at the plain text source code underlying this document. This can be viewed at any point by switching to source mode for editing. You can toggle back and forth between these two modes by clicking on **Source** and **Visual** in the editor toolbar.
::: callout-note
You may have noticed a special kind of link in the text above. Specifically, a link citing Reproducible Research with R and R Studio by Chris Gandrud. The YAML header includes a bibliography option and points to our `references.bib` file in the `lit` folder of this project, which produces a nice tooltip for linked references and a bibliography when our doc is [rendered](https://quarto.org/docs/get-started/hello/rstudio.html#rendering) and [published](https://quarto.org/docs/get-started/authoring/rstudio.html#publishing). Click the following link to learn more about [citations in Quarto](https://quarto.org/docs/authoring/citations.html).
:::
#### 👉 Your Turn ⤵
LASER case studies include many interactive elements in which you are asked to perform an action, answer some questions, or write some code. These are indicated by the **👉 Your Turn** **⤵** header. Now it's your turn to do something.
Take a look at the markdown syntax used to create this document by viewing with the source editor. To do so, click the "Source" button in the toolbar at the top of this file. After you've had a look, click back to the visual editor to continue.
![](img/source-view.png){width="100%"}
Great job! Let's continue!
#### Code Chunks
In addition to including formatted text, hyperlinks, and embedded images like the one above, Quarto documents can also include a specially formatted text box called a "[code chunk](https://quarto.org/docs/get-started/hello/rstudio.html#code-chunks)." These chunks allow you to run code from multiple languages including R, Python, and SQL. For example, the code chunk below is intended to run R code as specified by the "r" inside the curly brackets `{}`. It also contains a code "comment", as indicated by the `#` hashtag, and a line of R code. You may have also noticed a set of buttons in the upper right corner of the code chunk which are used to execute the code.
#### 👉 Your Turn ⤵
Click the green arrow ![](https://d33wubrfki0l68.cloudfront.net/18153fb9953057ee5cff086122bd26f9cee8fe93/3aba9/images/notebook-run-chunk.png) icon on the right side of the code chunk to run the R code and view the image file named `laser-cycle.png` stored in the `img` folder in your files pane. Quarto will execute the code, and its output and any related messages will be displayed below the chunk.
```{r}
# Display an image from the specified path using the knitr package in R
knitr::include_graphics("img/laser-cycle.png")
```
Nice work! For this case study, don't stress too much about understanding the code. We'll spend a lot of time doing that in the other modules. For now, take a look at the image displayed and answer the question that follows by typing your response directly in this document.
#### ❓Question
In LASER case studies, you will often see as part of "Your Turns" a ❓ icon that indicates you are being prompted to answer a question. Type your response to the following question by deleting "YOUR RESPONSE HERE" and adding your own response:
What do you think this image is intended to illustrate?
- YOUR RESPONSE HERE
### The Data-Intensive Research Workflow
The diagram shown above illustrates a Learning Analytics framework called the Data-Intensive Research workflow and comes from the excellent book, Learning Analytics Goes to School [@krumm2018]. You can check that out later, but don't feel any need to dive deep into it for now - we spend more time unpacking this framework in our [Learning Analytics Workflow Modules](https://laser-institute.github.io/laser-website/curriculum-la-workflow.html); just know that this case study and all of the case studies in our [LASER curriculum modules](https://laser-institute.github.io/laser-website/curriculum-design.html#modules-topics) are organized around the five main components of this workflow.
In this introductory coding case study, we'll focus on the following tasks specific to each component of the workflow:
1. **Prepare**. Understand the research context, software packages, and data collected.
2. **Wrangle**. Select and filter variables and "wrangle" them in a tabular (think spreadsheet!) format.
3. **Explore**. Create some basic summary tables and plots to understand our data better.
4. **Model**. Run a basic model - specifically, a simple regression model.
5. **Communicate**. Create a reproducible report of your work that you can share with others.
Now, let's get started!
## 1. PREPARE
First and foremost, data-intensive research involves defining and refining a research question and developing an understanding of where your data comes from [@krumm2018]. This part of the process also involves setting up a reproducible research environment so your work can be understood and replicated by other researchers [@gandrud2021]. For now, we'll focus on just a few parts of this process, diving in much more deeply into these components in later learning modules.
### Research Question
In this case study, we'll be working with data that come from an unpublished research study by LASER team member [Josh Rosenberg](https://joshuamrosenberg.com), which utilized a number of different data sources to understand high school students' motivation within the context of online courses.
These data sets and related research questions are explored in much greater detail in other modules, but for the purpose of this case study, our analysis will be driven by the following research question:
*Is there a relationship between the time students spend on a course (as measured through their learning management system) and their final course grade?*
### Projects & Packages 📦
As highlighted in [Chapter 6 of Data Science in Education Using R](https://datascienceineducation.com/c06.html) [@estrellado2020e], one of the first steps of every research workflow should be to set up a "Project" within RStudio.
> A **Project** is the home for all of the files, images, reports, and code that are used in any given project.
We are working in Posit Cloud with an R project [cloned from GitHub](https://github.com/laser-institute/laser-orientation), so a project has already been set up for you as indicated by the `.Rproj` file in the main directory.
#### 👉 Your Turn ⤵
Locate the Files tab in the lower right-hand window pane and see if you can find the file named `laser-orientation.Rproj`.
Since a project is already set up for us, we will instead focus on loading the required packages we'll need for analysis.
> **Packages**, sometimes referred to as libraries, are shareable collections of R code that can contain functions, data, and/or documentation and extend the functionality of R.
You can always check to see which packages have already been installed and loaded into RStudio by looking at the Packages tab in the same pane as the Files tab. Click the Packages tab to see which packages have already been installed for this project.
#### tidyverse 📦
![](img/tidyverse.png){width="30%"}
One package that we'll be using extensively in our learning modules is the {tidyverse} package. The {tidyverse} is actually a [collection of R packages](https://www.tidyverse.org/packages) designed for wrangling and exploring data (sound familiar?) and which all share an underlying design philosophy, grammar, and data structures. These shared features are sometimes referred to as "[tidy data principles](https://r4ds.had.co.nz/tidy-data.html)" [@wickham2016r].
#### 👉 Your Turn ⤵
To load the tidyverse, we'll use the `library()` function. Go ahead and run the code chunk below:
```{r}
library(tidyverse)
```
::: callout-caution
Please do not worry if you saw a number of messages. Those probably mean that the `tidyverse` loaded just fine. If you see an "error" message, however, try to interpret or search via your search engine the contents of the error, or reach out to us for assistance.
:::
#### skimr 📦
![](img/skimr.png){width="20%"}
The {[skimr](https://github.com/ropensci/skimr)} package is a handy package that provides summary statistics that you can skim quickly to understand your data and see what may be missing. We'll be using this later in the Explore section of this case study.
#### 👉 Your Turn ⤵
As we noted in the beginning, these case studies are meant to be interactive. Throughout each case study, we will ask you to apply some of your R skills to help with the analysis. This is intended to help you practice newly introduced functions or R code and reinforce R skills you have already learned.
Now use the `library()` function in the code chunk below to load the `skimr` package into our environment as well.
```{r}
library(skimr)
```
### Loading (or reading in) data
The data we'll explore in this case study were originally collected for a research study, which utilized a number of different data sources to understand students' course-related motivation. These courses were designed and taught by instructors through a statewide online course provider designed to supplement – but not replace – students' enrollment in their local school.
The data used in this case study has already been "wrangled" quite a bit, but the original datasets included:
1. A self-report survey assessing three aspects of students' motivation
2. Log-trace data, such as data output from the learning management system (LMS)
3. Discussion board data
4. Academic achievement data
If you are interested in learning more about these datasets, you can visit Chapter 7 of the excellent book, [*Data Science in Education Using R*](https://datascienceineducation.com/c07.html#data-sources)[@estrellado2020e].
#### 👉 Your Turn ⤵
Next, we'll load our data - specifically, a CSV text file, the kind that you can export from Microsoft Excel or Google Sheets - into R, using the `read_csv()` function in the next chunk.
Clicking the green arrow runs the code; do that next to read the `sci-online-classes.csv` file stored in the `data` folder of your R project:
```{r}
sci_data <- read_csv("data/sci-online-classes.csv")
```
Nice work! You should now see a new data "object" named `sci_data` saved in your Environment pane. Try clicking on it and see what happens!
::: callout-important
It's important to note that by manipulating data with the {tidyverse} package, we are **not** changing the original file. Instead, the data is stored in memory, can be viewed in our **Environment** pane, and can later be exported and saved as a new file if desired.
:::
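If you do decide you want a copy of your wrangled data on disk, the tidyverse `write_csv()` function will export a data frame to a new CSV file. A minimal sketch - note that the output file name below is just an example, not a file that exists in this project:

```{r}
# Export the in-memory data frame to a new CSV file; the original
# data/sci-online-classes.csv file is left untouched
write_csv(sci_data, "data/sci-online-classes-copy.csv")
```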
#### Viewing and inspecting data
Now let's learn another way to inspect our data.
#### 👉 Your Turn ⤵
Run the next chunk and look at the results: tab left or right with the arrows, or scan through the rows by clicking the numbers at the bottom of the pane, to explore the print-out of the data frame you "assigned" to the `sci_data` object in the previous code chunk:
```{r}
sci_data
```
::: callout-tip
You can also enlarge this output by clicking the "Show in New Window" button located in the top right corner of the output.
:::
#### ❓Question
What do you notice about this data set? What do you wonder? Add one or two observations in the space below:
- YOUR RESPONSE HERE
There are many other ways to inspect your data; the `glimpse()` function provides one such way. Complete the code chunk below to take a "glimpse" at your `sci_data`.
```{r}
glimpse(sci_data)
```
#### ❓Question
We have a couple more questions to pose to you before wrangling the data we just imported.
Rows typically represent "cases": the units that we measure, or the units on which we collect data. This is not a trick question! What counts as a "case" (and therefore what is represented as a row) varies by (and within) fields. There may be multiple types or levels of units studied in your field; listing more than one is fine! Also, please consider what columns - which usually represent variables - represent in your area of work and/or research.
What do rows typically (or you think may) represent in your research:
- YOUR RESPONSE HERE
What do columns typically (or you think may) represent in your research:
- YOUR RESPONSE HERE
Next, we'll use a few functions that are handy for preparing data in table form.
## 2. WRANGLE
By wrangle, we refer to the process of cleaning and processing data and, in some cases, merging (or joining) data from multiple sources. Often, this part of the process is (surprisingly) time-intensive! Wrangling your data into shape can itself be an important accomplishment, and documenting your code using R scripts or Quarto files will save you and others a great deal of time wrangling data in the future! There are great tools in R for data wrangling, especially through the use of the {[dplyr](https://dplyr.tidyverse.org)} package, which is part of the {tidyverse} suite of packages.
### Selecting variables
Recall from our Prepare section that we are interested in the relationship between the time students spend on a course and their final course grade.
Let's practice selecting these variables by introducing a very powerful `|>` operator called a **pipe**. Pipes are a powerful tool for combining a sequence of functions or processes.
#### 👉 Your Turn ⤵
Run the following code chunk to "pipe" our `sci_data` to the `select()` function and include the following two variables as arguments:
- `FinalGradeCEMS` (i.e., students' final grades on a 0-100 point scale)
- `TimeSpent` (i.e., the number of minutes they spent in the course's learning management system)
```{r}
sci_data |>
select(FinalGradeCEMS, TimeSpent)
```
Notice how the number of columns (variables) is now different!
::: callout-note
It's important to note that since we haven't "assigned" this filtered data frame to a new object using the `<-` assignment operator, the number of variables in the `sci_data` data frame in our environment is still 30, not 2.
:::
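If you did want to keep just these two variables, you could assign the result of the pipe to a new object. A minimal sketch - the object name `sci_selected` is just an example, not one used elsewhere in this case study:

```{r}
# Assign the two-column result to a new object, leaving sci_data unchanged
sci_selected <- sci_data |>
  select(FinalGradeCEMS, TimeSpent)
```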
Let's *include one additional variable* in the select function that you think might be a predictor of students' final course grade or useful in addressing our research question.
First, we need to figure out what variables exist in our dataset (or be reminded of this - it's very common in R to be continually checking and inspecting your data)!
Recall that you can use a function named `glimpse()` to do this.
```{r}
glimpse(sci_data)
```
#### 👉 Your Turn ⤵
In the code chunk below, add a new variable, being careful to type the new variable name as it appears in the data. We've added some code to get you started. Consider how the names of the other variables are separated as you think about how to add an additional variable to this code.
```{r}
sci_data |>
select(FinalGradeCEMS, TimeSpent, total_points_earned)
```
Once added, the output should be different than in the code above - there should now be an additional variable included in the print-out.
::: callout-note
**A quick footnote about pipes**: The original pipe operator, `%>%`, comes from the {[magrittr](https://magrittr.tidyverse.org)} package, but all packages in the tidyverse load `%>%` for you automatically, so you don't usually load magrittr explicitly. The pipe has become such a useful and widely used operator in R that a simpler native pipe, `|>`, is now baked into base R. You can use the two fairly interchangeably, but there are a few [differences between pipe operators](https://www.tidyverse.org/blog/2023/04/base-vs-magrittr-pipe/).
:::
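To convince yourself that the two pipes behave the same way here, you could run both versions and compare the print-outs:

```{r}
# Both pipes pass sci_data as the first argument to select(),
# so these two lines print the same result
sci_data %>% select(FinalGradeCEMS, TimeSpent)
sci_data |> select(FinalGradeCEMS, TimeSpent)
```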
### Filtering variables
Next, let's explore filtering variables.
#### 👉 Your Turn ⤵
Check out and run the next chunk of code, imagining that we wish to filter our data to view only the rows associated with students who earned a final grade (as a percentage) higher than 70:
```{r}
sci_data |>
filter(FinalGradeCEMS > 70)
```
In the next code chunk, change the cut-off from 70% to some other value - larger or smaller (maybe much larger or smaller - feel free to play around with the code a bit!).
```{r}
sci_data |>
filter(FinalGradeCEMS > 70)
```
#### ❓Question
What happens when you change the cut-off from 70 to something else? Add a thought (or more) below:
- YOUR RESPONSE HERE
### Arrange
The last function we'll use for preparing tables is arrange, which allows us to sort columns in ascending (default) or descending order. We'll again use the `|>` to combine this `arrange()` function with a function we used already - `select()`. We do this so we can view only time spent and final grades.
```{r}
sci_data |>
select(FinalGradeCEMS, TimeSpent) |>
arrange(FinalGradeCEMS)
```
Note that arrange works by sorting values in ascending order (from lowest to highest) by default; you can change this by using the `desc()` function as an argument with arrange, like the following:
```{r}
sci_data |>
select(FinalGradeCEMS, TimeSpent) |>
arrange(desc(FinalGradeCEMS))
```
At a quick glance at our two variables, it does appear that students with higher grades also tend to have spent more time in the online course.
#### 👉 Your Turn ⤵
In the code chunk below, replace `FinalGradeCEMS` that is used with both the `select()` and `arrange()` functions with a different variable in the data set. Consider returning to the code chunk above in which you glimpsed at the names of all of the variables.
```{r}
sci_data |>
select(TimeSpent, FinalGradeCEMS) |>
arrange(desc(FinalGradeCEMS))
```
Can you compose a series of functions that include the `select()`, `filter()`, and `arrange()` functions? Recall that you can "pipe" the output from one function to the next as when we used `select()` and `arrange()` together in the code chunk above.
```{r}
sci_data |>
select(TimeSpent, FinalGradeCEMS) |>
filter(FinalGradeCEMS > 70) |>
arrange(FinalGradeCEMS)
```
## 3. EXPLORE
Exploratory data analysis, or exploring your data, involves processes of *describing* your data (such as by calculating the means and standard deviations of numeric variables, or counting the frequency of categorical variables) and, often, visualizing your data. As we'll learn in later labs, the explore phase can also involve the process of "feature engineering," or creating new variables within a dataset [@krumm2018].
In this section, we'll quickly pull together some basic stats for our two variables of interest, `TimeSpent` and `FinalGradeCEMS`, using a handy function from the {skimr} package. We'll also introduce you to a basic data visualization "code template" for the {[ggplot](https://ggplot2.tidyverse.org)} package from the tidyverse to help us examine the relationship between these two variables.
### Summary Statistics
Let's repurpose what we learned from our wrangle section to select just a few variables and quickly gather some descriptive stats using the `skim()` function from the {skimr} package.
```{r}
sci_data |>
select(TimeSpent, FinalGradeCEMS) |>
skim()
```
#### 👉 Your Turn ⤵
Use the code from the chunk from above to explore some other variables of interest from our `sci_data`.
```{r}
sci_data |>
select(course_id, FinalGradeCEMS) |>
skim()
```
What happens if you simply feed the `skim()` function the entire `sci_data` object? Give it a try!
```{r}
skim(sci_data)
```
### Data Visualization
Data visualization is an extremely common practice in Learning Analytics, especially in the use of data dashboards. Data visualization involves graphically representing one or more variables with the goal of discovering patterns in data. These patterns may help us to answer research questions or generate new questions about our data, to discover relationships between and among variables, and to create or select features for data modeling.
In this section we'll focus on using a basic code template for the {[ggplot2](https://ggplot2.tidyverse.org)} package from the tidyverse. `ggplot2` is a system for declaratively creating graphics, based on [the grammar of graphics](https://ggplot2-book.org/introduction.html#what-is-the-grammar-of-graphics) [@Wickham]. You provide the data, tell ggplot2 how to map variables to [aesthetics](https://ggplot2.tidyverse.org/reference/aes.html), what graphical elements to use, and it takes care of the details.
### The Graphing Workflow
At its core, you can create some very simple but attractive graphs with just a couple of lines of code. The {ggplot2} package follows a common basic workflow for making graphs:
1. Start the graph with `ggplot()` and include your data as an argument;
2. "Add" elements to the graph using the `+` operator and a [`geom_()` function](https://ggplot2.tidyverse.org/reference/#geoms);
3. Select variables to graph on each axis with the `aes()` argument.
#### 👉 Your Turn ⤵
Let's give it a try by creating a simple histogram of our `FinalGradeCEMS` variable. The code below creates a histogram, or a distribution of the values, in this case for students' final grades. Go ahead and run it:
```{r}
ggplot(sci_data) +
geom_histogram(aes(x = FinalGradeCEMS))
```
Note that the first function, `ggplot()`, creates a coordinate system that you can "add" layers to using additional functions and `+` operator. The first argument of `ggplot()` is the dataset, in our case `sci_data`, to use for the graph.
By itself, `ggplot(data = sci_data)` just creates an empty graph. But when you add a required `geom_()` function like `geom_histogram()`, you tell it which type of graph you want to make, in our case a histogram. A **geom** is the geometrical object that a plot uses to represent observations. People often describe plots by the type of geom that the plot uses. For example, bar charts use bar geoms, line charts use line geoms, boxplots use boxplot geoms, and so on. Scatterplots, which we'll see in a bit, break the trend; they use the point geom.
The final required element for any graph is a `mapping =` argument that defines which variables in your dataset are mapped to which axes in your graph. The `mapping` argument is always paired with the function `aes()`, which you use to gather together all of the mappings that you want to create. In our case, since we just created a simple histogram, we only had to specify what variable to place on the x axis, which in our case was `FinalGradeCEMS`.
We won't spend a lot of time on it in this case study, but you can also add a wide range of [aesthetic arguments](https://ggplot2.tidyverse.org/reference/index.html#aesthetics) to each geom, like changing the color of the histogram bars by adding an argument to specify color. Let's give that a try using the `fill =` argument:
```{r}
ggplot(sci_data) +
geom_histogram(aes(x = FinalGradeCEMS), fill = "blue")
```
#### 👉 Your Turn ⤵
Now use the code chunk below to visualize the distribution of another variable in the data, specifically `TimeSpent`. You can do so by swapping out the variable `FinalGradeCEMS` with our new variable. Also, change the color to one of your choosing; consider this list of valid color names here: <http://www.stat.columbia.edu/~tzheng/files/Rcolor.pdf>
There is no shame in copying and pasting code from above. Remember, reproducible research is also intended to help you save time!
```{r}
ggplot(sci_data) +
geom_histogram(aes(x = TimeSpent), fill = "green")
```
### Scatterplots
Finally, let's create a scatter plot for the relationship between these two variables. Scatterplots use the point geom, i.e., the `geom_point()` function, and are most useful for displaying the relationship between two continuous variables.
#### 👉 Your Turn ⤵
Complete the code chunk below to create a simple scatterplot with `TimeSpent` on the x axis and `FinalGradeCEMS` on the y axis.
**Hint**: Aside from the missing variable for the y-axis, something else important is also missing that you will need to "add" to your code.
```{r}
ggplot(sci_data) +
geom_point(aes(x = TimeSpent,
y = FinalGradeCEMS))
```
Well done! As you can see, there appears to be a positive relationship between the time students spend in the online course and their final grade!
To learn more about using {ggplot2} for data visualization, we highly recommend the [Data visualization with ggplot2 :: Cheat Sheet](https://rstudio.github.io/cheatsheets/html/data-visualization.html?_gl=1*11gsrq9*_ga*OTU4NTc4NzgwLjE2NzI3NTQwNzQ.*_ga_2C0WZ1JHG0*MTcxMDY0NzEzNi4yNzMuMS4xNzEwNjQ3MTk5LjAuMC4w).
## 4. MODEL
"Model" is one of those terms that has many different meanings. For our purpose, we refer to the process of simplifying and summarizing our data as modeling. Thus, models can take many forms; calculating means represents a legitimate form of modeling data, as does estimating more complex models, including linear regressions, and models and algorithms associated with machine learning tasks. For now, we'll run a base linear regression model to further examine the relationship between `TimeSpent` and `FinalGradeCEMS`.
We'll dive much deeper into modeling in subsequent case studies, but for now let's see if there is a statistically significant relationship between students' final grades, `FinalGradeCEMS`, and the `TimeSpent` on the course.
### An Inferential Model
An inferential statistical model is used to understand the relationships between variables and make inferences about the population from a sample. It aims to identify the underlying mechanisms and determine the significance of these relationships.
Key characteristics of an inferential model include:
- **Focus on Understanding:** These models aim to understand the causal or correlational relationships between variables.
- **Hypothesis Testing:** They often involve hypothesis testing to determine if observed patterns are statistically significant.
- **Parameter Estimates:** Inferential models provide estimates of parameters (such as coefficients) that describe the relationship between variables, along with confidence intervals and p-values to assess their significance.
- **Generalization:** The goal is to generalize findings from the sample to the larger population.
#### 👉 Your Turn ⤵
Execute the following code to run a basic linear regression model to further examine the relationship between `TimeSpent` and `FinalGradeCEMS`:
```{r}
m1 <- lm(FinalGradeCEMS ~ TimeSpent, data = sci_data)
summary(m1)
```
It looks like `TimeSpent` in the online course is indeed positively associated with a higher final grade! That is, students who spent more time in the LMS also earned higher grades. However, before we get too excited, there are likely other factors influencing student performance that are not captured by our model; time spent accounts for only a small portion of why students get different grades, and other factors likely play a big role in determining them.
Don't worry too much for now if interpreting model outputs is relatively new for you, or if it's been a while. We'll be working with models in greater depth in our other LASER modules.
#### 👉 Your Turn ⤵
Now let's "add" *another* variable to the regression model. Specifically, use the `+` operator after `TimeSpent` to add the course `subject` variable as a predictor of students' final grades.
```{r}
m2 <- lm(FinalGradeCEMS ~ TimeSpent + subject, data = sci_data)
summary(m2)
```
#### ❓Question
What do you notice about the results? How would you interpret them? Add a comment or two below:
- In summary, the model suggests that there is a statistically significant, though relatively small, positive association between the time spent in the Learning Management System and a student's final grade. However, the low R-squared value indicates that many other factors not included in the model may be influencing final grades.
### A Predictive Model
A predictive model is designed to make accurate predictions about future or unseen data. The primary goal is to maximize the accuracy of predictions rather than understanding the underlying relationships.
Key characteristics of a predictive model include:
- **Focus on Prediction:** These models aim to accurately predict the outcome variable for new data points.
- **Model Performance:** The effectiveness of predictive models is evaluated based on metrics like R-squared, mean squared error, or other prediction accuracy measures.
- **Complex Algorithms:** Predictive modeling often uses more complex algorithms (e.g., machine learning techniques) that might not be easily interpretable but offer higher predictive power.
- **Validation:** Predictive models rely heavily on validation techniques like cross-validation to ensure that the model performs well on unseen data.
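As a rough sketch of the validation idea described above (a simple hold-out split rather than full cross-validation, and assuming `sci_data` is loaded as in the earlier chunks), we could fit the model on a random 80% of the data and then measure prediction error on the 20% it never saw:

```{r}
# Hold-out validation: fit on 80% of the data, evaluate on the other 20%
set.seed(2024)
train_rows <- sample(nrow(sci_data), size = floor(0.8 * nrow(sci_data)))
train_data <- sci_data[train_rows, ]
test_data  <- sci_data[-train_rows, ]

holdout_model <- lm(FinalGradeCEMS ~ TimeSpent, data = train_data)
predictions <- predict(holdout_model, newdata = test_data)

# Root mean squared error on unseen data (lower is better)
sqrt(mean((test_data$FinalGradeCEMS - predictions)^2, na.rm = TRUE))
```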
Before using our model for predictive purposes, let's first take a look at the line of best fit, which represents the linear regression model `m1` that has been fitted to the data.
#### **👉 Your Turn** **⤵**
Run the following code chunk to fit a linear regression model to predict `FinalGradeCEMS` (final grades) based on `TimeSpent` (time in LMS) and visualize the data points and the fitted regression line on a scatter plot:
```{r}
ggplot(data = sci_data, aes(x = TimeSpent, y = FinalGradeCEMS)) +
geom_point() + # Plot data points
geom_smooth(method = 'lm', se = FALSE) + # Add line of best fit
labs(x = 'Time Spent in LMS', y = 'Final Grade') + # Add axis labels
theme_minimal() # Use a minimal theme
```
Now let's use this model to predict a student's grade based on how much time (in minutes) they spent in the course, for students who spent 1000, 1500, or 2000 minutes, respectively:
```{r}
# Create a data frame 'time' with 'TimeSpent' values 1000, 1500, 2000
time <- data.frame(TimeSpent = c(1000, 1500, 2000))
# Use the predict function to obtain predictions for the new data
predicted_grades <- predict(m1, newdata = time)
# Print the predicted grades
predicted_grades
```
Hmm... Assuming our model is accurate (a big assumption!) even at 2000 minutes the best a student might hope for is a C+!
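Under the hood, `predict()` is simply applying the fitted regression equation. As a quick check, we can compute the same predictions by hand from the model's coefficients:

```{r}
# Manually apply the fitted equation: intercept + slope * TimeSpent
b <- coef(m1)
b[1] + b[2] * c(1000, 1500, 2000)  # should match predict() above
```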
#### **👉 Your Turn** **⤵**
Try changing the amount of time a student spends in the course and see how the predicted grade changes based on our simple model.
```{r}
# Create a data frame 'new_time' with new 'TimeSpent' value
new_time <- data.frame(TimeSpent = 5000)
# Use the predict function to obtain predictions for the new data
predicted_grades <- predict(m1, newdata = new_time)
# Print the predicted grades
predicted_grades
```
#### ❓Question
How much time would a student need to spend in the course to get an A?
- YOUR RESPONSE HERE
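One way to answer this (a back-of-the-envelope sketch that assumes an A means a grade of 90 or above; that cutoff is our assumption, not something defined in the data) is to rearrange the fitted equation, grade = intercept + slope × time, and solve for time:

```{r}
# Solve for the TimeSpent needed to reach an assumed A cutoff of 90
b <- coef(m1)
(90 - b[1]) / b[2]  # minutes of TimeSpent implied by the fitted line
```

Keep in mind this extrapolates well beyond the time values observed in the data, so the result should not be taken literally.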
::: callout-important
It's important to note that these models are just illustrative examples of typical modeling approaches used in learning analytics and should be taken with a grain of salt. In our predictive model, for example, we haven't attended to the accuracy of our model or how well it performs on unseen data.
:::
## 5. COMMUNICATE
The final step in the workflow/process is sharing the results of your analysis with a wider audience. Krumm et al. @krumm2018 have outlined the following three-step process for communicating findings from an analysis with education stakeholders:
1. **Select.** Communicating what one has learned involves selecting among those analyses that are most important and most useful to an intended audience, as well as selecting a form for displaying that information, such as a graph or table in static or interactive form, i.e. a "data product."
2. **Polish**. After creating initial versions of data products, research teams often spend time refining or polishing them, by adding or editing titles, labels, and notations and by working with colors and shapes to highlight key points.
3. **Narrate.** Writing a narrative to accompany the data products involves, at a minimum, pairing a data product with its related research question, describing how best to interpret the data product, and explaining the ways in which the data product helps answer the research question and might be used to inform new analyses or a "change idea" for improving student learning.
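As a small illustration of the "Select" and "Polish" steps, here is one way we might refine the earlier scatter plot with a title, subtitle, and clearer labels before sharing it (the wording of the titles is our own; adjust it to your audience):

```{r}
# A polished version of the earlier scatter plot for sharing with stakeholders
ggplot(sci_data, aes(x = TimeSpent, y = FinalGradeCEMS)) +
  geom_point(alpha = 0.4, color = "gray40") +
  geom_smooth(method = "lm", se = TRUE, color = "steelblue") +
  labs(
    title = "Time Spent in the LMS and Final Course Grades",
    subtitle = "Each point is a student; the line shows the fitted linear trend",
    x = "Time spent in LMS (minutes)",
    y = "Final grade (CEMS)"
  ) +
  theme_minimal()
```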
In later modules, you will have an opportunity to create a "data product" designed to illustrate some insights gained from your analysis and ideally highlight an action step or change idea that can be used to improve learning or the contexts in which learning occurs.
For example, imagine that as part of a grant to improve student performance in online courses, you are working with the instructors who taught for this online course provider. As part of some early exploratory work to identify factors influencing performance, you are interested in sharing some of your findings about the hours students logged in the LMS and their final grades. One way we might communicate these findings is through a simple [data dashboard](https://sbkellogg.quarto.pub/final-grades-and-hours-logged/#plots) like the one shown below. Dashboards are a very common reporting tool, and their use, for better or worse, has become ubiquitous in the field of Learning Analytics.
[![](img/data-dashboard.png)](https://sbkellogg.quarto.pub/final-grades-and-hours-logged/#plots)
### Render Document
For now, we will wrap up this case study by converting your work to an HTML file that can be published and used to communicate your learning and demonstrate some of your new R skills. To do so, you will need to "render" your document. Rendering a document does two important things:
1. it checks through all your code for any errors; and,
2. it creates a file in your project directory that you can use to share your work.
#### 👉 Your Turn ⤵
Now that you've finished your first case study, let's render this document by clicking the ![](img/render.png){width="2%"} Render button in the toolbar at the top of this file. Rendering will convert this Quarto document to an HTML web page as specified in our YAML header. Web pages are just one of [the many publishing formats you can create with Quarto](https://quarto.org/docs/output-formats/all-formats.html) documents.
If the file rendered correctly, you should now see a new file named `orientation-case-study-R.html` in the Files tab located in the bottom right corner of RStudio. If so, congratulations, you just completed the getting-started activity! You're now ready for the unit case studies that we will complete during the third week of each unit.
::: callout-important
If you encounter errors when you try to render, first check the case study answer key located in the Files pane, which has the suggested code for the Your Turn sections. If you are still having difficulties, try copying and pasting the error into Google or ChatGPT to see if you can resolve the issue. Finally, contact your instructor to debug the code together if you're still having issues.
:::
### Publish File
There are a wide variety of ways to publish documents, presentations, and websites created using Quarto. Since content rendered with Quarto uses standard formats (HTML, PDF, MS Word, etc.), it can be published anywhere. Additionally, there is a `quarto publish` command available for easy publishing to various popular services such as Quarto Pub, Posit Cloud, RPubs, GitHub Pages, and others.
#### 👉 Your Turn ⤵
Choose one of the following methods described below for publishing your completed case study.
#### Publishing with Quarto Pub
Quarto Pub is a free publishing service for content created with Quarto. Quarto Pub is ideal for blogs, course or project websites, books, reports, presentations, and personal hobby sites.
It’s important to note that all documents and sites published to Quarto Pub are **publicly visible**. You should only publish content you wish to share publicly.
To publish to Quarto Pub, you will use the `quarto publish` command to publish content rendered on your local machine or via Posit Cloud.
Before attempting your first publish, be sure that you have created a free [Quarto Pub](https://quartopub.com/) account.
The `quarto publish` command provides a very straightforward way to publish documents to Quarto Pub.
For example, here is the Terminal command to publish a generic Quarto `document.qmd` to this service:
``` {.bash filename="Terminal"}
quarto publish quarto-pub document.qmd
```
You can access the terminal directly from the **Terminal Pane** in the lower left corner, as shown below:
![](img/terminal.png){width="100%"}
The actual command you will enter into your terminal to publish your orientation case study is:
`quarto publish quarto-pub orientation-case-study-R.qmd`
When you publish to Quarto Pub using `quarto publish`, an access token is used to grant permission for publishing to your account. The first time you publish to Quarto Pub, the Quarto CLI will automatically launch a browser to authorize one, as shown below.
``` {.bash filename="Terminal"}
$ quarto publish quarto-pub
? Authorize (Y/n) ›
❯ In order to publish to Quarto Pub you need to
authorize your account. Please be sure you are
logged into the correct Quarto Pub account in
your default web browser, then press Enter or
'Y' to authorize.
```
Authorization will launch your default web browser to confirm that you want to allow publishing from Quarto CLI. An access token will be generated and saved locally by the Quarto CLI.
Once you've authorized Quarto Pub and published your case study, it should take you immediately to the published document. See my example Orientation Case Study complete with answer key here: [https://sbkellogg.quarto.pub/laser-orientation-case-study-key](https://sbkellogg.quarto.pub/laser-orientation-case-study-key/).
After you've published your first document, you can continue adding more documents, slides, books and even publish entire websites!
![](img/quarto-pub.png)
#### Publishing with RPubs or Posit Cloud
An alternative, and perhaps the easiest, way to quickly publish your file online is to publish directly from RStudio to Posit Cloud or RPubs. You can do so by clicking the "Publish" button located in the Viewer Pane after you render your document, as illustrated in the screenshot below.
![](img/publish.png){width="100%"}
Similar to Quarto Pub, be sure that you have created a free Posit Cloud or RPubs account before attempting your first publish. You may also need to connect your Posit Cloud or RPubs account to RStudio before being able to publish.
See below for examples of my published Orientation Case Study, complete with answer key, using:
- **R Pubs**: <https://rpubs.com/sbkellogg/orientation-case-study-key>
- **Posit Cloud**: <https://posit.cloud/content/8432811>
![](img/posit-cloud-pub.png){width="100%"}
### Your First LASER Badge!
Congratulations, you've completed your first case study!
Once you have shared a link to your published document with your instructor and they have reviewed your work, you will be provided a physical or digital version of the badge pictured below!
![](img/LASER_Hx.png){width="50%"}
### References