
Commit

Built site for gh-pages
Quarto GHA Workflow Runner committed Sep 13, 2023
1 parent 228589b commit 094731a
Showing 19 changed files with 23,530 additions and 22,971 deletions.
2 changes: 1 addition & 1 deletion .nojekyll
@@ -1 +1 @@
-c5739e20
+c93ab8b4
16 changes: 14 additions & 2 deletions index.html
@@ -163,7 +163,19 @@
 </ul>
 </li>
 </ul>
-<div class="quarto-navbar-tools ms-auto">
+<ul class="navbar-nav navbar-nav-scroll ms-auto">
+<li class="nav-item">
+<a class="nav-link" href="./license-web.html" rel="" target=""><i class="bi bi-file-certificate" role="img">
+</i>
+<span class="menu-text">License</span></a>
+</li>
+<li class="nav-item">
+<a class="nav-link" href="https://github.com/posit-conf-2023/arrow" rel="" target=""><i class="bi bi-github" role="img">
+</i>
+<span class="menu-text">GitHub</span></a>
+</li>
+</ul>
+<div class="quarto-navbar-tools">
 </div>
 </div> <!-- /navcollapse -->
 </div> <!-- /container-fluid -->
@@ -217,7 +229,7 @@ <h1 class="title">Big Data in R with Arrow</h1>
 <section id="workshop-overview" class="level3">
 <h3 class="anchored" data-anchor-id="workshop-overview">Workshop Overview</h3>
 <p>Data analysis pipelines with larger-than-memory data are becoming more and more commonplace. In this workshop you will learn how to use Apache Arrow, a multi-language toolbox for working with larger-than-memory tabular data, to create seamless “big” data analysis pipelines with R.</p>
-<p>The workshop will focus on using the the arrow R package—a mature R interface to Apache Arrow—to process larger-than-memory files and multi-file data sets with arrow using familiar dplyr syntax. You’ll learn to create and use interoperable data file formats like Parquet for efficient data storage and access, with data stored both on disk and in the cloud, and also how to exercise fine control over data types to avoid common large data pipeline problems. This workshop will provide a foundation for using Arrow, giving you access to a powerful suite of tools for performant analysis of larger-than-memory data in R.</p>
+<p>The workshop will focus on using the the arrow R package—a mature R interface to Apache Arrow—to process larger-than-memory files and multi-file datasets with arrow using familiar dplyr syntax. You’ll learn to create and use interoperable data file formats like Parquet for efficient data storage and access, with data stored both on disk and in the cloud, and also how to exercise fine control over data types to avoid common large data pipeline problems. This workshop will provide a foundation for using Arrow, giving you access to a powerful suite of tools for performant analysis of larger-than-memory data in R.</p>
 <p><em>This course is for you if you:</em></p>
 <ul>
 <li>want to learn how to work with tabular data that is too large to fit in memory using existing R and tidyverse syntax implemented in Arrow</li>
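The workflow described in the workshop overview above (opening a larger-than-memory, multi-file dataset, querying it with familiar dplyr verbs through arrow, and writing Parquet for efficient storage) might look roughly like the following R sketch. This is an illustrative outline only; the dataset path, the partitioning choice, and the column names (year) are hypothetical and are not taken from the course materials.

library(arrow)
library(dplyr)

# Open a multi-file dataset lazily; nothing is read into memory yet.
# "data/checkouts-csv/" is a hypothetical path used for illustration.
ds <- open_dataset("data/checkouts-csv/", format = "csv")

# Build a dplyr query; arrow pushes the computation down to the Arrow
# compute engine, so only the (small) summarised result reaches R.
result <- ds |>
  filter(year >= 2020) |>
  group_by(year) |>
  summarise(total = n()) |>
  collect()   # collect() materialises the result as an R data frame

# Rewrite the data as partitioned Parquet files for faster later access.
ds |>
  write_dataset("data/checkouts-parquet/",
                format = "parquet",
                partitioning = "year")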
