Merge pull request #7 from posit-conf-2023/rm-slido
Remove Slido references + tweaks to in-memory slide order
thisisnic authored Aug 30, 2023
2 parents 31efe2a + 8b319ac commit c885e2a
Showing 12 changed files with 127 additions and 55 deletions.
@@ -0,0 +1,14 @@
{
"hash": "980fb7425ba8aa5035e98dd2e64f240d",
"result": {
"markdown": "---\ntitle: \"Arrow In-Memory Exercise\"\nexecute:\n echo: true\n messages: false\n warning: false\n---\n\n::: {.cell}\n\n```{.r .cell-code}\nlibrary(arrow)\nlibrary(dplyr)\n```\n:::\n\n\n\n::: {#exercise-hello-nyc-taxi .callout-tip}\n## Exercises: Arrow Table\n\n::: panel-tabset\n## Problems\n\n1. Read in a single NYC Taxi parquet file using `read_parquet()` as an Arrow Table\n2. Convert your Arrow Table object to a `data.frame` or a `tibble`\n\n## Solution 1\n\n\n::: {.cell}\n\n```{.r .cell-code}\nparquet_file <- here::here(\"data/nyc-taxi/year=2019/month=9/part-0.parquet\")\n\ntaxi_table <- read_parquet(parquet_file, as_data_frame = FALSE)\ntaxi_table\n```\n\n::: {.cell-output .cell-output-stdout}\n```\nTable\n6567396 rows x 22 columns\n$vendor_name <string>\n$pickup_datetime <timestamp[ms]>\n$dropoff_datetime <timestamp[ms]>\n$passenger_count <int64>\n$trip_distance <double>\n$pickup_longitude <double>\n$pickup_latitude <double>\n$rate_code <string>\n$store_and_fwd <string>\n$dropoff_longitude <double>\n$dropoff_latitude <double>\n$payment_type <string>\n$fare_amount <double>\n$extra <double>\n$mta_tax <double>\n$tip_amount <double>\n$tolls_amount <double>\n$total_amount <double>\n$improvement_surcharge <double>\n$congestion_surcharge <double>\n$pickup_location_id <int64>\n$dropoff_location_id <int64>\n```\n:::\n:::\n\n\n## Solution 2\n\n\n::: {.cell}\n\n```{.r .cell-code}\ntaxi_table |> collect()\n```\n\n::: {.cell-output .cell-output-stdout}\n```\n# A tibble: 6,567,396 × 22\n vendor_name pickup_datetime dropoff_datetime passenger_count\n <chr> <dttm> <dttm> <int>\n 1 CMT 2019-08-31 18:09:30 2019-08-31 18:15:42 1\n 2 CMT 2019-08-31 18:26:30 2019-08-31 18:44:31 1\n 3 CMT 2019-08-31 18:39:35 2019-08-31 19:15:55 2\n 4 VTS 2019-08-31 18:12:26 2019-08-31 18:15:17 4\n 5 VTS 2019-08-31 18:43:16 2019-08-31 18:53:50 1\n 6 VTS 2019-08-31 18:26:13 2019-08-31 18:45:35 1\n 7 CMT 2019-08-31 18:34:52 2019-08-31 18:42:03 1\n 8 CMT 2019-08-31 18:50:02 2019-08-31 
18:58:16 1\n 9 CMT 2019-08-31 18:08:02 2019-08-31 18:14:44 0\n10 VTS 2019-08-31 18:11:38 2019-08-31 18:26:47 1\n# ℹ 6,567,386 more rows\n# ℹ 18 more variables: trip_distance <dbl>, pickup_longitude <dbl>,\n# pickup_latitude <dbl>, rate_code <chr>, store_and_fwd <chr>,\n# dropoff_longitude <dbl>, dropoff_latitude <dbl>, payment_type <chr>,\n# fare_amount <dbl>, extra <dbl>, mta_tax <dbl>, tip_amount <dbl>,\n# tolls_amount <dbl>, total_amount <dbl>, improvement_surcharge <dbl>,\n# congestion_surcharge <dbl>, pickup_location_id <int>, …\n```\n:::\n:::\n\n\nor\n\n\n::: {.cell}\n\n```{.r .cell-code}\nas_tibble(taxi_table)\n```\n\n::: {.cell-output .cell-output-stdout}\n```\n# A tibble: 6,567,396 × 22\n vendor_name pickup_datetime dropoff_datetime passenger_count\n <chr> <dttm> <dttm> <int>\n 1 CMT 2019-08-31 18:09:30 2019-08-31 18:15:42 1\n 2 CMT 2019-08-31 18:26:30 2019-08-31 18:44:31 1\n 3 CMT 2019-08-31 18:39:35 2019-08-31 19:15:55 2\n 4 VTS 2019-08-31 18:12:26 2019-08-31 18:15:17 4\n 5 VTS 2019-08-31 18:43:16 2019-08-31 18:53:50 1\n 6 VTS 2019-08-31 18:26:13 2019-08-31 18:45:35 1\n 7 CMT 2019-08-31 18:34:52 2019-08-31 18:42:03 1\n 8 CMT 2019-08-31 18:50:02 2019-08-31 18:58:16 1\n 9 CMT 2019-08-31 18:08:02 2019-08-31 18:14:44 0\n10 VTS 2019-08-31 18:11:38 2019-08-31 18:26:47 1\n# ℹ 6,567,386 more rows\n# ℹ 18 more variables: trip_distance <dbl>, pickup_longitude <dbl>,\n# pickup_latitude <dbl>, rate_code <chr>, store_and_fwd <chr>,\n# dropoff_longitude <dbl>, dropoff_latitude <dbl>, payment_type <chr>,\n# fare_amount <dbl>, extra <dbl>, mta_tax <dbl>, tip_amount <dbl>,\n# tolls_amount <dbl>, total_amount <dbl>, improvement_surcharge <dbl>,\n# congestion_surcharge <dbl>, pickup_location_id <int>, …\n```\n:::\n:::\n\n\nor\n\n\n::: {.cell}\n\n```{.r .cell-code}\nas.data.frame(taxi_table)\n```\n\n::: {.cell-output .cell-output-stdout}\n```\n# A tibble: 6,567,396 × 22\n vendor_name pickup_datetime dropoff_datetime passenger_count\n <chr> <dttm> <dttm> <int>\n 1 CMT 
2019-08-31 18:09:30 2019-08-31 18:15:42 1\n 2 CMT 2019-08-31 18:26:30 2019-08-31 18:44:31 1\n 3 CMT 2019-08-31 18:39:35 2019-08-31 19:15:55 2\n 4 VTS 2019-08-31 18:12:26 2019-08-31 18:15:17 4\n 5 VTS 2019-08-31 18:43:16 2019-08-31 18:53:50 1\n 6 VTS 2019-08-31 18:26:13 2019-08-31 18:45:35 1\n 7 CMT 2019-08-31 18:34:52 2019-08-31 18:42:03 1\n 8 CMT 2019-08-31 18:50:02 2019-08-31 18:58:16 1\n 9 CMT 2019-08-31 18:08:02 2019-08-31 18:14:44 0\n10 VTS 2019-08-31 18:11:38 2019-08-31 18:26:47 1\n# ℹ 6,567,386 more rows\n# ℹ 18 more variables: trip_distance <dbl>, pickup_longitude <dbl>,\n# pickup_latitude <dbl>, rate_code <chr>, store_and_fwd <chr>,\n# dropoff_longitude <dbl>, dropoff_latitude <dbl>, payment_type <chr>,\n# fare_amount <dbl>, extra <dbl>, mta_tax <dbl>, tip_amount <dbl>,\n# tolls_amount <dbl>, total_amount <dbl>, improvement_surcharge <dbl>,\n# congestion_surcharge <dbl>, pickup_location_id <int>, …\n```\n:::\n:::\n\n:::\n:::\n",
"supporting": [],
"filters": [
"rmarkdown/pagebreak.lua"
],
"includes": {},
"engineDependencies": {},
"preserve": {},
"postProcess": true
}
}
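The exercise file above reads a 6.5-million-row parquet file; the same Table-to-data-frame round trip can be sketched on a toy in-memory table instead. This is a minimal sketch, assuming the arrow and dplyr packages are installed; the toy columns are invented stand-ins, not the real NYC Taxi schema:

```r
library(arrow)
library(dplyr)

# Build a small Arrow Table in memory (stand-in for
# read_parquet(parquet_file, as_data_frame = FALSE))
taxi_toy <- arrow_table(
  vendor_name     = c("CMT", "VTS", "CMT"),
  passenger_count = c(1L, 4L, 2L),
  trip_distance   = c(1.2, 3.4, 0.8)
)

# collect() materializes the Arrow Table as an R tibble
taxi_df <- taxi_toy |> collect()

# as_tibble() and as.data.frame() produce equivalent conversions
identical(dim(as.data.frame(taxi_toy)), dim(taxi_df))
```

`collect()`, `as_tibble()`, and `as.data.frame()` all pull the same data into R; `collect()` is the idiomatic choice at the end of a dplyr pipeline.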
4 changes: 2 additions & 2 deletions _freeze/materials/1_intro/execute-results/html.json
@@ -1,7 +1,7 @@
{
"hash": "3d6e091a5cba2ed687d545e5d6570bf6",
"hash": "7b2566f7cffe271a7b542a6550ef457f",
"result": {
"markdown": "---\nfooter: \"[🔗 posit.io/arrow](https://posit-conf-2023.github.io/arrow)\"\nlogo: \"images/logo.png\"\nexecute:\n echo: true\nformat:\n revealjs: \n theme: default\nengine: knitr\n---\n\n\n# Hello Arrow {#hello-arrow}\n\n## Slido Poll: Arrow\n\n<br>\n\nHave you used or experimented with Arrow before today?\n\n- A little\n- A lot\n- Not yet\n- Not yet, but I have read about it!\n\n## Hello Arrow<br>Demo\n\n<br>\n\n![](images/logo.png){.absolute top=\"0\" left=\"250\" width=\"600\" height=\"800\"}\n\n## Some \"Big\" Data\n\n![](images/nyc-taxi-homepage.png){.absolute left=\"200\" width=\"600\"}\n\n::: {style=\"font-size: 60%; margin-top: 550px; margin-left: 200px;\"}\n<https://www.nyc.gov/site/tlc/about/tlc-trip-record-data.page>\n:::\n\n## NYC Taxi Data\n\n- *big* NYC Taxi data set (\\~40GBs on disk)\n\n\n::: {.cell}\n\n```{.r .cell-code}\nopen_dataset(\"s3://voltrondata-labs-datasets/nyc-taxi\") |>\n filter(year %in% 2012:2021) |>\n write_dataset(here::here(\"data/nyc-taxi\"), partitioning = c(\"year\", \"month\"))\n```\n:::\n\n\n- *tiny* NYC Taxi data set (\\<1GB on disk)\n\n\n::: {.cell}\n\n```{.r .cell-code}\ndownload.file(url = \"https://github.com/posit-conf-2023/arrow/releases/download/v0.1/nyc-taxi-tiny.zip\",\n destfile = here::here(\"data/nyc-taxi-tiny.zip\"))\n\nunzip(\n zipfile = here::here(\"data/nyc-taxi-tiny.zip\"),\n exdir = here::here(\"data/\")\n)\n```\n:::\n\n\n## posit Cloud ☁️\n\n<br>\n\n[posit.io/arrow-conf23-cloud](https://posit.cloud/spaces/397258/content/all?sort=name_asc)\n\n<br>\n\nOnce you have joined, navigate to Projects on the top menu.\n\n## Larger-Than-Memory Data\n\n<br>\n\n`arrow::open_dataset()`\n\n<br>\n\n`sources`: point to a string path or directory of data files (on disk or in a GCS/S3 bucket) and return an `Arrow Dataset`, then use `dplyr` methods to query it.\n\n::: notes\nArrow Datasets allow you to query against data that has been split across multiple files. 
This sharding of data may indicate partitioning, which can accelerate queries that only touch some partitions (files). Call open_dataset() to point to a directory of data files and return a Dataset, then use dplyr methods to query it.\n:::\n\n<!-- ## NYC Taxi Dataset: A {dplyr} pipeline -->\n\n<!-- <br> -->\n\n<!-- - use `filter()` to restrict data to 2014:2017 -->\n\n<!-- - use `group_by()` to aggregate by `year` -->\n\n<!-- - use `summarise()` to count total and shared trips -->\n\n<!-- - use `mutate()` to compute percent of trips shared -->\n\n<!-- - use `collect()` to trigger execution & pull result into R -->\n\n## NYC Taxi Dataset\n\n\n::: {.cell}\n\n```{.r .cell-code}\nlibrary(arrow)\n\nnyc_taxi <- open_dataset(here::here(\"data/nyc-taxi\"))\nnyc_taxi\n```\n\n::: {.cell-output .cell-output-stdout}\n```\nFileSystemDataset with 120 Parquet files\nvendor_name: string\npickup_datetime: timestamp[ms]\ndropoff_datetime: timestamp[ms]\npassenger_count: int64\ntrip_distance: double\npickup_longitude: double\npickup_latitude: double\nrate_code: string\nstore_and_fwd: string\ndropoff_longitude: double\ndropoff_latitude: double\npayment_type: string\nfare_amount: double\nextra: double\nmta_tax: double\ntip_amount: double\ntolls_amount: double\ntotal_amount: double\nimprovement_surcharge: double\ncongestion_surcharge: double\npickup_location_id: int64\ndropoff_location_id: int64\nyear: int32\nmonth: int32\n```\n:::\n:::\n\n\n## NYC Taxi Dataset\n\n\n::: {.cell}\n\n```{.r .cell-code}\nnyc_taxi |> \n nrow()\n```\n\n::: {.cell-output .cell-output-stdout}\n```\n[1] 1150352666\n```\n:::\n:::\n\n\n<br>\n\n1.15 billion rows 🤯\n\n## NYC Taxi Dataset: A {dplyr} pipeline\n\n\n::: {.cell}\n\n```{.r .cell-code}\nlibrary(dplyr)\n\nnyc_taxi |>\n filter(year %in% 2014:2017) |>\n group_by(year) |>\n summarise(\n all_trips = n(),\n shared_trips = sum(passenger_count > 1, na.rm = TRUE)\n ) |>\n mutate(pct_shared = shared_trips / all_trips * 100) |>\n collect()\n```\n\n::: {.cell-output 
.cell-output-stdout}\n```\n# A tibble: 4 × 4\n year all_trips shared_trips pct_shared\n <int> <int> <int> <dbl>\n1 2014 165114361 48816505 29.6\n2 2015 146112989 43081091 29.5\n3 2016 131165043 38163870 29.1\n4 2017 113495512 32296166 28.5\n```\n:::\n:::\n\n\n## NYC Taxi Dataset: A {dplyr} pipeline\n\n\n::: {.cell}\n\n```{.r .cell-code code-line-numbers=\"11,12\"}\nlibrary(dplyr)\n\nnyc_taxi |>\n filter(year %in% 2014:2017) |>\n group_by(year) |>\n summarise(\n all_trips = n(),\n shared_trips = sum(passenger_count > 1, na.rm = TRUE)\n ) |>\n mutate(pct_shared = shared_trips / all_trips * 100) |>\n collect() |> \n system.time()\n```\n\n::: {.cell-output .cell-output-stdout}\n```\n user system elapsed \n 17.720 1.473 2.599 \n```\n:::\n:::\n\n\n## Your Turn\n\n1. Calculate total number of rides for each month in 2019\n\n2. About how long did this query of 1.15 billion rows take?\n\n➡️ [Hello Arrow Exercises Page](01-hello-arrow-exercises.html)\n\n## What is Apache Arrow?\n\n::: columns\n::: {.column width=\"50%\"}\n> A multi-language toolbox for accelerated data interchange and in-memory processing\n:::\n\n::: {.column width=\"50%\"}\n> Arrow is designed to both improve the performance of analytical algorithms and the efficiency of moving data from one system or programming language to another\n:::\n:::\n\n::: {style=\"font-size: 70%;\"}\n<https://arrow.apache.org/overview/>\n:::\n\n## Apache Arrow Specification\n\nIn-memory columnar format: a standardized, language-agnostic specification for representing structured, table-like data sets in-memory.\n\n<br>\n\n![](images/arrow-rectangle.png){.absolute left=\"200\"}\n\n## A Multi-Language Toolbox\n\n![](images/arrow-libraries-structure.png)\n\n## Accelerated Data Interchange\n\n![](images/data-interchange-with-arrow.png)\n\n## Accelerated In-Memory Processing\n\nArrow's Columnar Format is Fast\n\n![](images/columnar-fast.png){.absolute top=\"120\" left=\"200\" height=\"600\"}\n\n::: notes\nThe contiguous columnar layout 
enables vectorization using the latest SIMD (Single Instruction, Multiple Data) operations included in modern processors.\n:::\n\n## arrow 📦\n\n<br>\n\n![](images/arrow-r-pkg.png){.absolute top=\"0\" left=\"300\" width=\"700\" height=\"900\"}\n\n## arrow 📦\n\n![](images/arrow-read-write-updated.png)\n\n## Today\n\n- Module 1: Larger-than-memory data manipulation with Arrow---Part I\n- Module 2: Data engineering with Arrow\n- Module 3: Larger-than-memory data manipulation with Arrow---Part II\n- Module 4: In-memory workflows in R with Arrow\n\n<br>\n\nWe will also talk about Arrow data types, file formats, controlling schemas & more fun stuff along the way!\n",
"markdown": "---\nfooter: \"[🔗 posit.io/arrow](https://posit-conf-2023.github.io/arrow)\"\nlogo: \"images/logo.png\"\nexecute:\n echo: true\nformat:\n revealjs: \n theme: default\nengine: knitr\n---\n\n\n# Hello Arrow {#hello-arrow}\n\n## Poll: Arrow\n\n<br>\n\nHave you used or experimented with Arrow before today?\n\n- A little\n- A lot\n- Not yet\n- Not yet, but I have read about it!\n\n## Hello Arrow<br>Demo\n\n<br>\n\n![](images/logo.png){.absolute top=\"0\" left=\"250\" width=\"600\" height=\"800\"}\n\n## Some \"Big\" Data\n\n![](images/nyc-taxi-homepage.png){.absolute left=\"200\" width=\"600\"}\n\n::: {style=\"font-size: 60%; margin-top: 550px; margin-left: 200px;\"}\n<https://www.nyc.gov/site/tlc/about/tlc-trip-record-data.page>\n:::\n\n## NYC Taxi Data\n\n- *big* NYC Taxi data set (\\~40GBs on disk)\n\n\n::: {.cell}\n\n```{.r .cell-code}\nopen_dataset(\"s3://voltrondata-labs-datasets/nyc-taxi\") |>\n filter(year %in% 2012:2021) |>\n write_dataset(here::here(\"data/nyc-taxi\"), partitioning = c(\"year\", \"month\"))\n```\n:::\n\n\n- *tiny* NYC Taxi data set (\\<1GB on disk)\n\n\n::: {.cell}\n\n```{.r .cell-code}\ndownload.file(url = \"https://github.com/posit-conf-2023/arrow/releases/download/v0.1/nyc-taxi-tiny.zip\",\n destfile = here::here(\"data/nyc-taxi-tiny.zip\"))\n\nunzip(\n zipfile = here::here(\"data/nyc-taxi-tiny.zip\"),\n exdir = here::here(\"data/\")\n)\n```\n:::\n\n\n## posit Cloud ☁️\n\n<br>\n\n[posit.io/arrow-conf23-cloud](https://posit.cloud/spaces/397258/content/all?sort=name_asc)\n\n<br>\n\nOnce you have joined, navigate to Projects on the top menu.\n\n## Larger-Than-Memory Data\n\n<br>\n\n`arrow::open_dataset()`\n\n<br>\n\n`sources`: point to a string path or directory of data files (on disk or in a GCS/S3 bucket) and return an `Arrow Dataset`, then use `dplyr` methods to query it.\n\n::: notes\nArrow Datasets allow you to query against data that has been split across multiple files. 
This sharding of data may indicate partitioning, which can accelerate queries that only touch some partitions (files). Call open_dataset() to point to a directory of data files and return a Dataset, then use dplyr methods to query it.\n:::\n\n<!-- ## NYC Taxi Dataset: A {dplyr} pipeline -->\n\n<!-- <br> -->\n\n<!-- - use `filter()` to restrict data to 2014:2017 -->\n\n<!-- - use `group_by()` to aggregate by `year` -->\n\n<!-- - use `summarise()` to count total and shared trips -->\n\n<!-- - use `mutate()` to compute percent of trips shared -->\n\n<!-- - use `collect()` to trigger execution & pull result into R -->\n\n## NYC Taxi Dataset\n\n\n::: {.cell}\n\n```{.r .cell-code}\nlibrary(arrow)\n\nnyc_taxi <- open_dataset(here::here(\"data/nyc-taxi\"))\nnyc_taxi\n```\n\n::: {.cell-output .cell-output-stdout}\n```\nFileSystemDataset with 120 Parquet files\nvendor_name: string\npickup_datetime: timestamp[ms]\ndropoff_datetime: timestamp[ms]\npassenger_count: int64\ntrip_distance: double\npickup_longitude: double\npickup_latitude: double\nrate_code: string\nstore_and_fwd: string\ndropoff_longitude: double\ndropoff_latitude: double\npayment_type: string\nfare_amount: double\nextra: double\nmta_tax: double\ntip_amount: double\ntolls_amount: double\ntotal_amount: double\nimprovement_surcharge: double\ncongestion_surcharge: double\npickup_location_id: int64\ndropoff_location_id: int64\nyear: int32\nmonth: int32\n```\n:::\n:::\n\n\n## NYC Taxi Dataset\n\n\n::: {.cell}\n\n```{.r .cell-code}\nnyc_taxi |> \n nrow()\n```\n\n::: {.cell-output .cell-output-stdout}\n```\n[1] 1150352666\n```\n:::\n:::\n\n\n<br>\n\n1.15 billion rows 🤯\n\n## NYC Taxi Dataset: A {dplyr} pipeline\n\n\n::: {.cell}\n\n```{.r .cell-code}\nlibrary(dplyr)\n\nnyc_taxi |>\n filter(year %in% 2014:2017) |>\n group_by(year) |>\n summarise(\n all_trips = n(),\n shared_trips = sum(passenger_count > 1, na.rm = TRUE)\n ) |>\n mutate(pct_shared = shared_trips / all_trips * 100) |>\n collect()\n```\n\n::: {.cell-output 
.cell-output-stdout}\n```\n# A tibble: 4 × 4\n year all_trips shared_trips pct_shared\n <int> <int> <int> <dbl>\n1 2014 165114361 48816505 29.6\n2 2015 146112989 43081091 29.5\n3 2016 131165043 38163870 29.1\n4 2017 113495512 32296166 28.5\n```\n:::\n:::\n\n\n## NYC Taxi Dataset: A {dplyr} pipeline\n\n\n::: {.cell}\n\n```{.r .cell-code code-line-numbers=\"11,12\"}\nlibrary(dplyr)\n\nnyc_taxi |>\n filter(year %in% 2014:2017) |>\n group_by(year) |>\n summarise(\n all_trips = n(),\n shared_trips = sum(passenger_count > 1, na.rm = TRUE)\n ) |>\n mutate(pct_shared = shared_trips / all_trips * 100) |>\n collect() |> \n system.time()\n```\n\n::: {.cell-output .cell-output-stdout}\n```\n user system elapsed \n 17.433 1.580 2.944 \n```\n:::\n:::\n\n\n## Your Turn\n\n1. Calculate total number of rides for each month in 2019\n\n2. About how long did this query of 1.15 billion rows take?\n\n➡️ [Hello Arrow Exercises Page](01-hello-arrow-exercises.html)\n\n## What is Apache Arrow?\n\n::: columns\n::: {.column width=\"50%\"}\n> A multi-language toolbox for accelerated data interchange and in-memory processing\n:::\n\n::: {.column width=\"50%\"}\n> Arrow is designed to both improve the performance of analytical algorithms and the efficiency of moving data from one system or programming language to another\n:::\n:::\n\n::: {style=\"font-size: 70%;\"}\n<https://arrow.apache.org/overview/>\n:::\n\n## Apache Arrow Specification\n\nIn-memory columnar format: a standardized, language-agnostic specification for representing structured, table-like data sets in-memory.\n\n<br>\n\n![](images/arrow-rectangle.png){.absolute left=\"200\"}\n\n## A Multi-Language Toolbox\n\n![](images/arrow-libraries-structure.png)\n\n## Accelerated Data Interchange\n\n![](images/data-interchange-with-arrow.png)\n\n## Accelerated In-Memory Processing\n\nArrow's Columnar Format is Fast\n\n![](images/columnar-fast.png){.absolute top=\"120\" left=\"200\" height=\"600\"}\n\n::: notes\nThe contiguous columnar layout 
enables vectorization using the latest SIMD (Single Instruction, Multiple Data) operations included in modern processors.\n:::\n\n## arrow 📦\n\n<br>\n\n![](images/arrow-r-pkg.png){.absolute top=\"0\" left=\"300\" width=\"700\" height=\"900\"}\n\n## arrow 📦\n\n![](images/arrow-read-write-updated.png)\n\n## Today\n\n- Module 1: Larger-than-memory data manipulation with Arrow---Part I\n- Module 2: Data engineering with Arrow\n- Module 3: Larger-than-memory data manipulation with Arrow---Part II\n- Module 4: In-memory workflows in R with Arrow\n\n<br>\n\nWe will also talk about Arrow data types, file formats, controlling schemas & more fun stuff along the way!\n",
"supporting": [
"1_intro_files"
],
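The {dplyr} pipeline in the slides above runs against the ~40 GB partitioned dataset. The same lazy-query pattern (`write_dataset()` with partitioning, `open_dataset()`, dplyr verbs, then `collect()` to trigger execution) can be sketched end-to-end on a toy dataset written to a temp directory. A hedged sketch, assuming arrow and dplyr are installed; the columns mimic, but are not, the real taxi schema:

```r
library(arrow)
library(dplyr)

# Write a tiny dataset partitioned by year (stand-in for data/nyc-taxi)
tmp <- file.path(tempdir(), "toy-taxi")
toy <- data.frame(
  year            = c(2014L, 2014L, 2015L, 2015L),
  passenger_count = c(1L, 3L, 1L, 2L)
)
write_dataset(toy, tmp, partitioning = "year")

# open_dataset() scans the files lazily; nothing is read into R
# until collect() triggers execution of the query
result <- open_dataset(tmp) |>
  group_by(year) |>
  summarise(
    all_trips    = n(),
    shared_trips = sum(passenger_count > 1, na.rm = TRUE)
  ) |>
  mutate(pct_shared = shared_trips / all_trips * 100) |>
  collect() |>
  arrange(year)

result
```

Each `year=...` subdirectory written by `write_dataset()` becomes a partition, which is why the slide's `filter(year %in% 2014:2017)` on the real dataset only has to touch the matching files.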
8 changes: 5 additions & 3 deletions _freeze/materials/3_eng_storage/execute-results/html.json

Large diffs are not rendered by default.


0 comments on commit c885e2a