From 0ff851455e4b68b399dc07e61804f3ffbfc70589 Mon Sep 17 00:00:00 2001
From: Ajay Madhavan <43872413+aj280192@users.noreply.github.com>
Date: Tue, 29 Mar 2022 14:10:50 +0200
Subject: [PATCH 1/4] Update README.md for new release.

Added 'lds' and 'lgs' info to the readme.
---
 README.md | 47 +++++++++++++++++++++++++----------------------
 1 file changed, 25 insertions(+), 22 deletions(-)

diff --git a/README.md b/README.md
index 7b4cfb5..566648e 100644
--- a/README.md
+++ b/README.md
@@ -144,6 +144,9 @@ Layer Integrated Gradients (`lig`) | [`.attr.LayerIntegratedGradients`](https://
 LIME (`lime`) | [`.attr.LimeBase`](https://captum.ai/api/lime.html) | # samples = 25, mask prob = 0.3
 Occlusion (`occ`) | [`.attr.Occlusion`](https://captum.ai/api/occlusion.html) | sliding window = 3
 Shapley Value Sampling (`svs`) | [`.attr.ShapleyValueSampling`](https://captum.ai/api/shapley_value_sampling.html) | # samples = 25
+Layer DeepLiftShap ('lds') | ['.attr.LayerDeepLiftShap'](https://captum.ai/api/layer.html#layer-deepliftshap) |
+Layer GradientShap ('lgs') | ['attr.LayerGradientShap'](https://captum.ai/api/layer.html#layer-gradientshap) | # samples = 5
+
@@ -161,13 +164,13 @@ Shapley Value Sampling (`svs`) | [`.attr.ShapleyValueSampling`](https://captum.a
 [`imdb`](https://huggingface.co/datasets/viewer/?dataset=imdb) is a sentiment analysis dataset with 2 classes (`pos` and `neg`). The available split is the `test` subset containing 25k examples.
 Example configuration: `imdb-xlnet-lig`
-Name | 🤗 | `lgxa` | `lig` | `lime` | `occ` | `svs`
+Name | 🤗 | `lgxa` | `lig` | `lime` | `occ` | `svs` | 'lds' | 'lgs'
 --- | --- | --- | --- | --- | --- | ---
-ALBERT (`albert`) | [`textattack/albert-base-v2-imdb`](https://huggingface.co/textattack/albert-base-v2-imdb) | ✅ | ✅ | ✅ | ✅ | ✅
-BERT (`bert`) | [`textattack/bert-base-uncased-imdb`](https://huggingface.co/textattack/bert-base-uncased-imdb) | ✅ | ✅ | ✅ | ✅ | ✅
-ELECTRA (`electra`) | [`monologg/electra-small-finetuned-imdb`](https://huggingface.co/monologg/electra-small-finetuned-imdb) | ✅ | ✅ | ✅ | ✅ | ✅
-RoBERTa (`roberta`) | [`textattack/roberta-base-imdb`](https://huggingface.co/textattack/roberta-base-imdb) | ✅ | ✅ | ✅ | ✅ | ✅
-XLNet (`xlnet`) | [`textattack/xlnet-base-cased-imdb`](https://huggingface.co/textattack/xlnet-base-cased-imdb) | ✅ | ✅ | ✅ | ✅ | ✅
+ALBERT (`albert`) | [`textattack/albert-base-v2-imdb`](https://huggingface.co/textattack/albert-base-v2-imdb) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
+BERT (`bert`) | [`textattack/bert-base-uncased-imdb`](https://huggingface.co/textattack/bert-base-uncased-imdb) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
+ELECTRA (`electra`) | [`monologg/electra-small-finetuned-imdb`](https://huggingface.co/monologg/electra-small-finetuned-imdb) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
+RoBERTa (`roberta`) | [`textattack/roberta-base-imdb`](https://huggingface.co/textattack/roberta-base-imdb) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
+XLNet (`xlnet`) | [`textattack/xlnet-base-cased-imdb`](https://huggingface.co/textattack/xlnet-base-cased-imdb) | ✅ | ✅ | ✅ | ✅ | ✅ | ⚠️ | ⚠️

 ### MultiNLI

@@ -175,13 +178,13 @@ XLNet (`xlnet`) | [`textattack/xlnet-base-cased-imdb`](https://huggingface.co/te
 [`multi_nli`](https://huggingface.co/datasets/viewer/?dataset=multi_nli) is a textual entailment dataset. The available split is the `validation_matched` subset containing 9815 examples.
 Example configuration: `multi_nli-roberta-lime`
-Name | 🤗 | `lgxa` | `lig` | `lime` | `occ` | `svs`
+Name | 🤗 | `lgxa` | `lig` | `lime` | `occ` | `svs` | 'lds' | 'lgs'
 --- | --- | --- | --- | --- | --- | ---
-ALBERT (`albert`) | [`prajjwal1/albert-base-v2-mnli`](https://huggingface.co/prajjwal1/albert-base-v2-mnli) | ✅ | ✅ | ✅ | ✅ | ✅
-BERT (`bert`) | [`textattack/bert-base-uncased-MNLI`](https://huggingface.co/textattack/bert-base-uncased-MNLI) | ✅ | ✅ | ✅ | ✅ | ✅
-ELECTRA (`electra`) | [`howey/electra-base-mnli`](https://huggingface.co/howey/electra-base-mnli) | ✅ | ✅ | ✅ | ✅ | ✅
-RoBERTa (`roberta`) | [`textattack/roberta-base-MNLI`](https://huggingface.co/textattack/roberta-base-MNLI) | ✅ | ✅ | ✅ | ✅ | ✅
-XLNet (`xlnet`) | [`textattack/xlnet-base-cased-MNLI`](https://huggingface.co/textattack/xlnet-base-cased-MNLI) | ✅ | ✅ | ✅ | ✅ | ✅
+ALBERT (`albert`) | [`prajjwal1/albert-base-v2-mnli`](https://huggingface.co/prajjwal1/albert-base-v2-mnli) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
+BERT (`bert`) | [`textattack/bert-base-uncased-MNLI`](https://huggingface.co/textattack/bert-base-uncased-MNLI) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
+ELECTRA (`electra`) | [`howey/electra-base-mnli`](https://huggingface.co/howey/electra-base-mnli) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
+RoBERTa (`roberta`) | [`textattack/roberta-base-MNLI`](https://huggingface.co/textattack/roberta-base-MNLI) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
+XLNet (`xlnet`) | [`textattack/xlnet-base-cased-MNLI`](https://huggingface.co/textattack/xlnet-base-cased-MNLI) | ✅ | ✅ | ✅ | ✅ | ✅ | ⚠️ | ⚠️

 ### XNLI

@@ -189,13 +192,13 @@ XLNet (`xlnet`) | [`textattack/xlnet-base-cased-MNLI`](https://huggingface.co/te
 [`xnli`](https://huggingface.co/datasets/viewer/?dataset=xnli) is a textual entailment dataset. It provides the test set of MultiNLI through the "en" configuration. The fine-tuned models used here are the same as the MultiNLI ones. The available split is the `test` subset containing 5010 examples.
 Example configuration: `xnli-roberta-lime`
-Name | 🤗 | `lgxa` | `lig` | `lime` | `occ` | `svs`
+Name | 🤗 | `lgxa` | `lig` | `lime` | `occ` | `svs` | 'lds' | 'lgs'
 --- | --- | --- | --- | --- | --- | ---
-ALBERT (`albert`) | [`prajjwal1/albert-base-v2-mnli`](https://huggingface.co/prajjwal1/albert-base-v2-mnli) | ✅ | ✅ | ✅ | ✅ | ✅
-BERT (`bert`) | [`textattack/bert-base-uncased-MNLI`](https://huggingface.co/textattack/bert-base-uncased-MNLI) | ✅ | ✅ | ✅ | ✅ | ✅
-ELECTRA (`electra`) | [`howey/electra-base-mnli`](https://huggingface.co/howey/electra-base-mnli) | ✅ | ✅ | ✅ | ✅ | ✅
-RoBERTa (`roberta`) | [`textattack/roberta-base-MNLI`](https://huggingface.co/textattack/roberta-base-MNLI) | ✅ | ✅ | ✅ | ✅ | ✅
-XLNet (`xlnet`) | [`textattack/xlnet-base-cased-MNLI`](https://huggingface.co/textattack/xlnet-base-cased-MNLI) | ✅ | ✅ | ✅ | ✅ | ✅
+ALBERT (`albert`) | [`prajjwal1/albert-base-v2-mnli`](https://huggingface.co/prajjwal1/albert-base-v2-mnli) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
+BERT (`bert`) | [`textattack/bert-base-uncased-MNLI`](https://huggingface.co/textattack/bert-base-uncased-MNLI) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
+ELECTRA (`electra`) | [`howey/electra-base-mnli`](https://huggingface.co/howey/electra-base-mnli) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
+RoBERTa (`roberta`) | [`textattack/roberta-base-MNLI`](https://huggingface.co/textattack/roberta-base-MNLI) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
+XLNet (`xlnet`) | [`textattack/xlnet-base-cased-MNLI`](https://huggingface.co/textattack/xlnet-base-cased-MNLI) | ✅ | ✅ | ✅ | ✅ | ✅ | ⚠️ | ⚠️

 ### AG News

@@ -203,11 +206,11 @@ XLNet (`xlnet`) | [`textattack/xlnet-base-cased-MNLI`](https://huggingface.co/te
 [`ag_news`](https://huggingface.co/datasets/viewer/?dataset=ag_news) is a news topic classification dataset. The available split is the `test` subset containing 7600 examples.
 Example configuration: `ag_news-albert-svs`
-Name | 🤗 | `lgxa` | `lig` | `lime` | `occ` | `svs`
+Name | 🤗 | `lgxa` | `lig` | `lime` | `occ` | `svs` | 'lds' | 'lgs'
 --- | --- | --- | --- | --- | --- | ---
-ALBERT (`albert`) | [`textattack/albert-base-v2-ag-news`](https://huggingface.co/textattack/albert-base-v2-ag-news) | ✅ | ✅ | ✅ | ✅ | ✅
-BERT (`bert`) | [`textattack/bert-base-uncased-ag-news`](https://huggingface.co/textattack/bert-base-uncased-ag-news) | ✅ | ✅ | ✅ | ✅ | ✅
-RoBERTa (`roberta`) | [`textattack/roberta-base-ag-news`](https://huggingface.co/textattack/roberta-base-ag-news) | ✅ | ✅ | ✅ | ✅ | ✅
+ALBERT (`albert`) | [`textattack/albert-base-v2-ag-news`](https://huggingface.co/textattack/albert-base-v2-ag-news) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
+BERT (`bert`) | [`textattack/bert-base-uncased-ag-news`](https://huggingface.co/textattack/bert-base-uncased-ag-news) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
+RoBERTa (`roberta`) | [`textattack/roberta-base-ag-news`](https://huggingface.co/textattack/roberta-base-ag-news) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅

 ---

From a69562dc8e7b44f85ea02a5f9b28e97221a60bcd Mon Sep 17 00:00:00 2001
From: Ajay Madhavan <43872413+aj280192@users.noreply.github.com>
Date: Tue, 29 Mar 2022 14:17:40 +0200
Subject: [PATCH 2/4] Update README.md
---
 README.md | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/README.md b/README.md
index 566648e..17ed0ff 100644
--- a/README.md
+++ b/README.md
@@ -19,7 +19,7 @@ arXiv pre-print available here: https://arxiv.org/abs/2108.13961

 ### With pip

-[![PyPI](https://img.shields.io/pypi/v/thermostat-datasets?style=flat-square)](https://pypi.org/project/thermostat-datasets/)
+[![PyPI](https://img.shields.io/pypi/v/thermostat-datasets?style=flat-square)](https://pypi.org/project/thermostat-datasets/)]

 ```bash
 pip install thermostat-datasets
@@ -144,8 +144,8 @@ Layer Integrated Gradients (`lig`) | [`.attr.LayerIntegratedGradients`](https://
 LIME (`lime`) | [`.attr.LimeBase`](https://captum.ai/api/lime.html) | # samples = 25, mask prob = 0.3
 Occlusion (`occ`) | [`.attr.Occlusion`](https://captum.ai/api/occlusion.html) | sliding window = 3
 Shapley Value Sampling (`svs`) | [`.attr.ShapleyValueSampling`](https://captum.ai/api/shapley_value_sampling.html) | # samples = 25
-Layer DeepLiftShap ('lds') | ['.attr.LayerDeepLiftShap'](https://captum.ai/api/layer.html#layer-deepliftshap) |
-Layer GradientShap ('lgs') | ['attr.LayerGradientShap'](https://captum.ai/api/layer.html#layer-gradientshap) | # samples = 5
+Layer DeepLiftShap ('lds') | [`.attr.LayerDeepLiftShap`](https://captum.ai/api/layer.html#layer-deepliftshap) |
+Layer GradientShap ('lgs') | [`attr.LayerGradientShap`](https://captum.ai/api/layer.html#layer-gradientshap) | # samples = 5
@@ -165,7 +165,7 @@ Layer GradientShap ('lgs') | ['attr.LayerGradientShap'](https://captum.ai/api/la
 Example configuration: `imdb-xlnet-lig`
 Name | 🤗 | `lgxa` | `lig` | `lime` | `occ` | `svs` | 'lds' | 'lgs'
---- | --- | --- | --- | --- | --- | ---
+--- | --- | --- | --- | --- | --- | --- | --- | ---
 ALBERT (`albert`) | [`textattack/albert-base-v2-imdb`](https://huggingface.co/textattack/albert-base-v2-imdb) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
 BERT (`bert`) | [`textattack/bert-base-uncased-imdb`](https://huggingface.co/textattack/bert-base-uncased-imdb) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
 ELECTRA (`electra`) | [`monologg/electra-small-finetuned-imdb`](https://huggingface.co/monologg/electra-small-finetuned-imdb) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
@@ -179,7 +179,7 @@ XLNet (`xlnet`) | [`textattack/xlnet-base-cased-imdb`](https://huggingface.co/te
 Example configuration: `multi_nli-roberta-lime`
 Name | 🤗 | `lgxa` | `lig` | `lime` | `occ` | `svs` | 'lds' | 'lgs'
---- | --- | --- | --- | --- | --- | ---
+--- | --- | --- | --- | --- | --- | --- | --- | ---
 ALBERT (`albert`) | [`prajjwal1/albert-base-v2-mnli`](https://huggingface.co/prajjwal1/albert-base-v2-mnli) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
 BERT (`bert`) | [`textattack/bert-base-uncased-MNLI`](https://huggingface.co/textattack/bert-base-uncased-MNLI) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
 ELECTRA (`electra`) | [`howey/electra-base-mnli`](https://huggingface.co/howey/electra-base-mnli) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
@@ -193,7 +193,7 @@ XLNet (`xlnet`) | [`textattack/xlnet-base-cased-MNLI`](https://huggingface.co/te
 Example configuration: `xnli-roberta-lime`
 Name | 🤗 | `lgxa` | `lig` | `lime` | `occ` | `svs` | 'lds' | 'lgs'
---- | --- | --- | --- | --- | --- | ---
+--- | --- | --- | --- | --- | --- | --- | --- | ---
 ALBERT (`albert`) | [`prajjwal1/albert-base-v2-mnli`](https://huggingface.co/prajjwal1/albert-base-v2-mnli) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
 BERT (`bert`) | [`textattack/bert-base-uncased-MNLI`](https://huggingface.co/textattack/bert-base-uncased-MNLI) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
 ELECTRA (`electra`) | [`howey/electra-base-mnli`](https://huggingface.co/howey/electra-base-mnli) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
@@ -207,7 +207,7 @@ XLNet (`xlnet`) | [`textattack/xlnet-base-cased-MNLI`](https://huggingface.co/te
 Example configuration: `ag_news-albert-svs`
 Name | 🤗 | `lgxa` | `lig` | `lime` | `occ` | `svs` | 'lds' | 'lgs'
---- | --- | --- | --- | --- | --- | ---
+--- | --- | --- | --- | --- | --- | --- | --- | ---
 ALBERT (`albert`) | [`textattack/albert-base-v2-ag-news`](https://huggingface.co/textattack/albert-base-v2-ag-news) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
 BERT (`bert`) | [`textattack/bert-base-uncased-ag-news`](https://huggingface.co/textattack/bert-base-uncased-ag-news) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
 RoBERTa (`roberta`) | [`textattack/roberta-base-ag-news`](https://huggingface.co/textattack/roberta-base-ag-news) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅

From 9e1f69cfd26c09c7e0b2e4d335820902c75c5f54 Mon Sep 17 00:00:00 2001
From: Ajay Madhavan <43872413+aj280192@users.noreply.github.com>
Date: Tue, 29 Mar 2022 14:20:05 +0200
Subject: [PATCH 3/4] Update README.md
---
 README.md | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/README.md b/README.md
index 17ed0ff..970f759 100644
--- a/README.md
+++ b/README.md
@@ -19,7 +19,7 @@ arXiv pre-print available here: https://arxiv.org/abs/2108.13961

 ### With pip

-[![PyPI](https://img.shields.io/pypi/v/thermostat-datasets?style=flat-square)](https://pypi.org/project/thermostat-datasets/)]
+[![PyPI](https://img.shields.io/pypi/v/thermostat-datasets?style=flat-square)](https://pypi.org/project/thermostat-datasets/)

 ```bash
 pip install thermostat-datasets
@@ -144,8 +144,8 @@ Layer Integrated Gradients (`lig`) | [`.attr.LayerIntegratedGradients`](https://
 LIME (`lime`) | [`.attr.LimeBase`](https://captum.ai/api/lime.html) | # samples = 25, mask prob = 0.3
 Occlusion (`occ`) | [`.attr.Occlusion`](https://captum.ai/api/occlusion.html) | sliding window = 3
 Shapley Value Sampling (`svs`) | [`.attr.ShapleyValueSampling`](https://captum.ai/api/shapley_value_sampling.html) | # samples = 25
-Layer DeepLiftShap ('lds') | [`.attr.LayerDeepLiftShap`](https://captum.ai/api/layer.html#layer-deepliftshap) |
-Layer GradientShap ('lgs') | [`attr.LayerGradientShap`](https://captum.ai/api/layer.html#layer-gradientshap) | # samples = 5
+Layer DeepLiftShap (`lds`) | [`.attr.LayerDeepLiftShap`](https://captum.ai/api/layer.html#layer-deepliftshap) |
+Layer GradientShap (`lgs`) | [`.attr.LayerGradientShap`](https://captum.ai/api/layer.html#layer-gradientshap) | # samples = 5
@@ -164,7 +164,7 @@ Layer GradientShap ('lgs') | [`attr.LayerGradientShap`](https://captum.ai/api/la
 [`imdb`](https://huggingface.co/datasets/viewer/?dataset=imdb) is a sentiment analysis dataset with 2 classes (`pos` and `neg`). The available split is the `test` subset containing 25k examples.
 Example configuration: `imdb-xlnet-lig`
-Name | 🤗 | `lgxa` | `lig` | `lime` | `occ` | `svs` | 'lds' | 'lgs'
+Name | 🤗 | `lgxa` | `lig` | `lime` | `occ` | `svs` | `lds` | `lgs`
 --- | --- | --- | --- | --- | --- | --- | --- | ---
 ALBERT (`albert`) | [`textattack/albert-base-v2-imdb`](https://huggingface.co/textattack/albert-base-v2-imdb) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
 BERT (`bert`) | [`textattack/bert-base-uncased-imdb`](https://huggingface.co/textattack/bert-base-uncased-imdb) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
@@ -178,7 +178,7 @@ XLNet (`xlnet`) | [`textattack/xlnet-base-cased-imdb`](https://huggingface.co/te
 [`multi_nli`](https://huggingface.co/datasets/viewer/?dataset=multi_nli) is a textual entailment dataset. The available split is the `validation_matched` subset containing 9815 examples.
 Example configuration: `multi_nli-roberta-lime`
-Name | 🤗 | `lgxa` | `lig` | `lime` | `occ` | `svs` | 'lds' | 'lgs'
+Name | 🤗 | `lgxa` | `lig` | `lime` | `occ` | `svs` | `lds` | `lgs`
 --- | --- | --- | --- | --- | --- | --- | --- | ---
 ALBERT (`albert`) | [`prajjwal1/albert-base-v2-mnli`](https://huggingface.co/prajjwal1/albert-base-v2-mnli) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
 BERT (`bert`) | [`textattack/bert-base-uncased-MNLI`](https://huggingface.co/textattack/bert-base-uncased-MNLI) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
@@ -192,7 +192,7 @@ XLNet (`xlnet`) | [`textattack/xlnet-base-cased-MNLI`](https://huggingface.co/te
 [`xnli`](https://huggingface.co/datasets/viewer/?dataset=xnli) is a textual entailment dataset. It provides the test set of MultiNLI through the "en" configuration. The fine-tuned models used here are the same as the MultiNLI ones. The available split is the `test` subset containing 5010 examples.
 Example configuration: `xnli-roberta-lime`
-Name | 🤗 | `lgxa` | `lig` | `lime` | `occ` | `svs` | 'lds' | 'lgs'
+Name | 🤗 | `lgxa` | `lig` | `lime` | `occ` | `svs` | `lds` | `lgs`
 --- | --- | --- | --- | --- | --- | --- | --- | ---
 ALBERT (`albert`) | [`prajjwal1/albert-base-v2-mnli`](https://huggingface.co/prajjwal1/albert-base-v2-mnli) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
 BERT (`bert`) | [`textattack/bert-base-uncased-MNLI`](https://huggingface.co/textattack/bert-base-uncased-MNLI) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
 ELECTRA (`electra`) | [`howey/electra-base-mnli`](https://huggingface.co/howey/electra-base-mnli) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
@@ -206,7 +206,7 @@ XLNet (`xlnet`) | [`textattack/xlnet-base-cased-MNLI`](https://huggingface.co/te
 [`ag_news`](https://huggingface.co/datasets/viewer/?dataset=ag_news) is a news topic classification dataset. The available split is the `test` subset containing 7600 examples.
 Example configuration: `ag_news-albert-svs`
-Name | 🤗 | `lgxa` | `lig` | `lime` | `occ` | `svs` | 'lds' | 'lgs'
+Name | 🤗 | `lgxa` | `lig` | `lime` | `occ` | `svs` | `lds` | `lgs`
 --- | --- | --- | --- | --- | --- | --- | --- | ---
 ALBERT (`albert`) | [`textattack/albert-base-v2-ag-news`](https://huggingface.co/textattack/albert-base-v2-ag-news) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅
 BERT (`bert`) | [`textattack/bert-base-uncased-ag-news`](https://huggingface.co/textattack/bert-base-uncased-ag-news) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅

From 6bfd979841cbfb788bd3b52cd218148a2c7d0b53 Mon Sep 17 00:00:00 2001
From: aj280192
Date: Tue, 29 Mar 2022 18:02:06 +0200
Subject: [PATCH 4/4] minor fix for new release 1.0.2
---
 demo.ipynb                                | 3063 +++------------------
 setup.py                                  |    2 +-
 src/thermostat/data/thermostat_configs.py |    2 +-
 src/thermostat/visualize.py               |    2 +-
 4 files changed, 323 insertions(+), 2746 deletions(-)

diff --git a/demo.ipynb b/demo.ipynb
index 142495d..6b81835 100644
--- a/demo.ipynb
+++ b/demo.ipynb
@@ -12,7 +12,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "To start off, we have to install the dependencies."
+    "To start off, we have to install the thermostat-datasets package using pip."
 [The remainder of the demo.ipynb diff deletes roughly 2,700 lines of captured `pip install` console output from the notebook's output cells ("Requirement already satisfied: ...", "WARNING: Ignoring invalid distribution -andas ...", and a Windows "Access is denied" OSError).]
c:\\programdata\\anaconda3\\lib\\site-packages (from requests->transformers) (2021.10.8)\n", - "Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\\programdata\\anaconda3\\lib\\site-packages (from requests->transformers) (1.26.7)\n", - "Requirement already satisfied: idna<4,>=2.5 in c:\\programdata\\anaconda3\\lib\\site-packages (from requests->transformers) (3.2)\n", - "Requirement already satisfied: six in c:\\programdata\\anaconda3\\lib\\site-packages (from sacremoses->transformers) (1.16.0)\n", - "Requirement already satisfied: click in c:\\programdata\\anaconda3\\lib\\site-packages (from sacremoses->transformers) (8.0.3)\n", - "Requirement already satisfied: joblib in c:\\programdata\\anaconda3\\lib\\site-packages (from sacremoses->transformers) (1.1.0)\n" - ] - }, - { - "name": "stderr", - "output_type": "stream", - "text": [ - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n", - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n", - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n", - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n", - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n", - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n" - ] - }, - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Requirement already satisfied: overrides in c:\\programdata\\anaconda3\\lib\\site-packages (6.1.0)\n", - "Requirement already satisfied: typing-utils>=0.0.3 in c:\\programdata\\anaconda3\\lib\\site-packages (from overrides) (0.1.0)\n" - ] - }, - { - "name": "stderr", - "output_type": "stream", - "text": [ - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n", - "WARNING: Ignoring invalid distribution -andas 
(c:\\programdata\\anaconda3\\lib\\site-packages)\n", - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n", - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n", - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n", - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n" - ] - }, - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Collecting jsonnet\n", - " Using cached jsonnet-0.18.0.tar.gz (592 kB)\n", - "Building wheels for collected packages: jsonnet\n", - " Building wheel for jsonnet (setup.py): started\n", - " Building wheel for jsonnet (setup.py): finished with status 'error'\n", - " Running setup.py clean for jsonnet\n", - "Failed to build jsonnet\n", - "Installing collected packages: jsonnet\n", - " Running setup.py install for jsonnet: started\n", - " Running setup.py install for jsonnet: finished with status 'error'\n" - ] - }, - { - "name": "stderr", - "output_type": "stream", - "text": [ - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n", - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n", - " ERROR: Command errored out with exit status 1:\n", - " command: 'C:\\ProgramData\\Anaconda3\\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '\"'\"'C:\\\\Users\\\\49176\\\\AppData\\\\Local\\\\Temp\\\\pip-install-vrsb96ad\\\\jsonnet_8f800cc73699425d8babbbf5b9340802\\\\setup.py'\"'\"'; __file__='\"'\"'C:\\\\Users\\\\49176\\\\AppData\\\\Local\\\\Temp\\\\pip-install-vrsb96ad\\\\jsonnet_8f800cc73699425d8babbbf5b9340802\\\\setup.py'\"'\"';f = getattr(tokenize, '\"'\"'open'\"'\"', open)(__file__) if os.path.exists(__file__) else io.StringIO('\"'\"'from setuptools import setup; setup()'\"'\"');code = f.read().replace('\"'\"'\\r\\n'\"'\"', 
'\"'\"'\\n'\"'\"');f.close();exec(compile(code, __file__, '\"'\"'exec'\"'\"'))' bdist_wheel -d 'C:\\Users\\49176\\AppData\\Local\\Temp\\pip-wheel-qsb54w8v'\n", - " cwd: C:\\Users\\49176\\AppData\\Local\\Temp\\pip-install-vrsb96ad\\jsonnet_8f800cc73699425d8babbbf5b9340802\\\n", - " Complete output (4 lines):\n", - " running bdist_wheel\n", - " running build\n", - " running build_ext\n", - " error: [WinError 2] The system cannot find the file specified\n", - " ----------------------------------------\n", - " ERROR: Failed building wheel for jsonnet\n", - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n", - " ERROR: Command errored out with exit status 1:\n", - " command: 'C:\\ProgramData\\Anaconda3\\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '\"'\"'C:\\\\Users\\\\49176\\\\AppData\\\\Local\\\\Temp\\\\pip-install-vrsb96ad\\\\jsonnet_8f800cc73699425d8babbbf5b9340802\\\\setup.py'\"'\"'; __file__='\"'\"'C:\\\\Users\\\\49176\\\\AppData\\\\Local\\\\Temp\\\\pip-install-vrsb96ad\\\\jsonnet_8f800cc73699425d8babbbf5b9340802\\\\setup.py'\"'\"';f = getattr(tokenize, '\"'\"'open'\"'\"', open)(__file__) if os.path.exists(__file__) else io.StringIO('\"'\"'from setuptools import setup; setup()'\"'\"');code = f.read().replace('\"'\"'\\r\\n'\"'\"', '\"'\"'\\n'\"'\"');f.close();exec(compile(code, __file__, '\"'\"'exec'\"'\"'))' install --record 'C:\\Users\\49176\\AppData\\Local\\Temp\\pip-record-m2_9diqj\\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\\ProgramData\\Anaconda3\\Include\\jsonnet'\n", - " cwd: C:\\Users\\49176\\AppData\\Local\\Temp\\pip-install-vrsb96ad\\jsonnet_8f800cc73699425d8babbbf5b9340802\\\n", - " Complete output (4 lines):\n", - " running install\n", - " running build\n", - " running build_ext\n", - " error: [WinError 2] The system cannot find the file specified\n", - " ----------------------------------------\n", - "ERROR: Command errored out 
with exit status 1: 'C:\\ProgramData\\Anaconda3\\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '\"'\"'C:\\\\Users\\\\49176\\\\AppData\\\\Local\\\\Temp\\\\pip-install-vrsb96ad\\\\jsonnet_8f800cc73699425d8babbbf5b9340802\\\\setup.py'\"'\"'; __file__='\"'\"'C:\\\\Users\\\\49176\\\\AppData\\\\Local\\\\Temp\\\\pip-install-vrsb96ad\\\\jsonnet_8f800cc73699425d8babbbf5b9340802\\\\setup.py'\"'\"';f = getattr(tokenize, '\"'\"'open'\"'\"', open)(__file__) if os.path.exists(__file__) else io.StringIO('\"'\"'from setuptools import setup; setup()'\"'\"');code = f.read().replace('\"'\"'\\r\\n'\"'\"', '\"'\"'\\n'\"'\"');f.close();exec(compile(code, __file__, '\"'\"'exec'\"'\"'))' install --record 'C:\\Users\\49176\\AppData\\Local\\Temp\\pip-record-m2_9diqj\\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\\ProgramData\\Anaconda3\\Include\\jsonnet' Check the logs for full command output.\n", - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n", - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n", - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n" - ] - }, - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Requirement already satisfied: sklearn in c:\\programdata\\anaconda3\\lib\\site-packages (0.0)\n", - "Requirement already satisfied: scikit-learn in c:\\programdata\\anaconda3\\lib\\site-packages (from sklearn) (0.24.2)\n", - "Requirement already satisfied: threadpoolctl>=2.0.0 in c:\\programdata\\anaconda3\\lib\\site-packages (from scikit-learn->sklearn) (2.2.0)\n", - "Requirement already satisfied: scipy>=0.19.1 in c:\\programdata\\anaconda3\\lib\\site-packages (from scikit-learn->sklearn) (1.7.1)\n", - "Requirement already satisfied: numpy>=1.13.3 in c:\\programdata\\anaconda3\\lib\\site-packages (from scikit-learn->sklearn) (1.20.3)\n", - "Requirement 
already satisfied: joblib>=0.11 in c:\\programdata\\anaconda3\\lib\\site-packages (from scikit-learn->sklearn) (1.1.0)\n" - ] - }, - { - "name": "stderr", - "output_type": "stream", - "text": [ - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n", - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n", - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n", - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n", - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n", - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n" - ] - }, - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Requirement already satisfied: pandas in c:\\programdata\\anaconda3\\lib\\site-packages (1.3.4)\n", - "Requirement already satisfied: pytz>=2017.3 in c:\\programdata\\anaconda3\\lib\\site-packages (from pandas) (2021.3)\n", - "Requirement already satisfied: numpy>=1.17.3 in c:\\programdata\\anaconda3\\lib\\site-packages (from pandas) (1.20.3)\n", - "Requirement already satisfied: python-dateutil>=2.7.3 in c:\\programdata\\anaconda3\\lib\\site-packages (from pandas) (2.8.2)\n", - "Requirement already satisfied: six>=1.5 in c:\\programdata\\anaconda3\\lib\\site-packages (from python-dateutil>=2.7.3->pandas) (1.16.0)\n" - ] - }, - { - "name": "stderr", - "output_type": "stream", - "text": [ - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n", - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n", - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n", - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n", - "WARNING: 
Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n", - "WARNING: Ignoring invalid distribution -andas (c:\\programdata\\anaconda3\\lib\\site-packages)\n" + "Requirement already satisfied: thermostat-datasets in d:\\working student\\repo\\thermostat\\src (1.0.2)\n", + "Requirement already satisfied: captum>=0.3 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from thermostat-datasets) (0.5.0)\n", + "Requirement already satisfied: datasets>=1.5 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from thermostat-datasets) (1.18.4)\n", + "Requirement already satisfied: jsonnet-binary in c:\\users\\49176\\anaconda3\\lib\\site-packages (from thermostat-datasets) (0.17.0)\n", + "Requirement already satisfied: numpy>=1.20 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from thermostat-datasets) (1.20.3)\n", + "Requirement already satisfied: overrides in c:\\users\\49176\\anaconda3\\lib\\site-packages (from thermostat-datasets) (6.1.0)\n", + "Requirement already satisfied: pandas in c:\\users\\49176\\anaconda3\\lib\\site-packages (from thermostat-datasets) (1.3.4)\n", + "Requirement already satisfied: protobuf in c:\\users\\49176\\anaconda3\\lib\\site-packages (from thermostat-datasets) (3.19.4)\n", + "Requirement already satisfied: pytorch-ignite in c:\\users\\49176\\anaconda3\\lib\\site-packages (from thermostat-datasets) (0.4.8)\n", + "Requirement already satisfied: scipy in c:\\users\\49176\\anaconda3\\lib\\site-packages (from thermostat-datasets) (1.7.1)\n", + "Requirement already satisfied: sentencepiece in c:\\users\\49176\\anaconda3\\lib\\site-packages (from thermostat-datasets) (0.1.96)\n", + "Requirement already satisfied: sklearn in c:\\users\\49176\\anaconda3\\lib\\site-packages (from thermostat-datasets) (0.0)\n", + "Requirement already satisfied: spacy>=3.0 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from thermostat-datasets) (3.2.3)\n", + "Requirement already satisfied: torch in 
c:\\users\\49176\\anaconda3\\lib\\site-packages (from thermostat-datasets) (1.11.0+cu113)\n", + "Requirement already satisfied: tqdm>=4.49 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from thermostat-datasets) (4.62.3)\n", + "Requirement already satisfied: transformers>=4.5 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from thermostat-datasets) (4.16.2)\n", + "Requirement already satisfied: matplotlib in c:\\users\\49176\\anaconda3\\lib\\site-packages (from captum>=0.3->thermostat-datasets) (3.4.3)\n", + "Requirement already satisfied: dill in c:\\users\\49176\\anaconda3\\lib\\site-packages (from datasets>=1.5->thermostat-datasets) (0.3.4)\n", + "Requirement already satisfied: xxhash in c:\\users\\49176\\anaconda3\\lib\\site-packages (from datasets>=1.5->thermostat-datasets) (3.0.0)\n", + "Requirement already satisfied: responses<0.19 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from datasets>=1.5->thermostat-datasets) (0.18.0)\n", + "Requirement already satisfied: requests>=2.19.0 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from datasets>=1.5->thermostat-datasets) (2.26.0)\n", + "Requirement already satisfied: packaging in c:\\users\\49176\\anaconda3\\lib\\site-packages (from datasets>=1.5->thermostat-datasets) (21.0)\n", + "Requirement already satisfied: fsspec[http]>=2021.05.0 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from datasets>=1.5->thermostat-datasets) (2021.10.1)\n", + "Requirement already satisfied: pyarrow!=4.0.0,>=3.0.0 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from datasets>=1.5->thermostat-datasets) (7.0.0)\n", + "Requirement already satisfied: aiohttp in c:\\users\\49176\\anaconda3\\lib\\site-packages (from datasets>=1.5->thermostat-datasets) (3.8.1)\n", + "Requirement already satisfied: huggingface-hub<1.0.0,>=0.1.0 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from datasets>=1.5->thermostat-datasets) (0.4.0)\n", + "Requirement already satisfied: multiprocess in 
c:\\users\\49176\\anaconda3\\lib\\site-packages (from datasets>=1.5->thermostat-datasets) (0.70.12.2)\n", + "Requirement already satisfied: setuptools in c:\\users\\49176\\anaconda3\\lib\\site-packages (from spacy>=3.0->thermostat-datasets) (58.0.4)\n", + "Requirement already satisfied: spacy-loggers<2.0.0,>=1.0.0 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from spacy>=3.0->thermostat-datasets) (1.0.1)\n", + "Requirement already satisfied: pydantic!=1.8,!=1.8.1,<1.9.0,>=1.7.4 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from spacy>=3.0->thermostat-datasets) (1.8.2)\n", + "Requirement already satisfied: typer<0.5.0,>=0.3.0 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from spacy>=3.0->thermostat-datasets) (0.4.0)\n", + "Requirement already satisfied: thinc<8.1.0,>=8.0.12 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from spacy>=3.0->thermostat-datasets) (8.0.14)\n", + "Requirement already satisfied: srsly<3.0.0,>=2.4.1 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from spacy>=3.0->thermostat-datasets) (2.4.2)\n", + "Requirement already satisfied: jinja2 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from spacy>=3.0->thermostat-datasets) (2.11.3)\n", + "Requirement already satisfied: langcodes<4.0.0,>=3.2.0 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from spacy>=3.0->thermostat-datasets) (3.3.0)\n", + "Requirement already satisfied: cymem<2.1.0,>=2.0.2 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from spacy>=3.0->thermostat-datasets) (2.0.6)\n", + "Requirement already satisfied: blis<0.8.0,>=0.4.0 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from spacy>=3.0->thermostat-datasets) (0.7.6)\n", + "Requirement already satisfied: wasabi<1.1.0,>=0.8.1 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from spacy>=3.0->thermostat-datasets) (0.9.0)\n", + "Requirement already satisfied: spacy-legacy<3.1.0,>=3.0.8 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from spacy>=3.0->thermostat-datasets) (3.0.9)\n", + 
"Requirement already satisfied: preshed<3.1.0,>=3.0.2 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from spacy>=3.0->thermostat-datasets) (3.0.6)\n", + "Requirement already satisfied: murmurhash<1.1.0,>=0.28.0 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from spacy>=3.0->thermostat-datasets) (1.0.6)\n", + "Requirement already satisfied: pathy>=0.3.5 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from spacy>=3.0->thermostat-datasets) (0.6.1)\n", + "Requirement already satisfied: catalogue<2.1.0,>=2.0.6 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from spacy>=3.0->thermostat-datasets) (2.0.6)\n", + "Requirement already satisfied: typing-extensions in c:\\users\\49176\\anaconda3\\lib\\site-packages (from torch->thermostat-datasets) (3.10.0.2)\n", + "Requirement already satisfied: colorama in c:\\users\\49176\\anaconda3\\lib\\site-packages (from tqdm>=4.49->thermostat-datasets) (0.4.4)\n", + "Requirement already satisfied: pyyaml>=5.1 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from transformers>=4.5->thermostat-datasets) (6.0)\n", + "Requirement already satisfied: regex!=2019.12.17 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from transformers>=4.5->thermostat-datasets) (2021.8.3)\n", + "Requirement already satisfied: tokenizers!=0.11.3,>=0.10.1 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from transformers>=4.5->thermostat-datasets) (0.11.6)\n", + "Requirement already satisfied: sacremoses in c:\\users\\49176\\anaconda3\\lib\\site-packages (from transformers>=4.5->thermostat-datasets) (0.0.47)\n", + "Requirement already satisfied: filelock in c:\\users\\49176\\anaconda3\\lib\\site-packages (from transformers>=4.5->thermostat-datasets) (3.3.1)\n", + "Requirement already satisfied: typing-utils>=0.0.3 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from overrides->thermostat-datasets) (0.1.0)\n", + "Requirement already satisfied: python-dateutil>=2.7.3 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from 
pandas->thermostat-datasets) (2.8.2)\n", + "Requirement already satisfied: pytz>=2017.3 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from pandas->thermostat-datasets) (2021.3)\n", + "Requirement already satisfied: scikit-learn in c:\\users\\49176\\anaconda3\\lib\\site-packages (from sklearn->thermostat-datasets) (0.24.2)\n", + "Requirement already satisfied: pyparsing>=2.0.2 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from packaging->datasets>=1.5->thermostat-datasets) (3.0.4)\n", + "Requirement already satisfied: smart-open<6.0.0,>=5.0.0 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from pathy>=0.3.5->spacy>=3.0->thermostat-datasets) (5.2.1)\n", + "Requirement already satisfied: six>=1.5 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from python-dateutil>=2.7.3->pandas->thermostat-datasets) (1.16.0)\n", + "Requirement already satisfied: certifi>=2017.4.17 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from requests>=2.19.0->datasets>=1.5->thermostat-datasets) (2021.10.8)\n", + "Requirement already satisfied: charset-normalizer~=2.0.0 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from requests>=2.19.0->datasets>=1.5->thermostat-datasets) (2.0.4)\n", + "Requirement already satisfied: idna<4,>=2.5 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from requests>=2.19.0->datasets>=1.5->thermostat-datasets) (3.2)\n", + "Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from requests>=2.19.0->datasets>=1.5->thermostat-datasets) (1.26.7)\n", + "Requirement already satisfied: click<9.0.0,>=7.1.1 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from typer<0.5.0,>=0.3.0->spacy>=3.0->thermostat-datasets) (8.0.3)\n", + "Requirement already satisfied: frozenlist>=1.1.1 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from aiohttp->datasets>=1.5->thermostat-datasets) (1.3.0)\n", + "Requirement already satisfied: aiosignal>=1.1.2 in c:\\users\\49176\\anaconda3\\lib\\site-packages 
(from aiohttp->datasets>=1.5->thermostat-datasets) (1.2.0)\n", + "Requirement already satisfied: multidict<7.0,>=4.5 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from aiohttp->datasets>=1.5->thermostat-datasets) (6.0.2)\n", + "Requirement already satisfied: attrs>=17.3.0 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from aiohttp->datasets>=1.5->thermostat-datasets) (21.2.0)\n", + "Requirement already satisfied: yarl<2.0,>=1.0 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from aiohttp->datasets>=1.5->thermostat-datasets) (1.7.2)\n", + "Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from aiohttp->datasets>=1.5->thermostat-datasets) (4.0.2)\n", + "Requirement already satisfied: MarkupSafe>=0.23 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from jinja2->spacy>=3.0->thermostat-datasets) (1.1.1)\n", + "Requirement already satisfied: pillow>=6.2.0 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from matplotlib->captum>=0.3->thermostat-datasets) (8.4.0)\n", + "Requirement already satisfied: cycler>=0.10 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from matplotlib->captum>=0.3->thermostat-datasets) (0.10.0)\n", + "Requirement already satisfied: kiwisolver>=1.0.1 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from matplotlib->captum>=0.3->thermostat-datasets) (1.3.1)\n", + "Requirement already satisfied: joblib in c:\\users\\49176\\anaconda3\\lib\\site-packages (from sacremoses->transformers>=4.5->thermostat-datasets) (1.1.0)\n", + "Requirement already satisfied: threadpoolctl>=2.0.0 in c:\\users\\49176\\anaconda3\\lib\\site-packages (from scikit-learn->sklearn->thermostat-datasets) (2.2.0)\n" ] } ], "source": [ - "!pip3 install --upgrade pip\n", - "!pip3 install cmake\n", - "!pip3 install cython\n", - "!pip3 install numpy\n", - "!pip3 install torch\n", - "!pip3 install datasets\n", - "!pip3 install spacy\n", - "!pip3 install sentencepiece\n", - "!pip3 install 
transformers\n", - "!pip3 install overrides\n", - "!pip3 install jsonnet\n", - "!pip3 install sklearn\n", - "!pip3 install pandas" + "!pip install thermostat-datasets" ] }, { @@ -459,7 +162,7 @@ }, { "cell_type": "code", - "execution_count": 10, + "execution_count": 4, "metadata": { "pycharm": { "name": "#%%\n" @@ -467,46 +170,17 @@ }, "outputs": [ { - "name": "stdout", + "name": "stderr", "output_type": "stream", "text": [ - "Loading Thermostat configuration: imdb-bert-lig\n", - "Downloading and preparing dataset thermostat/imdb-bert-lig to C:\\Users\\49176\\.cache\\huggingface\\datasets\\thermostat\\imdb-bert-lig\\1.0.1\\0cbe93e1fbe5b8ed0217559442d8b49a80fd4c2787185f2d7940817c67d8707b...\n" + "Reusing dataset thermostat (C:\\Users\\49176\\.cache\\huggingface\\datasets\\thermostat\\imdb-bert-lig\\1.0.1\\0cbe93e1fbe5b8ed0217559442d8b49a80fd4c2787185f2d7940817c67d8707b)\n" ] }, - { - "data": { - "application/vnd.jupyter.widget-view+json": { - "model_id": "207ec97028ba4563b4b29eedad4525a9", - "version_major": 2, - "version_minor": 0 - }, - "text/plain": [ - "Downloading: 0%| | 0.00/264M [00:00', -0.00799628160893917, 300),\n", - " ('<', -0.0044715143740177155, 301),\n", - " ('br', 0.014373987913131714, 302),\n", - " ('/', 0.016613293439149857, 303),\n", - " ('>', 0.029748301953077316, 304),\n", - " ('spoil', 0.016072934493422508, 305),\n", - " ('##er', 0.05765294283628464, 306),\n", - " (':', 0.005897448863834143, 307),\n", - " ('this', -0.08212670683860779, 308),\n", - " ('movie', -0.03927135467529297, 309),\n", - " ('doesn', -0.00897288415580988, 310),\n", - " (\"'\", 0.005554314237087965, 311),\n", - " ('t', 0.018459515646100044, 312),\n", - " ('have', 0.004178288858383894, 313),\n", - " ('a', 0.006657324731349945, 314),\n", - " ('goo', -0.004324286710470915, 315),\n", - " ('##fs', -0.009721309877932072, 316),\n", - " ('section', 0.010842953808605671, 317),\n", - " ('.', 0.007669608108699322, 318),\n", - " ('wonder', 0.019997509196400642, 319),\n", - " (',', 
0.02238672412931919, 320),\n", - " ('didn', 0.026670066639780998, 321),\n", - " (\"'\", 0.004090417176485062, 322),\n", - " ('t', -0.02376842498779297, 323),\n", - " ('anybody', 0.015019877813756466, 324),\n", - " ('notice', -0.032073259353637695, 325),\n", - " ('that', -0.027400804683566093, 326),\n", - " ('hand', 0.013895555399358273, 327),\n", - " ('in', -0.01581115648150444, 328),\n", - " ('the', -0.0005177874118089676, 329),\n", - " ('2', -0.013833531178534031, 330),\n", - " ('part', 0.003973441198468208, 331),\n", - " ('when', -0.0276736319065094, 332),\n", - " ('the', 0.03735414519906044, 333),\n", - " ('kidnap', -0.0038885336834937334, 334),\n", - " ('##pers', 0.007577804382890463, 335),\n", - " ('decided', -0.007950885221362114, 336),\n", - " ('to', 0.008233290165662766, 337),\n", - " ('go', -0.026257596909999847, 338),\n", - " ('home', 0.0024092162493616343, 339),\n", - " ('?', 0.04507692903280258, 340),\n", - " ('looks', -0.02298576943576336, 341),\n", - " ('like', -0.0147428372874856, 342),\n", - " ('a', 0.041484564542770386, 343),\n", - " ('part', 0.04004620015621185, 344),\n", - " ('of', 0.02201233059167862, 345),\n", - " ('crew', 0.004088917281478643, 346),\n", - " (',', 0.0054010068997740746, 347),\n", - " ('he', -0.014832447282969952, 348),\n", - " ('##he', 0.0015261276857927442, 349),\n", - " ('.', -0.006007165182381868, 350),\n", - " ('i', 0.005043786019086838, 351),\n", - " ('know', 0.0077549186535179615, 352),\n", - " ('i', 0.027806663885712624, 353),\n", - " ('should', -0.018970897421240807, 354),\n", - " ('better', -0.010785568505525589, 355),\n", - " ('post', 0.05738293007016182, 356),\n", - " ('this', -0.012203543446958065, 357),\n", - " ('in', 0.04099973291158676, 358),\n", - " ('forums', 0.06110705807805061, 359),\n", - " (',', 0.015236682258546352, 360),\n", - " ('but', 0.03359401598572731, 361),\n", - " ('i', -0.005674791056662798, 362),\n", - " ('don', -0.011217826046049595, 363),\n", - " (\"'\", 0.004538937471807003, 364),\n", - " 
('t', 0.017671801149845123, 365),\n", - " ('agree', 0.01695604808628559, 366),\n", - " ('with', 0.010900290682911873, 367),\n", - " ('some', 0.11742840707302094, 368),\n", - " ('policies', 0.056090448051691055, 369),\n", - " ('here', 0.026037804782390594, 370),\n", - " ('.', 0.01010459940880537, 371),\n", - " ('[SEP]', 0.0, 372)]\n" + " ('amazing', 2.3141794204711914, 1),\n", + " ('movie', 0.06655970215797424, 2),\n", + " ('.', -0.47832658886909485, 3),\n", + " ('some', 0.15708176791667938, 4),\n", + " ('of', -0.02931656688451767, 5),\n", + " ('the', -0.08834744244813919, 6),\n", + " ('script', -0.2660972774028778, 7),\n", + " ('writing', -0.4021594822406769, 8),\n", + " ('could', -0.19280624389648438, 9),\n", + " ('have', -0.015477157197892666, 10),\n", + " ('been', -0.21898044645786285, 11),\n", + " ('better', -0.4095713794231415, 12),\n", + " ('(', 0.05475223436951637, 13),\n", + " ('some', 0.0466572567820549, 14),\n", + " ('cl', 0.08523529022932053, 15),\n", + " ('##iche', 0.05406142398715019, 16),\n", + " ('##d', -0.031489163637161255, 17),\n", + " ('language', -0.3399031162261963, 18),\n", + " (')', -0.11275435984134674, 19),\n", + " ('.', -0.22217823565006256, 20),\n", + " ('joyce', 0.6259628534317017, 21),\n", + " (\"'\", -0.20313552021980286, 22),\n", + " ('s', -0.22971349954605103, 23),\n", + " ('\"', -0.28431516885757446, 24),\n", + " ('the', 0.13832062482833862, 25),\n", + " ('dead', -0.09080619364976883, 26),\n", + " ('\"', 0.008070609532296658, 27),\n", + " ('is', -0.09763125330209732, 28),\n", + " ('all', 0.31558454036712646, 29),\n", + " ('##uded', -0.7008199691772461, 30),\n", + " ('to', 0.09101452678442001, 31),\n", + " ('throughout', 0.11561280488967896, 32),\n", + " ('the', 0.1659221053123474, 33),\n", + " ('movie', -0.40316757559776306, 34),\n", + " ('.', -0.42794814705848694, 35),\n", + " ('beautiful', 2.0079877376556396, 36),\n", + " ('scenery', -0.7362574934959412, 37),\n", + " ('and', 0.22411005198955536, 38),\n", + " ('great', 
1.8532911539077759, 39),\n", + " ('acting', -0.25186970829963684, 40),\n", + " ('.', -0.42279019951820374, 41),\n", + " ('very', 1.5215049982070923, 42),\n", + " ('poetic', 0.6973422765731812, 43),\n", + " ('.', -0.4415672719478607, 44),\n", + " ('highly', 0.5367811322212219, 45),\n", + " ('recommend', 1.4894026517868042, 46),\n", + " ('.', 0.14289245009422302, 47),\n", + " ('[SEP]', 0.0, 48)]\n" ] } ], @@ -981,312 +331,52 @@ }, { "cell_type": "code", - "execution_count": 20, + "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ - "token_index 0 1 2 3 4 5 \\\n", - "token [CLS] i first saw it at \n", - "attribution 0.0 0.170457 -0.090006 0.107485 -0.069431 0.039387 \n", - "text_field text text text text text text \n", - "\n", - "token_index 6 8 9 10 11 12 \\\n", - "token 5am january 1 , 2009 , \n", - "attribution 0.023045 0.415474 0.384542 0.672036 0.261633 -0.006347 \n", - "text_field text text text text text text \n", - "\n", - "token_index 13 14 15 16 17 18 \\\n", - "token and after a day i watched \n", - "attribution 0.080754 -0.114537 -0.139005 0.174024 0.008026 -0.149438 \n", - "text_field text text text text text text \n", - "\n", - "token_index 19 20 21 22 23 24 \\\n", - "token it again and i want to \n", - "attribution 0.141589 0.042472 0.078582 0.173841 0.059821 0.27329 \n", - "text_field text text text text text text \n", - "\n", - "token_index 25 26 27 28 29 30 \\\n", - "token watch it again . love everything \n", - "attribution 0.085501 0.119676 -0.051532 0.032687 0.349016 0.175231 \n", - "text_field text text text text text text \n", - "\n", - "token_index 31 32 33 34 35 36 \\\n", - "token ( well , almost , so \n", - "attribution 0.152014 -0.081542 0.005581 -0.074633 -0.133133 0.033487 \n", - "text_field text text text text text text \n", - "\n", - "token_index 37 38 39 40 41 42 \\\n", - "token 9 stars ) about it . 
\n", - "attribution -0.057861 -0.131787 0.061106 -0.073951 -0.195056 0.072654 \n", - "text_field text text text text text text \n", - "\n", - "token_index 43 44 45 46 47 48 \\\n", - "token no color , beautiful naive stories \n", - "attribution -0.070579 -0.090318 0.027063 0.183721 0.023685 0.076976 \n", - "text_field text text text text text text \n", - "\n", - "token_index 49 50 51 53 54 55 \\\n", - "token , funny gangsters , anna , \n", - "attribution 0.093026 -0.099573 0.111415 0.033609 -0.018002 -0.031873 \n", - "text_field text text text text text text \n", - "\n", - "token_index 56 57 58 59 60 61 \\\n", - "token camera work , music . well \n", - "attribution -0.168191 0.005432 0.016994 -0.166483 -0.003029 0.000305 \n", - "text_field text text text text text text \n", - "\n", - "token_index 62 63 64 65 66 67 \\\n", - "token , sometimes you just want to \n", - "attribution 0.044889 0.296482 0.164041 -0.074963 -0.053668 -0.133754 \n", - "text_field text text text text text text \n", - "\n", - "token_index 68 69 70 71 72 73 \\\n", - "token listen little bit longer and the \n", - "attribution -0.078899 -0.02474 0.037653 0.00921 -0.050143 0.025857 \n", - "text_field text text text text text text \n", - "\n", - "token_index 74 75 76 77 78 79 \\\n", - "token music just stops . but this \n", - "attribution -0.122204 -0.058685 -0.067941 -0.023277 0.191971 -0.035146 \n", - "text_field text text text text text text \n", - "\n", - "token_index 80 81 82 83 84 85 \\\n", - "token is not a musical after all \n", - "attribution 0.007563 -0.002562 0.122107 -1.0 -0.180618 -0.109668 \n", - "text_field text text text text text text \n", - "\n", - "token_index 86 87 88 89 90 91 \\\n", - "token . i like anna ' s \n", - "attribution 0.071741 0.011276 0.159978 0.008217 0.175088 0.379829 \n", + "token_index 0 1 2 3 4 5 6 \\\n", + "token [CLS] amazing movie . 
some of the \n", + "attribution 0.0 1.0 0.028762 -0.206694 0.067878 -0.012668 -0.038177 \n", + "text_field text text text text text text text \n", + "\n", + "token_index 7 8 9 10 11 12 \\\n", + "token script writing could have been better \n", + "attribution -0.114986 -0.173781 -0.083315 -0.006688 -0.094626 -0.176983 \n", "text_field text text text text text text \n", "\n", - "token_index 92 93 94 95 96 98 \\\n", - "token acting , this naive wannabe gangster \n", - "attribution -0.615353 -0.147787 -0.492506 -0.481686 -0.317616 0.193298 \n", + "token_index 13 14 15 18 19 20 \\\n", + "token ( some cliched language ) . \n", + "attribution 0.023659 0.020161 0.036832 -0.146878 -0.048723 -0.096007 \n", "text_field text text text text text text \n", "\n", - "token_index 99 100 101 102 103 104 \\\n", - "token girl , how she speaks , \n", - "attribution 0.034738 0.0362 0.155725 -0.04965 -0.028578 0.027849 \n", - "text_field text text text text text text \n", - "\n", - "token_index 105 106 107 108 109 110 \\\n", - "token holds the gun , everything makes \n", - "attribution 0.106511 -0.098679 -0.117951 -0.088747 0.189717 0.073041 \n", - "text_field text text text text text text \n", - "\n", - "token_index 111 112 113 114 115 116 \\\n", - "token me smile . 
no , it \n", - "attribution 0.091698 0.149203 0.029522 -0.038933 0.088369 -0.021045 \n", - "text_field text text text text text text \n", - "\n", - "token_index 117 118 119 120 121 122 \\\n", - "token ' s not that funny , \n", - "attribution 0.08587 0.022358 -0.009932 -0.002558 -0.06864 0.059714 \n", - "text_field text text text text text text \n", - "\n", - "token_index 123 124 125 126 127 128 \\\n", - "token though i have laughed a bit \n", - "attribution 0.009371 0.073627 -0.05037 0.012842 -0.011386 0.027237 \n", - "text_field text text text text text text \n", - "\n", - "token_index 129 130 131 132 133 134 \\\n", - "token at some moments , it ' \n", - "attribution 0.06302 -0.007297 0.27467 0.151535 -0.031748 0.119639 \n", - "text_field text text text text text text \n", - "\n", - "token_index 135 136 137 138 139 140 \\\n", - "token s just so subtle . excellent \n", - "attribution 0.111179 0.013132 0.111981 0.250567 0.113439 0.652182 \n", - "text_field text text text text text text \n", - "\n", - "token_index 141 142 143 144 147 148 \\\n", - "token work by samuel benchetrit . 
though \n", - "attribution 0.130436 0.0702 0.082449 0.060754 0.04657 0.109886 \n", - "text_field text text text text text text \n", - "\n", - "token_index 149 150 151 152 153 154 \\\n", - "token 3d nouvelle seems weaker , but \n", - "attribution -0.10006 -0.025344 -0.059655 0.053869 0.010111 0.047435 \n", + "token_index 21 22 23 24 25 26 \\\n", + "token joyce ' s \" the dead \n", + "attribution 0.27049 -0.087779 -0.099263 -0.122858 0.059771 -0.039239 \n", "text_field text text text text text text \n", "\n", - "token_index 155 156 157 158 160 161 \\\n", - "token they are also gangsters , maybe \n", - "attribution -0.087616 -0.079145 -0.067326 0.164802 0.057904 0.070503 \n", - "text_field text text text text text text \n", - "\n", - "token_index 162 163 164 165 166 167 \\\n", - "token even worse , cause they are \n", - "attribution 0.011633 0.054106 -0.090237 -0.056379 -0.067337 -0.051897 \n", - "text_field text text text text text text \n", - "\n", - "token_index 168 169 170 171 172 173 \\\n", - "token stealing ideas . and the last \n", - "attribution 0.024267 -0.073004 -0.054746 0.023162 -0.104921 -0.073514 \n", - "text_field text text text text text text \n", - "\n", - "token_index 174 175 176 177 178 179 \\\n", - "token scene is my favorite . makes \n", - "attribution -0.191759 0.083886 0.277159 0.410594 -0.144259 -0.102936 \n", - "text_field text text text text text text \n", - "\n", - "token_index 180 181 182 183 184 185 \\\n", - "token me feel so warm and . \n", - "attribution 0.10451 0.198935 0.178571 0.10484 0.059488 -0.13345 \n", - "text_field text text text text text text \n", - "\n", - "token_index 186 187 188 189 190 191 \\\n", - "token . romantic . 
yes , i \n", - "attribution -0.033263 0.102611 -0.023939 0.086991 -0.006159 -0.016589 \n", - "text_field text text text text text text \n", - "\n", - "token_index 192 193 194 195 196 197 \\\n", - "token would recommend this movie for the \n", - "attribution -0.016266 0.05755 0.018799 0.077491 0.019652 0.03683 \n", - "text_field text text text text text text \n", - "\n", - "token_index 198 199 200 201 202 203 \\\n", - "token romantic souls with a taste for \n", - "attribution 0.066298 0.036036 0.097791 0.03978 0.01561 0.028333 \n", - "text_field text text text text text text \n", - "\n", - "token_index 204 205 206 207 210 211 \\\n", - "token such art - housish movies . \n", - "attribution 0.044567 0.005998 0.05708 0.019486 0.120159 0.042933 \n", - "text_field text text text text text text \n", - "\n", - "token_index 212 213 214 215 216 217 \\\n", - "token and i don ' t agree \n", - "attribution 0.029758 0.066354 0.046331 0.03605 0.024658 -0.078732 \n", - "text_field text text text text text text \n", - "\n", - "token_index 218 219 220 221 222 223 \\\n", - "token with those comparing it to pulp \n", - "attribution 0.023078 0.139114 -0.000608 0.004622 -0.016126 0.016344 \n", - "text_field text text text text text text \n", - "\n", - "token_index 224 225 226 227 228 229 \\\n", - "token fiction . it ' s not \n", - "attribution 0.033682 0.039149 0.068615 0.068419 0.067978 0.16568 \n", - "text_field text text text text text text \n", - "\n", - "token_index 230 231 232 233 234 235 \\\n", - "token about action and twisted story , \n", - "attribution -0.018121 -0.030201 0.007424 0.04382 -0.021345 -0.055491 \n", - "text_field text text text text text text \n", - "\n", - "token_index 236 237 238 241 242 243 \\\n", - "token though all vignettes intersect . 
it \n", - "attribution -0.020809 0.052888 0.10707 0.069352 0.126138 0.116565 \n", - "text_field text text text text text text \n", - "\n", - "token_index 244 245 246 247 248 249 \\\n", - "token ' s calm , and maybe \n", - "attribution 0.155122 0.223995 0.810752 0.191691 0.136151 0.297176 \n", - "text_field text text text text text text \n", - "\n", - "token_index 250 251 252 253 254 255 \\\n", - "token too slow movie for most of \n", - "attribution 0.074983 0.034951 0.046917 -0.633489 -0.049592 -0.020356 \n", - "text_field text text text text text text \n", - "\n", - "token_index 256 257 258 259 260 261 \\\n", - "token the people . it ' s \n", - "attribution 0.005256 0.06978 -0.031699 -0.074841 0.137725 0.039087 \n", - "text_field text text text text text text \n", - "\n", - "token_index 262 263 264 265 266 267 \\\n", - "token about characters , their feelings , \n", - "attribution 0.161196 -0.346557 0.42938 0.237538 0.217716 0.394794 \n", - "text_field text text text text text text \n", - "\n", - "token_index 268 269 270 271 272 273 \\\n", - "token very subtle . anyway , probably \n", - "attribution 0.407511 0.217422 -0.229159 -0.179182 0.084117 0.122595 \n", - "text_field text text text text text text \n", - "\n", - "token_index 274 275 276 277 278 279 \\\n", - "token this review won ' t be \n", - "attribution -0.253398 0.031705 0.105428 0.031298 0.034757 -0.06585 \n", - "text_field text text text text text text \n", - "\n", - "token_index 280 281 282 283 284 285 \\\n", - "token of much help to anyone ( \n", - "attribution -0.114305 -0.097195 -0.042372 -0.079064 -0.12552 0.049177 \n", - "text_field text text text text text text \n", - "\n", - "token_index 286 287 288 289 290 291 \\\n", - "token my first ) , just wanted \n", - "attribution 0.025227 0.01541 -0.090277 0.126098 -0.088128 0.212069 \n", - "text_field text text text text text text \n", - "\n", - "token_index 292 293 294 295 296 297 \\\n", - "token to express my appreciation . 
< \n", - "attribution -0.046134 0.134626 -0.040483 0.093308 -0.158916 0.012593 \n", - "text_field text text text text text text \n", - "\n", - "token_index 298 299 300 301 302 303 \\\n", - "token br / > < br / \n", - "attribution -0.017436 0.004035 -0.014979 -0.008376 0.026926 0.031121 \n", - "text_field text text text text text text \n", - "\n", - "token_index 304 305 307 308 309 310 \\\n", - "token > spoiler : this movie doesn \n", - "attribution 0.055725 0.107997 0.011047 -0.153842 -0.073564 -0.016808 \n", - "text_field text text text text text text \n", - "\n", - "token_index 311 312 313 314 315 317 \\\n", - "token ' t have a goofs section \n", - "attribution 0.010405 0.034579 0.007827 0.012471 -0.01821 0.020311 \n", - "text_field text text text text text text \n", - "\n", - "token_index 318 319 320 321 322 323 \\\n", - "token . wonder , didn ' t \n", - "attribution 0.014367 0.03746 0.041935 0.049959 0.007662 -0.044524 \n", - "text_field text text text text text text \n", - "\n", - "token_index 324 325 326 327 328 329 \\\n", - "token anybody notice that hand in the \n", - "attribution 0.028136 -0.060081 -0.051328 0.02603 -0.029618 -0.00097 \n", - "text_field text text text text text text \n", - "\n", - "token_index 330 331 332 333 334 336 \\\n", - "token 2 part when the kidnappers decided \n", - "attribution -0.025913 0.007443 -0.051839 0.069973 0.014195 -0.014894 \n", + "token_index 27 28 29 31 32 33 \\\n", + "token \" is alluded to throughout the \n", + "attribution 0.003487 -0.042188 -0.302837 0.039329 0.049958 0.071698 \n", "text_field text text text text text text \n", "\n", - "token_index 337 338 339 340 341 342 \\\n", - "token to go home ? 
looks like \n", - "attribution 0.015423 -0.049187 0.004513 0.084439 -0.043058 -0.027617 \n", - "text_field text text text text text text \n", - "\n", - "token_index 343 344 345 346 347 348 \\\n", - "token a part of crew , hehe \n", - "attribution 0.07771 0.075016 0.041234 0.007659 0.010117 -0.027785 \n", - "text_field text text text text text text \n", + "token_index 34 35 36 37 38 39 \\\n", + "token movie . beautiful scenery and great \n", + "attribution -0.174216 -0.184924 0.867689 -0.318151 0.096842 0.800842 \n", + "text_field text text text text text text \n", "\n", - "token_index 350 351 352 353 354 355 \\\n", - "token . i know i should better \n", - "attribution -0.011253 0.009448 0.014527 0.052088 -0.035537 -0.020204 \n", + "token_index 40 41 42 43 44 45 \\\n", + "token acting . very poetic . highly \n", + "attribution -0.108838 -0.182696 0.657471 0.301335 -0.190809 0.231953 \n", "text_field text text text text text text \n", "\n", - "token_index 356 357 358 359 360 361 \\\n", - "token post this in forums , but \n", - "attribution 0.107491 -0.02286 0.076802 0.114468 0.028542 0.062929 \n", - "text_field text text text text text text \n", - "\n", - "token_index 362 363 364 365 366 367 \\\n", - "token i don ' t agree with \n", - "attribution -0.01063 -0.021014 0.008502 0.033103 0.031763 0.020419 \n", - "text_field text text text text text text \n", - "\n", - "token_index 368 369 370 371 372 \n", - "token some policies here . [SEP] \n", - "attribution 0.21997 0.10507 0.048775 0.018928 0.0 \n", - "text_field text text text text text \n" + "token_index 46 47 48 \n", + "token recommend . [SEP] \n", + "attribution 0.643599 0.061746 0.0 \n", + "text_field text text text \n" ] } ], @@ -1305,1801 +395,236 @@ "Now the much more visually pleasing way is to turn the attribution scores into colors and display the heatmap using the displaCy (spaCy) library. We can do this with the `.render()` function." 
] }, - { - "cell_type": "code", - "execution_count": 16, - "metadata": {}, - "outputs": [ - { - "data": { - "text/html": [ - "
\n", - " \n", - " [CLS]\n", - " \n", - " \n", - " \n", - " i\n", - " \n", - " \n", - " \n", - " first\n", - " \n", - " \n", - " \n", - " saw\n", - " \n", - " \n", - " \n", - " it\n", - " \n", - " \n", - " \n", - " at\n", - " \n", - " \n", - " \n", - " 5am\n", - " \n", - " \n", - " \n", - " january\n", - " \n", - " \n", - " \n", - " 1\n", - " \n", - " \n", - " \n", - " ,\n", - " \n", - " \n", - " \n", - " 2009\n", - " \n", - " \n", - " \n", - " ,\n", - " \n", - " \n", - " \n", - " and\n", - " \n", - " \n", - " \n", - " after\n", - " \n", - " \n", - " \n", - " a\n", - " \n", - " \n", - " \n", - " day\n", - " \n", - " \n", - " \n", - " i\n", - " \n", - " \n", - " \n", - " watched\n", - " \n", - " \n", - " \n", - " it\n", - " \n", - " \n", - " \n", - " again\n", - " \n", - " \n", - " \n", - " and\n", - " \n", - " \n", - " \n", - " i\n", - " \n", - " \n", - " \n", - " want\n", - " \n", - " \n", - " \n", - " to\n", - " \n", - " \n", - " \n", - " watch\n", - " \n", - " \n", - " \n", - " it\n", - " \n", - " \n", - " \n", - " again\n", - " \n", - " \n", - " \n", - " .\n", - " \n", - " \n", - " \n", - " love\n", - " \n", - " \n", - " \n", - " everything\n", - " \n", - " \n", - " \n", - " (\n", - " \n", - " \n", - " \n", - " well\n", - " \n", - " \n", - " \n", - " ,\n", - " \n", - " \n", - " \n", - " almost\n", - " \n", - " \n", - " \n", - " ,\n", - " \n", - " \n", - " \n", - " so\n", - " \n", - " \n", - " \n", - " 9\n", - " \n", - " \n", - " \n", - " stars\n", - " \n", - " \n", - " \n", - " )\n", - " \n", - " \n", - " \n", - " about\n", - " \n", - " \n", - " \n", - " it\n", - " \n", - " \n", - " \n", - " .\n", - " \n", - " \n", - " \n", - " no\n", - " \n", - " \n", - " \n", - " color\n", - " \n", - " \n", - " \n", - " ,\n", - " \n", - " \n", - " \n", - " beautiful\n", - " \n", - " \n", - " \n", - " naive\n", - " \n", - " \n", - " \n", - " stories\n", - " \n", - " \n", - " \n", - " ,\n", - " \n", - " \n", - " \n", - " funny\n", - " \n", - " \n", - " \n", - " gangsters\n", - " 
\n", - " \n", - " \n", - " ,\n", - " \n", - " \n", - " \n", - " anna\n", - " \n", - " \n", - " \n", - " ,\n", - " \n", - " \n", - " \n", - " camera\n", - " \n", - " \n", - " \n", - " work\n", - " \n", - " \n", - " \n", - " ,\n", - " \n", - " \n", - " \n", - " music\n", - " \n", - " \n", - " \n", - " .\n", - " \n", - " \n", - " \n", - " well\n", - " \n", - " \n", - " \n", - " ,\n", - " \n", - " \n", - " \n", - " sometimes\n", - " \n", - " \n", - " \n", - " you\n", - " \n", - " \n", - " \n", - " just\n", - " \n", - " \n", - " \n", - " want\n", - " \n", - " \n", - " \n", - " to\n", - " \n", - " \n", - " \n", - " listen\n", - " \n", - " \n", - " \n", - " little\n", - " \n", - " \n", - " \n", - " bit\n", - " \n", - " \n", - " \n", - " longer\n", - " \n", - " \n", - " \n", - " and\n", - " \n", - " \n", - " \n", - " the\n", - " \n", - " \n", - " \n", - " music\n", - " \n", - " \n", - " \n", - " just\n", - " \n", - " \n", - " \n", - " stops\n", - " \n", - " \n", - " \n", - " .\n", - " \n", - " \n", - " \n", - " but\n", - " \n", - " \n", - " \n", - " this\n", - " \n", - " \n", - " \n", - " is\n", - " \n", - " \n", - " \n", - " not\n", - " \n", - " \n", - " \n", - " a\n", - " \n", - " \n", - " \n", - " musical\n", - " \n", - " \n", - " \n", - " after\n", - " \n", - " \n", - " \n", - " all\n", - " \n", - " \n", - " \n", - " .\n", - " \n", - " \n", - " \n", - " i\n", - " \n", - " \n", - " \n", - " like\n", - " \n", - " \n", - " \n", - " anna\n", - " \n", - " \n", - " \n", - " '\n", - " \n", - " \n", - " \n", - " s\n", - " \n", - " \n", - " \n", - " acting\n", - " \n", - " \n", - " \n", - " ,\n", - " \n", - " \n", - " \n", - " this\n", - " \n", - " \n", - " \n", - " naive\n", - " \n", - " \n", - " \n", - " wannabe\n", - " \n", - " \n", - " \n", - " gangster\n", - " \n", - " \n", - " \n", - " girl\n", - " \n", - " \n", - " \n", - " ,\n", - " \n", - " \n", - " \n", - " how\n", - " \n", - " \n", - " \n", - " she\n", - " \n", - " \n", - " \n", - " speaks\n", - " \n", - " \n", - " 
\n", - " ,\n", - " \n", - " \n", - " \n", - " holds\n", - " \n", - " \n", - " \n", - " the\n", - " \n", - " \n", - " \n", - " gun\n", - " \n", - " \n", - " \n", - " ,\n", - " \n", - " \n", - " \n", - " everything\n", - " \n", - " \n", - " \n", - " makes\n", - " \n", - " \n", - " \n", - " me\n", - " \n", - " \n", - " \n", - " smile\n", - " \n", - " \n", - " \n", - " .\n", - " \n", - " \n", - " \n", - " no\n", - " \n", - " \n", - " \n", - " ,\n", - " \n", - " \n", - " \n", - " it\n", - " \n", - " \n", - " \n", - " '\n", - " \n", - " \n", - " \n", - " s\n", - " \n", - " \n", - " \n", - " not\n", - " \n", - " \n", - " \n", - " that\n", - " \n", - " \n", - " \n", - " funny\n", - " \n", - " \n", - " \n", - " ,\n", - " \n", - " \n", - " \n", - " though\n", - " \n", - " \n", - " \n", - " i\n", - " \n", - " \n", - " \n", - " have\n", - " \n", - " \n", - " \n", - " laughed\n", - " \n", - " \n", - " \n", - " a\n", - " \n", - " \n", - " \n", - " bit\n", - " \n", - " \n", - " \n", - " at\n", - " \n", - " \n", - " \n", - " some\n", - " \n", - " \n", - " \n", - " moments\n", - " \n", - " \n", - " \n", - " ,\n", - " \n", - " \n", - " \n", - " it\n", - " \n", - " \n", - " \n", - " '\n", - " \n", - " \n", - " \n", - " s\n", - " \n", - " \n", - " \n", - " just\n", - " \n", - " \n", - " \n", - " so\n", - " \n", - " \n", - " \n", - " subtle\n", - " \n", - " \n", - " \n", - " .\n", - " \n", - " \n", - " \n", - " excellent\n", - " \n", - " \n", - " \n", - " work\n", - " \n", - " \n", - " \n", - " by\n", - " \n", - " \n", - " \n", - " samuel\n", - " \n", - " \n", - " \n", - " benchetrit\n", - " \n", - " \n", - " \n", - " .\n", - " \n", - " \n", - " \n", - " though\n", - " \n", - " \n", - " \n", - " 3d\n", - " \n", - " \n", - " \n", - " nouvelle\n", - " \n", - " \n", - " \n", - " seems\n", - " \n", - " \n", - " \n", - " weaker\n", - " \n", - " \n", - " \n", - " ,\n", - " \n", - " \n", - " \n", - " but\n", - " \n", - " \n", - " \n", - " they\n", - " \n", - " \n", - " \n", - " are\n", - " 
\n", - " \n", - " \n", - " also\n", - " \n", - " \n", - " \n", - " gangsters\n", - " \n", - " \n", - " \n", - " ,\n", - " \n", - " \n", - " \n", - " maybe\n", - " \n", - " \n", - " \n", - " even\n", - " \n", - " \n", - " \n", - " worse\n", - " \n", - " \n", - " \n", - " ,\n", - " \n", - " \n", - " \n", - " cause\n", - " \n", - " \n", - " \n", - " they\n", - " \n", - " \n", - " \n", - " are\n", - " \n", - " \n", - " \n", - " stealing\n", - " \n", - " \n", - " \n", - " ideas\n", - " \n", - " \n", - " \n", - " .\n", - " \n", - " \n", - " \n", - " and\n", - " \n", - " \n", - " \n", - " the\n", - " \n", - " \n", - " \n", - " last\n", - " \n", - " \n", - " \n", - " scene\n", - " \n", - " \n", - " \n", - " is\n", - " \n", - " \n", - " \n", - " my\n", - " \n", - " \n", - " \n", - " favorite\n", - " \n", - " \n", - " \n", - " .\n", - " \n", - " \n", - " \n", - " makes\n", - " \n", - " \n", - " \n", - " me\n", - " \n", - " \n", - " \n", - " feel\n", - " \n", - " \n", - " \n", - " so\n", - " \n", - " \n", - " \n", - " warm\n", - " \n", - " \n", - " \n", - " and\n", - " \n", - " \n", - " \n", - " .\n", - " \n", - " \n", - " \n", - " .\n", - " \n", - " \n", - " \n", - " romantic\n", - " \n", - " \n", - " \n", - " .\n", - " \n", - " \n", - " \n", - " yes\n", - " \n", - " \n", - " \n", - " ,\n", - " \n", - " \n", - " \n", - " i\n", - " \n", - " \n", - " \n", - " would\n", - " \n", - " \n", - " \n", - " recommend\n", - " \n", - " \n", - " \n", - " this\n", - " \n", - " \n", - " \n", - " movie\n", - " \n", - " \n", - " \n", - " for\n", - " \n", - " \n", - " \n", - " the\n", - " \n", - " \n", - " \n", - " romantic\n", - " \n", - " \n", - " \n", - " souls\n", - " \n", - " \n", - " \n", - " with\n", - " \n", - " \n", - " \n", - " a\n", - " \n", - " \n", - " \n", - " taste\n", - " \n", - " \n", - " \n", - " for\n", - " \n", - " \n", - " \n", - " such\n", - " \n", - " \n", - " \n", - " art\n", - " \n", - " \n", - " \n", - " -\n", - " \n", - " \n", - " \n", - " housish\n", - " \n", - " 
\n", - " \n", - " movies\n", - " \n", - " \n", - " \n", - " .\n", - " \n", - " \n", - " \n", - " and\n", - " \n", - " \n", - " \n", - " i\n", - " \n", - " \n", - " \n", - " don\n", - " \n", - " \n", - " \n", - " '\n", - " \n", - " \n", - " \n", - " t\n", - " \n", - " \n", - " \n", - " agree\n", - " \n", - " \n", - " \n", - " with\n", - " \n", - " \n", - " \n", - " those\n", - " \n", - " \n", - " \n", - " comparing\n", - " \n", - " \n", - " \n", - " it\n", - " \n", - " \n", - " \n", - " to\n", - " \n", - " \n", - " \n", - " pulp\n", - " \n", - " \n", - " \n", - " fiction\n", - " \n", - " \n", - " \n", - " .\n", - " \n", - " \n", - " \n", - " it\n", - " \n", - " \n", - " \n", - " '\n", - " \n", - " \n", - " \n", - " s\n", - " \n", - " \n", - " \n", - " not\n", - " \n", - " \n", - " \n", - " about\n", - " \n", - " \n", - " \n", - " action\n", - " \n", - " \n", - " \n", - " and\n", - " \n", - " \n", - " \n", - " twisted\n", - " \n", - " \n", - " \n", - " story\n", - " \n", - " \n", - " \n", - " ,\n", - " \n", - " \n", - " \n", - " though\n", - " \n", - " \n", - " \n", - " all\n", - " \n", - " \n", - " \n", - " vignettes\n", - " \n", - " \n", - " \n", - " intersect\n", - " \n", - " \n", - " \n", - " .\n", - " \n", - " \n", - " \n", - " it\n", - " \n", - " \n", - " \n", - " '\n", - " \n", - " \n", - " \n", - " s\n", - " \n", - " \n", - " \n", - " calm\n", - " \n", - " \n", - " \n", - " ,\n", - " \n", - " \n", - " \n", - " and\n", - " \n", - " \n", - " \n", - " maybe\n", - " \n", - " \n", - " \n", - " too\n", - " \n", - " \n", - " \n", - " slow\n", - " \n", - " \n", - " \n", - " movie\n", - " \n", - " \n", - " \n", - " for\n", - " \n", - " \n", - " \n", - " most\n", - " \n", - " \n", - " \n", - " of\n", - " \n", - " \n", - " \n", - " the\n", - " \n", - " \n", - " \n", - " people\n", - " \n", - " \n", - " \n", - " .\n", - " \n", - " \n", - " \n", - " it\n", - " \n", - " \n", - " \n", - " '\n", - " \n", - " \n", - " \n", - " s\n", - " \n", - " \n", - " \n", - " about\n", - 
" \n", - " \n", - " \n", - " characters\n", - " \n", - " \n", - " \n", - " ,\n", - " \n", - " \n", - " \n", - " their\n", - " \n", - " \n", - " \n", - " feelings\n", - " \n", - " \n", - " \n", - " ,\n", - " \n", - " \n", - " \n", - " very\n", - " \n", - " \n", - " \n", - " subtle\n", - " \n", - " \n", - " \n", - " .\n", - " \n", - " \n", - " \n", - " anyway\n", - " \n", - " \n", - " \n", - " ,\n", - " \n", - " \n", - " \n", - " probably\n", - " \n", - " \n", - " \n", - " this\n", - " \n", - " \n", - " \n", - " review\n", - " \n", - " \n", - " \n", - " won\n", - " \n", - " \n", - " \n", - " '\n", - " \n", - " \n", - " \n", - " t\n", - " \n", - " \n", - " \n", - " be\n", - " \n", - " \n", - " \n", - " of\n", - " \n", - " \n", - " \n", - " much\n", - " \n", - " \n", - " \n", - " help\n", - " \n", - " \n", - " \n", - " to\n", - " \n", - " \n", - " \n", - " anyone\n", - " \n", - " \n", - " \n", - " (\n", - " \n", - " \n", - " \n", - " my\n", - " \n", - " \n", - " \n", - " first\n", - " \n", - " \n", - " \n", - " )\n", - " \n", - " \n", - " \n", - " ,\n", - " \n", - " \n", - " \n", - " just\n", - " \n", - " \n", - " \n", - " wanted\n", - " \n", - " \n", - " \n", - " to\n", - " \n", - " \n", - "
\n", + " \n", - " express\n", + " [CLS]\n", " \n", " \n", - " \n", - " my\n", + " amazing\n", " \n", " \n", - " \n", - " appreciation\n", + " movie\n", " \n", " \n", - " \n", " .\n", " \n", " \n", - " \n", - " <\n", - " \n", - " \n", - " \n", - " br\n", - " \n", - " \n", - " \n", - " /\n", + " some\n", " \n", " \n", " \n", - " >\n", - " \n", - " \n", - " \n", - " <\n", - " \n", - " \n", - " \n", - " br\n", - " \n", - " \n", - " \n", - " /\n", - " \n", - " \n", - " \n", - " >\n", - " \n", - " \n", - " \n", - " spoiler\n", - " \n", - " \n", - " \n", - " :\n", - " \n", - " \n", - " \n", - " this\n", + " of\n", " \n", " \n", - " \n", - " movie\n", + " the\n", " \n", " \n", - " \n", - " doesn\n", + " script\n", " \n", " \n", - " \n", - " '\n", + " writing\n", " \n", " \n", - " \n", - " t\n", + " could\n", " \n", " \n", - " \n", " have\n", " \n", " \n", - " \n", - " a\n", + " been\n", " \n", " \n", - " \n", - " goofs\n", + " better\n", " \n", " \n", - " \n", - " section\n", + " (\n", " \n", " \n", - " \n", - " .\n", + " some\n", " \n", " \n", " \n", - " wonder\n", - " \n", - " \n", - " \n", - " ,\n", - " \n", - " \n", - " \n", - " didn\n", - " \n", - " \n", - " \n", - " '\n", - " \n", - " \n", - " \n", - " t\n", - " \n", - " \n", - " \n", - " anybody\n", + " cliched\n", " \n", " \n", - " \n", - " notice\n", - " \n", - " \n", - " \n", - " that\n", + " language\n", " \n", " \n", - " \n", - " hand\n", + " )\n", " \n", " \n", - " \n", - " in\n", + " .\n", " \n", " \n", - " \n", - " the\n", + " joyce\n", " \n", " \n", - " \n", - " 2\n", + " '\n", " \n", " \n", - " \n", - " part\n", + " s\n", " \n", " \n", - " \n", - " when\n", + " "\n", " \n", " \n", - " \n", " the\n", " \n", " \n", - " \n", - " kidnappers\n", - " \n", - " \n", - " \n", - " decided\n", - " \n", - " \n", - " \n", - " to\n", - " \n", - " \n", - " \n", - " go\n", - " \n", - " \n", - " \n", - " home\n", - " \n", - " \n", - " \n", - " ?\n", - " \n", - " \n", " \n", - " looks\n", + " dead\n", " \n", " \n", - " \n", 
- " like\n", + " "\n", " \n", " \n", - " \n", - " a\n", + " is\n", " \n", " \n", - " \n", - " part\n", + " alluded\n", " \n", " \n", " \n", - " of\n", + " to\n", " \n", " \n", - " \n", - " crew\n", + " throughout\n", " \n", " \n", - " \n", - " ,\n", + " the\n", " \n", " \n", - " \n", - " hehe\n", + " movie\n", " \n", " \n", - " \n", " .\n", " \n", " \n", - " \n", - " i\n", - " \n", - " \n", - " \n", - " know\n", - " \n", - " \n", - " \n", - " i\n", - " \n", - " \n", - " \n", - " should\n", - " \n", - " \n", - " \n", - " better\n", - " \n", - " \n", - " \n", - " post\n", - " \n", - " \n", - " \n", - " this\n", - " \n", - " \n", - " \n", - " in\n", - " \n", - " \n", - " \n", - " forums\n", - " \n", - " \n", - " \n", - " ,\n", + " beautiful\n", " \n", " \n", - " \n", - " but\n", + " scenery\n", " \n", " \n", - " \n", - " i\n", + " and\n", " \n", " \n", - " \n", - " don\n", + " great\n", " \n", " \n", - " \n", - " '\n", + " acting\n", " \n", " \n", - " \n", - " t\n", + " .\n", " \n", " \n", - " \n", - " agree\n", + " very\n", " \n", " \n", - " \n", - " with\n", + " poetic\n", " \n", " \n", - " \n", - " some\n", + " .\n", " \n", " \n", - " \n", - " policies\n", + " highly\n", " \n", " \n", - " \n", - " here\n", + " recommend\n", " \n", " \n", - " \n", " .\n", " \n", @@ -3132,7 +657,7 @@ }, { "cell_type": "code", - "execution_count": 31, + "execution_count": 10, "metadata": {}, "outputs": [ { @@ -3178,7 +703,7 @@ }, { "cell_type": "code", - "execution_count": 32, + "execution_count": 11, "metadata": {}, "outputs": [], "source": [ @@ -3194,7 +719,7 @@ }, { "cell_type": "code", - "execution_count": 33, + "execution_count": 12, "metadata": {}, "outputs": [ { @@ -3574,7 +1099,7 @@ }, { "cell_type": "code", - "execution_count": 24, + "execution_count": 13, "metadata": {}, "outputs": [ { @@ -3650,18 +1175,14 @@ "output_type": "stream", "text": [ "Loading Thermostat configuration: multi_nli-bert-occ\n", - "Dataset path is D:\\Working 
Student\\repo\\thermostat\\src\\thermostat\\dataset.py\n", - "Additional parameters for loading: {}\n", "Loading Thermostat configuration: multi_nli-bert-lig\n", - "Dataset path is D:\\Working Student\\repo\\thermostat\\src\\thermostat\\dataset.py\n", - "Additional parameters for loading: {}\n", "Downloading and preparing dataset thermostat/multi_nli-bert-lig to C:\\Users\\49176\\.cache\\huggingface\\datasets\\thermostat\\multi_nli-bert-lig\\1.0.1\\0cbe93e1fbe5b8ed0217559442d8b49a80fd4c2787185f2d7940817c67d8707b...\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "032671160c2b41f99c2683f3579196df", + "model_id": "98a45c0de0b04800b1974202833db926", "version_major": 2, "version_minor": 0 }, @@ -3692,15 +1213,13 @@ "text": [ "Dataset thermostat downloaded and prepared to C:\\Users\\49176\\.cache\\huggingface\\datasets\\thermostat\\multi_nli-bert-lig\\1.0.1\\0cbe93e1fbe5b8ed0217559442d8b49a80fd4c2787185f2d7940817c67d8707b. Subsequent calls will reuse this data.\n", "Loading Thermostat configuration: multi_nli-bert-lime\n", - "Dataset path is D:\\Working Student\\repo\\thermostat\\src\\thermostat\\dataset.py\n", - "Additional parameters for loading: {}\n", "Downloading and preparing dataset thermostat/multi_nli-bert-lime to C:\\Users\\49176\\.cache\\huggingface\\datasets\\thermostat\\multi_nli-bert-lime\\1.0.1\\0cbe93e1fbe5b8ed0217559442d8b49a80fd4c2787185f2d7940817c67d8707b...\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { - "model_id": "419d61c9f31343c3ac6b55bf386170d9", + "model_id": "96404ef25f5040a5af1358793cc69d19", "version_major": 2, "version_minor": 0 }, @@ -4289,9 +1808,67 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 16, "metadata": {}, - "outputs": [], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Loading Thermostat configuration: imdb-bert-lime\n", + "Downloading and preparing dataset thermostat/imdb-bert-lime to 
C:\\Users\\49176\\.cache\\huggingface\\datasets\\thermostat\\imdb-bert-lime\\1.0.1\\0cbe93e1fbe5b8ed0217559442d8b49a80fd4c2787185f2d7940817c67d8707b...\n" + ] + }, + { + "data": { + "application/vnd.jupyter.widget-view+json": { + "model_id": "017d3fe023654275a9bdf63662a4b2a6", + "version_major": 2, + "version_minor": 0 + }, + "text/plain": [ + "Downloading: 0%| | 0.00/276M [00:00