
# Sequence to Sequence Learning

This repository implements a sequence-to-sequence learning algorithm with an attention mechanism, aimed at practical tasks such as simple dialogue, machine translation, and pronunciation-to-word conversion. The repo is implemented in TensorFlow.
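For orientation, the model follows the familiar encoder-decoder-with-attention pattern. Below is a minimal tf.keras sketch of that pattern; the sizes, layer choices, and names are illustrative, not the repository's actual code:

```python
import tensorflow as tf

# Illustrative sizes only; the repo's own configs (e.g. 1024-dim embeddings) may differ.
VOCAB_SIZE, EMBED_DIM, UNITS = 10000, 256, 512

src = tf.keras.Input(shape=(None,), dtype="int32")  # source token ids
tgt = tf.keras.Input(shape=(None,), dtype="int32")  # target token ids (shifted right)

# Encoder: embed, run a bidirectional RNN, project 2*UNITS back down to UNITS.
src_emb = tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM)(src)
bi_out = tf.keras.layers.Bidirectional(
    tf.keras.layers.GRU(UNITS, return_sequences=True))(src_emb)       # [B, Ts, 2*UNITS]
enc_out = tf.keras.layers.Dense(UNITS)(bi_out)                        # [B, Ts, UNITS]

# Decoder: embed targets, run an RNN, attend over encoder outputs at each step.
tgt_emb = tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM)(tgt)
dec_out = tf.keras.layers.GRU(UNITS, return_sequences=True)(tgt_emb)  # [B, Tt, UNITS]
context = tf.keras.layers.Attention()([dec_out, enc_out])             # dot-product attention
logits = tf.keras.layers.Dense(VOCAB_SIZE)(
    tf.keras.layers.Concatenate()([dec_out, context]))                # [B, Tt, VOCAB_SIZE]

model = tf.keras.Model([src, tgt], logits)
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```

The shape trace printed at training time (shown below) follows the same progression: embeddings, a bidirectional encoder with doubled width, a projection, and a decoder output over the vocabulary.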

## Usage

Before starting an experiment, you need to pull the data first (using the Cornell dataset as an example):

```
$ cd dataset
$ bash down_cornell.sh
```

This creates a raw/cornell directory, and the downloaded raw data is stored under it.
Note: the other .sh data pullers likewise download and unzip their data into the raw/ folder, each under its own sub-directory.
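A rough Python equivalent of what such a puller script does, assuming a zip archive (the URL here is a hypothetical placeholder, not the real download link):

```python
import os
import urllib.request
import zipfile

# Hypothetical URL -- each down_*.sh script knows its own real source.
URL = "https://example.com/cornell_movie_dialogs_corpus.zip"
DEST = os.path.join("raw", "cornell")

os.makedirs(DEST, exist_ok=True)
archive, _ = urllib.request.urlretrieve(URL)   # download to a temp file
with zipfile.ZipFile(archive) as zf:
    zf.extractall(DEST)                        # unzip into raw/<dataset-name>/
```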

Then go back to the repository root and execute the following commands to start a training or inference task:

```
$ cd ..
$ python3 cornell_dialogue.py --mode train  # or decode if you have pretrained checkpoints
```

This cleans up the dataset, builds vocabularies plus train/test dataset indices, and saves the processed data to the dataset/data/cornell directory (if the processed data already exists, this step is skipped).
It then loads the preset configurations (you can change them in the Python file), builds the model, and starts a training session.
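The vocabulary step boils down to counting words across the utterance pairs, keeping the most frequent ones, and mapping each sentence to vocabulary indices. A minimal sketch of that idea (the helper names and special tokens here are illustrative, not the repo's actual code):

```python
from collections import Counter

PAD, UNK = "<pad>", "<unk>"

def build_vocab(pairs, max_size=10000):
    """Count words across all utterance pairs and keep the most frequent."""
    counts = Counter(w for q, a in pairs for w in (q + " " + a).split())
    words = [PAD, UNK] + [w for w, _ in counts.most_common(max_size)]
    return {w: i for i, w in enumerate(words)}

def to_indices(sentence, vocab):
    """Map each token to its vocabulary index, unknown words to <unk>."""
    return [vocab.get(w, vocab[UNK]) for w in sentence.split()]

pairs = [("how are you", "fine thanks"), ("hello there", "hi")]
vocab = build_vocab(pairs)
print(to_indices("how are you doing", vocab))  # last token falls back to <unk>
```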

A typical first run looks like this:

```
No preprocessed dataset found, create from cornell raw data...
Read cornell movie lines: 304713it [00:02, 128939.96it/s]
Read cornell movie conversations: 83097it [00:01, 46060.47it/s]
Create cornell utterance pairs: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████| 83097/83097 [01:02<00:00, 1319.20it/s]
Build vocabulary: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 158669/158669 [00:02<00:00, 77018.89it/s]
Build dataset: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 158669/158669 [00:01<00:00, 89873.72it/s]
Load configurations...
Load dataset and create batches...
Prepare train batches: 4711it [00:02, 2225.27it/s]
Prepare test batches: 248it [00:00, 3951.25it/s]
Building model...
source embedding shape: [None, None, 1024]
target input embedding shape: [None, None, 1024]
bi-directional rnn output shape: [None, None, 2048]
encoder input projection shape: [None, None, 1024]
encoder output shape: [None, None, 1024]
decoder rnn output shape: [None, None, 10004] (last dimension is vocab size)
number of trainable parameters: 78197524.
Start training...
Epoch 1 / 60:
   1/4711 [..............................] - ETA: 1468s - Global Step: 1 - Train Loss: 9.2197 - Perplexity: 10094.0631
...
```
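One sanity check on the numbers above: the reported perplexity is simply the exponential of the cross-entropy loss.

```python
import math

# exp(train loss) reproduces the perplexity column from the log above.
print(math.exp(9.2197))  # ≈ 10094.06
```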

## Datasets

A list of the datasets that the model in this repository is able to handle.

## Implementation List

## Reference