diff --git a/README.md b/README.md
index 25d595a..24e2404 100644
--- a/README.md
+++ b/README.md
@@ -457,7 +457,7 @@ tuple(tuple(a,b), noise)
 
 Where `a` is the input sample, `b` is the label/condition (if any, otherwise fill it with `0`), and `noise` is the latent vector of input.
 
-To train Pix2Pix-like architecture, that have no `noise` as ConvGenerator input, just return the values in thee format `(tuple(a,b), b)` since the condition is the generator output.
+To train Pix2Pix-like architectures, which have no `noise` as ConvGenerator input, just return the values in the format `(tuple(a,b), b)` since the condition is the generator input.
 
 ## Test
 In order to run the tests (with the doctests), linting and docs generation simply use `tox`.
diff --git a/src/ashpy/models/convolutional/unet.py b/src/ashpy/models/convolutional/unet.py
index 8de13a9..3b464c7 100644
--- a/src/ashpy/models/convolutional/unet.py
+++ b/src/ashpy/models/convolutional/unet.py
@@ -27,7 +27,10 @@ class UNet(Conv2DInterface):
     """
     UNet Architecture.
 
-    Used in Image-to-Image Translation with Conditional Adversarial Nets [1]_.
+    Architecture similar to the one found in "Image-to-Image Translation
+    with Conditional Adversarial Nets" [1]_.
+
+    Originally proposed in "U-Net: Convolutional Networks for Biomedical Image Segmentation" [2]_.
 
     Examples:
         * Direct Usage:
@@ -50,8 +53,10 @@ class UNet(Conv2DInterface):
 
         (1, 512, 512, 3)
         True
 
-    .. [1] Image-to-Image Translation with Conditional Adversarial Nets -
-       https://arxiv.org/abs/1611.04076
+    .. [1] Image-to-Image Translation with Conditional Adversarial Nets -
+       https://arxiv.org/abs/1611.07004
+    .. [2] U-Net: Convolutional Networks for Biomedical Image Segmentation -
+       https://arxiv.org/abs/1505.04597
     """
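
As a side note on the README change: the `(tuple(a, b), noise)` vs. `(tuple(a, b), b)` convention is easy to get wrong, so here is a minimal illustrative sketch of the two sample formats. This is not part of the patch; the helper names (`make_gan_sample`, `make_pix2pix_sample`) are hypothetical and only demonstrate the tuple structure the README describes.

```python
import numpy as np

def make_gan_sample(a, b, noise):
    # Noise-driven generator format: ((input, condition), latent_noise)
    return ((a, b), noise)

def make_pix2pix_sample(a, b):
    # Pix2Pix-like generators take no latent noise: the condition `b`
    # is the generator input, so it takes the place of `noise`.
    return ((a, b), b)

# Dummy data, shapes chosen arbitrarily for illustration
a = np.zeros((512, 512, 3), dtype=np.float32)       # input image
b = np.ones((512, 512, 3), dtype=np.float32)        # condition image
noise = np.zeros((100,), dtype=np.float32)          # latent vector

gan_sample = make_gan_sample(a, b, noise)
pix_sample = make_pix2pix_sample(a, b)
```

In both cases the first element is the `(input, condition)` pair fed to the discriminator side; only the second element differs.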