Fix the multilooked PS mask output (#111)
* pass through output bounds. stitch ps looked file

* fix the looked ps file creation, pass through filename

* remove old __all__

* removed unused half_window_to_full

* exclude only 1.12.2, not above it

* use `main_thread`, not thread iterator

* return ps_looked file from `multilook_ps_mask`, fix `filled` instead of `fill`

* start new changelog section

* return the ps file in `wrapped_phase.py`

* try pytest without numba for segfault debugging

* update install descriptions

* change back jit for test

* bump mean/var difference for SHP test

* don't pin isce3
scottstanie authored Aug 17, 2023
1 parent 423da2d commit 3706a1b
Showing 15 changed files with 97 additions and 75 deletions.
3 changes: 1 addition & 2 deletions .github/workflows/test-build-push.yml
@@ -29,7 +29,6 @@ jobs:
gdal=3.5
h5py=3.6
h5netcdf=1.0
isce3=0.8.0
numpy=1.20
numba=0.54
pillow==7.0
@@ -69,7 +68,7 @@ jobs:
echo "NUMBA_BOUNDSCHECK=1" >> $GITHUB_ENV
- name: Test (with numba boundscheck on)
run: |
pytest
pytest -n0
# https://community.codecov.com/t/numba-jitted-methods-are-not-captured-by-codecov/2649
# - name: Coverage report
# uses: codecov/codecov-action@v2
6 changes: 6 additions & 0 deletions CHANGELOG.md
@@ -1,3 +1,9 @@
# Unreleased

**Added**

- Save a multilooked version of the PS mask for output inspection

# [0.2.0](https://github.com/opera-adt/dolphin/compare/v0.1.0...v0.2.0) - 2023-07-25

**Added**
8 changes: 5 additions & 3 deletions README.md
@@ -12,9 +12,11 @@ High resolution wrapped phase estimation for InSAR using combined PS/DS processi
`dolphin` is available on conda:

```bash
conda install -c conda-forge dolphin
mamba install -c conda-forge dolphin
```

(Note: [using `mamba`](https://mamba.readthedocs.io/en/latest/mamba-installation.html#mamba-install) is recommended for conda-forge packages, but miniconda can also be used.)

To install locally:

1. Download source code:
@@ -23,12 +25,12 @@ git clone https://github.com/opera-adt/dolphin.git && cd dolphin
```
2. Install dependencies:
```bash
conda env create --file conda-env.yml
mamba env create --file conda-env.yml
```

or if you have an existing environment:
```bash
conda env update --name my-existing-env --file conda-env.yml
mamba env update --name my-existing-env --file conda-env.yml
```

3. Install `dolphin` via pip:
4 changes: 2 additions & 2 deletions conda-env.yml
@@ -7,9 +7,9 @@ dependencies:
- git # for pip install, due to setuptools_scm
- gdal>=3.3
- h5py>=3.6
- hdf5<1.12.2 # https://github.com/SciTools/iris/issues/5187 and https://github.com/pydata/xarray/issues/7549
- hdf5!=1.12.2 # https://github.com/SciTools/iris/issues/5187 and https://github.com/pydata/xarray/issues/7549
- h5netcdf>=1.0
- isce3>=0.8.0
- isce3 # >=0.14.0 # Right now, isce3 messes up conda's solvers. Should move to optional.
- numba>=0.54
- numpy>=1.20
- pillow>=7.0
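The `hdf5<1.12.2` → `hdf5!=1.12.2` change above narrows the pin to exclude only the one broken release rather than everything at or above it. Conda's match specs are not PEP 440, but the `!=` semantics are the same as Python's `packaging` specifiers, which can be used to sanity-check the difference between the two pins (a sketch; assumes the third-party `packaging` library is installed):

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

old_pin = SpecifierSet("<1.12.2")   # excludes 1.12.2 *and* every later release
new_pin = SpecifierSet("!=1.12.2")  # excludes only the broken 1.12.2 release

for v in ["1.12.1", "1.12.2", "1.12.3", "1.14.0"]:
    print(v, Version(v) in old_pin, Version(v) in new_pin)
# 1.12.1 is allowed by both; 1.12.2 by neither;
# 1.12.3 and 1.14.0 are allowed only by the new pin
```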
56 changes: 31 additions & 25 deletions docs/getting-started.md
@@ -1,33 +1,13 @@
## Install

The following will install `dolphin` into a conda environment.

1. Download source code:
```bash
git clone https://github.com/opera-adt/dolphin.git && cd dolphin
```
2. Install dependencies:
```bash
conda env create --file conda-env.yml
```
`dolphin` is available on conda-forge:

or if you have an existing environment:
```bash
conda env update --name my-existing-env --file conda-env.yml
```

3. Install `dolphin` via pip:
```bash
conda activate dolphin-env
python -m pip install .
mamba install -c conda-forge dolphin
```


If you have access to a GPU, you can install the extra requirements from running the GPU accelerated algorithms:
```bash
conda env update --name dolphin-env --file conda-env-gpu-extras.yml
```

## Usage

The main entry point for running the phase estimation/stitching and unwrapping workflows is named `dolphin`, which has two subcommands:
@@ -66,16 +46,42 @@ The full set of options is written to the configuration file; you can edit this
To contribute to the development of `dolphin`, you can fork the repository and install the package in development mode.
We encourage new features to be developed on a new branch of your fork, and then submitted as a pull request to the main repository.

Once you're ready to write new code, you can use the following additional steps to add to your development environment:
To install locally:

1. Download source code:
```bash
git clone https://github.com/opera-adt/dolphin.git && cd dolphin
```
2. Install dependencies:
```bash
mamba env create --file conda-env.yml
```

or if you have an existing environment:
```bash
mamba env update --name my-existing-env --file conda-env.yml
```

3. Install `dolphin` via pip:
```bash
mamba activate dolphin-env
python -m pip install -e .
```


If you have access to a GPU, you can install the extra requirements for running the GPU-accelerated algorithms:
```bash
mamba env update --name dolphin-env --file conda-env-gpu-extras.yml
```


The extra packages required for testing and building the documentation can be installed:
```bash
# Run "pip install -e" to install with extra development requirements
python -m pip install -e ".[docs,test]"
```
This will install the `dolphin` package in development mode, and install the additional dependencies for documentation and testing.

After changing code, we use [`pre-commit`](https://pre-commit.com/) to automatically run linting and formatting:
We use [`pre-commit`](https://pre-commit.com/) to automatically run linting and formatting:
```bash
# Get pre-commit hooks so that linting/formatting is done automatically
pre-commit install
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -65,7 +65,7 @@ ignore = "D100,D102,D104,D105,D106,D107,D203,D204,D213,D413"

[tool.pytest.ini_options]
doctest_optionflags = "NORMALIZE_WHITESPACE NUMBER"
addopts = " --cov=dolphin -n auto --maxprocesses=8 --doctest-modules --ignore=scripts --ignore=docs --ignore=data"
addopts = " --cov=dolphin -n auto --maxprocesses=8 --doctest-modules --randomly-seed=1234 --ignore=scripts --ignore=docs --ignore=data"
filterwarnings = [
"error",
# DeprecationWarning thrown in pkg_resources for older numba verions and llvmlite
18 changes: 3 additions & 15 deletions src/dolphin/_background.py
@@ -4,8 +4,7 @@
from collections.abc import Callable
from concurrent.futures import Executor, Future
from queue import Empty, Full, Queue
from threading import Event, Thread
from threading import enumerate as threading_enumerate
from threading import Event, Thread, main_thread
from typing import Any, Optional

from dolphin._log import get_log
@@ -16,17 +15,6 @@
_DEFAULT_TIMEOUT = 0.5


def is_main_thread_active() -> bool:
"""Check if the main thread is still active.
Used to check if the writing thread should exit if there was
some exception in the main thread.
Source: https://stackoverflow.com/a/23443397/4174466
"""
return any((i.name == "MainThread") and i.is_alive() for i in threading_enumerate())


class BackgroundWorker(abc.ABC):
"""Base class for doing work in a background thread.
@@ -78,7 +66,7 @@ def __init__(

def _consume_work_queue(self):
while True:
if not is_main_thread_active():
if not main_thread().is_alive():
break

logger.debug(f"{self.name} getting work")
@@ -312,7 +300,7 @@ def run(self):
# Write the header
f.write("time(s),memory(GB)\n")

while not self._finished_event.is_set() and is_main_thread_active():
while not self._finished_event.is_set() and main_thread().is_alive():
mem = self._get_gpu_memory()
t_cur = time.time() - self.t0
with open(self.log_file, "a") as f:
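The `_background.py` change above replaces a hand-rolled `is_main_thread_active()` helper, which enumerated all threads looking for one named `"MainThread"`, with the stdlib `threading.main_thread()` accessor. A minimal sketch of the pattern (hypothetical worker, not dolphin's actual class):

```python
import threading
import time

def consume_work_queue(stop_event: threading.Event) -> None:
    """Background consumer that exits if the main thread dies."""
    while not stop_event.is_set():
        # main_thread() is a direct stdlib accessor; no need to enumerate
        # every thread searching for the one named "MainThread".
        if not threading.main_thread().is_alive():
            break  # main thread crashed -- don't hang the process
        time.sleep(0.01)

stop = threading.Event()
worker = threading.Thread(target=consume_work_queue, args=(stop,), daemon=True)
worker.start()
stop.set()            # signal a normal shutdown
worker.join(timeout=1)
assert not worker.is_alive()
```

Besides being shorter, `main_thread()` returns the main thread object even after a rename, whereas matching on the `"MainThread"` name is fragile.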
26 changes: 17 additions & 9 deletions src/dolphin/ps.py
@@ -257,7 +257,7 @@ def multilook_ps_mask(
strides: dict[str, int],
ps_mask_file: Filename,
output_file: Optional[Filename] = None,
):
) -> Path:
"""Create a multilooked version of the full-res PS mask.
Parameters
@@ -269,17 +269,24 @@
output_file : Optional[Filename], optional
Name of file to save result to.
Defaults to same as `ps_mask_file`, but with "_looked" added before suffix.
Returns
-------
output_file : Path
"""
if strides == {"x": 1, "y": 1}:
logger.info("No striding request, skipping multilook.")
return
return Path(ps_mask_file)
if output_file is None:
ps_suffix = Path(ps_mask_file).suffix
output_file = Path(str(ps_mask_file).replace(ps_suffix, f"_looked{ps_suffix}"))
logger.info(f"Saving a looked PS mask to {output_file}")
if Path(output_file).exists():
logger.info(f"{output_file} exists, skipping.")
return
out_path = Path(str(ps_mask_file).replace(ps_suffix, f"_looked{ps_suffix}"))
logger.info(f"Saving a looked PS mask to {out_path}")
else:
out_path = Path(output_file)

if Path(out_path).exists():
logger.info(f"{out_path} exists, skipping.")
return out_path

ps_mask = io.load_gdal(ps_mask_file, masked=True)
full_rows, full_cols = ps_mask.shape
@@ -289,11 +296,12 @@
# make sure it's the same size as the MLE result/temp_coh after padding
out_rows, out_cols = full_rows // strides["y"], full_cols // strides["x"]
ps_mask_looked = ps_mask_looked[:out_rows, :out_cols]
ps_mask_looked = ps_mask_looked.astype("uint8").fill(255)
ps_mask_looked = ps_mask_looked.astype("uint8").filled(255)
io.write_arr(
arr=ps_mask_looked,
like_filename=ps_mask_file,
output_name=output_file,
output_name=out_path,
strides=strides,
nodata=255,
)
return out_path
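The one-character `filled` fix above matters because `numpy.ma.MaskedArray.fill` and `.filled` do very different things: `.fill(v)` mutates the array in place and returns `None` (so the original code bound `ps_mask_looked` to `None` before writing), while `.filled(v)` returns a plain `ndarray` with only the *masked* entries replaced by `v`. A small illustration:

```python
import numpy as np

# A 2x2 PS mask with one masked (nodata) pixel
ps = np.ma.array([[1, 0], [0, 1]], mask=[[False, False], [True, False]])

filled = ps.astype("uint8").filled(255)  # masked pixel -> 255; returns ndarray
print(type(filled), filled.tolist())     # <class 'numpy.ndarray'> [[1, 0], [255, 1]]

broken = ps.astype("uint8").fill(255)    # fills *every* element in place; returns None
print(broken)                            # None -- the bug this commit fixes
```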
5 changes: 0 additions & 5 deletions src/dolphin/utils.py
@@ -331,11 +331,6 @@ def full_suffix(filename: Filename):
return "".join(fpath.suffixes)


def half_window_to_full(half_window: Union[list, tuple]) -> tuple[int, int]:
"""Convert a half window size to a full window size."""
return (2 * half_window[0] + 1, 2 * half_window[1] + 1)


def gpu_is_available() -> bool:
"""Check if a GPU is available."""
try:
2 changes: 0 additions & 2 deletions src/dolphin/workflows/_utils.py
@@ -19,8 +19,6 @@

logger = get_log(__name__)

__all__ = ["group_by_burst", "setup_output_folder"]


def group_by_burst(
file_list: Sequence[Filename],
5 changes: 4 additions & 1 deletion src/dolphin/workflows/s1_disp.py
@@ -97,6 +97,7 @@ def run(

ifg_file_list: list[Path] = []
tcorr_file_list: list[Path] = []
ps_file_list: list[Path] = []
# The comp_slc tracking object is a dict, since we'll need to organize
# multiple comp slcs by burst (they'll have the same filename)
comp_slc_dict: dict[str, Path] = {}
@@ -119,10 +120,11 @@
for fut in fut_to_burst:
burst = fut_to_burst[fut]

cur_ifg_list, comp_slc, tcorr = fut.result()
cur_ifg_list, comp_slc, tcorr, ps_file = fut.result()
ifg_file_list.extend(cur_ifg_list)
comp_slc_dict[burst] = comp_slc
tcorr_file_list.append(tcorr)
ps_file_list.append(ps_file)

# ###################################
# 2. Stitch and unwrap interferograms
Expand All @@ -131,6 +133,7 @@ def run(
stitch_and_unwrap.run(
ifg_file_list=ifg_file_list,
tcorr_file_list=tcorr_file_list,
ps_file_list=ps_file_list,
cfg=cfg,
debug=debug,
)
24 changes: 19 additions & 5 deletions src/dolphin/workflows/stitch_and_unwrap.py
@@ -15,6 +15,7 @@
def run(
ifg_file_list: Sequence[Path],
tcorr_file_list: Sequence[Path],
ps_file_list: Sequence[Path],
cfg: Workflow,
debug: bool = False,
unwrap_jobs: int = 1,
@@ -23,11 +24,13 @@ def run(
Parameters
----------
ifg_file_list : Sequence[VRTInterferogram]
Sequence of [`VRTInterferogram`][dolphin.interferogram.VRTInterferogram] objects
to stitch together
ifg_file_list : Sequence[Path]
Sequence of interferograms files.
Separate bursts (if any) will be stitched together before unwrapping.
tcorr_file_list : Sequence[Path]
Sequence of paths to the correlation files for each interferogram
Sequence of paths to the burst-wise temporal coherence files.
ps_file_list : Sequence[Path]
Sequence of paths to the (looked) burst-wise ps mask files.
cfg : Workflow
[`Workflow`][dolphin.workflows.config.Workflow] object with workflow parameters
debug : bool, optional
@@ -77,7 +80,18 @@
tcorr_file_list,
outfile=stitched_tcorr_file,
driver="GTiff",
overwrite=False,
out_bounds=cfg.output_options.bounds,
out_bounds_epsg=cfg.output_options.bounds_epsg,
)

# Stitch the looked PS files
stitched_ps_file = stitched_ifg_dir / "ps_mask_looked.tif"
stitching.merge_images(
ps_file_list,
outfile=stitched_ps_file,
out_nodata=255,
driver="GTiff",
resample_alg="nearest",
out_bounds=cfg.output_options.bounds,
out_bounds_epsg=cfg.output_options.bounds_epsg,
)
8 changes: 5 additions & 3 deletions src/dolphin/workflows/wrapped_phase.py
@@ -13,7 +13,7 @@


@log_runtime
def run(cfg: Workflow, debug: bool = False) -> tuple[list[Path], Path, Path]:
def run(cfg: Workflow, debug: bool = False) -> tuple[list[Path], Path, Path, Path]:
"""Run the displacement workflow on a stack of SLCs.
Parameters
@@ -88,7 +88,9 @@ def run(cfg: Workflow, debug: bool = False) -> tuple[list[Path], Path, Path]:

# Save a looked version of the PS mask too
strides = cfg.output_options.strides
ps.multilook_ps_mask(strides=strides, ps_mask_file=cfg.ps_options._output_file)
ps_looked_file = ps.multilook_ps_mask(
strides=strides, ps_mask_file=cfg.ps_options._output_file
)

# #########################
# phase linking/EVD step
@@ -183,4 +185,4 @@
else:
ifg_file_list = [ifg.path for ifg in network.ifg_list] # type: ignore

return ifg_file_list, comp_slc_file, tcorr_file
return ifg_file_list, comp_slc_file, tcorr_file, ps_looked_file
1 change: 1 addition & 0 deletions tests/requirements.txt
@@ -4,4 +4,5 @@ pooch
pre-commit
pytest
pytest-cov
pytest-randomly # control random seed
pytest-xdist # parallel tests: https://pytest-xdist.readthedocs.io/en/latest/