Resolve merge conflict in README.
AndreWeiner committed Sep 13, 2022
2 parents 60f79d7 + ac04b2e commit f6ec1b8
Showing 30 changed files with 1,602 additions and 250 deletions.
64 changes: 49 additions & 15 deletions README.md
@@ -17,7 +17,7 @@ https://user-images.githubusercontent.com/8482575/120886182-f2b78800-c5ec-11eb-9

## Why *flowTorch*?

The *flowTorch* project was started to make the analysis and modeling of fluid data **easy** and **accessible** to everyone. The library design intends to strike a balance between **usability** and **flexibility**. Instead of a monolithic, black-box analysis tool, the library offers modular components that allow assembling custom analysis and modeling workflows with ease. *flowTorch* helps to fuse data from a wide range of file formats typical for fluid flow data, for example, to compare experiments and simulations. The available analysis and modeling tools are rigorously tested and demonstrated on a variety of different fluid flow datasets. Moreover, one can significantly accelerate the entire process of accessing, cleaning, analysing, and modeling fluid flow data by starting with one of the pipelines available in the *flowTorch* [documentation](https://flowmodelingcontrol.github.io/flowtorch-docs/1.0/index.html).
The *flowTorch* project was started to make the analysis and modeling of fluid data **easy** and **accessible** to everyone. The library design intends to strike a balance between **usability** and **flexibility**. Instead of a monolithic, black-box analysis tool, the library offers modular components that allow assembling custom analysis and modeling workflows with ease. *flowTorch* helps to fuse data from a wide range of file formats typical for fluid flow data, for example, to compare experiments and simulations. The available analysis and modeling tools are rigorously tested and demonstrated on a variety of different fluid flow datasets. Moreover, one can significantly accelerate the entire process of accessing, cleaning, analysing, and modeling fluid flow data by starting with one of the pipelines available in the *flowTorch* [documentation](https://flowmodelingcontrol.github.io/flowtorch-docs/1.1/index.html).

To get a first impression of what working with *flowTorch* looks like, the code snippet below shows part of a pipeline for performing a dynamic mode decomposition (DMD) of a transient *OpenFOAM* simulation.
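The README's own snippet is collapsed in this diff view. As a rough, self-contained illustration of what a DMD computes (plain NumPy on synthetic data, not the flowTorch API), a sketch might look like this:

```python
# Illustrative exact DMD on synthetic data (NumPy; not flowTorch code).
import numpy as np

rng = np.random.default_rng(0)
# latent dynamics: a pure rotation, observed through a random linear map C
t = np.linspace(0, 2 * np.pi, 64)
latent = np.vstack([np.cos(t), np.sin(t)])          # 2 x 64 latent states
C = rng.standard_normal((16, 2))
X = C @ latent                                      # 16 x 64 snapshot matrix

X1, X2 = X[:, :-1], X[:, 1:]                        # time-shifted pair
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
r = 2                                               # truncation rank
U, s, Vh = U[:, :r], s[:r], Vh[:r]
A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
eigvals, W = np.linalg.eig(A_tilde)                 # reduced operator spectrum
modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W     # exact DMD modes
print(np.abs(eigvals))  # both close to 1: pure oscillation, no growth/decay
```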

@@ -78,6 +78,9 @@ The easiest way to install *flowTorch* is as follows:
```
# install via pip
pip3 install git+https://github.com/FlowModelingControl/flowtorch
# or install a specific branch, e.g., aweiner
pip3 install git+https://github.com/FlowModelingControl/flowtorch.git@aweiner
# to uninstall flowTorch, run
pip3 uninstall flowtorch
```
@@ -90,7 +93,7 @@ and install the dependencies listed in *requirements.txt*:
pip3 install -r requirements.txt
```

To get an overview of what *flowTorch* can do for you, have a look at the [online documentation](https://flowmodelingcontrol.github.io/flowtorch-docs/1.0/index.html). The examples presented in the online documentation are also contained in this repository. In fact, the documentation is a static version of several [Jupyter labs](https://jupyter.org/) with start-to-end analyses. If you are interested in an interactive version of one particular example, navigate to `./docs/source/notebooks` and run `jupyter lab`. Note that to execute some of the notebooks, the **corresponding datasets are required**. The datasets can be downloaded [here](https://cloudstorage.tu-braunschweig.de/getlink/fiQUyeDFx3sg2T6LLHBQoCCx/datasets_29_10_2021.tar.gz) (~1.4GB). If the data are only required for unit testing, a reduced dataset may be downloaded [here](https://cloudstorage.tu-braunschweig.de/getlink/fiFZaHCgTWYeq1aZVg3hAui1/datasets_minimal_29_10_2021.tar.gz) (~384MB). Download the data into a directory of your choice and navigate into that directory. To extract the archive, run:
To get an overview of what *flowTorch* can do for you, have a look at the [online documentation](https://flowmodelingcontrol.github.io/flowtorch-docs/1.1/index.html). The examples presented in the online documentation are also contained in this repository. In fact, the documentation is a static version of several [Jupyter labs](https://jupyter.org/) with start-to-end analyses. If you are interested in an interactive version of one particular example, navigate to `./docs/source/notebooks` and run `jupyter lab`. Note that to execute some of the notebooks, the **corresponding datasets are required**. The datasets can be downloaded [here](https://cloud.tu-braunschweig.de/s/sJYEfzFG7yDg3QT) (~2.6GB). If the data are only required for unit testing, a reduced dataset may be downloaded [here](https://cloud.tu-braunschweig.de/s/b9xJ7XSHMbdKwxH) (~411MB). Download the data into a directory of your choice and navigate into that directory. To extract the archive, run:
```
# full dataset
tar xzf datasets_29_10_2021.tar.gz
@@ -109,6 +112,34 @@ echo "export FLOWTORCH_DATASETS=\"$(pwd)/datasets_minimal/\"" >> ~/.bashrc
. ~/.bashrc
```

## Installing ParaView

**Note:** the following installation of ParaView is only necessary if the *TecplotDataloader* is needed.

*flowTorch* uses the ParaView Python module for accessing [Tecplot](https://www.tecplot.com/) data. When installing ParaView, special attention must be paid to the installed Python and VTK versions. Therefore, the following manual installation is recommended instead of a standard package installation of ParaView.

1. Determine the version of Python:
```
python3 --version
# example output
Python 3.8.10
```
2. Download the ParaView binaries according to your Python version from [here](https://www.paraview.org/download/). Note that you may have to use an older version of ParaView to match your Python version.
3. Install the ParaView binaries, e.g., as follows:
```
# optional: remove old package installation if available
sudo apt remove paraview
# replace the archive's name if needed in the commands below
sudo mv ParaView-5.9.1-MPI-Linux-Python3.8-64bit.tar.gz /opt/
cd /opt
sudo tar xf ParaView-5.9.1-MPI-Linux-Python3.8-64bit.tar.gz
sudo rm ParaView-5.9.1-MPI-Linux-Python3.8-64bit.tar.gz
cd ParaView-5.9.1-MPI-Linux-Python3.8-64bit/
# add path to ParaView binary and Python modules
echo export PATH="\$PATH:$(pwd)/bin" >> ~/.bashrc
echo export PYTHONPATH="\$PYTHONPATH:$(pwd)/lib/python3.8/site-packages" >> ~/.bashrc
```
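After opening a new shell (or sourcing *~/.bashrc*), one way to verify that the ParaView Python modules are visible is a small check like the following (illustrative; it only tests importability without loading ParaView):

```python
# Check whether the ParaView Python package can be found on the current path.
import importlib.util

def paraview_available() -> bool:
    """True if the 'paraview' package is importable from PYTHONPATH."""
    return importlib.util.find_spec("paraview") is not None

print("ParaView Python module found:", paraview_available())
```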

## Development
### Documentation

@@ -151,21 +182,24 @@ If you encounter any issues using *flowTorch* or if you have any questions regar

## Reference

If *flowTorch* aids your work, you may support our work by referencing the following software article:
If *flowTorch* aids your work, you may support the project by referencing the following article:

```
@article{Weiner2021,
doi = {10.21105/joss.03860},
url = {https://doi.org/10.21105/joss.03860},
year = {2021},
publisher = {The Open Journal},
volume = {6},
number = {68},
pages = {3860},
author = {Andre Weiner and Richard Semaan},
title = {flowTorch - a Python library for analysis and reduced-order modeling of fluid flows},
journal = {Journal of Open Source Software}
}
```

For a list of scientific works relying on flowTorch, refer to [this list](references.md).

## License

4 changes: 2 additions & 2 deletions docs/source/conf.py
@@ -25,11 +25,11 @@ def setup(app):
# -- Project information -----------------------------------------------------

project = 'flowTorch'
copyright = '2020, flowTorch contributors'
copyright = '2022, flowTorch contributors'
author = 'flowTorch contributors'

# The full version, including alpha/beta/rc tags
release = '0.1'
release = '1.1'


# -- General configuration ---------------------------------------------------
8 changes: 8 additions & 0 deletions docs/source/flowtorch.data.rst
@@ -58,6 +58,14 @@ flowtorch.data.tau\_dataloader
:undoc-members:
:show-inheritance:

flowtorch.data.tecplot\_dataloader
----------------------------------

.. automodule:: flowtorch.data.tecplot_dataloader
:members:
:undoc-members:
:show-inheritance:

flowtorch.data.selection\_tools
-------------------------------

36 changes: 18 additions & 18 deletions docs/source/notebooks/dmd_intro.ipynb

Large diffs are not rendered by default.

33 changes: 20 additions & 13 deletions docs/source/notebooks/linear_algebra_basics.ipynb

Large diffs are not rendered by default.

3 changes: 2 additions & 1 deletion flowtorch/analysis/__init__.py
@@ -1,4 +1,5 @@
from .psp_explorer import PSPExplorer
from .pod import POD
from .dmd import DMD
from .svd import SVD
from .svd import SVD
from .svd import inexact_alm_matrix_complection
160 changes: 147 additions & 13 deletions flowtorch/analysis/dmd.py
@@ -2,7 +2,7 @@
"""

# standard library packages
from typing import Tuple, Set
from typing import Tuple, Set, Union
# third party packages
import torch as pt
from numpy import pi
@@ -32,10 +32,14 @@ class DMD(object):
tensor([-2.3842e-06, -4.2345e+01, -1.8552e+01])
>>> dmd.amplitude
tensor([10.5635+0.j, -0.0616+0.j, -0.0537+0.j])
>>> dmd = DMD(data_matrix, dt=0.1, rank=3, robust=True)
>>> dmd = DMD(data_matrix, dt=0.1, rank=3, robust={"tol": 1.0e-5, "verbose" : True})
"""

def __init__(self, data_matrix: pt.Tensor, dt: float, rank: int = None):
def __init__(self, data_matrix: pt.Tensor, dt: float, rank: int = None,
robust: Union[bool, dict] = False, unitary: bool = False,
optimal: bool = False, tlsq=False):
"""Create DMD instance based on data matrix and time step.
:param data_matrix: data matrix whose columns are formed by the individual snapshots
@@ -44,28 +48,93 @@ def __init__(self, data_matrix: pt.Tensor, dt: float, rank: int = None):
:type dt: float
:param rank: rank for SVD truncation, defaults to None
:type rank: int, optional
:param robust: data_matrix is split into low rank and sparse contributions
if True or if dictionary with options for Inexact ALM algorithm; the SVD
is computed only on the low rank matrix
:type robust: Union[bool,dict]
:param unitary: enforce the linear operator to be unitary; refer to piDMD_
by Peter Baddoo for more information
:type unitary: bool, optional
:param optimal: compute mode amplitudes based on a least-squares problem
as described in the spDMD_ article by M. Jovanović et al. (2014); in contrast
to the original spDMD implementation, the exact DMD modes are used in
the optimization problem as outlined in an article_ by R. Taylor
:type optimal: bool, optional
:param tlsq: de-biasing of the linear operator by solving a total least-squares
problem instead of a standard least-squares problem; the rank is selected
automatically or specified by the `rank` parameter; more information can be
found in the TDMD_ article by M. Hemati et al.
:type tlsq: bool, optional
.. _piDMD: https://github.com/baddoo/piDMD
.. _spDMD: https://hal-polytechnique.archives-ouvertes.fr/hal-00995141/document
.. _article: http://www.pyrunner.com/weblog/2016/08/03/spdmd-python/
.. _TDMD: http://cwrowley.princeton.edu/papers/Hemati-2017a.pdf
"""
self._dm = data_matrix
self._dt = dt
self._svd = SVD(self._dm[:, :-1], rank)
self._unitary = unitary
self._optimal = optimal
self._tlsq = tlsq
if self._tlsq:
svd = SVD(pt.vstack((self._dm[:, :-1], self._dm[:, 1:])),
rank, robust)
P = svd.V @ svd.V.conj().T
self._X = self._dm[:, :-1] @ P
self._Y = self._dm[:, 1:] @ P
self._svd = SVD(self._X, svd.rank)
del svd
else:
self._svd = SVD(self._dm[:, :-1], rank, robust)
self._X = self._dm[:, :-1]
self._Y = self._dm[:, 1:]
self._eigvals, self._eigvecs, self._modes = self._compute_mode_decomposition()
self._amplitude = self._compute_amplitudes()
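The `tlsq` branch above projects both snapshot matrices onto the leading right-singular subspace of the stacked matrix, which is the de-biasing step of TDMD. A NumPy sketch of that projection (with random stand-ins for the data):

```python
# Sketch of the total least-squares (TLSQ) de-biasing projection; not class code.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 20))        # snapshots 1..m-1
Y = rng.standard_normal((8, 20))        # snapshots 2..m
r = 5                                   # truncation rank
_, _, Vh = np.linalg.svd(np.vstack((X, Y)), full_matrices=False)
V = Vh[:r].conj().T                     # leading right-singular vectors
P = V @ V.conj().T                      # rank-r orthogonal projector
X_clean, Y_clean = X @ P, Y @ P         # de-biased snapshot matrices
# the projector is idempotent, so projecting twice changes nothing
assert np.allclose(X_clean @ P, X_clean)
```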

def _compute_operator(self):
"""Compute the approximate linear (DMD) operator.
"""
if self._unitary:
Xp = self._svd.U.conj().T @ self._X
Yp = self._svd.U.conj().T @ self._Y
U, _, VT = pt.linalg.svd(Yp @ Xp.conj().T, full_matrices=False)
return U @ VT
else:
s_inv = pt.diag(1.0 / self._svd.s)
return self._svd.U.conj().T @ self._Y @ self._svd.V @ s_inv
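The unitary branch amounts to an orthogonal Procrustes problem in the reduced coordinates: the best unitary operator mapping the projected snapshots is obtained from the SVD of `Yp @ Xp^H`. A NumPy sketch with random stand-ins:

```python
# Sketch of the unitary (Procrustes) operator computation; not class code.
import numpy as np

rng = np.random.default_rng(2)
Xp = rng.standard_normal((4, 30))             # projected snapshots 1..m-1
Yp = rng.standard_normal((4, 30))             # projected snapshots 2..m
U, _, Vt = np.linalg.svd(Yp @ Xp.conj().T, full_matrices=False)
A = U @ Vt                                    # best unitary fit Yp ~ A Xp
assert np.allclose(A @ A.conj().T, np.eye(4)) # unitary by construction
```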

def _compute_mode_decomposition(self):
"""Compute reduced operator, eigen decomposition, and DMD modes.
"""Compute reduced operator, eigen-decomposition, and DMD modes.
"""
s_inv = pt.diag(1.0 / self._svd.s)
operator = (
self._svd.U.conj().T @ self._dm[:, 1:] @ self._svd.V @ s_inv
)
operator = self._compute_operator()
val, vec = pt.linalg.eig(operator)
# type conversion is currently not implemented for pt.complex32
# such that the dtype for the modes is always pt.complex64
phi = (
self._dm[:, 1:].type(val.dtype) @ self._svd.V.type(val.dtype)
self._Y.type(val.dtype) @ self._svd.V.type(val.dtype)
@ s_inv.type(val.dtype) @ vec
)
return val, vec, phi

def _compute_amplitudes(self):
"""Compute amplitudes for exact DMD modes.
If *optimal* is False, the amplitudes are computed based on the first
snapshot in the data matrix; otherwise, a least-squares problem as
introduced by Jovanović et al. is solved (refer to the documentation
in the constructor for more information).
"""
if self._optimal:
vander = pt.vander(self.eigvals, self._dm.shape[-1], True)
P = (self.modes.conj().T @ self.modes) * \
(vander @ vander.conj().T).conj()
q = pt.diag(vander @ self._dm.type(P.dtype).conj().T @
self.modes).conj()
else:
P = self._modes
q = self._X[:, 0].type(P.dtype)
return pt.linalg.lstsq(P, q).solution
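For `optimal=False` the amplitudes are simply the least-squares fit of the first snapshot by the DMD modes. A NumPy sketch of that default branch:

```python
# Default amplitude computation: least-squares fit of the first snapshot.
import numpy as np

rng = np.random.default_rng(3)
modes = rng.standard_normal((50, 3))     # tall matrix of mode vectors
b_true = np.array([2.0, -1.0, 0.5])      # amplitudes used to build the snapshot
x0 = modes @ b_true                      # first snapshot lies in the mode span
b = np.linalg.lstsq(modes, x0, rcond=None)[0]
assert np.allclose(b, b_true)            # exact recovery for in-span data
```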

def partial_reconstruction(self, mode_indices: Set[int]) -> pt.Tensor:
"""Reconstruct data matrix with limited number of modes.
@@ -79,11 +148,30 @@ def partial_reconstruction(self, mode_indices: Set[int]) -> pt.Tensor:
mode_indices = pt.tensor(list(mode_indices), dtype=pt.int64)
mode_mask[mode_indices] = 1.0
reconstruction = (self.modes * mode_mask) @ self.dynamics
if self._dm.dtype in (pt.complex64, pt.complex32):
if self._dm.dtype in (pt.complex128, pt.complex64, pt.complex32):
return reconstruction.type(self._dm.dtype)
else:
return reconstruction.real.type(self._dm.dtype)

def top_modes(self, n: int = 10, integral: bool = False) -> pt.Tensor:
"""Get the indices of the first n most important modes.
Note that the conjugate complex modes for real data matrices are
not filtered out.
:param n: number of indices to return; defaults to 10
:type n: int
:param integral: if True, the modes are sorted according to their
integral contribution; defaults to False
:type integral: bool, optional
:return: indices of top n modes sorted by amplitude or integral
contribution
:rtype: pt.Tensor
"""
importance = self.integral_contribution if integral else self.amplitude
n = min(n, importance.shape[0])
return importance.abs().topk(n).indices
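`top_modes` boils down to sorting importance magnitudes in descending order and keeping the first `n` indices. A NumPy equivalent of the `abs().topk(n).indices` call:

```python
# NumPy equivalent of torch's importance.abs().topk(n).indices.
import numpy as np

amplitude = np.array([10.5, -0.06, 3.2, -7.1])
n = 2
top = np.argsort(np.abs(amplitude))[::-1][:n]   # indices of largest |amplitude|
print(top)  # [0 3]
```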

@property
def required_memory(self) -> int:
"""Compute the memory size in bytes of the DMD.
@@ -101,6 +189,10 @@ def required_memory(self) -> int:
def svd(self) -> SVD:
return self._svd

@property
def operator(self) -> pt.Tensor:
return self._compute_operator()

@property
def modes(self) -> pt.Tensor:
return self._modes
@@ -123,24 +215,66 @@ def growth_rate(self) -> pt.Tensor:

@property
def amplitude(self) -> pt.Tensor:
return pt.linalg.pinv(self._modes) @ self._dm[:, 0].type(self._modes.dtype)
return self._amplitude

@property
def dynamics(self) -> pt.Tensor:
return pt.diag(self.amplitude) @ pt.vander(self.eigvals, self._dm.shape[-1], True)

@property
def integral_contribution(self) -> pt.Tensor:
"""Integral contribution of individual modes according to J. Kou et al. 2017.
DOI: https://doi.org/10.1016/j.euromechflu.2016.11.015
"""
return self.modes.norm(dim=0)**2 * self.dynamics.abs().sum(dim=1)
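The property weights each mode's squared norm by the accumulated modulus of its dynamics over all snapshots. With toy numbers, the computation reads:

```python
# Integral contribution per mode: ||phi_i||^2 * sum_t |dynamics_i(t)|.
import numpy as np

modes = np.array([[1.0, 0.0],
                  [0.0, 2.0]])                 # two modes as columns
dynamics = np.array([[1.0, 0.5, 0.25],         # mode 0 decays quickly
                     [1.0, 1.0, 1.0]])         # mode 1 persists
contribution = np.linalg.norm(modes, axis=0)**2 * np.abs(dynamics).sum(axis=1)
# mode 0: 1^2 * 1.75 = 1.75; mode 1: 2^2 * 3 = 12.0
assert np.allclose(contribution, [1.75, 12.0])
```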

@property
def reconstruction(self) -> pt.Tensor:
"""Reconstruct an approximation of the training data.
:return: reconstructed training data
:rtype: pt.Tensor
"""
if self._dm.dtype in (pt.complex64, pt.complex32):
if self._dm.dtype in (pt.complex128, pt.complex64, pt.complex32):
return (self._modes @ self.dynamics).type(self._dm.dtype)
else:
return (self._modes @ self.dynamics).real.type(self._dm.dtype)

@property
def reconstruction_error(self) -> pt.Tensor:
"""Compute the reconstruction error.
:return: difference between reconstruction and data matrix
:rtype: pt.Tensor
"""
return self.reconstruction - self._dm

@property
def projection_error(self) -> pt.Tensor:
"""Compute the difference between Y and AX.
:return: projection error
:rtype: pt.Tensor
"""
YH = (self.modes @ pt.diag(self.eigvals)) @ \
(pt.linalg.pinv(self.modes) @ self._X.type(self.modes.dtype))
if self._Y.dtype in (pt.complex128, pt.complex64, pt.complex32):
return YH - self._Y
else:
return YH.real.type(self._Y.dtype) - self._Y

@property
def tlsq_error(self) -> Tuple[pt.Tensor, pt.Tensor]:
"""Compute the *noise* in X and Y.
:return: noise in X and Y
:rtype: Tuple[pt.Tensor, pt.Tensor]
"""
if not self._tlsq:
print("Warning: noise is only removed if tlsq=True")
return self._dm[:, :-1] - self._X, self._dm[:, 1:] - self._Y

def __repr__(self):
return f"{self.__class__.__qualname__}(data_matrix, rank={self._svd.rank})"
