A library for coupling (Py)Torch machine learning models to Fortran
This repository contains code, utilities, and examples for directly calling PyTorch ML models from Fortran.
For full API and user documentation, please see the online documentation, which is significantly more detailed than this README.
NOTE: We recently made breaking changes to the API as it heads towards a stable release. Please see the online updates documentation for clear guidance on how to easily update your older code to run with the latest version of FTorch.
- Description
- Installation
- Usage
- GPU Support
- Examples
- License
- Contributions
- Authors and Acknowledgment
- Users
It is desirable to be able to run machine learning (ML) models directly in Fortran. Such models are often trained in some other language (say Python) using popular frameworks (say PyTorch) and saved. We want to run inference on such a model without having to call a Python executable. To achieve this, we use the existing Torch C++ interface.
This project provides a library enabling users to couple their PyTorch models directly to Fortran code. We provide installation instructions for the library as well as instructions and examples for performing coupling. A minimal example of calling a model from Fortran looks like this:
```fortran
use ftorch
...
type(torch_model) :: model
type(torch_tensor), dimension(n_inputs) :: model_inputs_arr
type(torch_tensor), dimension(n_outputs) :: model_output_arr
...
call torch_model_load(model, "/my/saved/TorchScript/model.pt")
call torch_tensor_from_array(model_inputs_arr(1), input_fortran, in_layout, torch_kCPU)
call torch_tensor_from_array(model_output_arr(1), output_fortran, out_layout, torch_kCPU)
call torch_model_forward(model, model_inputs_arr, model_output_arr)
```
The following presentations provide an introduction and overview of FTorch:
- Reducing the overheads for coupling PyTorch machine learning models to Fortran
  ML & DL Seminars, LSCE, IPSL, Paris - November 2023
  Slides - Recording
- Reducing the Overhead of Coupled Machine Learning Models between Python and Fortran
  RSECon23, Swansea - September 2023
  Slides - Recording
Project status: This project is currently in pre-release, with documentation and code being prepared for a first release. As we stabilise the API in preparation for the first release, there may be some breaking changes. Please see the online updates documentation for clear guidance on how to update your older code to run with the latest version. If you are interested in using this library, please get in touch.
For a similar approach to calling TensorFlow models from Fortran please see Fortran-TF-lib.
Installing the library requires the following to be available on the system:
- CMake >= 3.15
- libtorch* or PyTorch
- Fortran (2008 standard compliant), C++ (must fully support C++17), and C compilers
* The minimal example provided downloads the CPU-only Linux Nightly binary. Alternative versions may be required.
If building in a Windows environment, you can use either Windows Subsystem for Linux (WSL) or Visual Studio and the Intel Fortran Compiler. For full details on the process, see the online Windows documentation. Note that libtorch is not supported for the GNU Fortran compiler with MinGW.
At the time of writing, libtorch is only officially available for x86 architectures (according to https://pytorch.org/). However, the version of PyTorch installed by `pip install torch` ships an ARM libtorch binary that works on Apple Silicon. For detailed installation instructions, please see the online installation documentation.
To build and install the library:
- Navigate to the location in which you wish to install the source and run:

  ```
  git clone git@github.com:Cambridge-ICCS/FTorch.git
  ```

  to clone via ssh, or

  ```
  git clone https://github.com/Cambridge-ICCS/FTorch.git
  ```

  to clone via https.
- Navigate to the source directory by running:

  ```
  cd FTorch/src/
  ```
- Build the library using CMake with the relevant options from the table below:

  ```
  mkdir build
  cd build
  cmake .. -DCMAKE_BUILD_TYPE=Release
  ```
The following CMake options are available and can be passed as arguments to `cmake` through `-D<Option>=<Value>`. It is likely that you will need to provide at least `CMAKE_PREFIX_PATH`.

| Option | Value | Description |
| ------ | ----- | ----------- |
| `CMAKE_Fortran_COMPILER` | `ifort` / `gfortran` | Specify a Fortran compiler to build the library with. This should match the Fortran compiler you're using to build the code you are calling this library from.¹ |
| `CMAKE_C_COMPILER` | `icc` / `gcc` | Specify a C compiler to build the library with.¹ |
| `CMAKE_CXX_COMPILER` | `icpc` / `g++` | Specify a C++ compiler to build the library with.¹ |
| `CMAKE_PREFIX_PATH` | `</path/to/libTorch/>` | Location of the Torch installation.² |
| `CMAKE_INSTALL_PREFIX` | `</path/to/install/lib/at/>` | Location at which the library files should be installed. By default this is `/usr/local`. |
| `CMAKE_BUILD_TYPE` | `Release` / `Debug` | Specifies the build type. The default is `Debug`; use `Release` for production code. |
| `CMAKE_BUILD_TESTS` | `TRUE` / `FALSE` | Specifies whether to compile FTorch's test suite as part of the build. |
| `ENABLE_CUDA` | `TRUE` / `FALSE` | Specifies whether to check for and enable CUDA.² |

¹ On Windows this may need to be the full path to the compiler if CMake cannot locate it by default.

² The path to the Torch installation needs to allow CMake to locate the relevant Torch CMake files. If Torch has been installed as libtorch, then this should be the absolute path to the unzipped libtorch distribution. If Torch has been installed as PyTorch in a Python venv (virtual environment), e.g. with `pip install torch`, then this should be `</path/to/venv/>lib/python<3.xx>/site-packages/torch/`. You can find the location of your torch install by importing torch from your Python environment (`import torch`) and running `print(torch.__file__)`.
- Make and install the library to the desired location with either:

  ```
  cmake --build . --target install
  ```

  or, if you want to separate these steps:

  ```
  cmake --build .
  cmake --install .
  ```

  Note: If you are using CMake < 3.15 then you will need to build and install separately using the commands specific to your build system. For example, if using `make` on UNIX this would be:

  ```
  make
  make install
  ```
Installation will place the following directories at the install location:
- `CMAKE_INSTALL_PREFIX/include/` - contains header and mod files
- `CMAKE_INSTALL_PREFIX/lib/` - contains the `cmake` directory and `.so` files

Note: depending on your system and architecture, `lib` may be `lib64`, and you may have `.dll` files or similar.
Note: In a Windows environment this will require administrator privileges for the default install location.
In order to use FTorch, users will typically need to follow these steps:

- Save a PyTorch model as TorchScript.
- Write Fortran using the FTorch bindings to use the model from within Fortran (see the sketch below).
- Build and compile the code, linking against the FTorch library.

These steps are described in more detail in the online documentation.
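As a rough illustration of the second step, the sketch below expands the snippet from the Description into a minimal self-contained program. It uses only the calls shown in that snippet; the array shapes, kinds, layout values, and model path are placeholders rather than requirements of the API.

```fortran
! Illustrative sketch only: shapes, kinds, and the model path are placeholders.
program ftorch_usage_sketch
  use ftorch
  use, intrinsic :: iso_fortran_env, only: sp => real32
  implicit none

  integer, parameter :: n_inputs = 1, n_outputs = 1

  ! Fortran data passed to and received from the model (example shapes)
  real(sp), dimension(10), target :: input_fortran
  real(sp), dimension(5),  target :: output_fortran

  ! Layout arrays: how the Fortran array dimensions map onto Torch tensor dimensions
  integer :: in_layout(1)  = [1]
  integer :: out_layout(1) = [1]

  type(torch_model) :: model
  type(torch_tensor), dimension(n_inputs)  :: model_inputs_arr
  type(torch_tensor), dimension(n_outputs) :: model_output_arr

  input_fortran = 1.0_sp

  ! Load the TorchScript model and wrap the Fortran arrays as Torch tensors
  call torch_model_load(model, "/my/saved/TorchScript/model.pt")
  call torch_tensor_from_array(model_inputs_arr(1), input_fortran, in_layout, torch_kCPU)
  call torch_tensor_from_array(model_output_arr(1), output_fortran, out_layout, torch_kCPU)

  ! Run inference; the result is written into output_fortran
  call torch_model_forward(model, model_inputs_arr, model_output_arr)
  print *, output_fortran

  ! Depending on the FTorch version, tensors and the model may also need to be
  ! freed when no longer required; see the API documentation.
end program ftorch_usage_sketch
```

Broadly, the layout arrays describe how the dimensions of the column-major Fortran arrays map onto the dimensions of the Torch tensors; see the online documentation for details.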
Running on a GPU requires a CUDA-compatible installation of libtorch and two main adaptations to the code:

- When saving a TorchScript model, ensure that it is on the GPU.
- When using FTorch in Fortran, set the device for the input tensor(s) to `torch_kCUDA` rather than `torch_kCPU` (see the sketch below).
For detailed guidance about running on GPU, including instructions for using multiple devices, please see the online GPU documentation.
If your code uses large tensors (more than 2,147,483,647 elements in any one dimension, i.e. the maximum value of a 32-bit integer), you may need to compile FTorch with 64-bit integers. For information on how to do this, please see our FAQ.
Examples of how to use this library are provided in the examples directory.
They demonstrate different functionalities of the code and are provided with
instructions to modify, build, and run as necessary.
For information on testing, see the corresponding webpage or the README in the `test` subdirectory.
Copyright © ICCS
FTorch is distributed under the MIT Licence.
Contributions and collaborations are welcome.
For bugs, feature requests, and clear suggestions for improvement please open an issue.
If you have built something upon FTorch that would be useful to others, or can address an open issue, please fork the repository and open a pull request.
Detailed guidelines can be found in the online developer documentation.
Everyone participating in the FTorch project, and in particular in the issue tracker, pull requests, and social media activity, is expected to treat other people with respect and, more generally, to follow the guidelines articulated in the Python Community Code of Conduct.
FTorch is written and maintained by the ICCS. See Contributors for a full list of those who have contributed to this project.
The following projects make use of this code or derivatives in some way:
- M2LInES CAM-ML
- DataWave CAM-GW
- DataWave - MiMA ML
  See Mansfield and Sheshadri (2024) - DOI: 10.1029/2024MS004292
- Convection parameterisations in ICON
  See Heuer et al. (2023) - DOI: 10.48550/arXiv.2311.03251
Are we missing anyone? Let us know.