Cause cuDNN errors #9

Open
ZhiZe-ZG opened this issue Apr 24, 2024 · 1 comment

@ZhiZe-ZG

Here is the error information:

 python .\train_Unet_CIFAR.py
None
Using 16bit Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
`Trainer(limit_train_batches=1.0)` was configured so 100% of the batches per epoch will be used..
Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/2
[W socket.cpp:697] [c10d] The client socket has failed to connect to [MMD]:3288 (system error: 10049 - The requested address is not valid in its context.).
None
Initializing distributed: GLOBAL_RANK: 1, MEMBER: 2/2
[W socket.cpp:697] [c10d] The client socket has failed to connect to [MMD]:3288 (system error: 10049 - The requested address is not valid in its context.).
----------------------------------------------------------------------------------------------------
distributed_backend=gloo
All distributed processes registered. Starting with 2 processes
----------------------------------------------------------------------------------------------------

Files already downloaded and verified
Files already downloaded and verified
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]
LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [0,1]

  | Name  | Type  | Params
--------------------------------
0 | model | XUnet | 5.9 M
--------------------------------
5.9 M     Trainable params
0         Non-trainable params
5.9 M     Total params
23.454    Total estimated model params size (MB)
Sanity Checking: |                                                       | 0/? [00:00<?, ?it/s]E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\trainer\connectors\data_connector.py:441: The 'val_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\trainer\connectors\data_connector.py:441: The 'train_dataloader' does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` to `num_workers=11` in the `DataLoader` to improve performance.
Epoch 0:   0%|                                                         | 0/704 [00:00<?, ?it/s]E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\trainer\connectors\logger_connector\result.py:213: You called `self.log('over_error', ...)` in your `training_step` but the value needs to be floating to be reduced. Converting it to torch.float32. You can silence this warning by converting the value to floating point yourself. If you don't intend to reduce the value (for instance when logging the global step or epoch) then you can use `self.logger.log_metrics({'over_error': ...})` instead.
[rank0]:[W reducer.cpp:1367] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration,  which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator ())
[rank1]:[W reducer.cpp:1367] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration,  which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator ())
Traceback (most recent call last):
  File "E:\GitRepositories\HRE\train_Unet_CIFAR.py", line 45, in <module>
    unet_train()
  File "E:\GitRepositories\HRE\CodeLib\Trainer\Unet.py", line 106, in unet_train
    trainer.fit(model_module, data_module)
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\trainer\trainer.py", line 544, in fit
    call._call_and_handle_interrupt(
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\trainer\call.py", line 43, in _call_and_handle_interrupt
    return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\strategies\launchers\subprocess_script.py", line 105, in launch
    return function(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\trainer\trainer.py", line 580, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\trainer\trainer.py", line 987, in _run
    results = self._run_stage()
              ^^^^^^^^^^^^^^^^^
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1033, in _run_stage
    self.fit_loop.run()
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\loops\fit_loop.py", line 205, in run
    self.advance()
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\loops\fit_loop.py", line 363, in advance
    self.epoch_loop.run(self._data_fetcher)
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\loops\training_epoch_loop.py", line 140, in run
    self.advance(data_fetcher)
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\loops\training_epoch_loop.py", line 250, in advance
    batch_output = self.automatic_optimization.run(trainer.optimizers[0], batch_idx, kwargs)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\loops\optimization\automatic.py", line 190, in run
    self._optimizer_step(batch_idx, closure)
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\loops\optimization\automatic.py", line 268, in _optimizer_step
    call._call_lightning_module_hook(
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\trainer\call.py", line 157, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\core\module.py", line 1303, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\core\optimizer.py", line 152, in step
    step_output = self._strategy.optimizer_step(self._optimizer, closure, **kwargs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\strategies\ddp.py", line 270, in optimizer_step
    optimizer_output = super().optimizer_step(optimizer, closure, model, **kwargs)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\strategies\strategy.py", line 239, in optimizer_step
    return self.precision_plugin.optimizer_step(optimizer, model=model, closure=closure, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\plugins\precision\amp.py", line 80, in optimizer_step
    closure_result = closure()
                     ^^^^^^^^^
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\loops\optimization\automatic.py", line 144, in __call__
    self._result = self.closure(*args, **kwargs)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\loops\optimization\automatic.py", line 138, in closure
    self._backward_fn(step_output.closure_loss)
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\loops\optimization\automatic.py", line 239, in backward_fn
    call._call_strategy_hook(self.trainer, "backward", loss, optimizer)
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\trainer\call.py", line 309, in _call_strategy_hook
    output = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\strategies\strategy.py", line 213, in backward
    self.precision_plugin.backward(closure_loss, self.lightning_module, optimizer, *args, **kwargs)
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\plugins\precision\precision.py", line 72, in backward
    model.backward(tensor, *args, **kwargs)
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\core\module.py", line 1090, in backward
    loss.backward(*args, **kwargs)
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\torch\_tensor.py", line 522, in backward
    torch.autograd.backward(
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\torch\autograd\__init__.py", line 266, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED
Traceback (most recent call last):
  File "E:\GitRepositories\HRE\train_Unet_CIFAR.py", line 45, in <module>
    unet_train()
  File "E:\GitRepositories\HRE\CodeLib\Trainer\Unet.py", line 106, in unet_train
    trainer.fit(model_module, data_module)
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\trainer\trainer.py", line 544, in fit
    call._call_and_handle_interrupt(
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\trainer\call.py", line 43, in _call_and_handle_interrupt
    return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\strategies\launchers\subprocess_script.py", line 105, in launch
    return function(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\trainer\trainer.py", line 580, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\trainer\trainer.py", line 987, in _run
    results = self._run_stage()
              ^^^^^^^^^^^^^^^^^
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1033, in _run_stage
    self.fit_loop.run()
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\loops\fit_loop.py", line 205, in run
    self.advance()
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\loops\fit_loop.py", line 363, in advance
    self.epoch_loop.run(self._data_fetcher)
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\loops\training_epoch_loop.py", line 140, in run
    self.advance(data_fetcher)
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\loops\training_epoch_loop.py", line 250, in advance
    batch_output = self.automatic_optimization.run(trainer.optimizers[0], batch_idx, kwargs)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\loops\optimization\automatic.py", line 190, in run
    self._optimizer_step(batch_idx, closure)
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\loops\optimization\automatic.py", line 268, in _optimizer_step
    call._call_lightning_module_hook(
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\trainer\call.py", line 157, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\core\module.py", line 1303, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\core\optimizer.py", line 152, in step
    step_output = self._strategy.optimizer_step(self._optimizer, closure, **kwargs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\strategies\ddp.py", line 270, in optimizer_step
    optimizer_output = super().optimizer_step(optimizer, closure, model, **kwargs)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\strategies\strategy.py", line 239, in optimizer_step
    return self.precision_plugin.optimizer_step(optimizer, model=model, closure=closure, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\plugins\precision\amp.py", line 80, in optimizer_step
    closure_result = closure()
                     ^^^^^^^^^
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\loops\optimization\automatic.py", line 144, in __call__
    self._result = self.closure(*args, **kwargs)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\loops\optimization\automatic.py", line 138, in closure
    self._backward_fn(step_output.closure_loss)
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\loops\optimization\automatic.py", line 239, in backward_fn
    call._call_strategy_hook(self.trainer, "backward", loss, optimizer)
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\trainer\call.py", line 309, in _call_strategy_hook
    output = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\strategies\strategy.py", line 213, in backward
    self.precision_plugin.backward(closure_loss, self.lightning_module, optimizer, *args, **kwargs)
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\plugins\precision\precision.py", line 72, in backward
    model.backward(tensor, *args, **kwargs)
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\pytorch_lightning\core\module.py", line 1090, in backward
    loss.backward(*args, **kwargs)
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\torch\_tensor.py", line 522, in backward
    torch.autograd.backward(
  File "E:\CondaEnvs\hre2_windows\Lib\site-packages\torch\autograd\__init__.py", line 266, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED
Epoch 0:   0%|          | 0/704 [00:01<?, ?it/s]

My system is Windows 11 with two GTX 1060 GPUs, and I am using PyTorch Lightning to automate multi-GPU training.
I have never met this error before with my own code. Other U-Net implementations (for example, https://github.com/milesial/Pytorch-UNet) work fine.
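
For reference, the Trainer is configured roughly like this. This is a minimal sketch reconstructed from the log output above, not the exact code in train_Unet_CIFAR.py; the DDP/gloo backend, 16-bit AMP, and find_unused_parameters=True settings are the ones the log reports:

import pytorch_lightning as pl
from pytorch_lightning.strategies import DDPStrategy

# Setup implied by the log: 2 GPUs, DDP over the gloo backend
# (NCCL is not available on Windows), 16-bit AMP, and
# find_unused_parameters=True (see the reducer warning in the log).
trainer = pl.Trainer(
    accelerator="gpu",
    devices=2,
    strategy=DDPStrategy(process_group_backend="gloo", find_unused_parameters=True),
    precision="16-mixed",
    limit_train_batches=1.0,
)
# trainer.fit(model_module, data_module)  # as in CodeLib/Trainer/Unet.py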

I have tried turning off the cuDNN benchmark autotuner:

import torch.backends.cudnn as cudnn
cudnn.benchmark = False  # disable autotuning of convolution algorithms

It did not help.

I also reduced the dims of x-unet, but it still fails.
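
Two more isolation steps that might narrow this down. This is a sketch of standard PyTorch debugging toggles, untested on this setup:

import os

# Report CUDA errors synchronously, so the traceback points at the
# failing kernel instead of a later call. Must be set before CUDA
# is initialized (i.e. before the first CUDA operation).
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch.backends.cudnn as cudnn

# Disable cuDNN entirely: PyTorch then falls back to its native
# convolution kernels. If training succeeds like this, the failure
# is specific to cuDNN rather than CUDA in general.
cudnn.enabled = False

# Separately, running the Trainer with precision="32-true" instead of
# "16-mixed" would rule out the AMP path as the trigger.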

@ZhiZe-ZG (Author)

Here is my environment .yml file:

channels:
  - pytorch
  - nvidia
  - conda-forge
  - defaults
dependencies:
  - abseil-cpp=20230802.0=h5da7b33_1
  - absl-py=2.1.0=pyhd8ed1ab_0
  - aom=3.6.0=hd77b12b_0
  - asttokens=2.4.1=pyhd8ed1ab_0
  - blas=1.0=mkl
  - blosc=1.21.3=h6c2663c_0
  - brotli=1.0.9=h2bbff1b_7
  - brotli-bin=1.0.9=h2bbff1b_7
  - bzip2=1.0.8=h2bbff1b_5
  - c-ares=1.19.1=h2bbff1b_0
  - ca-certificates=2024.2.2=h56e8100_0
  - certifi=2024.2.2=pyhd8ed1ab_0
  - cfitsio=3.470=h2bbff1b_7
  - charls=2.2.0=h6c2663c_0
  - charset-normalizer=2.0.4=pyhd3eb1b0_0
  - colorama=0.4.6=pyhd8ed1ab_0
  - contourpy=1.2.0=py312h59b6b97_0
  - cuda-cccl=12.4.127=0
  - cuda-cudart=11.8.89=0
  - cuda-cudart-dev=11.8.89=0
  - cuda-cupti=11.8.87=0
  - cuda-libraries=11.8.0=0
  - cuda-libraries-dev=11.8.0=0
  - cuda-nvrtc=11.8.89=0
  - cuda-nvrtc-dev=11.8.89=0
  - cuda-nvtx=11.8.86=0
  - cuda-profiler-api=12.4.127=0
  - cuda-runtime=11.8.0=0
  - cycler=0.12.1=pyhd8ed1ab_0
  - dav1d=1.2.1=h2bbff1b_0
  - decorator=5.1.1=pyhd8ed1ab_0
  - exceptiongroup=1.2.0=pyhd8ed1ab_2
  - executing=2.0.1=pyhd8ed1ab_0
  - expat=2.6.2=hd77b12b_0
  - filelock=3.13.1=py312haa95532_0
  - fonttools=4.25.0=pyhd3eb1b0_0
  - freetype=2.12.1=ha860e81_0
  - fsspec=2024.3.1=pyhca7485f_0
  - giflib=5.2.1=h8cc25b3_3
  - grpc-cpp=1.48.2=h6772dbd_4
  - grpcio=1.48.2=py312h6772dbd_4
  - icc_rt=2022.1.0=h6049295_2
  - icu=73.1=h6c2663c_0
  - idna=3.4=py312haa95532_0
  - imagecodecs=2023.1.23=py312hd5bf116_1
  - imageio=2.34.1=pyh4b66e23_0
  - importlib-metadata=7.1.0=pyha770c72_0
  - intel-openmp=2023.1.0=h59b6b97_46320
  - ipython=8.22.2=pyh7428d3b_0
  - jedi=0.19.1=pyhd8ed1ab_0
  - jinja2=3.1.3=py312haa95532_0
  - joblib=1.4.0=pyhd8ed1ab_0
  - jpeg=9e=h2bbff1b_1
  - kiwisolver=1.4.4=py312hd77b12b_0
  - krb5=1.20.1=h5b6d351_0
  - lazy_loader=0.4=pyhd8ed1ab_0
  - lcms2=2.12=h83e58a3_0
  - lerc=3.0=hd77b12b_0
  - libaec=1.0.4=h33f27b4_1
  - libavif=0.11.1=h2bbff1b_0
  - libbrotlicommon=1.0.9=h2bbff1b_7
  - libbrotlidec=1.0.9=h2bbff1b_7
  - libbrotlienc=1.0.9=h2bbff1b_7
  - libclang=14.0.6=default_hb5a9fac_1
  - libclang13=14.0.6=default_h8e68704_1
  - libcublas=11.11.3.6=0
  - libcublas-dev=11.11.3.6=0
  - libcufft=10.9.0.58=0
  - libcufft-dev=10.9.0.58=0
  - libcurand=10.3.5.147=0
  - libcurand-dev=10.3.5.147=0
  - libcusolver=11.4.1.48=0
  - libcusolver-dev=11.4.1.48=0
  - libcusparse=11.7.5.86=0
  - libcusparse-dev=11.7.5.86=0
  - libdeflate=1.17=h2bbff1b_1
  - libffi=3.4.4=hd77b12b_0
  - libjpeg-turbo=2.0.0=h196d8e1_0
  - libnpp=11.8.0.86=0
  - libnpp-dev=11.8.0.86=0
  - libnvjpeg=11.9.0.86=0
  - libnvjpeg-dev=11.9.0.86=0
  - libpng=1.6.39=h8cc25b3_0
  - libpq=12.17=h906ac69_0
  - libprotobuf=3.20.3=h23ce68f_0
  - libtiff=4.5.1=hd77b12b_0
  - libuv=1.44.2=h2bbff1b_0
  - libwebp-base=1.3.2=h2bbff1b_0
  - libzopfli=1.0.3=h0e60522_0
  - lightning=2.2.2=pyhd8ed1ab_0
  - lightning-utilities=0.11.2=pyhd8ed1ab_0
  - lz4-c=1.9.4=h2bbff1b_0
  - markdown=3.6=pyhd8ed1ab_0
  - markupsafe=2.1.3=py312h2bbff1b_0
  - matplotlib=3.8.4=py312haa95532_0
  - matplotlib-base=3.8.4=py312hc7c4135_0
  - matplotlib-inline=0.1.7=pyhd8ed1ab_0
  - mkl=2023.1.0=h6b88ed4_46358
  - mkl-service=2.4.0=py312h2bbff1b_1
  - mkl_fft=1.3.8=py312h2bbff1b_0
  - mkl_random=1.2.4=py312h59b6b97_0
  - mpmath=1.3.0=py312haa95532_0
  - munkres=1.1.4=pyh9f0ad1d_0
  - networkx=3.1=py312haa95532_0
  - numpy=1.26.4=py312hfd52020_0
  - numpy-base=1.26.4=py312h4dde369_0
  - openjpeg=2.4.0=h4fc8c34_0
  - openssl=3.0.13=h2bbff1b_0
  - packaging=24.0=pyhd8ed1ab_0
  - parso=0.8.4=pyhd8ed1ab_0
  - pickleshare=0.7.5=py_1003
  - pillow=10.2.0=py312h2bbff1b_0
  - pip=23.3.1=py312haa95532_0
  - ply=3.11=pyhd8ed1ab_2
  - prompt-toolkit=3.0.42=pyha770c72_0
  - protobuf=3.20.3=py312hd77b12b_0
  - pure_eval=0.2.2=pyhd8ed1ab_0
  - pygments=2.17.2=pyhd8ed1ab_0
  - pyparsing=3.0.9=pyhd8ed1ab_0
  - pyqt=5.15.10=py312hd77b12b_0
  - pyqt5-sip=12.13.0=py312h2bbff1b_0
  - python=3.12.3=h1d929f7_0
  - python-dateutil=2.9.0=pyhd8ed1ab_0
  - pytorch=2.2.2=py3.12_cuda11.8_cudnn8_0
  - pytorch-cuda=11.8=h24eeafa_5
  - pytorch-lightning=2.2.2=pyhd8ed1ab_0
  - pytorch-mutex=1.0=cuda
  - pyyaml=6.0.1=py312h2bbff1b_0
  - qt-main=5.15.2=h19c9488_10
  - re2=2022.04.01=h0e60522_0
  - requests=2.31.0=py312haa95532_1
  - scikit-image=0.22.0=py312h20b63e8_0
  - scikit-learn=1.3.0=py312hc7c4135_2
  - scipy=1.12.0=py312hbb039d4_0
  - setuptools=68.2.2=py312haa95532_0
  - sip=6.7.12=py312hd77b12b_0
  - six=1.16.0=pyh6c4a22f_0
  - snappy=1.1.10=h6c2663c_1
  - sqlite=3.41.2=h2bbff1b_0
  - stack_data=0.6.2=pyhd8ed1ab_0
  - sympy=1.12=py312haa95532_0
  - tbb=2021.8.0=h59b6b97_0
  - tensorboard=2.16.2=pyhd8ed1ab_0
  - tensorboard-data-server=0.7.0=py312haa95532_0
  - threadpoolctl=3.4.0=pyhc1e730c_0
  - tifffile=2023.2.28=pyhd8ed1ab_0
  - tk=8.6.12=h2bbff1b_0
  - torchmetrics=1.3.2=pyhd8ed1ab_0
  - tornado=6.3.3=py312h2bbff1b_0
  - tqdm=4.66.2=pyhd8ed1ab_0
  - traitlets=5.14.3=pyhd8ed1ab_0
  - typing-extensions=4.9.0=py312haa95532_1
  - typing_extensions=4.9.0=py312haa95532_1
  - tzdata=2024a=h04d1e81_0
  - urllib3=2.1.0=py312haa95532_0
  - vc=14.2=h21ff451_1
  - vs2015_runtime=14.27.29016=h5e58377_2
  - wcwidth=0.2.13=pyhd8ed1ab_0
  - werkzeug=3.0.2=pyhd8ed1ab_0
  - wheel=0.41.2=py312haa95532_0
  - xz=5.4.6=h8cc25b3_0
  - yaml=0.2.5=he774522_0
  - zfp=1.0.0=hd77b12b_0
  - zipp=3.17.0=pyhd8ed1ab_0
  - zlib=1.2.13=h8cc25b3_0
  - zstd=1.5.5=hd43e919_0
  - pip:
      - beartype==0.18.5
      - einops==0.7.0
      - torchaudio==2.2.2
      - torchvision==0.17.2
      - x-unet==0.3.1
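
For completeness, the versions the runtime actually loads can be confirmed with a few lines of Python (a quick sketch; the expected values follow from the environment above):

import torch

print(torch.__version__)               # expected: 2.2.2
print(torch.version.cuda)              # expected: 11.8
print(torch.backends.cudnn.version())  # the cuDNN 8.x build shipped with this package
print(torch.cuda.device_count())       # expected: 2
print(torch.cuda.get_device_name(0))   # expected: a GTX 1060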
