Hi there,

Thanks a lot for all your great repos and implementations!
I wanted to try this for a segmentation problem, but I've had issues training on Colab's 40 GB GPU with 256x256 inputs.
The model I want to use is initialized like so:
```python
import torch
from x_unet import XUnet

target_shape = 256              # 256x256 inputs, as described above
device = torch.device('cuda')

gen = XUnet(
    dim = target_shape,
    channels = 3,
    dim_mults = (1, 2, 4, 4),
    nested_unet_depths = (4, 3, 2, 1),     # nested unet depths, from the unet-squared paper
    consolidate_upsample_fmaps = True,     # whether to consolidate outputs from all upsample blocks, as in the unet-squared paper
).to(device)
```
Is there a trick to get this running, or what do you estimate the required memory to be?
I set pin_memory to False, which helped a little, but I still wasn't able to complete a single forward pass with batch_size = 1.
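For reference, a minimal sketch of how the data loading is set up on my end. The dataset here is a dummy stand-in (random tensors at 256x256); the real segmentation dataset isn't relevant to the memory question:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy stand-in for the real segmentation dataset (256x256 RGB images + masks).
images = torch.randn(8, 3, 256, 256)
masks  = torch.randint(0, 2, (8, 1, 256, 256)).float()
dataset = TensorDataset(images, masks)

loader = DataLoader(
    dataset,
    batch_size = 1,        # even a single sample did not fit
    shuffle = True,
    pin_memory = False,    # turning this off freed a small amount of memory
)
```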
I also noticed that most of the memory is reserved rather than allocated, regardless of the input size (always around 35-38 GB).
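This is roughly how I'm reading out the reserved vs. allocated numbers (a minimal sketch, assuming a CUDA device is available):

```python
import torch

def print_gpu_memory(tag: str = ""):
    # Allocated: memory actually occupied by live tensors.
    # Reserved: memory held by PyTorch's caching allocator, which is
    # what shows up as "used" in nvidia-smi.
    allocated_gb = torch.cuda.memory_allocated() / 1024**3
    reserved_gb  = torch.cuda.memory_reserved() / 1024**3
    print(f"{tag} allocated: {allocated_gb:.2f} GB | reserved: {reserved_gb:.2f} GB")

# e.g. called before and after a forward pass:
# print_gpu_memory("before forward")
# out = gen(batch)
# print_gpu_memory("after forward")
```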