
pipeline using all of the available memory #79

Open
TheBerberiCode opened this issue Nov 3, 2020 · 11 comments

@TheBerberiCode

Hi,

We are running this pipeline on an Ubuntu 20.04 host. When the pipeline runs, it begins to consume RAM and keeps consuming it until all available system memory has been used up, at which point the process terminates. We have tried adding more RAM and CPU to the machine, but the outcome is the same: memory is eventually exhausted by Python processes that are each using multiple GB of RAM. Any idea why this might be happening? Appreciate the help.

@william-silversmith
Contributor

william-silversmith commented Nov 3, 2020 via email

@bluehorseshoe1

bluehorseshoe1 commented Nov 3, 2020

import sys
import os
from concurrent.futures import ProcessPoolExecutor

import numpy as np
from PIL import Image
from cloudvolume import CloudVolume
from cloudvolume.lib import mkdir, touch

Image.MAX_IMAGE_PIXELS = None

info = CloudVolume.create_new_info(
    num_channels=1,
    layer_type='image',  # 'image' or 'segmentation'
    data_type='uint8',  # can pick any popular uint
    encoding='raw',  # other options: 'jpeg', 'compressed_segmentation' (req. uint32 or uint64)
    resolution=[6, 6, 6],  # X,Y,Z values in nanometers
    voxel_offset=[0, 0, 0],  # X,Y,Z values in voxels
    chunk_size=[2048, 2048, 1],  # rechunk of image X,Y,Z in voxels
    volume_size=[15068, 17500, 13892],  # X,Y,Z size in voxels
)


try:
  vol = CloudVolume('file:///data01/output', info=info, compress='gzip')

  vol.commit_info()  # generates gs://bucket/dataset/layer/info json file

  direct = '/data01/200121_B2_final'

  progress_dir = mkdir('progress')  # unlike os.mkdir, doesn't crash on preexisting
  done_files = set([int(z) for z in os.listdir(progress_dir)])
  all_files = set(range(vol.bounds.minpt.z, vol.bounds.maxpt.z))

  to_upload = [int(z) for z in list(all_files.difference(done_files))]
  to_upload.sort()
except IOError as err:
  errno, strerror = err.args
  print ('I/O error({0}): {1}'.format(errno, strerror))
  print (err)
except ValueError as ve:
  print ('Could not convert data to an integer.')
  print (ve)
except:
  print ('Unexpected error:', sys.exc_info()[0])
  raise

def process(z):
    try:
      img_name = 'left_resliced%05d.tif' % z
      print('Processing ', img_name)
      image = Image.open(os.path.join(direct, img_name))
      (width, height) = image.size
      array = np.array(list(image.getdata()), dtype=np.uint8, order='F')
      array = array.reshape((1, height, width)).T
      vol[:, :, z] = array
      image.close()
      touch(os.path.join(progress_dir, str(z)))
    except IOError as err:
      errno, strerror = err.args
      print ('I/O error({0}): {1}'.format(errno, strerror))
      print (err)
    except ValueError as ve:
      print ('Could not convert data to an integer.')
      print (ve)
    except:
      print ('Unexpected error:', sys.exc_info()[0])
      raise


with ProcessPoolExecutor(max_workers=4) as executor:
    executor.map(process, to_upload)

@bluehorseshoe1

bluehorseshoe1 commented Nov 3, 2020

Just for some more info: we are coming at this from an IT infrastructure perspective, assisting a lab with getting this set up on a host, so we are somewhat learning as we go along. We are running the percomputed_image.py script above. We had a very large EC2 instance provisioned in AWS (r5dn.8xlarge: 32 vCPU and 256 GB RAM), and we are attempting to run against 13,000 .tif files with 4 max workers.

Our last test ran for about 3 hours before consuming all of the memory and crashing.

@bluehorseshoe1

[Screenshot attached: "Screen Shot 2020-11-02 at 9 01 32 PM"]

@william-silversmith
Contributor

This is pretty weird. It should be using a bit more than 10GB with 4 processes. Does this kind of memory growth happen if you run it like a regular script without the multiprocessing? How far did the process get in terms of slices before crashing? Part of me wonders if there's a dangling reference to the image somewhere.

Another thing you can try is putting the CV initialization inside of process. That should prevent any weird references from persisting. Since you're writing to disk, fetching the info file will be fast. Let me know what happens, if there's a memory leak in CV I'd want to fix it.
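
A minimal sketch of that change, keeping the rest of the posted script as-is (error handling omitted for brevity). Since the layer's info file already exists on disk, constructing the CloudVolume per slice is cheap:

def process(z):
    # Construct the CloudVolume inside the worker so no heavyweight state is
    # captured by the forked processes; the info file is re-read from disk.
    vol = CloudVolume('file:///data01/output', compress='gzip')
    img_name = 'left_resliced%05d.tif' % z
    image = Image.open(os.path.join(direct, img_name))
    (width, height) = image.size
    array = np.array(list(image.getdata()), dtype=np.uint8, order='F')
    array = array.reshape((1, height, width)).T
    vol[:, :, z] = array
    image.close()
    touch(os.path.join(progress_dir, str(z)))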

@cnzqy1

cnzqy1 commented Nov 6, 2020

Just to follow up with this issue, running this script locally works fine and it never used more than 24 GB of RAM with 8 processes, as you pointed out.

To get around the issue with the first script, I ran it locally to create [8960, 8960, 1] chunks, uploaded them to the server, and then ran the following script to rechunk. I was expecting this to use <64 GB of memory total, but it completely filled up 128 GB of memory, the process became extremely slow, and it eventually crashed the host server. Similarly, this script works fine locally but doesn't run properly on EC2.

from taskqueue import LocalTaskQueue
import igneous.task_creation as tc

src_layer_path = 'file://output'
dest_layer_path = 'file://output2'

with LocalTaskQueue(parallel=8) as tq:
  tasks = tc.create_transfer_tasks(
    src_layer_path, dest_layer_path, 
    chunk_size=(64,64,64), skip_downsamples=True, compress='gzip'
  )
  tq.insert_all(tasks)

print("Done!")

@william-silversmith
Contributor

william-silversmith commented Nov 6, 2020 via email

@cnzqy1

cnzqy1 commented Nov 6, 2020

The filesystem is SSD block storage (XFS) mounted directly on the EC2 instance. I ran exactly the code above on the server over SSH, using the file:// protocol.

@william-silversmith
Contributor

This is pretty weird. The exact same codepath is going to be executed in both situations. I use an SSD filesystem on my local machine, but it's MacOS so the major differences would be Linux and XFS. Given we extensively use igneous with Linux, XFS seems to be the odd man out. I don't think I've ever tested with that filesystem.

How does the script perform in single-process mode? As 8 independent processes? Can you check to see if contents are getting written to disk or is the OS buffer filling up until everything explodes?
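
For reference, a single-process run of the transfer would look something like the sketch below; the only changes from the script posted above are parallel=1 and the explicit taskqueue import:

from taskqueue import LocalTaskQueue
import igneous.task_creation as tc

src_layer_path = 'file://output'
dest_layer_path = 'file://output2'

# parallel=1 executes the tasks in the current process, with no forked workers
with LocalTaskQueue(parallel=1) as tq:
  tasks = tc.create_transfer_tasks(
    src_layer_path, dest_layer_path,
    chunk_size=(64,64,64), skip_downsamples=True, compress='gzip'
  )
  tq.insert_all(tasks)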

@cnzqy1

cnzqy1 commented Nov 9, 2020

I tried Ext4 and the same issue happened. I didn't check using it in single-process mode. Contents are being written to disk as expected. For now I'm just going to run igneous locally and upload the results to the server.

@william-silversmith
Contributor

The memory usage is pretty abnormal. I'll keep my eye open for more instances of this. If you end up wanting to debug it, I'll be happy to follow along and provide help.
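
If it helps with debugging, a small hypothetical helper like the one below, called from inside process() or a task loop, would log each worker's resident memory per slice (assumes psutil is installed):

import os
import psutil

def log_rss(tag):
    # Print the resident set size of the current worker process in GB.
    rss_gb = psutil.Process(os.getpid()).memory_info().rss / 1024 ** 3
    print('[pid %d] %s: %.2f GB RSS' % (os.getpid(), tag, rss_gb))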
