
Dnn ops #734

Draft · wants to merge 20 commits into main

Commits on Aug 19, 2024

  1. [DAPHNE-daphne-eu#758] MetaDataObject for CSRMatrix

    * This commit introduces the meta data object to the CSR data type.

    * Memory pinning

    To prevent excessive allocation ID lookups in the hot path when using --vec, this change "pins" memory by the allocation type of previous accesses (sketched below).
    corepointer committed Aug 19, 2024 · 59944e7
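
    A minimal sketch of the pinning idea, with hypothetical names (not the actual DAPHNE interface): the result of the previous allocation lookup is cached per allocation type, so repeated accesses skip the ID lookup in the hot path.

    ```cpp
    #include <cstdint>
    #include <unordered_map>

    enum class AllocationType { HOST, CUDA };

    struct MetaDataObject {
        std::unordered_map<uint64_t, void*> allocations;  // alloc ID -> buffer
        std::unordered_map<AllocationType, void*> pinned; // previous access per type

        void* get(AllocationType type, uint64_t allocId) {
            if (auto it = pinned.find(type); it != pinned.end())
                return it->second;               // fast path: reuse pinned buffer
            void* buf = allocations.at(allocId); // slow path: full ID lookup
            pinned[type] = buf;                  // pin for subsequent accesses
            return buf;
        }

        void invalidate() { pinned.clear(); }    // e.g., when allocations change
    };
    ```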
  2. ca1210e
  3. [BUGFIX] LoadPartitioningDistributed crashed

    Due to the use of a pointer to a local variable, the distributed (GRPC_SYNC) mode crashed in test cases. This patch fixes the issue by using std::unique_ptr appropriately (see the sketch below).
    corepointer committed Aug 19, 2024 · 4e0ae52
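
    For illustration, the general bug pattern and its std::unique_ptr fix (hypothetical types, not the actual LoadPartitioningDistributed code):

    ```cpp
    #include <memory>

    struct Task { int payload; };

    // Buggy pattern: returning the address of a local variable leaves the
    // caller with a dangling pointer once the function returns.
    Task* makeTaskBroken() {
        Task t{42};
        return &t; // dangles
    }

    // Fix: transfer ownership via std::unique_ptr so the object outlives
    // the function scope and is freed exactly once.
    std::unique_ptr<Task> makeTask() {
        return std::make_unique<Task>(Task{42});
    }
    ```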
  4. [DAPHNE-#xyz1] DNN Ops Type/Shape Inference; NN Namespace

    batch_norm2d shape & type inference

    affine shape/type inference

    softmax namespace, shape & type inference

    "Fixes" the shape inference of some DNN ops by returning -1 instead of trying to calculate proper dimensions from data that is sometimes simply not there (see the sketch below):

    * getShape(getInput()) returns dimensions when the input comes from readMatrix(), but not when it comes from rand()?!
    * getInputHeight()/getInputWidth() do not return proper dimensions
    corepointer committed Aug 19, 2024 · aaac9a7
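
    A sketch of the "-1 for unknown" convention (hypothetical helper, not DAPHNE's actual inference code): unknown input extents propagate as -1 instead of being computed from unavailable data.

    ```cpp
    #include <cstdint>

    constexpr int64_t UNKNOWN = -1; // dimension not known at compile time

    // Output extent of a conv2d along one spatial dimension.
    int64_t conv2dOutExtent(int64_t in, int64_t kernel, int64_t pad, int64_t stride) {
        if (in == UNKNOWN)
            return UNKNOWN; // e.g., input produced by rand(): stays unknown
        return (in + 2 * pad - kernel) / stride + 1;
    }
    ```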
  5. [DAPHNE-#xyz2] ReLU backward pass

    This commit adds the necessary code changes to call cuDNN's activation backward function with the ReLU activation type (see the sketch below). No tests yet.
    corepointer committed Aug 19, 2024 · 06bff26
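
    A hedged sketch of the cuDNN call this commit wires up; descriptor and buffer setup (handle, tensor descriptor, device pointers) is assumed to happen elsewhere, and the wrapper name is hypothetical.

    ```cpp
    #include <cudnn.h>

    // dx = ReLU'(x) applied to the incoming gradient dy.
    void reluBackward(cudnnHandle_t handle,
                      cudnnTensorDescriptor_t desc, // shared NCHW layout of x/y/dy/dx
                      const float* y, const float* dy, const float* x, float* dx) {
        cudnnActivationDescriptor_t actDesc;
        cudnnCreateActivationDescriptor(&actDesc);
        cudnnSetActivationDescriptor(actDesc, CUDNN_ACTIVATION_RELU,
                                     CUDNN_NOT_PROPAGATE_NAN, /*coef=*/0.0);
        const float alpha = 1.0f, beta = 0.0f;
        cudnnActivationBackward(handle, actDesc, &alpha,
                                desc, y,  // forward output
                                desc, dy, // incoming gradient
                                desc, x,  // forward input
                                &beta, desc, dx);
        cudnnDestroyActivationDescriptor(actDesc);
    }
    ```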
  6. d6c4e1e
  7. [DAPHNE-#xyz4] Neural Network DaphneDSL Scripts

    Contains ports from SystemDS:
    * script-level alternatives for pooling, convolution, etc.
    * wrapper scripts for DAPHNE builtins (conv2d() -> conv2d.forward(), etc.)
    * script path added to the default UserConfig.json
    corepointer committed Aug 19, 2024 · 511e379
  8. [DAPHNE-#xyz5] LeNet Example Pipeline for MNIST character classification

    Example ported from SystemDS
    corepointer committed Aug 19, 2024 · f4651b1
  9. [DAPHNE-#xyz6] WIP Script level tests for DNN ops

    Currently supported: relu & conv2d.

    The tests are failing at the moment.
    corepointer committed Aug 19, 2024 · 6d322c4
  10. [DAPHNE-#xyz7] Make DaphneContext global

    This change makes the DaphneContext object global to avoid its creation/destruction in every UDF. The global context is passed through the UserConfig as a pointer cast to int64 (sketched below).
    corepointer committed Aug 19, 2024 · a0c7d9a
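
    The pointer-through-config mechanism, sketched with hypothetical names: the context pointer is cast to int64 on the way into the UserConfig and cast back inside each UDF.

    ```cpp
    #include <cstdint>

    struct DaphneContext { /* runtime state, e.g., CUDA handles */ };

    int64_t encodeContext(DaphneContext* ctx) {
        return reinterpret_cast<int64_t>(ctx); // store in the UserConfig
    }

    DaphneContext* decodeContext(int64_t configValue) {
        return reinterpret_cast<DaphneContext*>(configValue); // recover in a UDF
    }
    ```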
  11. [BUGFIX] AggAll CUDA launch config

    Fixes the launch configurations of the CUDA full-aggregation (AggAll) kernels, which now loop over their input (see the grid-stride sketch below), and adds logging for AggAll.
    corepointer committed Aug 19, 2024 · 7a652ba
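
    "Now looping" presumably refers to the standard grid-stride pattern; a generic sketch (not DAPHNE's actual kernel) in which the launch configuration no longer needs one thread per element:

    ```cpp
    // Each thread strides over the input by the total thread count.
    __global__ void sumAll(const float* in, float* out, size_t n) {
        float acc = 0.0f;
        for (size_t i = static_cast<size_t>(blockIdx.x) * blockDim.x + threadIdx.x;
             i < n; i += static_cast<size_t>(gridDim.x) * blockDim.x)
            acc += in[i];
        atomicAdd(out, acc); // simplistic reduction; real kernels reduce per block
    }
    ```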
  12. cd58be5
  13. [MINOR] CUDA EwBinaryObjSca MIN/MAX/NEQ

    Adds more operators (MIN, MAX, NEQ) to apply elementwise between a matrix and a scalar (see the sketch below).
    corepointer committed Aug 19, 2024 · f0ffae5
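
    Illustrative device functors for the new operators, applied between a matrix and a scalar (hypothetical kernel shape, not the actual EwBinaryObjSca implementation):

    ```cpp
    struct MinOp { __device__ float operator()(float a, float b) const { return a < b ? a : b; } };
    struct MaxOp { __device__ float operator()(float a, float b) const { return a > b ? a : b; } };
    struct NeqOp { __device__ float operator()(float a, float b) const { return a != b ? 1.0f : 0.0f; } };

    template <class Op>
    __global__ void ewBinaryObjSca(float* res, const float* lhs, float rhs, size_t n, Op op) {
        size_t i = static_cast<size_t>(blockIdx.x) * blockDim.x + threadIdx.x;
        if (i < n)
            res[i] = op(lhs[i], rhs);
    }
    ```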
  14. f56c270
  15. [MINOR] Cleanup & bugfix CUDA EwBinaryMat to handle mats of 1x1

    Handling the 1x1-matrix case here (sketched below) should no longer be needed once the compiler is fixed to call EwBinaryObjSca instead.
    corepointer committed Aug 19, 2024 · 8d9d4d2
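
    A sketch of the interim dispatch (hypothetical names): a 1x1 operand is broadcast as a scalar instead of requiring matching shapes, pending the compiler fix that lowers such cases to EwBinaryObjSca.

    ```cpp
    #include <cstddef>

    struct DenseMatrix { size_t numRows, numCols; const float* values; };

    bool isScalarLike(const DenseMatrix& m) { return m.numRows == 1 && m.numCols == 1; }

    template <class MatSca, class MatMat>
    void dispatchEwBinary(const DenseMatrix& lhs, const DenseMatrix& rhs,
                          MatSca&& matScalar, MatMat&& matMat) {
        if (isScalarLike(rhs))
            matScalar(lhs, rhs.values[0]); // broadcast the 1x1 operand
        else
            matMat(lhs, rhs);
    }
    ```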
  16. b761c7e
  17. 547d168
  18. 997a324
  19. 5b2d890
  20. [LOCAL] test scripts

    corepointer committed Aug 19, 2024 · afd7502