-
Another point: High-Level Logging
-
Copying stuff from our discussion on Slack: I think there are a few differences, both conceptual and implementational:

Conceptual:

Implementational:
-
After thinking about this a bit, I agree with @yhmtsai, and I think it does make sense to go back to `LinOp` instead of a separate hierarchy for batched operators. Keeping task-based and distributed approaches in mind, and given that `LinOp` is our usual base type, it would indeed make things a lot easier to have everything be a `LinOp`. I will update the PR, but before that I would probably first test this on
-
@tcojean On the relevance of
Or perhaps you think the problem of getting the proper information to the task runtime is independent of what we do with
-
Discussion: `BatchDense` ("BatchVector") vs `Dense` ("Vector"):

Conclusion: There does not currently appear to be a need for interoperability between `LinOp` and `BatchLinOp`. Because the separation makes application code more explicit, and the development philosophy for batched functionality is somewhat different, it was decided to keep `BatchLinOp` separate from `LinOp`. If at some point in the future we realize we need an overall base class, we would break the interface if necessary and make `BatchLinOp` a sub-type of `LinOp`.
-
Arguments in favour of a separate `BatchLinOp`:

- `BatchLinOp* apply(const BatchLinOp*, BatchLinOp*)` comes with a compile-time guard against trying to apply a batch operator to a non-batch operator etc. I think this is very useful for applications that use both batch functionality and regular functionality.
- Without a separate `BatchLinOp`, where would we put the validation of batch sizes of operators in `apply`? Note that the validation for batch operators is different from non-batch ones. We could keep it in `EnableBatchLinOp`, but we would still miss proper validation when `LinOp::apply` is used. My understanding is that `LinOp::apply` is the one that gets called when one of the arguments is simply a `LinOp*` rather than a pointer to a concrete type.
  - Answer, using @upsj's idea: the batch validation could go into the backends and onto the device. This also lets us scalably do non-uniform batch operator validation.
Arguments against `BatchLinOp` (instead just keeping everything as `LinOp`):

- `BatchLinOp` and `GKO_ENABLE_BATCH_LINOP_FACTORY`, which repeat what is already done for the `LinOp` counterparts, would not be necessary. (Note that we would still keep `EnableBatchLinOp` to keep `batch_dim` there.)
- Keeping everything as `LinOp` might make it easier to integrate task-based features later.

@pratikvn and @yhmtsai, could you document other points here?