Unhandled async exception? #414
-
Hi @jamessimone, the org I'm in is new to apex-rollup and we're excited to use it in production. We have v1.5.49 set up to run via trigger with this custom metadata record:
We then got this Apex script unhandled exception email when a user initiated apex-rollup via trigger:
We also saw an Apex job failure when we initiated Recalculate Rollup (but didn't get an email):

In full disclosure, apex-rollup also exceeds the heap size limit because of a custom logger that we've registered. We'll fix that; in UAT we've set apex-rollup's log level to ERROR to avoid the limit, but we haven't fixed that in prod yet. Our custom queueables and batchables don't check limits to make sure they can run, so that could also be a problem.

But if I understand the stack trace above, something started RollupAsyncProcessor as a batch and it eventually tried to start another batch, which isn't allowed. Should RollupAsyncProcessor.runCalc do a check similar to the one in RollupParentResetProcessor.runCalc?

Unfortunately, this was in production, so I don't have a debug log, and I'm having trouble reproducing it in our full sandbox. The full sandbox was refreshed two months ago, but it's still odd that Recalculate Rollup works as expected there (after setting the log level to ERROR) but not in prod.

Do you have any clues on how I might reproduce this problem in our sandbox, what the cause might be, or where to go from here? Thanks!
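To make that concrete, here's a rough sketch of the kind of pre-dispatch guard I'm imagining. The class and method names are hypothetical and this is not apex-rollup's actual code; it only uses the standard System.isBatch()/System.isFuture() and Limits checks:

```apex
// Hypothetical sketch only - not apex-rollup's implementation.
// Database.executeBatch can't be called from a batch's start/execute or from a future method,
// so check the current async context before trying to chain another batch.
public without sharing class GuardedRollupDispatcher {
  public static Id dispatch(Database.Batchable<SObject> rollupBatch, Queueable rollupFallback) {
    if (!System.isBatch() && !System.isFuture()) {
      // Safe to start a brand-new batch from this context
      return Database.executeBatch(rollupBatch, 500);
    }
    // Already in a batch/future context: fall back to a queueable if there's capacity left
    if (Limits.getQueueableJobs() < Limits.getLimitQueueableJobs()) {
      return System.enqueueJob(rollupFallback);
    }
    // No async capacity left - bail out (or log) instead of throwing an unhandled exception
    return null;
  }
}
```

That's the general shape I mean by "a similar check" - falling back to a queueable (or simply deferring) rather than calling Database.executeBatch from a context where it isn't allowed.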
-
@pyao-bwc I'll take a look. There should already be logic handling this in RollupAsyncProcessor.runCalc(), but I've heard a few times before from people that, in the process of updating a parent-level record, another batch process needs to be kicked off. It's my understanding that those downstream parent-level processes have been responsible for people receiving this message in the past - the logic that's being hit here is due to a timeout of some kind (and the timeout might only be occurring if intensive logging is enabled). I'd be curious to hear whether you get the same error with logging disabled (until you can mitigate the custom logger, from the sounds of it).

Anyway, with all of that being said, I'll take another look at the …
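As far as reproducing it in a sandbox goes: the pattern I described above typically comes from automation on the parent object that kicks off its own batch. A purely hypothetical setup like the one below (ParentObject__c and SomeDownstreamBatch are stand-ins for whatever exists in your org) would force the same collision - once the parent is updated from inside another batch's execute, the Database.executeBatch call throws a System.AsyncException along the lines of "Database.executeBatch cannot be called from a batch start, batch execute, or future method":

```apex
// Purely hypothetical repro sketch - ParentObject__c and SomeDownstreamBatch are stand-ins
// for whatever parent object and downstream batch job your org actually has.
public without sharing class SomeDownstreamBatch implements Database.Batchable<SObject> {
  private final Set<Id> parentIds;

  public SomeDownstreamBatch(Set<Id> parentIds) {
    this.parentIds = parentIds;
  }

  public Database.QueryLocator start(Database.BatchableContext bc) {
    return Database.getQueryLocator([SELECT Id FROM ParentObject__c WHERE Id IN :parentIds]);
  }

  public void execute(Database.BatchableContext bc, List<SObject> scope) {
    // downstream work on the parent records goes here
  }

  public void finish(Database.BatchableContext bc) {
  }
}
```

```apex
// If this fires while the parent is being updated from another batch's execute method,
// the Database.executeBatch call below is what blows up.
trigger ParentObjectDownstream on ParentObject__c(after update) {
  Database.executeBatch(new SomeDownstreamBatch(Trigger.newMap.keySet()), 200);
}
```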