
Question about ImgWtLossSoftNLL #193

Open

Seyoung9304 opened this issue May 16, 2023 · 0 comments

Seyoung9304 commented May 16, 2023

Hello! Thank you for the great work.

I have two questions about ImgWtLossSoftNLL in loss.py.

1.

As far as I understand, lines 182 to 188 compute the loss for each image, iterating over the batch. However, when custom_nll is called, weights is not indexed per image; the whole batch tensor is passed instead. I think the fix is to pass weights[i]. Could you check whether I'm right?

# As-is
for i in range(0, inputs.shape[0]):
    if not self.batch_weights:
        class_weights = self.calculate_weights(target_cpu[i])
    loss = loss + self.custom_nll(inputs[i].unsqueeze(0),
                                  target[i].unsqueeze(0),
                                  class_weights=torch.Tensor(class_weights).cuda(),
                                  border_weights=weights, mask=ignore_mask[i])

# To-be
for i in range(0, inputs.shape[0]):
    if not self.batch_weights:
        class_weights = self.calculate_weights(target_cpu[i])
    loss = loss + self.custom_nll(inputs[i].unsqueeze(0),
                                  target[i].unsqueeze(0),
                                  class_weights=torch.Tensor(class_weights).cuda(),
                                  border_weights=weights[i], mask=ignore_mask[i])
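
Here is a small shape check that illustrates my point (a minimal sketch with hypothetical tensor sizes, not the repo's actual tensors; the per-pixel multiplication inside custom_nll is my assumption based on the call above):

import torch

# Toy sketch: hypothetical sizes, only to show the shape/broadcast difference.
N, C, H, W = 4, 19, 16, 16
weights = torch.rand(N, H, W)         # one border-weight map per image
loss_map = torch.rand(1, H, W)        # per-image loss map inside custom_nll (batch dim 1)

print((loss_map * weights).shape)     # torch.Size([4, 16, 16]) -> broadcasts over the whole batch
print((loss_map * weights[0]).shape)  # torch.Size([1, 16, 16]) -> weighted by image 0's own map only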

2.

At line 192 of loss.py, I think loss should be divided by the batch size (i.e. inputs.shape[0]).
For comparison, CrossEntropyLoss2d returns the loss averaged over the batch.

# As-is
return loss
# To-be
return loss / inputs.shape[0]
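
For reference, here is a quick check of the averaging behaviour this would match (a sketch assuming CrossEntropyLoss2d wraps nn.CrossEntropyLoss / nn.NLLLoss with the default 'mean' reduction, and that all images have the same spatial size):

import torch
import torch.nn as nn

# Sketch: summing per-image losses and dividing by the batch size reproduces
# the batch mean that the default criterion returns.
N, C, H, W = 4, 19, 16, 16
logits = torch.randn(N, C, H, W)
target = torch.randint(0, C, (N, H, W))

criterion = nn.CrossEntropyLoss()     # reduction='mean' by default
batch_mean = criterion(logits, target)

per_image_sum = sum(criterion(logits[i:i + 1], target[i:i + 1]) for i in range(N))
print(torch.allclose(batch_mean, per_image_sum / N))  # True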

Thank you in advance. :)
