
target_embedding without training #51

Open
bkj opened this issue Feb 27, 2024 · 0 comments

bkj commented Feb 27, 2024

Hello --

Really enjoyed the paper. One clarifying question: you add the target_embedding to the query point embedding here:
https://github.com/ZrrSkywalker/Personalize-SAM/blob/main/per_segment_anything/modeling/transformer.py#L94

but you don't fine-tune the model. Do you have an intuition for why that works? Is it basically that the TwoWayAttentionBlock is now computing attention based on the "average" of the similarity between points <-> image embeddings and target_embedding <-> image embeddings?
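Concretely, my intuition is that before the softmax the logits decompose additively, since the dot product is linear in the query. A minimal sketch (made-up shapes, not SAM's actual 256-dim tokens or its real attention code):

```python
import numpy as np

rng = np.random.default_rng(0)

d = 8          # token dimension (illustrative only)
n_img = 16     # number of image-embedding tokens acting as keys

point_q = rng.normal(size=(1, d))        # a sparse-prompt (point) query token
target_emb = rng.normal(size=(1, d))     # the target_embedding added at L94
img_keys = rng.normal(size=(n_img, d))   # image-embedding keys

# Attention logits with the shifted query...
shifted_logits = (point_q + target_emb) @ img_keys.T

# ...split exactly into the two similarity maps:
decomposed = point_q @ img_keys.T + target_emb @ img_keys.T

assert np.allclose(shifted_logits, decomposed)
```

So pre-softmax the scores really are the sum of point <-> image and target_embedding <-> image similarities; after the softmax it's no longer a literal average of the two attention maps, which is why I'm asking whether that's still the right mental model.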

Thanks!
