A query regarding Image-Cap #10

Open · rupeshS9102 opened this issue Jun 29, 2019 · 4 comments
@rupeshS9102

https://github.com/ne7ermore/torch-light/tree/master/Image-Cap
Hi @ne7ermore,
I found this code very educational, thank you. But I do not have a GPU, and when I run train.py on a CPU the estimated training time is about 3-4 weeks. So, could you please share the trained model? It would be of great help. My email ID is rupesh.s9102@gmail.com.
Thank you

@Michael-Hsu

Hello, did you get the trained model? I also have the same problem. If you do get it, could you please share the trained model with me? My email is broadenhsu@gmail.com. Thanks a lot.

@Michael-Hsu commented Oct 20, 2019

@ne7ermore
Thanks for sharing your code for Image-Cap. I think there might be a bug in train.py: there is no definition of "speak" in model.py. Or have I misunderstood? Could you please give me some advice?

    def eval():
        actor.eval()
        eval_score = 0.0
        for imgs, labels in tqdm(validation_data,
                                 mininterval=1,
                                 desc="Actor-Critic Eval",
                                 leave=False):
            enc = actor.encode(imgs)
            hidden = actor.feed_enc(enc)
            ### Bug to fix??
            words, _ = actor.speak(hidden)

            scores = rouge_l(words, labels)
            scores = scores.sum()

            eval_score += scores.data

        eval_score = eval_score[0] / validation_data.sents_size
        return eval_score
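
For reference, here is a minimal sketch of what a greedy-decoding "speak" method could look like, so that the call site above would type-check. The attribute names (self.embed, self.decoder, self.word_proj) and the BOS/max-length conventions below are assumptions for illustration, not the repository's actual API:

    import torch

    BOS_IDX = 1   # assumed beginning-of-sentence index
    MAX_LEN = 20  # assumed maximum caption length

    def speak(self, hidden):
        # Greedily decode one caption per batch element from the
        # encoder-conditioned state, assuming a batch-first LSTM decoder.
        # Returns (words, log_probs) to match the
        # "words, _ = actor.speak(hidden)" call site in train.py.
        batch_size = hidden[0].size(1)  # LSTM state: (layers, batch, dim)
        word = torch.full((batch_size,), BOS_IDX, dtype=torch.long)
        words, log_probs = [], []
        for _ in range(MAX_LEN):
            emb = self.embed(word).unsqueeze(1)      # (batch, 1, emb_dim)
            out, hidden = self.decoder(emb, hidden)  # one decoder step
            logits = self.word_proj(out.squeeze(1))  # (batch, vocab)
            log_p, word = torch.log_softmax(logits, dim=-1).max(dim=-1)
            words.append(word)
            log_probs.append(log_p)
        return torch.stack(words, dim=1), torch.stack(log_probs, dim=1)

With something like this on the actor, "words" would be a (batch, max_len) tensor of word indices that rouge_l could then score against "labels".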

@fireholder

I met the same issue as you. Have you solved it? @Michael-Hsu

@Michael-Hsu commented Oct 30, 2019 via email
