
Clarification on README #46

Open
mperacchi opened this issue Jan 17, 2019 · 4 comments

Comments

@mperacchi

Hello,
I can't fully understand the documentation; would you mind clarifying some points for me?

Firstly:
embeddings = numpy.array([[0.1, 1], [1, 0.1]], dtype=numpy.float32)
This is the array containing the embeddings of the single words. If I have 20 sentences, each one of 10 words, and each word represented by a 300-dimensional vector, embeddings will be (20 x 10 x 300), right?

nbow = {"first":  ("#1", [0, 1], numpy.array([1.5, 0.5], dtype=numpy.float32)),
        "second": ("#2", [0, 1], numpy.array([0.75, 0.15], dtype=numpy.float32))}
calc = WMD(embeddings, nbow, vocabulary_min=2)

Then in the documentation I only found this:

The first element is the human-readable name of the sample, the second is an iterable with item identifiers and the third is numpy.ndarray with the corresponding weights.

So "#1" is just an index. I'm sorry, but I can't understand what the [0, 1] and the numpy.array([1.5, 0.5]) are supposed to represent. I think the second one is supposed to be the weight of each word, calculated using the term frequency; isn't it supposed to sum up to 1? And what items are identified by the [0, 1]?

I'm sorry if I'm missing something, but from there I just can't understand what's going on. I'm available for a private chat if you have some time to spare. Thank you very much.
Mattia
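A minimal numpy sketch of how the pieces seem to fit together, as far as I can tell from the README (this is my reading, not verified against the library source): embeddings looks like a flat 2-D matrix with one row per unique vocabulary word rather than a 3-D per-sentence tensor, and the [0, 1] lists in nbow would then be row indices into it, which is what lets a WMD solver look up the word vectors it needs:

```python
import numpy as np

# Assumed layout: one row per unique word in the joint vocabulary,
# NOT a (documents x words x dims) tensor.
embeddings = np.array([[0.1, 1.0],    # row 0: vector of word with id 0
                       [1.0, 0.1]],   # row 1: vector of word with id 1
                      dtype=np.float32)

# Each entry: (human-readable name, word ids = row indices into
# `embeddings`, weight of each of those words in the document).
nbow = {"first":  ("#1", [0, 1], np.array([1.5, 0.5], dtype=np.float32)),
        "second": ("#2", [0, 1], np.array([0.75, 0.15], dtype=np.float32))}

# With that layout, the pairwise word distances between two documents
# (the ground costs WMD is built on) come straight from the id lists:
ids_a, ids_b = nbow["first"][1], nbow["second"][1]
diff = embeddings[ids_a][:, None, :] - embeddings[ids_b][None, :, :]
dist = np.sqrt((diff ** 2).sum(-1))  # shape (len(ids_a), len(ids_b))
print(dist.shape)  # (2, 2)
```

Under this reading, 20 sentences of 10 words each would not give a (20 x 10 x 300) embeddings array; they would give one (V x 300) matrix over the V unique words, plus 20 nbow entries pointing into it.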

@abdelrahman-t

abdelrahman-t commented Jan 17, 2019

I have the same questions,

thanks.

@c0ntradicti0n

One can get an idea from this, I think, by looking at the meaning of the spaCy properties used:

spacy example

@skypc785308

skypc785308 commented Feb 17, 2020

nbow = {"first":  ("#1", [0, 1], numpy.array([1.5, 0.5], dtype=numpy.float32)),
        "second": ("#2", [0, 1], numpy.array([0.75, 0.15], dtype=numpy.float32))}
calc = WMD(embeddings, nbow, vocabulary_min=2)

I think nbow is a dictionary of documents. It contains the identifier of each document (the key of the dictionary), a human-readable name (the first element of the tuple), and the tokens which appear in the document, mapped to the identifiers from your W2V model (the second element of the tuple). The third element of the tuple is the normalized bag-of-words of the document, so it should sum to 1.

In my work, I query an English sentence to find the shortest WMD among Chinese sentences.
It works fine! :-)
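Following that description, here is a small sketch of building such an nbow dictionary from token lists. The word-to-id mapping below is made up for illustration (a real one would come from your W2V model's vocabulary), and the weights are term frequencies normalized to sum to 1:

```python
import numpy as np
from collections import Counter

# Hypothetical word -> embedding-row-id mapping, standing in for a
# real W2V vocabulary; these names are made up for illustration.
word_id = {"cat": 0, "sat": 1, "mat": 2}

def to_nbow_entry(name, tokens):
    """Turn a token list into the (name, ids, weights) tuple described
    above, with weights normalized so they sum to 1."""
    counts = Counter(word_id[t] for t in tokens)
    ids = sorted(counts)
    weights = np.array([counts[i] for i in ids], dtype=np.float32)
    weights /= weights.sum()  # normalized bag-of-words
    return name, ids, weights

nbow = {"doc1": to_nbow_entry("#1", ["cat", "sat", "mat", "cat"]),
        "doc2": to_nbow_entry("#2", ["cat", "mat"])}
print(nbow["doc1"])  # ids [0, 1, 2], weights [0.5, 0.25, 0.25]
```

Note that under this reading, the weights in the README snippet (1.5 + 0.5 and 0.75 + 0.15) do not sum to 1, which is presumably part of what the question above is about.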

@vmarkovtsev
Collaborator

Can somebody please PR the documentation fix? Thanks!
