Exception when loading word embedding models with lines containing 2 words #5

Closed
ricardopieper opened this issue Jun 12, 2019 · 8 comments
Labels
enhancement New feature or request

Comments

@ricardopieper
Contributor

Some model files contain embeddings whose keys consist of multiple words (e.g. the NILC embeddings for Portuguese), which causes the model loading code to explode. For instance, a line in the model file might contain this:

Hey there 0.001 0.0003 0.86245 ........

The same does not happen in spaCy, for instance.
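
For context, here is a minimal sketch (not this library's actual loader, just the assumption that breaks) of why such a line explodes when the first token is treated as the word and the rest as floats:

# Hypothetical sketch: word = first token, everything after it = vector values.
# A multi-word entry breaks this assumption.
line = "Hey there 0.001 0.0003 0.86245"
tokens = line.split()
word, values = tokens[0], tokens[1:]
try:
    vector = [float(v) for v in values]
except ValueError as err:
    # 'there' is part of the word, not a number, so parsing fails here
    print(err)  # could not convert string to float: 'there'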

I fixed it in my local dev environment; I might make a pull request later.

@ricardopieper ricardopieper changed the title Word embedding models with lines containing 2 words Exception when loading word embedding models with lines containing 2 words Jun 12, 2019
@makcedward
Owner

@ricardopieper
Can you share the model file for testing?

@makcedward makcedward added the enhancement New feature or request label Jun 12, 2019
@ricardopieper
Contributor Author

One of the offending lines starts with "R$ 0,00"; it explodes because "0,00" can't be parsed as a float.
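
As a quick illustration (this is just plain float() behavior, not library code):

try:
    float("0,00")  # comma used as the decimal separator
except ValueError as err:
    print(err)  # could not convert string to float: '0,00'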

@ricardopieper
Contributor Author

I changed the Fasttext model loader class to use this code:

import numpy as np

def read(self, file_path, max_num_vector=None):
    with open(file_path, 'r', encoding='utf-8') as f:
        header = f.readline()
        self.vocab_size, self.emb_size = map(int, header.split())

        for i, line in enumerate(f):
            tokens = line.split()
            # everything except the last emb_size tokens is the (possibly multi-word) key
            word = " ".join(tokens[:len(tokens) - self.emb_size])
            # the last emb_size tokens are the vector values
            values = np.array([float(val) for val in tokens[-self.emb_size:]])

The idea is that, if we know the size of the word vectors, we can just load the last N split values as the vector and treat the rest of the line as the word itself.
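
For example, applying that idea to the "R$ 0,00" case with a hypothetical 3-dimensional embedding (the real NILC models have 50+ dimensions):

import numpy as np

emb_size = 3  # hypothetical; read from the file header in practice
line = "R$ 0,00 0.001 0.0003 0.86245"
tokens = line.split()
word = " ".join(tokens[:len(tokens) - emb_size])           # 'R$ 0,00'
values = np.array([float(v) for v in tokens[-emb_size:]])  # [0.001, 0.0003, 0.86245]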

@ricardopieper
Contributor Author

Made a pull request describing the fix (#8), feel free to evaluate the solution.

makcedward added a commit that referenced this issue Jun 14, 2019
@makcedward
Owner

Which pre-trained embeddings are you using when you hit this bug?

@ricardopieper
Contributor Author

ricardopieper commented Jun 15, 2019

@makcedward The same one I mentioned earlier. Here's a bit more context:

For Portuguese word embeddings, we like to use USP's models (USP = Universidade de Sao Paulo, Brazil). They provide models of various sizes. In particular, we're using fastText, though we could use any other (for our particular case, fastText seems to be a bit better).

The one I'm using is this one: http://143.107.183.175:22980/download.php?file=embeddings/fasttext/cbow_s50.zip

You can find more here: http://nilc.icmc.usp.br/embeddings

Also, I'm afraid the same fix has to be applied to all the other model loaders (glove, word2vec, etc.), but I haven't checked.

@makcedward
Owner

makcedward commented Jul 2, 2019

@ricardopieper

After studying the pre-trained embeddings from http://nilc.icmc.usp.br/embeddings, I found that the word2vec, glove and fasttext embeddings all follow the same file format as the official Facebook fastText embeddings, so I will suggest using FasttextAug to load those models.

As for the offending lines such as the one starting with "R$ 0,00" (where "0,00" can't be parsed as a float), I will apply the following change to read the content correctly:

import numpy as np

def read(self, file_path, max_num_vector=None):
    with open(file_path, 'r', encoding='utf-8') as f:
        header = f.readline()
        self.vocab_size, self.emb_size = map(int, header.split())

        for i, line in enumerate(f):
            tokens = line.split()
            # the last emb_size tokens are the vector values
            values = [val for val in tokens[-self.emb_size:]]
            # find where the values start in the raw line; everything before is the word
            value_pos = line.find(' '.join(values))
            word = line[:value_pos - 1]
            values = np.array([float(val) for val in values])
