embeddings
Here are 458 public repositories matching this topic...
Expected Behavior
I want to convert the torch.nn.Linear modules in my (possibly large) model to weight-drop linear modules, and I want to train the model on multiple GPUs. However, I get a RuntimeError with my sample code. First, I have a _weight_drop() function which drops part of the weights in torch.nn.Linear (see the code below).
Actual Behavior
RuntimeError: arguments are located on different GPUs at /
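The issue's actual PyTorch implementation is not reproduced in this excerpt, but the DropConnect-style idea behind a _weight_drop() helper can be sketched framework-free. The `weight_drop` function below is a hypothetical illustration, not the issue author's code:

```python
import numpy as np

def weight_drop(weight, p=0.5, rng=None):
    """DropConnect-style weight dropout: zero a fraction p of the
    weight entries and rescale the survivors by 1/(1-p), as in
    standard inverted dropout."""
    rng = np.random.default_rng(0) if rng is None else rng
    mask = rng.random(weight.shape) >= p  # keep entries with prob 1-p
    return weight * mask / (1.0 - p)

# toy linear-layer weight matrix (out_features=4, in_features=3)
W = np.ones((4, 3))
Wd = weight_drop(W, p=0.5)
```

With nn.DataParallel, an error like the one above often indicates that a tensor (for example a weight re-registered as a plain attribute) stayed pinned to one device while the module was replicated across GPUs, though the truncated traceback here does not confirm that cause.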
(tensorflow) F:\Postgraduate\KaggleLearning\multi-class-text-classification-cnn-rnn-master\multi-class-text-classification-cnn-rnn-master>python predict.py ./trained_results_1541818386/ ./data2/samples.csv
D:\Anaconda\anaconda\envs\tensorflow\lib\site-packages\gensim\utils.py:1212: UserWarning: detected Windows; aliasing chunkize to chunkize_serial
  warnings.warn("detected Windows; aliasing c
What code should be added so that the word embeddings and the confusion matrix at each step can be visualized in TensorBoard?
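The TensorBoard logging calls depend on the TensorFlow version in use, which the question does not state, but the confusion matrix itself can be computed framework-free. The `confusion_matrix` helper below is a hypothetical sketch; to display it at each step one would typically render it as an image summary, and use TensorBoard's embedding projector for the word embeddings:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes):
    """Count (true label, predicted label) pairs into a
    num_classes x num_classes grid: rows = true, columns = predicted."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

cm = confusion_matrix(y_true=[0, 1, 2, 1],
                      y_pred=[0, 2, 2, 1],
                      num_classes=3)
```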
I read through the whole codebase and found that it only uses toy dummy data to train the model, so I don't really understand how to train the GCN model on real data. Could you supply code or instructions for training the model on real-world data?
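The repository's data pipeline is not shown in this excerpt, but real-world graph data generally has to be reduced to the same inputs as the toy data: a node-feature matrix and a normalized adjacency matrix. A minimal sketch of the standard GCN renormalization, D^(-1/2)(A + I)D^(-1/2) from Kipf & Welling, assuming a dense numpy adjacency:

```python
import numpy as np

def gcn_norm_adj(A):
    """Symmetrically normalized adjacency with self-loops:
    D^(-1/2) (A + I) D^(-1/2), the propagation matrix used by GCN."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # degree of each node
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

# toy 3-node graph: one edge between node 0 and node 1, node 2 isolated
A = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 0.]])
A_norm = gcn_norm_adj(A)
```

Real-world data then only differs in how A and the feature matrix are built (e.g. from an edge list), not in what the model consumes.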
Loading the pre-trained word vectors wiki.txt.word_vectors from 'http://ltdata1.informatik.uni-hamburg.de/sensegram/en/': some of the words do not have 300 dimensions, and loading the file with the gensim package raises an error.
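One possible workaround, sketched below with a hypothetical `filter_vectors` helper: pre-filter the word2vec-format text file so only rows with exactly 300 components remain (the vocabulary count in the file's header line would then need to be rewritten to match before handing the file to gensim):

```python
def filter_vectors(lines, dim=300):
    """Keep only rows whose vector has exactly `dim` components
    (word2vec text format: word followed by the vector values)."""
    kept = []
    for line in lines:
        parts = line.rstrip("\n").split(" ")
        if len(parts) == dim + 1:  # 1 token for the word, dim for the vector
            kept.append(line)
    return kept

# toy demo with dim=3 instead of 300
rows = ["cat 0.1 0.2 0.3\n", "dog 0.1 0.2\n", "fox 0.4 0.5 0.6\n"]
good = filter_vectors(rows, dim=3)
```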
Dear TF Hub Team,
The USE paper, Section 5, has an interesting paragraph on evaluation in which the authors use the arc cosine (inverse cosine), whose range is 0 to π radians, instead of plain cosine distance, whose range is 0 to 2.
"For the pairwise semantic similarity task, we directly assess the similarity of the sentence embeddings produced by our two encoders. As show
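The distinction can be sketched directly: plain cosine distance (1 − cos) lies in [0, 2], while the arc cosine of the similarity lies in [0, π]. A minimal sketch, with a clamp added as an assumption to guard against floating-point rounding pushing the similarity slightly outside [−1, 1]:

```python
import math

def cosine_similarity(u, v):
    """Dot product of u and v divided by the product of their norms."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def angular_distance(u, v):
    """arccos of the cosine similarity, in [0, pi] radians."""
    c = max(-1.0, min(1.0, cosine_similarity(u, v)))  # clamp for rounding
    return math.acos(c)

d_orth = angular_distance([1.0, 0.0], [0.0, 1.0])   # orthogonal vectors
d_same = angular_distance([1.0, 0.0], [2.0, 0.0])   # parallel vectors
```

The angular form is often preferred for near-duplicate sentences because arccos spreads out similarity values that cosine compresses near 1.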