
deeplearning

Deep learning is an AI function and subset of machine learning, used for processing large amounts of complex data.

Here are 1,774 public repositories matching this topic...

Flamefire
Flamefire commented Nov 13, 2019

Judging by the logic in https://github.com/horovod/horovod/blob/38e91bee84efbb5b563a4928027a75dc3974633b/setup.py#L1369, it is clear that one needs to install the underlying framework(s) (TensorFlow, PyTorch, ...) before installing Horovod.

This is not mentioned in the installation instructions, which made me think I could install Horovod first and then any framework I like (or switch between them) and
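The point of the issue is that Horovod's setup.py probes the environment at install time and only builds support for frameworks it finds, so the framework must already be importable before `pip install horovod` runs. A minimal, illustrative sketch of such a probe (this helper is hypothetical, not Horovod's actual code):

```python
import importlib.util

# Illustrative only: mimics the idea that Horovod's setup.py detects which
# frameworks are installed and builds extensions only for those.
def framework_installed(module_name):
    """Return True if `module_name` can be imported in this environment."""
    return importlib.util.find_spec(module_name) is not None

# Check for the frameworks Horovod can integrate with.
for fw in ("tensorflow", "torch", "mxnet"):
    print(fw, "->", framework_installed(fw))
```

If all three report `False`, installing Horovod at that point would build none of the framework integrations, which is exactly the pitfall the reporter hit.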

ludwig
iiapache
iiapache commented Jan 3, 2019

Instead of setting the dump_attention_no_plot parameter's value to "true", you should use "dump_plots: false" to disable the attention plot.
config example:
python -m bin.infer \
  --tasks "
    - class: DecodeText
    - class: DumpAttention
      params:
        output_dir: $PRED_DIR/attention
        dump_plots: false" \
  --model_dir $MODEL_DIR \
  --batch_siz

gorgonia
nlp-architect
hoonkai
hoonkai commented Jan 28, 2019

Target objective:

Creating an ensemble using weights

Question

Looking at https://github.com/NervanaSystems/nlp-architect/blob/master/examples/supervised_sentiment/example_ensemble.py#L96-L97, how come the ensemble weights are training accuracies rather than validation accuracies? If the difference between acc and val_acc is great, wouldn't the ensemble overfit the training se
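The fix the questioner is hinting at can be sketched in plain Python: weight each ensemble member by its held-out (validation) accuracy rather than its training accuracy, so a model that merely memorized the training set does not dominate the vote. The function and variable names below are illustrative, not taken from nlp-architect:

```python
# Hedged sketch: accuracy-weighted soft voting over class probabilities.
# Each model contributes its predicted probability vector, weighted by its
# validation accuracy (normalized so the weights sum to 1).
def ensemble_predict(prob_lists, val_accuracies):
    """Combine per-model class-probability vectors into one vector,
    weighting each model by its validation accuracy."""
    total = sum(val_accuracies)
    weights = [acc / total for acc in val_accuracies]
    n_classes = len(prob_lists[0])
    return [
        sum(w * probs[c] for w, probs in zip(weights, prob_lists))
        for c in range(n_classes)
    ]

# Two models' probabilities for one example, with val accuracies 0.8 and 0.2.
combined = ensemble_predict([[0.9, 0.1], [0.4, 0.6]], [0.8, 0.2])
print(combined)  # [0.8, 0.2] -> the stronger validation model dominates
```

Using training accuracy here would inflate the weight of any overfit member, which is exactly the overfitting concern raised in the issue.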

ari62
ari62 commented Aug 26, 2019

Concerning the tutorial here:
https://deeplearning4j.org/tutorials/04-feed-forward
A number of possible corrections (I may or may not be right about these):
The first example has an input and an output layer, but it says:
"As you can see above that we have made a feed-forward network configuration with one hidden layer."

Also the input layer is a DenseLayer, not a FeedForwardLayer as I would assume it
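The terminology confusion in the tutorial is about what counts as a "hidden" layer: the inputs themselves are just a vector, the first dense layer maps them into a hidden representation, and a final layer produces the outputs. A framework-free Python sketch of that input → hidden → output structure (not DL4J code; all names here are illustrative):

```python
import random

random.seed(0)

def dense(x, W, b):
    # One fully connected ("dense") layer: y_i = sum_j W[i][j] * x[j] + b[i]
    return [sum(w * xi for w, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def relu(v):
    return [max(0.0, a) for a in v]

# Shapes: 4 input features -> 3 hidden units -> 2 outputs.
W1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
b1 = [0.0] * 3
W2 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
b2 = [0.0] * 2

x = [1.0, 0.5, -0.5, 0.2]        # the "input layer" is just this vector
hidden = relu(dense(x, W1, b1))  # the single hidden layer
output = dense(hidden, W2, b2)   # the output layer
```

On this reading, a two-dense-layer configuration does have "one hidden layer" (the first dense layer), which may be what the tutorial meant, even though the wording makes it sound as if a separate hidden layer exists beyond the input and output layers.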
