
Data Science

Data science is an inter-disciplinary field that uses scientific methods, processes, algorithms, and systems to extract knowledge from structured and unstructured data. Data scientists perform data analysis and preparation, and their findings inform high-level decisions in many organizations.

Here are 17,961 public repositories matching this topic...

sh-biswas commented Mar 9, 2021

It appears that the docs for Logistic Regression differ based on solvers and penalties. The "penalty" parameter states that "The ‘newton-cg’, ‘sag’ and ‘lbfgs’ solvers support only l2 penalties," while the "solver" parameter states that "‘newton-cg’, ‘lbfgs’, ‘sag’ and ‘saga’ handle L2 or no penalty" (attaching some screenshots). This was actually a little unclear to me, as I wasn't sure if the n…
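
Below is a minimal sketch, not taken from the scikit-learn docs or the issue itself, of the solver/penalty combinations in question: the 'lbfgs' solver accepts only the l2 penalty (or none), 'saga' additionally accepts l1, and an unsupported pairing raises an error at fit time.

```python
# Illustrative sketch of LogisticRegression solver/penalty compatibility,
# using scikit-learn's public API.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# 'lbfgs' (the default solver) supports only the l2 penalty (or none).
LogisticRegression(solver="lbfgs", penalty="l2").fit(X, y)

# 'saga' additionally supports the l1 and elasticnet penalties.
LogisticRegression(solver="saga", penalty="l1", max_iter=5000).fit(X, y)

# An unsupported combination such as solver='lbfgs' with penalty='l1'
# raises a ValueError at fit time.
try:
    LogisticRegression(solver="lbfgs", penalty="l1").fit(X, y)
except ValueError as exc:
    print(exc)
```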

superset
zuzana-vej commented Mar 26, 2021

Is your feature request related to a problem? Please describe.
Currently, when users search for a metric in the Chart Explore Metric dropdown, the search first displays the "ending" string of the metric, even if there is an exact match. For example, if you search for san_francisco and your metrics include population_in_san_francisco, san_francisco, and san_francisco_weather, your first result woul…
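
A hypothetical sketch of the ranking behavior the request asks for: exact matches first, then prefix matches, then the remaining substring matches. The function name and approach below are purely illustrative, not Superset's actual search implementation.

```python
# Rank metric names for a dropdown search so exact matches come first.
def rank_metrics(query, metrics):
    def key(name):
        if name == query:
            return (0, name)   # exact match first
        if name.startswith(query):
            return (1, name)   # then prefix matches
        return (2, name)       # then remaining substring matches

    return sorted((m for m in metrics if query in m), key=key)

metrics = ["population_in_san_francisco", "san_francisco", "san_francisco_weather"]
print(rank_metrics("san_francisco", metrics))
# ['san_francisco', 'san_francisco_weather', 'population_in_san_francisco']
```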

Data science Python notebooks: Deep learning (TensorFlow, Theano, Caffe, Keras), scikit-learn, Kaggle, big data (Spark, Hadoop MapReduce, HDFS), matplotlib, pandas, NumPy, SciPy, Python essentials, AWS, and various command lines.

  • Updated Feb 18, 2021
  • Python
dash
vdonato commented Mar 19, 2021

This bug is actually caused by #2989, but I'm filing it separately since fixing #2989 may end up being a few days' worth of effort, while a small workaround for this apparently incorrect behavior (just a band-aid until the real issue is fixed) would likely only take a couple of hours.

What happens here is that a file calls config.get_option on import, causing config files to be parsed, then wh…
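
A rough illustration of the import-time pattern being described, under the assumption that the report refers to Streamlit's internal streamlit.config module; the module-level constant below is hypothetical.

```python
# Calling config.get_option at module scope means the config files are
# parsed as a side effect of merely importing this module.
from streamlit import config

# Evaluated at import time: triggers config-file parsing immediately.
DEVELOPMENT_MODE = config.get_option("global.developmentMode")

def development_mode() -> bool:
    # A lazier alternative: defer the lookup until first use so importing
    # the module has no configuration side effects.
    return config.get_option("global.developmentMode")
```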

pytorch-lightning
gensim
mahnerak commented Jan 2, 2021

When setting train_parameters to False, we may very often also want to disable dropout/batchnorm, in other words, to run the pretrained model in eval mode.
We've made a small modification to PretrainedTransformerEmbedder that allows specifying whether the token embedder should be forced into eval mode during the training phase.

Do you think this feature might be handy? Should I open a PR?
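
A minimal PyTorch sketch of the idea, using a generic wrapper rather than the actual PretrainedTransformerEmbedder change: parameters are frozen, and the pretrained submodule is additionally kept in eval mode even while the enclosing model trains.

```python
import torch.nn as nn

class FrozenEvalEmbedder(nn.Module):
    """Wrap a pretrained module, freeze its weights, and optionally force eval mode."""

    def __init__(self, pretrained: nn.Module, force_eval: bool = True):
        super().__init__()
        self.pretrained = pretrained
        self.force_eval = force_eval
        for param in self.pretrained.parameters():
            param.requires_grad = False  # the equivalent of train_parameters=False

    def train(self, mode: bool = True):
        super().train(mode)
        if self.force_eval:
            # Keep the pretrained submodule in eval mode (dropout/batchnorm
            # disabled) even when the surrounding model switches to training.
            self.pretrained.eval()
        return self

    def forward(self, *args, **kwargs):
        return self.pretrained(*args, **kwargs)
```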