Updated Feb 12, 2020
artificial-intelligence
The branch of computer science dealing with the reproduction, or mimicking of human-level intelligence, self-awareness, knowledge, conscience, and thought in computer programs.
Here are 7,513 public repositories matching this topic...
I'm looking at the React tutorial at https://github.com/amark/gun/wiki/React-Tutorial and noticed that the code examples use deprecated lifecycle methods such as componentWillMount.
Issue Description
The following test fails for seed = 0 but passes for (as far as I can tell) any other seed (e.g. seed = 1)
https://gist.github.com/orausch/9a42e24b782319447a515e8c29b364a0
Version Information
Please indicate relevant versions, including, if relevant:
- Deeplearning4j version: beta6
- Platform information (OS, etc.): Ubuntu 19.10
(cc @rpatra)
There seems to be conflicting data on what the "OriginGeopoint" is. In the documentation it's referenced as the location of the PlayerStart, while in the code it's commented as the location of the Unreal level's origin at coordinates 0, 0, 0.
Documentation:
https://micro
I found that in examples/retinaface.cpp, if OMP acceleration is enabled, a memory leak seems to occur whenever a face is detected, but I can't pinpoint the exact cause of the problem.
Notably, if the OMP directives in the qsort_descent_inplace function are commented out, the problem disappears.
static void qsort_descent_inplace(std::vector<FaceObject>& faceobjects, int left, int right)
{
    int i = left;
    int j = right;
    float p = faceobjects[(left + right) / 2].prob;
    ...
    // #pragma omp parallel sections
    {
    // #pragma
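For reference, the partition scheme that qsort_descent_inplace implements can be sketched single-threaded in Python (FaceObject reduced to a dict with a prob key; no OMP, so no leak to chase):

```python
def qsort_descent_inplace(objs, left=0, right=None):
    """Sort objs in place by descending prob, mirroring the C++ partition scheme."""
    if right is None:
        right = len(objs) - 1
    i, j = left, right
    p = objs[(left + right) // 2]["prob"]
    while i <= j:
        # advance past elements already on the correct side of the pivot
        while objs[i]["prob"] > p:
            i += 1
        while objs[j]["prob"] < p:
            j -= 1
        if i <= j:
            objs[i], objs[j] = objs[j], objs[i]
            i += 1
            j -= 1
    # recurse into both partitions (the C++ version runs these as OMP sections)
    if left < j:
        qsort_descent_inplace(objs, left, j)
    if i < right:
        qsort_descent_inplace(objs, i, right)
```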
Currently, when entering epic mode the README is frozen in the last level of the tower. When you're trying to fine-tune the score for a level other than the last one, it would be helpful if we had the README for that level available. The proposal is that when entering epic mode, the README is updated with all levels, one following the other.
Example:
# Starbolt - beginner-
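The proposal, all levels' READMEs joined one after the other, could be sketched as (the function name and heading format are my assumptions):

```python
def build_epic_readme(level_readmes):
    """Join every level's README into one document for epic mode,
    one level following the other."""
    sections = [
        f"# Level {n}\n\n{body.strip()}"
        for n, body in enumerate(level_readmes, start=1)
    ]
    return "\n\n".join(sections)
```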
Inspired by: https://jigsaw.tighten.co/docs/starter-templates/
For hosting we can use GitHub Pages.
"Your open source project is as good as its documentation."
— Arkadiusz Kondas (or maybe I heard it somewhere)
This can help with #318.
Hey,
I was trying to get started playing with mbart, and don't understand two things:
- what `path_2_data` should be in this command: https://github.com/pytorch/fairseq/blob/5e79322b3a4a9e9a11525377d3dda7ac520b921c/examples/mbart/README.md#L80
- what "Set tokenizer here" refers to in the wmt-16 scripts.
Would be happy to write detailed steps if you guys could point me in the right direction.
Description
Add Azure notebook to our SETUP doc.
I tested Google Colab and Azure Notebooks to run reco-repo without creating any DSVM or compute myself, and it works really well with simple tweaks to the notebooks (e.g., some libraries must be installed manually).
I think it would be good to add at least Azure notebook to our SETUP doc, where users can easily test out our repo w/o
Upon environment timeout, the Python client will only receive the error message "Environment in wrong status for call to observations()". It might be good to provide more information about why the environment is not running anymore (due to timeout, etc.).
if (!is_running(self)) {
    PyErr_SetString(PyExc_RuntimeError,
                    "Environment in wrong status for call to observations()");
    return NULL;
}
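On the client side, the extra context this asks for could be carried by a dedicated exception; the class name and reason strings below are my own sketch, not the project's API:

```python
class EnvironmentStoppedError(RuntimeError):
    """Signals that the environment is no longer running, with the reason attached."""

    def __init__(self, reason):
        # Keep the original message for compatibility, but append the cause.
        super().__init__(
            f"Environment in wrong status for call to observations(): {reason}"
        )
        self.reason = reason
```

Raising `EnvironmentStoppedError("episode timeout")` would then tell the caller exactly why the call failed.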
There is a nan value in multistepBucketLikelihoods when I use my own dataset and set _NUM_RECORDS to 6000. The error is listed below.
multistepBucketLikelihoods = {1: {499: 1.0}, 5: {499: nan, 501: 0.0}}
File "D:\ProgramData\PythonWorkspace\nupic\docs\examples\opf\test.py", line 52, in runHotgym fiveStepConfidence = allPredictions[5][fiveStep]
File "D:\ProgramData\PythonWorkspace\nup
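Until the source of the nan is found, a defensive lookup avoids the crash (the dict names come from the snippet above; the guard itself is my sketch):

```python
import math

def safe_confidence(all_predictions, step, bucket, default=0.0):
    """Return the likelihood for (step, bucket); missing or nan values
    fall back to default instead of propagating."""
    value = all_predictions.get(step, {}).get(bucket, default)
    return default if math.isnan(value) else value
```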
I understand that these two Python files show two different methods to construct a model. The original n_epoch is 500, which works perfectly for both files. But if I change n_epoch to 20, only tutorial_mnist_mlp_static.py achieves a high test accuracy (~0.97); the other file, tutorial_mnist_mlp_static_2.py, only reaches 0.47.
The models built from these two files look the same to me (the s
I'm submitting a ... (check one with "x")
[ ] bug report
[ ] help wanted
[ ] feature request
Current behavior
Expected/desired behavior
Reproduction of the problem
If the current behavior is a bug or you can illustrate your feature request better with an example, please provide the steps to reproduce.
What is the expected behavior?
- Operating System: Windows
- Serpent.AI Version: not sure
- Game: (Cuphead) Executable
- Backend: GPU
I followed the Hello World tutorial and created a plugin for the Cuphead executable game. When I launch the game I get this error:
(AI) C:\Users\ANTONY\SerpentAI>serpent launch cuphead
Traceback (most recent call last):
File "c:\programdata\anaconda3\envs\ai\lib\runpy
I notice that one can evaluate the model on a list of validation/test data loaders. Is it also possible to extract data from multiple train_data_loader in the training step in the current version? This feature might be useful in tasks like transfer learning or semi-supervised learning, which usually maintain multiple datasets in the training stage (e.g., source and target datasets in transfer
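Outside the trainer's API, the multi-loader training step the question describes can be emulated by zipping a source and a target loader inside the step itself (loader names are hypothetical; works with any iterables of batches):

```python
from itertools import cycle

def combined_batches(source_loader, target_loader):
    """Yield (source, target) batch pairs, cycling the shorter target loader
    so every source batch is consumed once per epoch."""
    for src, tgt in zip(source_loader, cycle(target_loader)):
        yield src, tgt
```

This matches the transfer-learning case where the source dataset is much larger than the target dataset.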
Missing functionality
Ability to have descriptions of variables alongside their descriptive statistics. Most often when EDA is being performed we are unfamiliar with a dataset. Being able to incorporate variable descriptions into the reports would be useful.
Proposed feature
Ability to provide a variable dictionary {"var_name": "description"} to the ProfileReport function. These des
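Independent of pandas-profiling's real API (which I haven't checked for such a parameter), the idea can be sketched in plain Python: a {"var_name": "description"} dict merged into each variable's descriptive statistics.

```python
import statistics

def profile(data, descriptions):
    """Combine descriptive statistics with human-readable variable descriptions."""
    report = {}
    for name, values in data.items():
        report[name] = {
            "description": descriptions.get(name, ""),  # empty if undocumented
            "mean": statistics.mean(values),
            "stdev": statistics.stdev(values),
        }
    return report

# Hypothetical variable dictionary, as proposed above
descriptions = {"age": "Age of respondent in years"}
report = profile({"age": [23, 35, 31]}, descriptions)
```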
For a more consistent and multi-functional global level of verbosity control, I suggest an enhancement that converts print(...) calls in the project to use the Python logging module.
import logging

# Then instead of print() use one of:
logging.info(...)
logging.debug(...)
logging.warning(...)
logging.error(...)
In that way verbosity can be globally
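A minimal sketch of the global switch this enables, assuming everything in the project goes through logging (the logger name is hypothetical):

```python
import logging

# One global knob: everything below WARNING is suppressed project-wide.
logging.basicConfig(level=logging.WARNING)

log = logging.getLogger("myproject")  # hypothetical project logger name

log.debug("detailed diagnostics")    # suppressed at WARNING level
log.warning("something went wrong")  # emitted
```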
docstrings for carla
Is it possible to include the docs from https://carla.readthedocs.io/en/latest/python_api/#carla.Actor in the Python library? I would prefer using the docs in my IDE rather than jumping into the browser when I have to look something up.
Thank you.
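As a sketch of what shipping the docs as docstrings would look like (the docstring text below is illustrative, not copied from the CARLA documentation):

```python
class Actor:
    """Illustrative stand-in for carla.Actor; a real change would embed the
    text from carla.readthedocs.io here so IDEs can surface it."""

    def get_location(self):
        """Return the actor's current location (illustrative stub)."""
        raise NotImplementedError
```

help(Actor), or an IDE hover, would then show the documentation without leaving the editor.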
In doc.pyx's line 590:
I can still do a good job of chunking with tokenization and POS tagging only, without the full parse. Also, in some languages a parse isn't available. This would leave more flexibility to users. I can comment this out in my copy of spaCy, but when I update spaCy to a new release, I have to chang
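A POS-only chunking heuristic along those lines can be sketched in plain Python, independent of spaCy's API (the tag set is assumed to be Universal POS tags):

```python
def noun_chunks(tagged):
    """Group maximal runs of ADJ/NOUN/PROPN tokens into chunks,
    using only POS tags, no dependency parse."""
    chunks, current = [], []
    for word, tag in tagged:
        if tag in ("ADJ", "NOUN", "PROPN"):
            current.append(word)
        elif current:
            chunks.append(" ".join(current))
            current = []
    if current:  # flush a chunk that runs to the end of the sentence
        chunks.append(" ".join(current))
    return chunks
```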