
artificial-intelligence

The branch of computer science dealing with the reproduction or mimicking of human-level intelligence, self-awareness, knowledge, consciousness, and thought in computer programs.

Here are 7,513 public repositories matching this topic...

lingvisa
lingvisa commented Mar 29, 2020

In doc.pyx, line 590:

 if not self.is_parsed:
            raise ValueError(Errors.E029)

I can still do a good job of chunking with tokenization and POS tagging only, without the full parse. Also, in some languages a parser isn't available. Removing this check would give users more flexibility. I can comment it out in my copy of spaCy, but every time I update spaCy to a new release, I have to change it again.
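A parse-free fallback could look like this rough sketch (illustrative only, not spaCy's implementation): group contiguous determiner/adjective/noun runs using only `token.pos_`, which the tagger alone provides.

```python
def pos_noun_chunks(doc):
    """Yield (start, end) token spans of contiguous DET/ADJ/NOUN/PROPN runs.

    A rough POS-only chunker sketch; `doc` is any sequence of tokens
    exposing a `pos_` attribute, as spaCy tokens do.
    """
    start = None
    for i, token in enumerate(doc):
        if token.pos_ in ("DET", "ADJ", "NOUN", "PROPN"):
            if start is None:
                start = i  # open a new chunk
        elif start is not None:
            yield (start, i)  # close the chunk at the first non-nominal token
            start = None
    if start is not None:
        yield (start, len(doc))  # chunk runs to the end of the doc
```

This only approximates true noun chunks, but it runs without the dependency parse that `Errors.E029` guards against.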

xfan1024
xfan1024 commented Jan 7, 2020

I found that in examples/retinaface.cpp, if OMP acceleration is enabled, a memory leak seems to occur whenever a face is detected, but I can't pin down the exact cause.

Notably, if the OMP directives in the qsort_descent_inplace function are commented out, the problem disappears.

static void qsort_descent_inplace(std::vector<FaceObject>& faceobjects, int left, int right)
{
    int i = left;
    int j = right;
    float p = faceobjects[(left + right) / 2].prob;
    ...
    // #pragma omp parallel sections
    {
        // #pragma
olistic
olistic commented May 25, 2018

Currently, when entering epic mode, the README is frozen at the last level of the tower. When you're trying to fine-tune the score for a level other than the last one, it would be helpful to have that level's README available. The proposal: when entering epic mode, update the README with all levels, one following the other.

Example:

# Starbolt - beginner
loomlike
loomlike commented Apr 15, 2019

Description

Add Azure Notebooks to our SETUP doc.
I tested Google Colab and Azure Notebooks to run reco-repo without needing to create any DSVM or compute myself, and it works really well with simple tweaks to the notebooks (e.g., some libraries have to be installed manually).

I think it would be good to add at least Azure Notebooks to our SETUP doc, where users can easily test out our repo w/o

bodgergely
bodgergely commented Jun 12, 2017

Upon environment timeout, the Python client only receives the error message "Environment in wrong status for call to observations()". It might be good to provide more information about why the environment is no longer running (due to timeout, etc.).

if (!is_running(self)) {
  PyErr_SetString(PyExc_RuntimeError,
                  "Environment in wrong status for call to observations()");
  return NULL;
}
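On the Python side, a richer error could carry the stop reason along with the message. A rough sketch, where the exception class, the dict-based environment handle, and its field names are all hypothetical, not part of the library's API:

```python
class EnvironmentStoppedError(RuntimeError):
    """Raised when observations() is called on a stopped environment."""

    def __init__(self, reason):
        super().__init__(
            "Environment in wrong status for call to observations() "
            "(reason: %s)" % reason
        )
        self.reason = reason


def observations(env):
    # `env` stands in for the client's environment handle; a real client
    # would record `stop_reason` when it notices the environment died.
    if not env.get("running", False):
        raise EnvironmentStoppedError(env.get("stop_reason", "unknown"))
    return env["obs"]
```

With this, a timeout surfaces as "(reason: timeout)" instead of an opaque status error.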
RootChenLQ
RootChenLQ commented Oct 30, 2019

There is a nan value in multistepBucketLikelihoods when I use my own dataset and set _NUM_RECORDS to 6000. The error is listed below.

multistepBucketLikelihoods = {1: {499: 1.0}, 5: {499: nan, 501: 0.0}}
File "D:\ProgramData\PythonWorkspace\nupic\docs\examples\opf\test.py", line 52, in runHotgym
    fiveStepConfidence = allPredictions[5][fiveStep]
File "D:\ProgramData\PythonWorkspace\nup
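On the application side, a small guard can avoid indexing into nan likelihoods before picking the most likely bucket. A hypothetical helper, not part of NuPIC's API:

```python
import math


def best_bucket(likelihoods):
    """Return the bucket with the highest finite likelihood, or None.

    `likelihoods` is a {bucket: probability} dict such as the inner dicts
    of multistepBucketLikelihoods; nan entries are skipped.
    """
    finite = {b: p for b, p in likelihoods.items() if not math.isnan(p)}
    if not finite:
        return None  # every likelihood was nan
    return max(finite, key=finite.get)
```

Applied to the dict from the error above, `{499: nan, 501: 0.0}` yields bucket 501 instead of crashing on the nan entry.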
tensorlayer
0xtyls
0xtyls commented Jan 3, 2020

I understand that these two Python files show two different methods of constructing a model. The original n_epoch is 500, which works perfectly for both files. But if I change n_epoch to 20, only tutorial_mnist_mlp_static.py achieves a high test accuracy (~0.97); tutorial_mnist_mlp_static_2.py only reaches 0.47.

The models built in these two files look the same to me (the s

MaJian199609
MaJian199609 commented Mar 16, 2019

I'm submitting a ... (check one with "x")

[ ] bug report
[ ] help wanted
[ ] feature request

Current behavior

Expected/desired behavior

Reproduction of the problem

If the current behavior is a bug or you can illustrate your feature request better with an example, please provide the steps to reproduce.

What is the expected behavior?

LeonardoDavid
LeonardoDavid commented Nov 14, 2018
  • Operating System: Windows
  • Serpent.AI Version: not sure
  • Game: (Cuphead) Executable
  • Backend: GPU

I followed the Hello World tutorial and created a plugin for the Cuphead executable game. When I launch the game, I get this error:

(AI) C:\Users\ANTONY\SerpentAI>serpent launch cuphead
Traceback (most recent call last):
  File "c:\programdata\anaconda3\envs\ai\lib\runpy
pytorch-lightning
louis2889184
louis2889184 commented Mar 8, 2020

I notice that one can evaluate the model on a list of validation/test data loaders. Is it also possible to extract data from multiple train_data_loaders in the training step in the current version? This feature might be useful in tasks like transfer learning or semi-supervised learning, which usually maintain multiple datasets during training (e.g., source and target datasets in transfer
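A plain-Python sketch of the idea (illustrative only, not Lightning's API): step through two sized loaders together, cycling the shorter one so every batch of the longer loader is paired with some batch of the other, as is common in transfer learning.

```python
from itertools import cycle


def combined_batches(source_loader, target_loader):
    """Yield (source_batch, target_batch) pairs from two sized iterables.

    The shorter loader is cycled so the epoch length matches the longer
    one; `source_loader`/`target_loader` are hypothetical names standing
    in for any sized batch iterables.
    """
    if len(source_loader) >= len(target_loader):
        longer, shorter, flip = source_loader, target_loader, False
    else:
        longer, shorter, flip = target_loader, source_loader, True
    for a, b in zip(longer, cycle(shorter)):
        yield (b, a) if flip else (a, b)
```

In a real training step, each yielded pair would feed the source and target losses of one optimization step.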

Bradley-Butcher
Bradley-Butcher commented Mar 5, 2020

Missing functionality
Ability to have descriptions of variables alongside their descriptive statistics. Most often, when EDA is performed, we are unfamiliar with the dataset. Being able to incorporate variable descriptions into the reports would be useful.

Proposed feature
Ability to provide a variable dictionary {"var_name": "description"} to the ProfileReport function. These des
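A minimal sketch of how such a dictionary could be merged into per-variable statistics. The helper name and dict shapes here are hypothetical, not pandas-profiling's API:

```python
def attach_descriptions(stats, descriptions):
    """Merge a {"var_name": "description"} dict into per-variable stats.

    `stats` maps variable names to their statistics dicts; variables
    without an entry in `descriptions` get an empty description.
    """
    return {
        var: {**var_stats, "description": descriptions.get(var, "")}
        for var, var_stats in stats.items()
    }
```

The report renderer could then show each variable's description next to its statistics block.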

auphofBSF
auphofBSF commented Aug 21, 2019

A more consistent, multi-functional, global level of verbosity control: I suggest an enhancement that converts print(...) calls in the project to the Python logging module.

import logging
# Then, instead of print(), use one of:
logging.info("...")
logging.debug("...")
logging.warning("...")
logging.error("...")

That way, verbosity can be controlled globally
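Concretely, a single configuration call could act as the global verbosity switch, assuming project code logs through the logging module rather than print():

```python
import logging

# One switch controls verbosity for the whole run: everything below
# WARNING is suppressed, everything at or above it is emitted.
logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("project")  # "project" is a placeholder logger name

log.debug("detailed trace")      # suppressed at WARNING level
log.info("progress message")     # suppressed at WARNING level
log.warning("something is off")  # emitted
```

Switching `level=logging.WARNING` to `logging.DEBUG` turns all messages back on without touching any call sites.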

carla