python-api
Here are 159 public repositories matching this topic...
The version of device_copy_n that takes a CUDA stream argument
https://github.com/clara-parabricks/GenomeWorks/blob/74e6424c156a7ee15b4137d4788a4257ee6482c4/common/base/include/claraparabricks/genomeworks/utils/cudautils.hpp#L138
should be renamed to device_copy_n_async to make its non-blocking behavior more obvious.
The blocking three-argument version of device_copy_n should keep its current name.
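The blocking-versus-asynchronous distinction the rename is meant to surface can be sketched in plain Python (hypothetical stand-ins — the real functions are C++ templates in cudautils.hpp, and the CUDA stream is modeled here as a simple list of in-flight operations):

```python
import threading

def device_copy_n(src, dst, n):
    """Blocking copy: when this returns, dst[:n] is fully populated."""
    dst[:n] = src[:n]

def device_copy_n_async(src, dst, n, stream):
    """Non-blocking copy: enqueue the work on `stream` and return immediately.
    The caller must synchronize the stream before reading dst."""
    t = threading.Thread(target=lambda: dst.__setitem__(slice(0, n), src[:n]))
    stream.append(t)  # `stream` modeled as a list of pending operations
    t.start()

def stream_synchronize(stream):
    """Block until every operation enqueued on `stream` has completed."""
    for t in stream:
        t.join()
    stream.clear()

src = list(range(8))
dst = [0] * 8
stream = []
device_copy_n_async(src, dst, 4, stream)
stream_synchronize(stream)  # only after this is dst[:4] safe to read
```

With the `_async` suffix, the need for the synchronize call is visible at the call site, which is exactly the point of the proposed rename.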
When running the regression, the resulting logs end up in third_party/tests/$TEST/.... This does not go unnoticed by git, so a git status shows a ton of new, untracked files.
To reproduce:
make
make regression
git status # observe all the files
Build or test artifacts should never clutter the rest of the code-base (we should regard them as read-only in this respect).
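One way to keep such artifacts out of git status in the meantime, sketched as a config fragment (the path pattern is taken from the report above and the real layout may differ; ignore rules only affect untracked files, so tracked sources under this directory are unaffected):

```
# .gitignore — hide generated regression artifacts from `git status`
third_party/tests/
```

The proper fix, of course, is for the regression target to write its logs into a dedicated build or output directory rather than into the source tree.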
Flags should be renamed to be positive statements rather than negative ones, if possible.
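As a small illustration of that guideline (flag names here are invented for the example, not taken from the project), compare a negated flag with its positive counterpart in argparse:

```python
import argparse

parser = argparse.ArgumentParser()
# Discouraged: a negative flag forces readers to double-negate
# ("--no-color absent" means color is on).
#   parser.add_argument("--no-color", action="store_true")
# Preferred: the flag name is a positive statement of what True means.
parser.add_argument("--color", action="store_true", default=False,
                    help="enable colored output")

args = parser.parse_args(["--color"])
```

The positive form reads naturally in both code (`if args.color:`) and help text.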
The current default value for the rows_per_chunk parameter of the CSV writer is 8, which means that the input table is by default broken into many small slices that are written out sequentially. This reduces performance by an order of magnitude in some cases. In the Python layer, the default is the number of rows (i.e. the table is written out in a single pass). We can follow this by setting rows_per_chunk to the number of rows by default.
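A plain-Python sketch of the effect being described (the real parameter belongs to the C++ CSV writer; write_csv_chunked below is an illustrative helper, not library API):

```python
import csv
import io

def write_csv_chunked(rows, rows_per_chunk):
    """Write `rows` in slices of `rows_per_chunk`, counting write passes."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    passes = 0
    for start in range(0, len(rows), rows_per_chunk):
        writer.writerows(rows[start:start + rows_per_chunk])
        passes += 1
    return buf.getvalue(), passes

rows = [[i, i * i] for i in range(1000)]
out_small, passes_small = write_csv_chunked(rows, rows_per_chunk=8)         # 125 slices
out_full, passes_full = write_csv_chunked(rows, rows_per_chunk=len(rows))   # 1 pass
```

Both calls produce identical output, but the small default incurs 125 separate write passes where one would do — the per-slice overhead, not the data volume, is what hurts.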