hdf5
Consolidating multiple issues here:
- blacklist BED file with one entry #196
- bg2 file with a header #128
- malformed pairs file #135
- spaces instead of tabs in chromsizes file #124
- other chromsizes weirdness #142
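Several of the failure modes above involve malformed input files. As a hedged illustration only (this is not the project's actual parser), a chromsizes file is conventionally two tab-separated columns of chromosome name and length, and a small validator that rejects space-separated lines (the failure in #124) might look like:

```python
def parse_chromsizes(lines):
    """Parse chromsizes lines of the form 'name<TAB>length'.

    Rejects lines that use spaces instead of tabs, mirroring the
    failure mode reported in issue #124. Hypothetical sketch, not
    the library's real implementation.
    """
    sizes = {}
    for i, line in enumerate(lines, 1):
        line = line.rstrip("\n")
        if not line:
            continue  # skip blank lines
        if "\t" not in line:
            raise ValueError(f"line {i}: expected tab-separated fields, got {line!r}")
        name, length = line.split("\t")[:2]
        if not length.isdigit():
            raise ValueError(f"line {i}: length {length!r} is not an integer")
        sizes[name] = int(length)
    return sizes
```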
Feature Request
Currently, the ordering of dimensions described in the schema is in many cases not listed in the documentation. For example, for ElectricalSeries.data we should add to the docval that the dimensions are num_time | num_channels. This would help users avoid dimension-ordering errors.
This issue was motivated by #960
This issue is in part also related to #626
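To make the convention concrete, here is a minimal NumPy sketch of the time-first layout; the `check_electrical_series_shape` helper is hypothetical, not part of the library's API:

```python
import numpy as np

# Per the schema, ElectricalSeries.data is (num_time, num_channels):
# time varies along the first axis, channels along the second.
num_time, num_channels = 1000, 64
data = np.random.randn(num_time, num_channels)

def check_electrical_series_shape(arr, n_channels):
    """Hypothetical helper: fail loudly when the axes look transposed."""
    if arr.ndim != 2:
        raise ValueError("expected a 2-D array of shape (num_time, num_channels)")
    if arr.shape[1] != n_channels:
        raise ValueError(
            f"second axis has length {arr.shape[1]}, expected {n_channels} "
            "channels; did you pass (num_channels, num_time)?"
        )

check_electrical_series_shape(data, num_channels)
```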
Does CLUST_WTS not exist in GDL or am I missing something?
GDL> array=dist(500)
GDL> weights = CLUST_WTS(array, N_CLUSTERS = 500, n_iter=10)
% Function not found: CLUST_WTS
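In IDL, CLUST_WTS computes cluster centers ("weights") that CLUSTER then uses to classify samples. If GDL does not ship it, a rough NumPy stand-in is a plain k-means; this sketch is not a drop-in replacement for IDL's exact algorithm or keyword behavior:

```python
import numpy as np

def clust_wts(data, n_clusters=2, n_iterations=20):
    """Rough NumPy analog of IDL's CLUST_WTS via plain k-means.

    data: (n_samples, n_features); returns (n_clusters, n_features)
    cluster centers. Not numerically compatible with IDL's routine.
    """
    # Deterministic init: pick samples spread evenly across the dataset.
    idx = np.linspace(0, len(data) - 1, n_clusters).astype(int)
    centers = data[idx].astype(float)
    for _ in range(n_iterations):
        # Assign each sample to its nearest center.
        dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned samples.
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = data[labels == k].mean(axis=0)
    return centers
```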
Hello,
Considering your efficiency with pandas, numpy, and more, it would make sense for your module to work with even bigger data, such as audio (for example, .mp3 and .wav files). This would help a lot given the nature of audio, where one of the lowest and most common sampling rates is still 44,100 samples/sec. For a use case, I would consider vaex.open('Hu
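To illustrate the scale behind that request (this is stdlib `wave` plus NumPy, not vaex's API; audio support in `vaex.open` is exactly the hypothetical feature being asked for):

```python
import os
import tempfile
import wave

import numpy as np

# One second of a 440 Hz sine tone at CD-quality 44,100 samples/sec.
rate = 44100
t = np.arange(rate) / rate
samples = (np.sin(2 * np.pi * 440 * t) * 32767).astype(np.int16)

path = os.path.join(tempfile.gettempdir(), "tone.wav")
with wave.open(path, "wb") as w:
    w.setnchannels(1)     # mono
    w.setsampwidth(2)     # 16-bit samples
    w.setframerate(rate)
    w.writeframes(samples.tobytes())

# Read it back: even this one-second mono clip is 44,100 samples.
# An hour of stereo int16 audio at the same rate is ~635 MB, which is
# the scale where out-of-core, memory-mapped tools start to matter.
with wave.open(path, "rb") as w:
    data = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
```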