Updated May 24, 2020 - Python
audio-processing
Here are 628 public repositories matching this topic...
Description:
Mentioned before in #178.
Currently the developer documentation is lacking, to say the least. We need to make the process of contributing as easy as possible for developers.
Sampler Graphics
We need some good graphics for the main sampler screen, where you can do rudimentary editing of the samples played in the sequencer.
There are two screens. The main screen has the controls:
- Volume (this one may be changed later)
- Speed (playspeed; the sample also pitches up or down)
- Filter (in the middle the sound is unchanged; away from center it engages either a hi-pass
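The Speed control described above couples duration and pitch because it simply steps through the sample buffer at a different rate. A minimal Python sketch of that behavior, assuming a plain list of samples and linear interpolation (resample is a hypothetical helper, not this project's code):

```python
def resample(samples, speed):
    # Variable-rate playback with linear interpolation: speed > 1.0 plays
    # the sample faster (and pitched up), speed < 1.0 slower (pitched down).
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += speed
    return out
```

At speed 0.5 the output is twice as long (and sounds an octave lower); at 2.0 it is half as long (an octave higher).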
Updated Aug 5, 2019 - TeX
I searched everywhere in the code, but I couldn't work out what this file is for; it isn't even used. Could you explain it, please?
Thanks for the code :)
Updated May 24, 2020 - Python
We would benefit from a tutorial on how to convert DNN models trained either by HTK 3.5.2-BETA or by Kaldi. I am specifically interested in how you converted the Kaldi-trained DNN models later used in the Japanese speech dictation published here.
When it's run I receive this error:
ALSA lib pcm_dmix.c:1108:(snd_pcm_dmix_open) unable to open slave
However, the speedy-player example works, but only when installed via go get.
I'm on Ubuntu 19.04, running Sway, Go 1.12.
Edit:
I'm even more confused than before: go build app.go && ./app works without any errors, while go run app.go does not.
Updated Apr 2, 2020 - C++
Updated May 17, 2020 - C
<?xml version="1.0" encoding="utf-8"?>
<mlt>
<profile description="" width="640" height="360"/>
<producer id="16da6066-6d38-76bd-75de-5394693db212">
<property name="resource">http://kb-oss-daily.oss-cn-zhangjiakou.aliyuncs.com/video/2019/08/06/67399a58-8788-474b-90d0-56
Starting from v0.15.4, Giada supports a primitive form of mono/stereo plug-ins. Full support would be welcome, where the user can customize the channel layout through a channel matrix (see Reaper for an example).
Useful JUCE classes:
https://docs.juce.com/master/classAudioProcessorGraph.html
https://docs.juce.com/master/classAudioProcessorPlayer.html
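The channel-matrix idea can be sketched without JUCE: a matrix of gains, one row per output channel and one column per input channel, applied to every frame. A minimal Python illustration (apply_channel_matrix is a hypothetical name, not Giada's or JUCE's API):

```python
def apply_channel_matrix(frames, matrix):
    # frames: sequence of per-sample tuples, one value per input channel.
    # matrix[o][i]: gain from input channel i to output channel o.
    out = []
    for frame in frames:
        out.append(tuple(sum(g * s for g, s in zip(row, frame))
                         for row in matrix))
    return out
```

For example, the matrix [[1.0], [1.0]] duplicates a mono input to both stereo outputs, and [[0.5, 0.5]] mixes a stereo pair down to mono.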
Readings:
Updated Mar 27, 2020 - C#
Updated Dec 23, 2019 - Python
Updated Mar 26, 2020 - Swift
Updated May 10, 2020 - JavaScript
You already mention the licensing in the README; however, having a LICENSE or COPYING file in the repository would make it easier to spot which license is used.
Additionally, the individual code files currently have a header which smells like "All Rights Reserved". Ideally they'd receive a file header restating the license (a GPL header and a note that commercial licenses are avail
Updated Apr 28, 2019 - Java
Updated Apr 15, 2020 - C++
Updated Jan 28, 2020 - Python
Updated May 17, 2020 - C#
- Write JavaDoc comments for all class variables and functions.
- In many cases, there are multi-line JavaDoc comments where the opening /** and closing */ could be moved onto the same line as the comment itself.
- There are functions with unfinished JavaDoc comments.
- JavaDoc comments on classes can be removed if they have no useful information (Ex. a one-line description of t
Updated Mar 30, 2020 - Go
Better DDoc
Now that we have auto-generated docs at http://dplug.dpldocs.info/dplug.html, the documentation should be more welcoming.
Updated May 24, 2020 - C++
Updated Apr 11, 2020 - C++
Updated May 25, 2020 - Swift
The currently supported radices are 2, 3, 4, and 8. Small primes 5 and 7 should be added for parity with other FFT libraries. Larger powers of two might be desirable for performance.
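For background, radices 5 and 7 drop out of the general mixed-radix Cooley-Tukey recursion: split n on its smallest prime factor, recurse on the interleaved subsequences, and fall back to a direct DFT for prime lengths. A minimal, unoptimized Python sketch of the idea, not this library's implementation:

```python
import cmath

def dft(x):
    # Direct O(n^2) DFT, used as the base case for prime lengths.
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]

def smallest_factor(n):
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n

def fft(x):
    # Mixed-radix Cooley-Tukey: split n = p * m on the smallest prime p,
    # transform the p interleaved length-m subsequences, then combine
    # them with twiddle factors.
    n = len(x)
    p = smallest_factor(n)
    if p == n:  # prime length: direct DFT
        return dft(x)
    m = n // p
    subs = [fft(x[r::p]) for r in range(p)]
    out = [0j] * n
    for q in range(p):
        for k in range(m):
            out[q * m + k] = sum(
                subs[r][k] * cmath.exp(-2j * cmath.pi * r * (q * m + k) / n)
                for r in range(p))
    return out
```

A production version would replace the prime-length fallback with hand-tuned butterflies for 2, 3, 4, 5, 7, and 8, which is roughly what adding the requested radices amounts to.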
Updated May 6, 2020 - Python

Hi, I'm a beginner trying to learn how to customize or modify my own MediaPipe pipeline. I used neural networks to train on landmarks extracted from MediaPipe. Is there any way I can put my trained model back into MediaPipe to implement real-time gesture recognition? Thanks for your help.