audio
So I have a Samsung LC27HG70 monitor that apparently supports up to 600 cd/m². In Windows I want to be able to watch HDR content at optimized settings. I searched for "mpv + hdr" and multiple results came up, including GitHub issues from a few years back, and the more I read the more confused I got. I tried applying each different setting; they were all okay in a different fashion, so I was not su…
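For orientation, a minimal mpv.conf sketch of the HDR tone-mapping options involved. The option names are real mpv options, but the values here (e.g. the 600 cd/m² peak taken from the monitor above) are assumptions for illustration, not recommended settings:

```
# Sketch only: values are assumptions for a ~600 cd/m2 panel
vo=gpu
target-peak=600        # approximate peak brightness of the display
tone-mapping=hable     # one of several tone-mapping curves mpv offers
hdr-compute-peak=yes   # measure scene brightness dynamically
```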
Steps to reproduce:
- Queue several songs
- Click a column header, e.g. Title, to sort by it. The sorting indicator (arrow) appears.
- Click Shuffle button
Expected:
- The sorting indicator disappears, as the songs are not sorted anymore.
Actual result:
- The arrow is still there.
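The expected behavior above can be sketched as state logic (the class and method names here are hypothetical, not the player's actual code):

```python
import random


class Playlist:
    """Hypothetical sketch of the queue state behind the sort indicator."""

    def __init__(self, songs):
        self.songs = list(songs)
        self.sort_key = None  # which column the arrow is shown on, if any

    def sort_by(self, key):
        self.songs.sort(key=lambda song: song[key])
        self.sort_key = key  # arrow appears on this column

    def shuffle(self):
        random.shuffle(self.songs)
        self.sort_key = None  # expected: indicator cleared on shuffle

    @property
    def indicator_visible(self):
        return self.sort_key is not None
```

The reported bug corresponds to `shuffle()` reordering the songs without resetting `sort_key`, leaving the arrow visible.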
API documentation
Using GStreamer, isPlaying() always returns true even when paused, while with the plain ofVideoPlayer it returns false when paused. It would be great to have consistent behavior.
See: arturoc/ofxGStreamer#27
User story:
I'm watching a YouTube video. The person speaking in the video pauses for a moment between sentences. The pauses are long enough to be considered 'silent' by BackgroundMusic, so the music starts playing, but then the person continues speaking, triggering the music playback to stop again. Note: there's also a delay between the moment the person starts speaking a…
The documentation does not have a key code for the numpad dot. Is this intentional? It comes up as Unknown when pressed. The same is true for the Num Lock and Caps Lock keys, although that arguably makes sense.
Shouldn't there be a NumpadDot key code?
Using the Google speech recognizer, I get a well-formatted dictionary when I specify the show_all=True option. When I try the same with the Sphinx recognizer, it returns a pocketsphinx object (show_all=False returns the expected string output). This behavior is reproducible across all audio files I've tried. Code example:
Input:
import speech_recognition as sr
r = sr.Recognizer()
We're only using the pipeline functionality as a convenient way to string together transformations and cache intermediates; there's probably no need to bring in all the extra baggage that comes along with sklearn.
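A minimal sketch of what "string together transformations and cache intermediates" could look like without sklearn (all names here are hypothetical, not the project's actual API):

```python
class SimplePipeline:
    """Hypothetical chain of named transformations with intermediate caching."""

    def __init__(self, steps):
        self.steps = steps  # ordered (name, callable) pairs
        self._cache = {}    # (step_name, input_key) -> cached result

    def transform(self, key, x):
        """Apply each step in order, reusing cached intermediates for `key`."""
        for name, fn in self.steps:
            cache_key = (name, key)
            if cache_key not in self._cache:
                self._cache[cache_key] = fn(x)
            x = self._cache[cache_key]
        return x
```

Repeated calls with the same `key` skip recomputation, which is the main thing sklearn's Pipeline-with-memory was providing here.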
The spectrogram plugin has a frequenciesDataUrl parameter that seems to be used by loadFrequenciesData(url) to pull frequency data from a JSON file and draw it to the canvas. Is there any documentation on how to use this parameter, or on what format it expects the data to be in? Generating spectrograms at load time can take upwards of 10 seconds for a simple 2-minute mp3, and being able to preprocess and load…
http://xxapp.github.io/blog/2017/02/analyze-rythm-js
Best of:
- "Although it may seem to have little practical value, Okazari clearly knows how to enjoy life."
- "Okazari's playful spirit is worth learning from; sometimes code is not just for economic benefit, but also for happiness."
Checking the other players, we pass props.display when styling, but on Facebook we don't pass it.
https://github.com/CookPete/react-player/blob/master/src/players/Facebook.js#L102
example from YouTube (https://github.com/CookPete/react-player/blob/master/src/players/YouTube.js)
const { display } = this.props
const style = {
  width: '100%',
  height: '100%',
  display
}
Description
The docstring examples use a mix of the stateful and object-oriented APIs, which is not great style. At some point we should audit these for consistency and adherence to best practices and recommendations from the matplotlib developers.
This will be a huge and tedious undertaking, similar in scope to #783, so I'm not attaching it to a specific major version yet.
This should be added to support the Tor folks!
Motivation
In supercollider/supercollider#4572 we discussed the need for better documentation on when collection functions test for equality vs. identity. For instance, SequenceableCollection:indexOf tests on identity, but you wouldn't know it f…
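For readers more familiar with Python, the same equality-vs-identity distinction can be illustrated like this (`list.index` tests equality, while the helper below tests identity):

```python
def index_of_identical(seq, obj):
    """Index of the first element that IS obj (identity test), or -1."""
    for i, item in enumerate(seq):
        if item is obj:
            return i
    return -1


a, b = [1], [1]  # equal but distinct objects
seq = [a, b]
assert seq.index(b) == 0                  # equality: finds `a`, since a == b
assert index_of_identical(seq, b) == 1    # identity: finds `b` itself
```

The documentation request in the issue is exactly about making clear which of these two behaviors each collection method uses.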
pyAudioAnalysis comes with various pre-trained models in the /data subfolder, but where can I find the documentation for each pre-trained model, i.e. what it was trained on and what it is best used for?
Mumble client.
It would be nice if it were possible to give other users a nickname (visible only to yourself) next to their username, so people can be quickly recognized even if their username is similar to another user's.
Suggestion:
This could be set by right-clicking the user's name, via an entry called "Set nickname".
The name would then look like the following in the UI:
Update Dependencies
I was looking through flatpak/io.github.qtox.qTox.json and noticed we are specifying ffmpeg 4.0.1, which has ~60 CVEs. The dependencies table lists an even older…
Related to #685.
When the phone is in airplane mode and a new message arrives while offline, a notification for the new message appears once the phone goes back online, but the badge is not updated (i.e. if there was no badge counter before the message arrived, there still is none afterwards).
When using the Makefile directly, the man page gets installed. However, this is not the case with CMake, and I cannot find an option for it. Please also allow installing documentation, such as man pages, when using CMake.
I think there's an extra #endif in tinyfiles.h. Around line 88 there's this:
#if defined( TINYPATH_IMPLEMENTATION )
#endif TINYPATH_IMPLEMENTATION
That #endif doesn't look like it should be there, and I get compilation errors about the final #endif having no matching #if.

The documentation mentions groups but does not show how to define them.
It would be nice to have an example of defining groups added.