stream-processing
Here are 760 public repositories matching this topic...
Describe the bug
There are a few functions (AS_MAP, ARRAY_SORT and URL_ENCODE_PARAM, for example) that don't null-check their inputs. We should add null checks, because otherwise these errors go uncaught and end up crashing queries. An example of one that has been fixed recently is confluentinc/ksql#8400
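A minimal sketch of the null-check pattern the issue asks for; this is not the actual ksqlDB UDF code, and the class and method names (NullSafeUdf, urlEncodeParam) are invented stand-ins:

```java
// Hedged sketch of the requested null-check pattern; NOT actual ksqlDB code.
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class NullSafeUdf {
    // Return null for a null input instead of letting an uncaught
    // NullPointerException crash the whole query.
    public static String urlEncodeParam(String input) {
        if (input == null) {
            return null;
        }
        return URLEncoder.encode(input, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(urlEncodeParam("a b&c")); // a+b%26c
        System.out.println(urlEncodeParam(null));    // null
    }
}
```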
Avoid controlling the endless loop with an exception in loadAnonymousClasses, e.g. by extracting the class loading into a method:

private boolean tryLoadClass(String innerClassName) {
    try {
        parent.loadClass(innerClassName);
    } catch (ClassNotFoundException ex) {
        return false;
    }
    return true;
}

Under the hood, the Benthos csv input uses the standard encoding/csv package's csv.Reader struct.
The current implementation of csv input doesn't allow setting the LazyQuotes field.
We have a use case where we need to set the LazyQuotes field so that fields containing bare quote characters parse correctly instead of erroring.
I'm attempting to write a health check for a service that uses Watermill, and I'd like to be able to easily determine whether the router is still up and running.
The Router exposes a channel at .Running(), and it's possible to infer if the router is closed with a little indirection:
running := false
go func() {
    r := router.Running()
    for {
        // Assumed completion of the truncated snippet: Running() is closed
        // once the router starts, so a receive on the closed channel
        // returns immediately with open == false.
        if _, open := <-r; open {
            continue
        }
        running = true
        return
    }
}()
It would be really useful if there were a method that could insert a column into an existing DataFrame between two existing columns. I know about .addColumn, but that seems to place the new column at the end of the DataFrame.
For example:
df.print()
A | B
======
7 | 5
3 | 6
df.insert({ afterColumn: "A", newColumnName: "C", data: [4, 1], inplace: true })
df.print()
It can be very difficult to piece together a reasonable estimate of the history of events from the current worker logs, because none of them have timestamps.
To that end, I think we should add timestamps to the logs.
This has some cons:
- We can't just use @printf like we have been until now. We need to either include a timestamp in every @printf call (laborious and error prone) or c
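The second option hinted at above, wrapping the logging in one place rather than editing every call site, can be sketched as follows. This is illustrative only, not the project's actual logging code; TimestampedLog and line are invented names:

```java
// Sketch of the "wrap once" alternative: route every message through a
// single helper that prepends the timestamp, instead of touching each call.
import java.time.Instant;

public class TimestampedLog {
    // Formats the message and prefixes an ISO-8601 UTC timestamp in one place.
    public static String line(String fmt, Object... args) {
        return Instant.now() + " " + String.format(fmt, args);
    }

    public static void main(String[] args) {
        System.out.println(line("worker %d started", 3));
    }
}
```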
For example, given a simple pipeline such as:
Pipeline p = Pipeline.create();
p.readFrom(TestSources.items("the", "quick", "brown", "fox"))
.aggregate(aggregator)
.writeTo(Sinks.logger());
I'd like aggregator to be something requiring a non-serialisable dependency to do its work.
I know I can do this:
Pipeline p = Pipeline.create();
p.readFrom(TestSource
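To see why the aggregator can't simply capture such a dependency in a lambda, here is a self-contained, JDK-only sketch (no Jet APIs; Dependency, SerializableSupplier and canSerialize are invented names): a serializable lambda that closes over a non-serialisable object fails at serialization time.

```java
// JDK-only illustration of the capture problem; NOT Jet code.
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.function.Supplier;

public class CaptureDemo {
    // Stands in for a non-serialisable dependency (e.g. an HTTP client).
    static class Dependency {
        String call() { return "ok"; }
    }

    interface SerializableSupplier<T> extends Supplier<T>, Serializable {}

    // True if Java serialization accepts the object, false if a captured
    // value is not serializable.
    public static boolean canSerialize(Object o) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (NotSerializableException e) {
            return false;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Dependency dep = new Dependency();
        SerializableSupplier<String> capturing = () -> dep.call();
        System.out.println(canSerialize(capturing)); // false: dep is captured
    }
}
```

In Jet itself the usual remedy is to create the dependency on the cluster member instead of capturing it; check the Jet documentation for the exact factory API.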
The mapcat function seems to choke if you pass in a mapping function that returns a stream instead of a sequence:
user> (s/stream->seq (s/mapcat (fn [x] (s/->source [x])) (s/->source [1 2 3])))
()
Aug 18, 2019 2:23:39 PM clojure.tools.logging$eval5577$fn__5581 invoke
SEVERE: error in message propagation
java.lang.IllegalArgumentException: Don't know how to create ISeq from: manifold.

I figured out a way to get the (x, y, z) data points for each frame from one hand previously, but I'm not sure how to do that for the new holistic model that they released. I am trying to get all the landmark data points for both hands as well as parts of the chest and face. Does anyone know how to extract the holistic landmark data/print it to a text file? Or at least give me some directions as to h