stream-processing
Here are 688 public repositories matching this topic...
There is no technical difficulty in supporting the includeValue option; it looks like we are just missing it at the API level.
See SO question
Is your feature request related to a problem? Please describe.
A user requested the ability to map an array into fields, where the array elements draw their keys from a predefined set:
Sample Data:
- FOO: [{"key":"a", "val":2}, {"key":"b", "val":4}]
- FOO: [{"key":"b", "val":3}]
Sample Output:
| a | b |
|---|---|
| 2 | 4 |
| null | 3 |
Describe the solution you'd like
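A hedged sketch of the requested mapping in plain Python, with hypothetical names (the real processor would implement this natively): each record's key/val pairs are pivoted into columns drawn from the predefined key set, with null (None) where a key is absent.

```python
# Illustrative only: pivot key/val arrays into fixed columns.
# Keys "a" and "b" are the predefined set from the sample data.
KEYS = ["a", "b"]

def pivot(record):
    """Map [{'key': ..., 'val': ...}, ...] to a row keyed by KEYS."""
    present = {item["key"]: item["val"] for item in record}
    return {k: present.get(k) for k in KEYS}  # None where a key is absent

rows = [pivot([{"key": "a", "val": 2}, {"key": "b", "val": 4}]),
        pivot([{"key": "b", "val": 3}])]
print(rows)  # [{'a': 2, 'b': 4}, {'a': None, 'b': 3}]
```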
I suggest adding incrby as an operator to the redis processor.
Use case
incrby can be used for counting, which comes up in many situations. For example, say we have this data:
{"name": "morkel", "owes": "government", "amount": 100}
{"name": "ash", "owes": "morkel", "amount":
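Not the processor itself, but a sketch of the counting semantics: Redis INCRBY adds an integer to a key, creating it at 0 if absent. The dict below stands in for Redis; with a real client this would be e.g. r.incrby(key, amount) in redis-py. The second record's amount is hypothetical (it is truncated in the excerpt above).

```python
# Stand-in for Redis: a dict emulating INCRBY (create at zero, then add).
counters = {}

def incrby(key, amount):
    counters[key] = counters.get(key, 0) + amount
    return counters[key]

# Tally total amounts owed per creditor from the sample records.
records = [
    {"name": "morkel", "owes": "government", "amount": 100},
    {"name": "ash", "owes": "morkel", "amount": 25},  # 25 is a made-up value
]
for r in records:
    incrby("owed:" + r["owes"], r["amount"])

print(counters)  # {'owed:government': 100, 'owed:morkel': 25}
```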
For an implementation of #126 (PostgreSQL driver with SKIP LOCKED), I create a SQL table for each consumer group containing the offsets ready to be consumed. The names for these tables are built by concatenating a prefix, the name of the topic, and the name of the consumer group. In some of the test cases in the test suite, UUIDs are used for both the topic and the consumer group. Each UUID has
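A quick illustration of why that matters (the prefix is hypothetical; a canonical UUID string is 36 characters): PostgreSQL silently truncates identifiers to 63 bytes, so a prefix plus two UUIDs exceeds the limit, and distinct tables can collide after truncation.

```python
import uuid

prefix = "offsets_"        # hypothetical prefix, not the actual one
topic = str(uuid.uuid4())  # 36-character canonical form
group = str(uuid.uuid4())  # 36-character canonical form
table = f"{prefix}{topic}_{group}"

print(len(table))  # 8 + 36 + 1 + 36 = 81, beyond PostgreSQL's 63-byte limit
print(table[:63])  # what PostgreSQL would actually keep of the name
```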
Is your feature request related to a problem?
Currently, the trace filter page allows the latency filter's max value to be lower than its min value. This always yields 0 traces as the result.
Describe the solution you'd like
Show a warning when the max value is less than the min value, and don't allow the latency filter to be applied. Perhaps keep the apply button in the latency filter in a disabled state.
Descri
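The requested guard could be as simple as the following sketch (hypothetical names, not the actual UI code):

```python
def can_apply_latency_filter(min_ms, max_ms):
    """Return (allowed, warning); keep the apply button disabled when not allowed."""
    if max_ms < min_ms:
        return False, "Max latency must not be less than min latency"
    return True, ""

print(can_apply_latency_filter(100, 50))  # blocked, with a warning
print(can_apply_latency_filter(50, 100))  # allowed
```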
It can be very difficult to piece together a reasonable estimate of the history of events from the current worker logs, because none of them have timestamps.
To that end, I think we should add timestamps to the logs.
This has some cons:
- We can't just use @printf like we have been until now. We need to either include a timestamp in every @printf call (laborious and error prone) or c
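For comparison (not this project's code), Python's logging module illustrates the alternative to per-call timestamps: configure a formatter once, and every record is stamped automatically.

```python
import io
import logging

# One formatter stamps every record; no per-call timestamp needed.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
log = logging.getLogger("worker-demo")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("task started")
line = stream.getvalue().strip()
print(line)  # e.g. "2021-06-03 12:00:00,000 INFO task started"
```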
For example, given a simple pipeline such as:

```java
Pipeline p = Pipeline.create();
p.readFrom(TestSources.items("the", "quick", "brown", "fox"))
 .aggregate(aggregator)
 .writeTo(Sinks.logger());
```

I'd like aggregator to be something requiring a non-serialisable dependency to do its work.
I know I can do this:

```java
Pipeline p = Pipeline.create();
p.readFrom(TestSource
```
I figured out a way to get the (x, y, z) data points for each frame from one hand previously, but I'm not sure how to do that for the new holistic model that they released. I am trying to get all the landmark data points for both hands as well as parts of the chest and face. Does anyone know how to extract the holistic landmark data / print it to a text file? Or at least give me some directions as to h
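A hedged sketch: MediaPipe Holistic results expose landmark lists as results.left_hand_landmarks, results.right_hand_landmarks, results.pose_landmarks, and results.face_landmarks, each with .landmark entries carrying x/y/z. The helper below is self-contained (a namedtuple stands in for real landmarks) and shows one way to format them for a text file; the actual MediaPipe calls are left as comments since they need a camera frame.

```python
from collections import namedtuple

# Stand-in for a MediaPipe NormalizedLandmark (x, y, z attributes).
Landmark = namedtuple("Landmark", ["x", "y", "z"])

def format_landmarks(name, landmarks):
    """Render one landmark list as 'name index x y z' lines."""
    return [f"{name} {i} {lm.x:.4f} {lm.y:.4f} {lm.z:.4f}"
            for i, lm in enumerate(landmarks)]

# With real MediaPipe, the lists would come from something like:
#   import mediapipe as mp
#   with mp.solutions.holistic.Holistic() as holistic:
#       results = holistic.process(rgb_frame)
#   left = results.left_hand_landmarks.landmark   # if detected, else None
#   pose = results.pose_landmarks.landmark

# Fake per-frame data standing in for the detected landmarks:
frame = {"left_hand": [Landmark(0.1, 0.2, 0.3)],
         "pose": [Landmark(0.5, 0.5, 0.0)]}
lines = []
for name, lms in frame.items():
    lines.extend(format_landmarks(name, lms))
print("\n".join(lines))  # write these lines to a text file per frame
```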