stream-processing
Here are 786 public repositories matching this topic...
Is your feature request related to a problem? Please describe.
A user in the community Slack wanted to filter out/select items from a JSON array of objects. Given records in the following format:
'[{"type": "AAA", "timestamp": "2021-09-27"}, {"type": "BBB", "timestamp": "2021-09-27"}, {"type": "AAA", "t

Add --add-exports jdk.management/com.ibm.lang.management.internal only when OpenJ9 is detected. Otherwise we get "WARNING: package com.ibm.lang.management.internal not in jdk.management" in the logs.
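In Benthos itself this kind of selection would normally be written as a Bloblang mapping; as a language-agnostic sketch of what the user asked for, the filtering of a JSON array of objects by their type field can be done in plain Go. The record struct and filterByType helper below are illustrative, not Benthos APIs:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// record mirrors the objects in the example payload.
type record struct {
	Type      string `json:"type"`
	Timestamp string `json:"timestamp"`
}

// filterByType keeps only the records whose "type" field matches want.
func filterByType(raw []byte, want string) ([]record, error) {
	var recs []record
	if err := json.Unmarshal(raw, &recs); err != nil {
		return nil, err
	}
	out := recs[:0]
	for _, r := range recs {
		if r.Type == want {
			out = append(out, r)
		}
	}
	return out, nil
}

func main() {
	payload := []byte(`[{"type": "AAA", "timestamp": "2021-09-27"},
		{"type": "BBB", "timestamp": "2021-09-27"}]`)
	kept, err := filterByType(payload, "AAA")
	if err != nil {
		panic(err)
	}
	fmt.Println(kept) // only the "AAA" records remain
}
```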
Under the hood, the Benthos csv input uses the standard encoding/csv package's csv.Reader struct.
The current implementation of csv input doesn't allow setting the LazyQuotes field.
We have a use case where we need to set the LazyQuotes field in order to make things work correctly.
This comment says that the message ID is optional, but for the SQL transport it is a mandatory attribute, which causes confusion. Is it possible to fix this, or did I get something wrong?
https://github.com/ThreeDotsLabs/watermill/blob/b9928e750ba673cf93d442db88efc04706f67388/message/message.go#L20
to_dict() equivalent
I would like to convert a DataFrame to a JSON object the same way that Pandas does with to_dict().
toJSON() treats rows as elements in an array, and ignores the index labels. But to_dict() uses the index as keys.
Here is an example of what I have in mind:
function to_dict(df) {
  const rows = df.toJSON();
  const entries = df.index.map((e, i) => ({ [e]: rows[i] }));
  return Object.assign({}, ...entries);
}
It can be very difficult to piece together a reasonable estimate of the history of events from the current worker logs, because none of them have timestamps. To that end, I think we should add timestamps to the logs.
This has some cons:
- We can't just use @printf like we have been until now. We need to either include a timestamp in every @printf call (laborious and error prone) or c
Describe the bug
offset in select statements does not work.
To Reproduce
eric=> create table t1 (v1 int not null, v2 int not null, v3 int not null);
CREATE_TABLE
eric=> insert into t1 values (1,1,4), (5,1,4), (1,9,1), (9,8,1), (0,2,3);
INSERT 0 5
eric=> select * from t1 order by v1 limit 3;
v1 | v2 | v3
----+----+----
0 | 2 | 3
1 | 1 | 4
1 | 9 | 1
(3 rows)
For example, given a simple pipeline such as:
Pipeline p = Pipeline.create();
p.readFrom(TestSources.items("the", "quick", "brown", "fox"))
.aggregate(aggregator)
.writeTo(Sinks.logger());
I'd like aggregator to be something requiring a non-serialisable dependency to do its work.
I know I can do this:
Pipeline p = Pipeline.create();
p.readFrom(TestSource
The mapcat function seems to choke if you pass in a mapping function that returns a stream instead of a sequence:
user> (s/stream->seq (s/mapcat (fn [x] (s/->source [x])) (s/->source [1 2 3])))
()
Aug 18, 2019 2:23:39 PM clojure.tools.logging$eval5577$fn__5581 invoke
SEVERE: error in message propagation
java.lang.IllegalArgumentException: Don't know how to create ISeq from: manifold.
I figured out a way to get the (x, y, z) data points for each frame from one hand previously, but I'm not sure how to do that for the new holistic model that they released. I am trying to get all the landmark data points for both hands as well as parts of the chest and face. Does anyone know how to extract the holistic landmark data / print it to a text file? Or at least give me some directions as to h