stream-processing
Here are 794 public repositories matching this topic...
Is your feature request related to a problem? Please describe.
The mentioned functions only require that the types be Comparable. Presently, they do not support BYTES or the time types.
Describe the solution you'd like
Add support for BYTES and the time types to TopKDistinct, GREATEST, and LEAST.
Additional context
confluentinc/ksql#8912 is about handling TopK for these types.
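For intuition, the semantics only depend on the values being totally ordered, which is why Comparable is the single requirement. A rough Python model of TopK over distinct values (not ksqlDB's implementation; names are illustrative):

```python
import heapq

def topk_distinct(values, k):
    """Return the k largest distinct values.

    Only ordering comparisons are used, so any totally ordered
    type works the same way -- ints, strings, bytes, timestamps.
    """
    return heapq.nlargest(k, set(values))

# bytes compare lexicographically, so they need no special casing
topk_distinct([b"a", b"c", b"b", b"c"], 2)  # -> [b'c', b'b']
```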
IMap.get gets blocked forever when Cassandra NoNodeAvailableException is thrown from the MapStore
Describe the bug
The IMap.get() call blocks forever when a com.datastax.oss.driver.api.core.NoNodeAvailableException is thrown from the MapStore.
Setup:
- Hazelcast v5.1.1
- Cassandra SDK v4.14.1
- Client-server setup
- Read-through cache: the Cassandra DB is called on a cache miss.
Test scenario: read-through cache with the Cassandra server down.
We tested the scenario whe
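Independent of Hazelcast's eventual fix, the generic mitigation for a loader that can block indefinitely is to bound the call with a timeout. A minimal Python sketch of the pattern (load_from_db is a hypothetical stand-in, not the MapStore API):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def load_from_db(key):
    # Hypothetical stand-in for a MapStore.load() that fails when
    # the backing store is unreachable.
    raise RuntimeError("no node was available")

_pool = ThreadPoolExecutor(max_workers=4)

def get_with_timeout(key, timeout_s=2.0):
    """Run the loader off-thread and fail within timeout_s instead of
    letting the caller block forever."""
    future = _pool.submit(load_from_db, key)
    try:
        return future.result(timeout=timeout_s)
    except (TimeoutError, RuntimeError):
        return None  # bounded failure: report a miss/error, don't hang

get_with_timeout("k1")  # -> None
```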
This comment says that the message ID is optional, but for the SQL transport it is a mandatory attribute, which is confusing. Is it possible to fix this, or did I misunderstand something?
https://github.com/ThreeDotsLabs/watermill/blob/b9928e750ba673cf93d442db88efc04706f67388/message/message.go#L20
to_dict() equivalent
I would like to convert a DataFrame to a JSON object the same way that Pandas does with to_dict(). toJSON() treats rows as elements of an array and ignores the index labels, whereas to_dict() uses the index labels as keys.
Here is an example of what I have in mind:
function to_dict(df) {
  const rows = df.toJSON();
  const entries = df.index.map((e, i) => ({ [e]: rows[i] }));
  // Merge the single-entry objects into one { label: row } map.
  return Object.assign({}, ...entries);
}
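For reference, the Pandas behaviour being emulated is df.to_dict(orient="index"), which keys the result by index label. The target shape can be modelled with plain Python dicts (toy data; no Pandas or danfo required):

```python
def to_dict(index, rows):
    """Zip index labels with row objects -- the same shape Pandas
    returns from df.to_dict(orient="index")."""
    return {label: row for label, row in zip(index, rows)}

index = ["r1", "r2"]
rows = [{"a": 1, "b": 2}, {"a": 3, "b": 4}]
to_dict(index, rows)
# -> {'r1': {'a': 1, 'b': 2}, 'r2': {'a': 3, 'b': 4}}
```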
Is your feature request related to a problem? Please describe.
Approximate Count Distinct (approx_count_distinct) is a function widely supported across database and streaming systems. It uses the HyperLogLog algorithm to do distinct counting at very low cost compared with an exact COUNT(DISTINCT).
https://docs.microsoft.com/en-us/sql/t-sql/functions/approx-count-distinct-transact
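To make the trade-off concrete, here is a toy Python sketch of the HyperLogLog idea (stdlib hashing, fixed register count; production implementations add bias correction and sparse encodings):

```python
import hashlib
import math

def hll_estimate(items, p=10):
    """Tiny HyperLogLog: m = 2**p registers, each holding the longest
    run of leading zeros seen among the hashes routed to it."""
    m = 1 << p
    regs = [0] * m
    for item in items:
        h = int.from_bytes(hashlib.sha1(str(item).encode()).digest()[:8], "big")
        idx = h & (m - 1)                     # low p bits pick the register
        w = h >> p                            # remaining 64-p bits feed the rank
        rank = (64 - p) - w.bit_length() + 1  # leading zeros + 1
        regs[idx] = max(regs[idx], rank)
    alpha = 0.7213 / (1 + 1.079 / m)          # standard bias constant for large m
    est = alpha * m * m / sum(2.0 ** -r for r in regs)
    zeros = regs.count(0)
    if est <= 2.5 * m and zeros:              # small-range correction:
        est = m * math.log(m / zeros)         # fall back to linear counting
    return int(est)

hll_estimate(range(10_000))  # typically within a few percent of 10,000
```

With p=10 the sketch uses only 1024 small registers, which is the point: memory stays constant no matter how many distinct values flow through.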
It can be very difficult to piece together a reasonable estimate of the history of events from the current worker logs because none of them have timestamps.
So to that end, I think we should add timestamps to the logs.
This has some cons:
- We can't just use @printf like we have been until now. We need to either include a timestamp in every @printf call (laborious and error-prone) or c
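One common way to avoid touching every call site is to stamp messages in a single shared wrapper. A minimal Python sketch of that shape (illustrative names only; the project's @printf is its own macro):

```python
import datetime

def log(fmt, *args):
    """Single choke point: every message is timestamped here, so the
    individual call sites stay unchanged."""
    ts = datetime.datetime.now().isoformat(timespec="seconds")
    return f"[{ts}] " + (fmt % args)

log("worker %d finished batch %s", 3, "b-17")
# e.g. "[2022-05-24T10:15:00] worker 3 finished batch b-17"
```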
For example, given a simple pipeline such as:

Pipeline p = Pipeline.create();
p.readFrom(TestSources.items("the", "quick", "brown", "fox"))
 .aggregate(aggregator)
 .writeTo(Sinks.logger());

I'd like aggregator to be something requiring a non-serialisable dependency to do its work.
I know I can do this:

Pipeline p = Pipeline.create();
p.readFrom(TestSource
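The usual shape of the answer in any distributed pipeline is the same: ship a serialisable factory with the task and build the dependency lazily on the worker, instead of capturing the dependency itself in the closure. A generic Python sketch of that pattern (hypothetical names, not Jet's ServiceFactory API):

```python
class LazyDependency:
    """Defers construction of a non-serialisable resource until first
    use on the worker; only the plain factory travels with the task."""

    def __init__(self, factory):
        self._factory = factory   # cheap, shippable callable
        self._instance = None     # built on the worker, never shipped

    def get(self):
        if self._instance is None:
            self._instance = self._factory()
        return self._instance

def make_client():
    # stands in for opening a real, non-serialisable connection
    return {"client": object()}

dep = LazyDependency(make_client)
dep.get() is dep.get()  # -> True: built once, then reused
```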
The mapcat function seems to choke if you pass in a mapping function that returns a stream instead of a sequence:
user> (s/stream->seq (s/mapcat (fn [x] (s/->source [x])) (s/->source [1 2 3])))
()
Aug 18, 2019 2:23:39 PM clojure.tools.logging$eval5577$fn__5581 invoke
SEVERE: error in message propagation
java.lang.IllegalArgumentException: Don't know how to create ISeq from: manifold.
I previously figured out a way to get the (x, y, z) data points for each frame from one hand, but I'm not sure how to do that for the new Holistic model they released. I am trying to get all the landmark data points for both hands, as well as parts of the chest and face. Does anyone know how to extract the Holistic landmark data and print it to a text file, or at least give me some direction as to h