streaming-data
Here are 282 public repositories matching this topic...
Describe the bug
The "Lorem ipsum"-style messages generated by default are sent as null when the corresponding fields are left unchanged (e.g. only the headers are edited). This behaviour is not obvious to the user.
Set up
docker-compose -f kafka-ui.yaml
Steps to Reproduce
- Press the produce message button
- Change the headers (add "1" to the end, then backspace it)
- P
Implement progressive versions of hopping and tumbling windows:
- Both window macro methods should get added versions that take an additional parameter
- The parameter should represent the time interval that should be used to produce intermediate results of aggregations
- The parameter should be a clean divisor of the tumble size for tumbling windows and the hop size for hopping windows
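The requested behaviour can be sketched independently of any particular streaming library. The class and method names below are illustrative only; the sketch shows a tumbling window of size `window_size` that also emits intermediate aggregates every `step` ticks, where `step` must be a clean divisor of the window size:

```python
# Sketch: a tumbling window that also emits intermediate (progressive)
# aggregates. All names here are illustrative, not from any specific library.

class ProgressiveTumblingWindow:
    def __init__(self, window_size, step):
        # The intermediate interval must be a clean divisor of the window size.
        if window_size % step != 0:
            raise ValueError("step must evenly divide window_size")
        self.window_size = window_size
        self.step = step
        self.values = []

    def add(self, timestamp, value):
        """Add a value; return (aggregate, is_final) when a boundary is crossed."""
        self.values.append(value)
        emitted = None
        if (timestamp + 1) % self.window_size == 0:
            emitted = (sum(self.values), True)   # final result, window closes
            self.values = []
        elif (timestamp + 1) % self.step == 0:
            emitted = (sum(self.values), False)  # intermediate result
        return emitted
```

With `window_size=6` and `step=2`, a sum aggregation over the stream 1..6 would emit partial sums after the second and fourth elements and the final sum when the window closes.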
Hello, I have a CSV file with 9 features and 9 target columns, and I want to test 2 regression models on this data (which should be presented as a stream).
When I test MultiTargetRegressionHoeffdingTree and RegressorChain on this data I get a bad R² score, but when I normalize my data with scikit-learn first I get a pretty good R² score. The problem is that I should not use sci
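In a streaming setting, the usual alternative to fitting scikit-learn's `StandardScaler` offline is to standardise with running statistics updated one sample at a time. A minimal sketch using Welford's algorithm (the class and method names here are illustrative, not from any specific streaming library):

```python
import math

# Sketch: incremental standardisation with Welford's online algorithm, as a
# streaming stand-in for an offline scikit-learn StandardScaler fit.

class RunningStandardScaler:
    def __init__(self):
        self.n = 0
        self.mean = {}
        self.m2 = {}  # running sum of squared differences from the mean

    def learn_one(self, x):
        """Update the running mean/variance with one sample (a feature dict)."""
        self.n += 1
        for k, v in x.items():
            mean = self.mean.get(k, 0.0)
            delta = v - mean
            mean += delta / self.n
            self.mean[k] = mean
            self.m2[k] = self.m2.get(k, 0.0) + delta * (v - mean)

    def transform_one(self, x):
        """Scale one sample with the statistics seen so far."""
        out = {}
        for k, v in x.items():
            var = self.m2.get(k, 0.0) / self.n if self.n > 0 else 0.0
            std = math.sqrt(var)
            out[k] = (v - self.mean[k]) / std if std > 0 else 0.0
        return out
```

Calling `learn_one` before (or after) each `transform_one` keeps the scaling consistent with the part of the stream observed so far, which is what makes it usable ahead of a streaming regressor.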
It is currently hard for users to track which versions of dependencies they are getting and which versions they should use when adding extra dependencies to their projects. This results in code like this in our own example projects:
libraryDependencies ++= Seq(
  "com.lightbend.akka" %% "akka-stream-alpakka-file" % "1.1.2",
  "com.typesafe.akka" %% "akka-http-spray-js
Is your feature request related to a problem? Please describe.
Today the user needs to manually deploy UDF JARs and reference-data CSVs to the blob location.
Describe the solution you'd like
Enable the user to choose a file on the local disk, which the web portal will then upload to the right location.
Problem description
Hi, smart_open currently supports hdfs://. viewfs:// is also a Hadoop file system, similar to hdfs://. I find that TensorFlow's tf.io.gfile.GFile can support both hdfs:// and viewfs://; would it be possible for smart_open to add support for viewfs:// as well? Thanks~