
data-lake

Here are 158 public repositories matching this topic...

lakeFS

Real Time Big Data / IoT Machine Learning (Model Training and Inference) with HiveMQ (MQTT), TensorFlow IO and Apache Kafka - no additional data store like S3, HDFS or Spark required

  • Updated Nov 5, 2020
  • Jupyter Notebook
amazon-s3-find-and-forget
matteofigus commented May 12, 2021

When an item is added to the queue with a type that does not match the corresponding Data Mapper, the job fails during planning without any information about which data mapper or queue item ID is involved.

Let's take a Data Mapper with an identifier of type int, for instance. If we add foo to the deletion queue, the find phase will fail with a log like this:

{
  "EventData": {
    "Error": "ValueError
Labels: bug, good first issue
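One way to surface the missing context would be to validate queue items against the Data Mapper's declared column type before planning, so the error names both the data mapper and the queue item. The sketch below is purely illustrative; `validate_queue`, the item field names, and the type table are assumptions, not the project's actual API.

```python
# Hypothetical sketch: check each deletion-queue item's match id against
# the data mapper's declared column type *before* planning, and report
# which data mapper and queue item are involved on a mismatch.
TYPE_CHECKS = {
    "int": lambda v: isinstance(v, int) and not isinstance(v, bool),
    "string": lambda v: isinstance(v, str),
}

def validate_queue(data_mapper_id, column_type, queue_items):
    """Return one error message per queue item whose type mismatches."""
    check = TYPE_CHECKS[column_type]
    errors = []
    for item in queue_items:
        if not check(item["MatchId"]):
            errors.append(
                f"Data mapper '{data_mapper_id}': queue item "
                f"'{item['DeletionQueueItemId']}' has match id "
                f"{item['MatchId']!r}, expected type {column_type}"
            )
    return errors
```

For the example above, adding the string `foo` to a queue whose mapper expects `int` would produce a single error identifying both IDs instead of an anonymous `ValueError` during the find phase.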
zzeekk commented Apr 11, 2022

Is your feature request related to a problem? Please describe.
Executing all tests already takes about 30 minutes. We should try to optimize that.

Describe the solution you'd like
Much of the time is spent preparing input data by writing test data to DataObjects (CSV or Hive). This could be significantly reduced by creating a custom DataObject where a DataFrame can be set directly as input data, which

Labels: enhancement, good first issue, technical-debt
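The proposal above can be sketched as a test-only DataObject whose input is set in memory rather than written to CSV or Hive first. This is a minimal sketch, assuming a `set_dataframe`/`get_dataframe` interface; the class and method names are illustrative, and a plain list of dicts stands in for a Spark DataFrame.

```python
# Hypothetical sketch of the proposed test helper: tests inject their
# input data directly instead of writing files to disk or Hive.
class InMemoryDataObject:
    def __init__(self, object_id):
        self.object_id = object_id
        self._df = None

    def set_dataframe(self, df):
        # The test sets its input data in memory; no file I/O needed.
        self._df = df

    def get_dataframe(self):
        if self._df is None:
            raise ValueError(f"No input data set for {self.object_id}")
        return self._df
```

Because no files are written or read, each test's setup cost drops to an in-memory assignment, which is where the proposal expects the bulk of the 30 minutes to be saved.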
saig0 commented Aug 27, 2021

If ZeeQS imports records from a Zeebe cluster with multiple partitions, variable updates, element instance transitions, and message correlations may not be persisted.

The problem is caused by the importer: it uses the record position as the ID for the entities, but positions are not unique across multiple partitions.

Related issue: https://github.com/camunda-community-hub

Labels: bug, good first issue, scope-hazelcast, hacktoberfest
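Since positions are unique only within a partition, one common fix is to derive the entity key from the partition ID and the position together. A minimal sketch, assuming an `entity_id` helper (the name and key format are illustrative, not ZeeQS's actual code):

```python
# Hypothetical sketch: combine partition id and record position into a
# composite key, so records with the same position on different
# partitions no longer overwrite each other.
def entity_id(partition_id: int, position: int) -> str:
    return f"{partition_id}-{position}"
```

With the position alone, a record at position 100 on partition 1 and another at position 100 on partition 2 would map to the same entity and one update would be lost; the composite key keeps them distinct.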