# confluent

Here are 123 public repositories matching this topic...

TrueWill commented Sep 11, 2019

Compacted topics require keys when producing. If you forget to specify a key, the Kafka CLI tool kafka-console-producer returns a helpful error within a few seconds:

CorruptRecordException: This message has failed its CRC checksum, exceeds the valid size, has a null key for a compacted topic, or is otherwise corrupt.
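The CLI error above is avoided by producing with an explicit key. A minimal sketch, assuming a local broker and a hypothetical topic name (older Kafka versions use `--broker-list` instead of `--bootstrap-server`):

```shell
# Produce key:value records to a compacted topic (localhost broker assumed)
kafka-console-producer \
  --bootstrap-server localhost:9092 \
  --topic my-compacted-topic \
  --property parse.key=true \
  --property key.separator=:
# Then type records as key:value, e.g.
#   user-42:{"name":"Alice"}
```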

If you forget to specify a key with POST /topics/(string:topic_name), though…
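For comparison, the REST Proxy v2 produce request carries the key per record; a rough sketch, where the host, topic name, and payload are assumptions:

```shell
# REST Proxy v2: each record can supply an explicit key
curl -X POST http://localhost:8082/topics/my-compacted-topic \
  -H "Content-Type: application/vnd.kafka.json.v2+json" \
  -d '{"records": [{"key": "user-42", "value": {"name": "Alice"}}]}'
```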

mausch commented Apr 2, 2020

The title might seem a bit vague, but I don't know how to describe it any better, to be honest. :-)

Anyway, this is what happened: I got some 500 responses from the Schema Registry, and all I could see in the logs was:

[2020-04-02 16:03:35,048] INFO 100.96.14.58 - - [02/Apr/2020:16:03:34 +0000] "PUT /config/some-topic-value HTTP/1.1" 500 69  502 (io.confluent.rest-utils.requests)

The logs di…
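For context, the failing request is the Schema Registry compatibility-config endpoint; a well-formed call looks roughly like this (host and subject name are assumptions):

```shell
# Set the compatibility level for a single subject
curl -X PUT http://localhost:8081/config/some-topic-value \
  -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  -d '{"compatibility": "BACKWARD"}'
```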

zachariahyoung commented Mar 19, 2020

Getting the following error message:

ksql> CREATE STREAM CUSTOMERS_SRC WITH (KAFKA_TOPIC='asgard.public.customers', VALUE_FORMAT='AVRO');
Avro schema for message values on topic asgard.public.customers does not exist in the…
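One way to check the error's premise is to ask the Schema Registry directly whether a value schema is registered for the topic. ksqlDB looks up the subject `<topic>-value`; the registry URL here is an assumption:

```shell
# List registered subjects, then fetch the latest value schema, if any
curl http://localhost:8081/subjects
curl http://localhost:8081/subjects/asgard.public.customers-value/versions/latest
```

If the second call returns a 40401 "Subject not found" error, the schema was never registered, which would explain the CREATE STREAM failure.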

abhisheksahani commented Oct 20, 2019

Hi, we have 25 topics, each with 2 partitions. We created a connect config using topics.regex so that the connector consumes from all 25 topics, with tasks.max set to 50, i.e. one unique consumer per partition. But when we describe the consumer group, only two unique consumers are attached to the 50 partitions.

here's the config:
{
"name": "testConnectorfinalTest04",
"config": {
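A quick way to confirm how many consumers actually joined is to describe the connector's consumer group. For sink connectors the group is named `connect-<connector-name>`; the bootstrap server below is an assumption:

```shell
# Show member/partition assignments for the connector's consumer group
kafka-consumer-groups \
  --bootstrap-server localhost:9092 \
  --group connect-testConnectorfinalTest04 \
  --describe
```

Note that tasks.max is only an upper bound: the connector itself decides how many tasks to spawn, so fewer members than tasks.max in this output is not necessarily a misconfiguration.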

Fobhep commented Apr 30, 2020

I am aware there have been a number of Community-version-related issues, e.g. #85 and #219, but I was wondering how you would feel about a PR adding some more documentation and maybe configuration to this stack.

As a user, I would love to have it documented specifically that I can use community-only components by setting certain parameters, and also exclude certain packages from the install stack.
And…

Real Time Big Data / IoT Machine Learning (Model Training and Inference) with HiveMQ (MQTT), TensorFlow IO and Apache Kafka - no additional data store like S3, HDFS or Spark required

  • Updated Apr 6, 2020
  • Jupyter Notebook
