confluent
Description
It seems the documentation on https://docs.confluent.io/ for this package covers version 1.0.1, but the current version of the package is 1.2.0 (see screenshot).
Description
A comment in the Consumer example states:
// partition offsets can be committed to a group even by consumers not
// subscribed to the group. in this example, auto commit is disabled
// to prevent this from occurring.
On the [next line](https://github.com/confluentinc/confluent-kafka-dotnet/blob/d0be307fece25482b096a906fe9a6931629b8e7e/examples/Consumer/Program.c
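As a rough sketch of what the comment describes: the client configuration that disables auto commit (shown here with librdkafka-style config keys, which the .NET and Python clients share; broker address and group id are placeholders) would look like this.

```python
# Sketch only: consumer configuration with auto commit disabled, so that
# offsets are committed explicitly by the application rather than in the
# background. Broker address and group id are illustrative placeholders.
consumer_conf = {
    "bootstrap.servers": "localhost:9092",
    "group.id": "example-group",
    "enable.auto.commit": False,  # commit manually instead
}

print(consumer_conf["enable.auto.commit"])
```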
Compacted topics require keys when producing. If you forget to specify a key, the Kafka CLI tool kafka-console-producer returns a helpful error within a few seconds:
CorruptRecordException: This message has failed its CRC checksum, exceeds the valid size, has a null key for a compacted topic, or is otherwise corrupt.
If you forget to specify a key with POST /topics/(string:topic_name), tho
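For comparison with the CLI behavior above, here is a sketch of a keyed record payload for the REST Proxy's POST /topics/(string:topic_name) endpoint (v2 JSON embedded format; the key and value shown are illustrative placeholders). For a compacted topic each record needs a non-null key.

```python
import json

# Sketch: a REST Proxy v2 "records" payload. Records bound for a
# compacted topic must each carry a non-null "key"; omitting it leads to
# the failure behavior discussed above. Names and values are illustrative.
payload = {
    "records": [
        {"key": "user-42", "value": {"name": "Alice"}},
    ]
}

body = json.dumps(payload)
print(body)
```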
The title might seem a bit vague but I don't know how to describe it any better tbh :-)
Anyway, this is what happened: I got some 500 responses from the Schema Registry, and all I could see in the logs was:
[2020-04-02 16:03:35,048] INFO 100.96.14.58 - - [02/Apr/2020:16:03:34 +0000] "PUT /config/some-topic-value HTTP/1.1" 500 69 502 (io.confluent.rest-utils.requests)
The logs di
https://kafka.apache.org/documentation/#brokerconfigs indicates that log.dirs defaults to null/empty, so log.dir wins.
| Name | Description | Type | Default | Valid Values | Importance | Dynamic Update Mode |
|---|---|---|---|---|---|---|
| log.dir | The directory in which the log data is kept (supplemental for log.dirs property) | string | /tmp/kafka-logs | | high | read-only |
| log.dirs | The directories in which the log data is kept. If not set, the value in log.dir is used. | string | null | | high | read-only |
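The precedence described above can be sketched as follows (the function name is illustrative, not broker code):

```python
from typing import List, Optional

def effective_log_dirs(log_dir: str, log_dirs: Optional[str]) -> List[str]:
    # log.dirs wins when it is set; null/empty means "not set", in which
    # case the broker falls back to the single log.dir path.
    if log_dirs:
        return log_dirs.split(",")
    return [log_dir]

print(effective_log_dirs("/tmp/kafka-logs", None))       # ['/tmp/kafka-logs']
print(effective_log_dirs("/tmp/kafka-logs", "/d1,/d2"))  # ['/d1', '/d2']
```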
getting the following error message.
ksql> CREATE STREAM CUSTOMERS_SRC WITH (KAFKA_TOPIC='asgard.public.customers', VALUE_FORMAT='AVRO');
Avro schema for message values on topic asgard.public.customers does not exist in the
The Kafka connector stops if the connection with the database breaks when using the Sink connector. There should be a retry in the Sink connector as well; this is already available in the Source connector.
Hi, I'd like to sink data from kafka topics only if the document does not already exist in elasticsearch index (based on id). I.e. use the op_type create when indexing documents.
From what I can see, that does not seem currently possible.
It could be a new "CREATE" value for the write.method configuration property.
W
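A sketch of what the proposed configuration might look like. Note that "CREATE" is the hypothetical value being requested here, not an existing option (current versions of the connector accept values like insert and upsert), and the topic name is a placeholder.

```python
# Hypothetical sketch of the feature request above: a write.method value
# of "CREATE" that would map to Elasticsearch's op_type=create, so
# documents whose id already exists in the index are not overwritten.
# "CREATE" is NOT an existing option; insert/upsert are what exist today.
es_sink_config = {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "topics": "example-topic",   # placeholder topic
    "write.method": "CREATE",    # proposed, not currently supported
}

print(es_sink_config["write.method"])
```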
The docs are not clear on the relation between tasks.max and the number of consumers attached to topic partitions.
Hi, we have 25 topics, each with 2 partitions. We created a connect config using topics.regex so that the connector consumes from all 25 topics, with tasks.max set to 50 (i.e. one unique consumer per partition), but when we describe the consumer group, only two unique consumers are attached to the 50 partitions.
here's the config:
{
  "name": "testConnectorfinalTest04",
  "config": {
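For context on the behavior reported above: tasks.max is only an upper bound. The connector implementation decides how many tasks it actually creates, and each task's consumer can be assigned many partitions. A sketch of the relevant fields, with the values that were elided in the original report left as "...":

```python
# Sketch: tasks.max caps how many tasks Kafka Connect may start, but it
# does not force one consumer per partition; if the connector creates
# fewer tasks, each task's consumer simply receives more partitions.
connect_config = {
    "name": "testConnectorfinalTest04",
    "config": {
        "connector.class": "...",  # elided in the original report
        "topics.regex": "...",     # elided in the original report
        "tasks.max": "50",         # upper bound, not a guarantee
    },
}

print(connect_config["config"]["tasks.max"])
```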
The parse_args function in main.py returns a Namespace object.
Returning a dict/kwargs-style object would make using parsed command-line arguments slightly simpler (for example, in MockArgs for unit tests).
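As a sketch of the suggestion: the standard library already allows converting a Namespace into a plain dict via vars(), which may be the lighter-weight fix (the argument name below is illustrative, not taken from main.py):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--broker", default="localhost:9092")  # illustrative arg
args = parser.parse_args(["--broker", "kafka:9092"])

# vars() exposes the Namespace as a plain dict, usable as **kwargs
opts = vars(args)
print(opts["broker"])
```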
I am aware there have been a number of Community-version-related issues, e.g. #85 and #219, but I was wondering how you would feel about a PR adding some more documentation and maybe configuration to this stack.
As a user, I would love to have it documented specifically that I can use community-only components by setting certain parameters, and also how to exclude certain packages from the install stack.
And
I was trying to use the Connector to ingest data into an S3 bucket which is encrypted.
I received the following cryptic error:
{
  "name": "s3-sink-connector",
  "connector": {
    "state": "RUNNING",
    "worker_id": "10.153.19.140:8787"
  },
  "tasks": [
    {
      "state": "FAILED",
      "trace": "org.apache.kafka.connect.errors.Connect-
The "Mirror of Linkedin's Camus" description on the repo homepage is wrong by most reasonable definitions of the word "mirror": the contents of the repos appear to be quite different.

APIs working with offsets, like CommitOffsets and StoreOffsets, do not specify what the offset means. I recently found out that the offset means the next offset to consume, not the last offset just processed. I think it's a common trap that many users fall into; some examples are:
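The off-by-one convention described above can be sketched as follows; the function name is illustrative and not part of any client API:

```python
def offset_to_commit(last_processed_offset: int) -> int:
    # The committed offset is where a restarted consumer resumes reading,
    # so it must point one PAST the message that was already handled.
    return last_processed_offset + 1

print(offset_to_commit(41))  # 42
```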