DataOps
DataOps is an automated, process-oriented methodology used by analytics and data teams to improve the quality and reduce the cycle time of data analytics. While DataOps began as a set of best practices, it has matured into a distinct, independent approach to data analytics. DataOps applies to the entire data lifecycle, from data preparation to reporting, and recognizes the interconnected nature of the data analytics team and information technology operations.
Here are 110 public repositories matching this topic...
We need to support writing to and reading from TFRecord format.
Reference doc: https://www.tensorflow.org/tutorials/load_data/tfrecord
More on TypeTransformers can be found here.
Related PR that adds PyTorch tensor and module as Flyte types: flyteorg/flytekit#1032
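At a high level, a flytekit TypeTransformer converts between a Python value and Flyte's serialized literal representation. The following is a rough, self-contained sketch of that pattern only — the classes here are stubs defined in the snippet, not the real flytekit API, and a real TFRecord transformer would serialize to the TFRecord wire format rather than pass bytes through.

```python
# Hypothetical sketch of the TypeTransformer pattern (stub classes,
# NOT the real flytekit API).
class Literal:
    """Stand-in for Flyte's serialized literal value."""
    def __init__(self, value):
        self.value = value

class TypeTransformer:
    """Converts between a Python type and a Literal."""
    def to_literal(self, python_val):
        raise NotImplementedError
    def to_python_value(self, literal):
        raise NotImplementedError

class TFRecordTransformer(TypeTransformer):
    """Would wrap TFRecord serialization; here it just round-trips bytes."""
    def to_literal(self, python_val: bytes) -> Literal:
        return Literal(python_val)
    def to_python_value(self, literal: Literal) -> bytes:
        return literal.value

t = TFRecordTransformer()
lit = t.to_literal(b"example-record")
roundtrip = t.to_python_value(lit)
```

The real implementation would plug into flytekit's type engine so that task signatures annotated with a TFRecord type are serialized and deserialized automatically.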
The default RubrixLogHTTPMiddleware record mapper for token classification expects a structured input including a text field. This can make prediction model inputs a bit cumbersome. The default mapper could also accept flat strings as inputs:
def token_classification_mapper(inputs, outputs):
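A rough sketch of what such a dual-input mapper might look like — this is hypothetical illustration of the idea, not Rubrix's actual implementation, and the returned record shape is assumed:

```python
# Hypothetical: a mapper that accepts either a flat string or a
# structured dict with a "text" field (not Rubrix's real mapper).
def token_classification_mapper(inputs, outputs):
    if isinstance(inputs, str):
        text = inputs                  # flat string input
    else:
        text = inputs.get("text", "")  # structured input with a "text" field
    tokens = text.split()
    # Assumed record shape for illustration purposes only
    return {"text": text, "tokens": tokens, "predictions": outputs}

record = token_classification_mapper("Hello world", [("GREETING", 0, 5)])
```

With this change, callers could pass the raw prediction text directly instead of wrapping it in a dict.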
Sending a REST call to delete a job specification returns a 404, whereas the gRPC call works fine. Steps to reproduce:
curl -X DELETE "http://localhost:9100/v1/project/my-project/namespace/kush/helloworld" -H "accept: application/json"
Task Overview
- Currently timestamp_column is the only setting that needs to be configured globally in the model config section (usually it is configured in properties.yml under elementary in the config tag).
- Passing the timestamp_column as a test param would enable running multiple tests with different timestamp columns, for example running a test with an updated_at column.
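A sketch of how a per-test parameter might look in a dbt schema file — the test name and parameter placement are illustrative assumptions, not Elementary's confirmed syntax:

```yaml
# Hypothetical: overriding timestamp_column per test instead of globally
models:
  - name: orders
    tests:
      - elementary.table_anomalies:
          timestamp_column: updated_at
      - elementary.table_anomalies:
          timestamp_column: created_at
```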
Currently, both the Kafka and Influx sinks log only the data (Row) that is being sent.
Add support for logging column names along with the data points, similar to the implementation in the log sink.
This will enable users to correlate the data points with column names.
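The idea can be sketched language-agnostically — here in Python, not the sink's actual Go code — by pairing each value with its column name before emitting the log line:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("sink")

# Hypothetical sketch: make the log line self-describing by
# correlating each row value with its column name.
def log_row(columns, row):
    labeled = dict(zip(columns, row))  # column name -> value
    log.info("pushing row: %s", labeled)
    return labeled

labeled = log_row(["ts", "temp_c"], ["2022-06-06T00:00:00Z", 21.5])
```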
Zap configurations should be pushed to the gRPC middleware here: cmd/setup.go#L47
In the golang client, consumers get a dynamic message instance after parsing. Add an example in the docs showing how to use the dynamic message instance to get values of different types in consumer code.
List of protobuf types to cover
- timestamp
- duration
- bytes
- message type
- struct
- map
Siren creates an Alertmanager config and syncs it with Alertmanager. The Alertmanager config can change for the same subscriptions if their order changes. We should adopt a sorting convention and stick to it when creating the Alertmanager config.
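The fix amounts to sorting subscriptions by a stable key before rendering. A minimal sketch of that idea — in Python rather than Siren's own Go code, with illustrative field names:

```python
# Hypothetical: render config routes in a deterministic order so the
# generated config is identical regardless of input order.
def render_routes(subscriptions):
    ordered = sorted(subscriptions, key=lambda s: (s["namespace"], s["urn"]))
    return [f'{s["namespace"]}/{s["urn"]}' for s in ordered]

a = render_routes([{"namespace": "odpf", "urn": "x"},
                   {"namespace": "core", "urn": "y"}])
b = render_routes([{"namespace": "core", "urn": "y"},
                   {"namespace": "odpf", "urn": "x"}])
assert a == b  # same set of subscriptions -> same config
```

Any stable, unique sort key works; the point is that the rendered config becomes a pure function of the subscription set, not of its ordering.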
We are using the protobuf-git configuration as described at https://cloudhut.dev/docs/features/protobuf#git-repository
In our repository the proto files live within a proto directory, which seems to be very common, and contains 5 levels of nested folders. Currently KOWL searches only the first 5 levels of the checkout for .proto files, so our last level is not considered.