data-ops
DataOps is an automated, process-oriented methodology used by analytics and data teams to improve the quality and reduce the cycle time of data analytics. While DataOps began as a set of best practices, it has since matured into an independent approach to data analytics. It applies to the entire data lifecycle, from data preparation to reporting, and recognizes the interconnected nature of the data analytics team and information technology operations.
Under the hood, the Benthos csv input uses the standard encoding/csv package's csv.Reader struct.
The current implementation of the csv input doesn't allow setting the LazyQuotes field.
We have a use case where we need to set LazyQuotes for parsing to work correctly.
We have recently made dataset versions traversable via the dataset tab on our lineage page. We would like to do the same for job versions. We want to be able to start with a job, navigate across its versions, and then navigate across the runs for a given job version. We would also like this intermediate page to show detailed information about each job version. One prereq for this is
Current behavior
You get an error if you try to upload a file with a name that already exists.
Proposed behavior
The task should take in an `overwrite` argument and pass it to [this line](https://github.com/PrefectHQ/prefect/blob/6cd24b023411980842fa77e6c0ca2ced47eeb83e/src/prefect/