dbt
Here are 335 public repositories matching this topic...
[MACRO] Valid json
Let's prepare a mixin for interacting with Roles and Policies with the Python client, in case users want to use the API directly. It should offer not only the list, get, etc. methods, but also utility methods, such as updating a default role. It should wrap the following logic:
import requests
import json
# Get the ID
data_consumer = requests.get("http://localhost:8585/api/v1/roles/name/DataCo-
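A minimal sketch of what such a mixin could look like. The method names, the endpoint for updating a role, and the JSON-Patch payload are assumptions for illustration, not the final client API:

```python
class RolesMixin:
    """Hypothetical sketch of the proposed Roles/Policies mixin."""

    base_url = "http://localhost:8585/api/v1"  # local server, as in the snippet above

    @property
    def session(self):
        # A real client would hold an authenticated requests.Session here.
        if not hasattr(self, "_session"):
            import requests
            self._session = requests.Session()
        return self._session

    def get_role_by_name(self, name):
        resp = self.session.get(f"{self.base_url}/roles/name/{name}")
        resp.raise_for_status()
        return resp.json()

    def update_default_role(self, name):
        # Wrap the "get the ID, then update" logic from the snippet above.
        role = self.get_role_by_name(name)
        resp = self.session.patch(
            f"{self.base_url}/roles/{role['id']}",
            headers={"Content-Type": "application/json-patch+json"},
            json=[{"op": "replace", "path": "/default", "value": True}],
        )
        resp.raise_for_status()
        return resp.json()
```

Keeping the HTTP calls behind a session property makes the mixin easy to test with a fake session and easy to reuse across other resource mixins.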
Task Overview
- Currently, timestamp_column is the only configuration that needs to be set globally in the model config section (usually it is configured in properties.yml under elementary, in the config tag).
- Passing the timestamp_column as a test param would enable running multiple tests with different timestamp columns, for example running a test with an updated_at column
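With the proposed change, a per-test override might look roughly like this (the test name and schema here are illustrative only, not the final syntax):

```yaml
models:
  - name: orders
    tests:
      - elementary.table_anomalies:
          timestamp_column: updated_at   # overrides the global config for this test only
      - elementary.table_anomalies:
          timestamp_column: created_at   # second test on the same model, different column
```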
In this PR, I wanted to solve issue #25 by creating a CSV file to list brands and to help the brand-property matching process. This PR includes:
- Compile a list of all brand names and operator names from the name-suggestion-index as a CSV with the columns id, display_name, and wiki_data, in the tmp folder. Deadline: 05.01.2022
- Create a PySpark UDF similar to the ones in the osm
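The CSV-compilation step could be sketched as follows. The shape of the name-suggestion-index entries and the output path are assumptions for illustration; the real index layout should be checked against the repository:

```python
import csv

def write_brand_csv(nsi_entries, out_path):
    """Write one row per brand/operator entry with the columns
    id, display_name, wiki_data (entry shape is hypothetical)."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "display_name", "wiki_data"])
        for entry in nsi_entries:
            writer.writerow([
                entry["id"],
                entry["displayName"],
                entry.get("tags", {}).get("brand:wikidata", ""),
            ])
```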
The faldbt object already has access to the manifest; we need to pass it here in the exec method. We probably want to pass the dbt manifest directly rather than the fal wrapper, as we don't want the fal wrapper to be our public API.
Where it makes sense, we should check inputs for datatype, length, etc. before processing, and if necessary raise an exception via exceptions.raise_compiler_error.
See: https://github.com/calogica/dbt-expectations/blob/b69ac04cacfe1dfaf1de129778908898e666f9e3/macros/schema_tests/multi-column/expect_compound_columns_to_be_unique.sql#L16
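A minimal sketch of that validation pattern in a dbt macro, assuming a hypothetical macro and parameter name (the linked dbt-expectations macro does the same kind of check):

```sql
{% macro validate_column_list(column_names) %}
  {#- Fail fast with a clear message if the input is malformed -#}
  {%- if column_names is string -%}
    {{ exceptions.raise_compiler_error(
        "`column_names` must be a list of columns, got the string: " ~ column_names
    ) }}
  {%- endif -%}
{% endmacro %}
```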
Documentation tasks
Describe the bug
In DORA dashboards, for the top-left gauge chart Weekly Release / Deployment Frequency (Avg), we use a UI-built query to compute the average. It is incorrect because weeks with no releases/deployments are simply ignored in the final averaging, due to how the query is built. Indeed, the first aggregation, which counts per week, will have no entries for weeks with no release /
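The arithmetic of the bug can be shown with a small sketch using hypothetical deployment dates. Grouping first and then averaging only over the groups drops the empty weeks from the denominator:

```python
from collections import Counter
from datetime import date

# Hypothetical deployments spanning 4 ISO weeks; one week has no deployments.
deployments = [date(2022, 5, 2), date(2022, 5, 3),   # week 18 (x2)
               date(2022, 5, 16),                    # week 20
               date(2022, 5, 23)]                    # week 21; week 19 is empty
weeks = [d.isocalendar()[1] for d in deployments]
per_week = Counter(weeks)

# Buggy average: only weeks that appear in the grouped data (3 weeks)
naive_avg = sum(per_week.values()) / len(per_week)   # 4 / 3

# Correct average: empty weeks count toward the denominator (4 weeks)
total_weeks = max(weeks) - min(weeks) + 1
correct_avg = sum(per_week.values()) / total_weeks   # 4 / 4 = 1.0
```

In SQL terms, the fix usually means left-joining the per-week counts onto a generated calendar of all weeks in the range, so empty weeks contribute a zero instead of vanishing.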
@mhindery asks whether we could use the binary instead, as a lot of other common data engineering projects now defer to it, and the current approach forces users to install yet another package.
We will have to test it, but I think it should be fine.
Initial discussion happened here:
bitpicky/dbt-sugar#20 (comment)
Other adapters (e.g. dbt-spark) have adopted a single-source-of-truth approach to documentation, preferring to document setup and configuration information only on the docs.getdbt website, rather than duplicating it on the docs page and in the adapter repo's README.
I think we should do the same.
What type of re_data dbt macro would you like to add
What the macro should do
Return true if the string is valid JSON, i.e. it can be parsed as JSON.
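In re_data this would end up as a SQL/Jinja macro, but the intended behavior can be sketched in Python (the function name here is illustrative):

```python
import json

def is_valid_json(s):
    """Return True if the string parses as JSON, False otherwise."""
    try:
        json.loads(s)
        return True
    except (TypeError, ValueError):
        return False
```

Note that bare scalars like `"42"` or `"null"` are valid JSON under this definition; if the macro should accept only objects or arrays, that check would need to be added on top.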