deduplication
Here are 236 public repositories matching this topic...
I am syncing my local repo to a Wasabi S3 bucket daily. I capture the output of the cron job and have it sent to myself as an e-mail. The output of the sync-to command is really lengthy, and I am not actually interested in that output.
How about adding a --no-progress option to this command, so that it only outputs errors and not the whole progress?
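A flag like that could be wired up as in the minimal sketch below (all names are hypothetical, not the actual tool's code): a --no-progress switch that suppresses per-file progress lines while still reporting errors, so cron e-mails only contain what went wrong.

```python
import argparse
import sys

def sync(files, show_progress=True):
    # Hypothetical sync loop: print per-file progress only when requested,
    # but always report failures on stderr and return them to the caller.
    errors = []
    for name in files:
        if show_progress:
            print(f"uploading {name}")
        if name.startswith("bad"):
            errors.append(name)
            print(f"error: failed to upload {name}", file=sys.stderr)
    return errors

parser = argparse.ArgumentParser()
parser.add_argument("--no-progress", action="store_true",
                    help="suppress per-file progress output, report only errors")
args = parser.parse_args(["--no-progress"])

errs = sync(["a.txt", "bad.txt"], show_progress=not args.no_progress)
print(f"{len(errs)} error(s)")
```

With the flag set, the only lines emitted are the error report and the final summary, which keeps the cron mail short.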
Is your feature request related to a problem? Please describe.
Currently, MapType columns are not supported for Spark DataFrames.
Describe the solution you'd like
Add support for MapType Spark DataFrame columns.
Right now dduper has a minimal test script that checks basic functionality; see ci/gitlab/*.sh. Enhance it to add RAID tests.
The borg files cache can be rather large, because it keeps some information about all files that have been processed recently.
lz4 is a very fast compression/decompression algorithm, so we could try to use it to lower the in-memory footprint of the files cache entries.
Before implementing this, we should check how big the savings typically are, to determine whether it is worthwhile doing.
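The savings check could start with a rough experiment like the sketch below. lz4 is not in the Python standard library, so zlib at its fastest level stands in here to get a ballpark figure; a real measurement would use lz4 bindings on actual borg cache data, and the entry layout below is a made-up approximation, not borg's real format.

```python
import random
import zlib

random.seed(0)  # deterministic fake data

def fake_cache_entry():
    # Made-up files-cache entry: a path plus a list of chunk ids.
    # Paths share a common prefix (compressible); chunk ids are
    # random bytes (essentially incompressible).
    path = "/home/user/data/" + "".join(random.choices("abcdef", k=12))
    chunk_ids = b"".join(random.randbytes(32) for _ in range(4))
    return path.encode() + b"\x00" + chunk_ids

entries = b"".join(fake_cache_entry() for _ in range(1000))
compressed = zlib.compress(entries, 1)  # level 1: fastest, closest to lz4's niche
ratio = len(compressed) / len(entries)
print(f"raw={len(entries)} compressed={len(compressed)} ratio={ratio:.2f}")
```

Since most of each entry is random chunk-id data, only the path portion compresses, so the ratio stays well above zero; running the same measurement on real cache entries would show whether the in-memory savings justify the added (de)compression work.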