distributed-database
Here are 210 public repositories matching this topic...
Updated Aug 23, 2020 - Go
Index constraint generation currently supports constraints from LIKE expressions, but the opt statistics code does not. A simple improvement would be to handle the case where the expression is exactly equivalent to a range of values.
Note that such a rule would need to handle escaping correctly (which the index constraints code currently does not, see #44123).
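For the simplest case the issue describes — a pattern that is a literal prefix followed by a single trailing % — the equivalent range can be derived mechanically. Below is a minimal standalone Go sketch (illustrative only, not CockroachDB's actual opt code); it deliberately bails out on patterns that would need the escape-aware handling mentioned above:

```go
package main

import (
	"fmt"
	"strings"
)

// likeToRange converts a LIKE pattern into an equivalent half-open
// range [start, end) when the pattern is a plain prefix followed by a
// single trailing '%'. It refuses patterns containing other wildcards
// or escape characters, which need the escaping-aware handling the
// issue mentions. (A 0xff final byte is also not handled here.)
func likeToRange(pattern string) (start, end string, ok bool) {
	if !strings.HasSuffix(pattern, "%") {
		return "", "", false
	}
	prefix := strings.TrimSuffix(pattern, "%")
	if prefix == "" {
		return "", "", false // bare '%' matches everything; no useful range
	}
	if strings.ContainsAny(prefix, `%_\`) {
		return "", "", false // wildcards or escapes inside the prefix: bail out
	}
	// The end key is the prefix with its last byte incremented,
	// e.g. "abc" -> "abd", so [start, end) covers every string
	// beginning with the prefix.
	b := []byte(prefix)
	b[len(b)-1]++
	return prefix, string(b), true
}

func main() {
	start, end, ok := likeToRange("abc%")
	fmt.Println(start, end, ok) // abc abd true
}
```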
Updated Jul 21, 2020
Please finish assertProxyConfiguration for two test cases
Right now, insert and query share one resource pool (the max process count limit). When queries run at high TPS, inserts fail with "error: too many process". I think separating the resources for insert and query makes sense, so that inserts always have enough capacity; much like YARN, insert and query would use different resource quotas.
Or, the simple way: can we set a ratio between insert and query?
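One way to realize that separation is to give inserts and queries independent counting semaphores instead of a single shared process-count limit. A hypothetical Go sketch (names and the 3/7 split are illustrative, not the project's API):

```go
package main

import (
	"errors"
	"fmt"
)

// quota is a tiny counting semaphore built on a buffered channel.
type quota chan struct{}

func newQuota(n int) quota { return make(quota, n) }

// tryAcquire reports whether a slot was available, without blocking.
func (q quota) tryAcquire() bool {
	select {
	case q <- struct{}{}:
		return true
	default:
		return false
	}
}

func (q quota) release() { <-q }

var errTooManyProcess = errors.New("error: too many process")

// pools splits the total process budget between inserts and queries,
// so a query burst can no longer starve inserts of slots.
type pools struct {
	insert, query quota
}

func newPools(insertSlots, querySlots int) pools {
	return pools{insert: newQuota(insertSlots), query: newQuota(querySlots)}
}

// runInsert consumes an insert slot for the duration of f, failing
// fast when the insert quota is exhausted.
func (p pools) runInsert(f func()) error {
	if !p.insert.tryAcquire() {
		return errTooManyProcess
	}
	defer p.insert.release()
	f()
	return nil
}

func main() {
	p := newPools(3, 7) // e.g. a 30/70 split of a 10-process budget
	if err := p.runInsert(func() { fmt.Println("insert ran") }); err != nil {
		fmt.Println(err)
	}
}
```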
Updated Aug 23, 2020 - C++
The old restore supports adding a prefix and suffix to the backup data. This allows us to restore a key space (say [a, b)) to a new key space (say [awesome_a, awesome_b)).
This feature is useful in:
- Performance testing: We do not want to throw away existing loaded data in the cluster because loading a huge DB takes time;
- Cross-verification: Users may want to verify the restored data in the
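The remapping itself is simple string manipulation: because prefixing preserves lexicographic order, adding a prefix to every key moves the whole range at once. A minimal sketch, assuming keys are plain strings (the function name is illustrative, not the real restore tool's API):

```go
package main

import "fmt"

// restoreKey prepends a prefix (and appends a suffix) to a backed-up
// key. Since prefixing preserves lexicographic order, prefixing every
// key with "awesome_" maps the whole key space [a, b) into
// [awesome_a, awesome_b).
func restoreKey(key, prefix, suffix string) string {
	return prefix + key + suffix
}

func main() {
	fmt.Println(restoreKey("a/users/42", "awesome_", "")) // awesome_a/users/42
}
```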
Updated Aug 19, 2020
Hello Philip!
I think there is an issue with this part of rqlite's code (store/store.go):
func (s *Store) Database(leader bool) ([]byte, error) {
	if leader && s.raft.State() != raft.Leader {
		return nil, ErrNotLeader
	}

	// Ensure only one snapshot can take place at once, and block all queries.
	s.mu.Lock()
	defer s.mu.Unlock()

	f, err := ioutil.TempFile("", "rqlilte-snap-")
Updated Aug 14, 2020 - Rust
[DocDB] In the yb-master /tablet-servers view, summarize read-replica and primary cluster info separately
Sample screenshot from the http://yb-master:7000/tablet-servers view of a Yugabyte universe with a primary cluster + read-replica cluster:
As you can see, the Tablet-Peers by Availability Zone section simply summarizes by
Updated Jan 31, 2020 - Ruby
Currently, the Stats command in the protocol and the olric-stats tool only work on a single cluster member. So if you run the following command:

olric-stats -a=cluster-member:port

it returns statistics for only that one cluster member. This is not very useful most of the time. We need to aggregate all the statistics.
A -s/--summary flag would also be useful.
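The aggregation could be as simple as fanning out the existing per-member Stats call and summing the counters. A hedged Go sketch with a deliberately simplified stats struct (the real Olric payload has many more fields, and the field names here are assumptions):

```go
package main

import "fmt"

// memberStats is a hypothetical, simplified view of what one Olric
// member reports.
type memberStats struct {
	Member string
	Gets   uint64
	Puts   uint64
}

// aggregate merges per-member statistics into one cluster-wide
// summary — the behavior a -s/--summary flag could expose.
func aggregate(all []memberStats) memberStats {
	sum := memberStats{Member: "cluster"}
	for _, s := range all {
		sum.Gets += s.Gets
		sum.Puts += s.Puts
	}
	return sum
}

func main() {
	total := aggregate([]memberStats{
		{Member: "node1:3320", Gets: 100, Puts: 40},
		{Member: "node2:3320", Gets: 250, Puts: 60},
	})
	fmt.Println(total.Gets, total.Puts) // 350 100
}
```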
Updated Aug 21, 2020 - Go
SELECT custkey, sum(totalprice)
FROM orders
GROUP BY custkey
ORDER BY 2 DESC

EXPLAIN
Query Plan
---------------------------------------------------------------------------------------
Fragment 0 [SINGLE]
Output layout: [custkey, sum]
Output partitioning: SINGLE []
Stage Execution Strategy: UNGROUPED_EXECUTION
Updated Aug 21, 2020 - C++
Now that we support more than one data type, we should include the value's data type in cache nodes. This gives us a way of easily identifying what type of data a key points to, e.g. string, queue (and more types in the future).
Currently a cache node object consists of a Key, a Value, and a TTL. We should also add a field specifying the type of the Value.
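That repository is Java, but the shape of the change can be sketched in Go for consistency with the other examples here: an enum-like type tag added next to Key, Value, and TTL (all names are illustrative, not the project's actual fields):

```go
package main

import (
	"fmt"
	"time"
)

// ValueType tags what kind of data a key points to, so callers can
// tell a string apart from a queue (and future types) without
// inspecting the value itself.
type ValueType int

const (
	TypeString ValueType = iota
	TypeQueue
)

func (t ValueType) String() string {
	switch t {
	case TypeString:
		return "string"
	case TypeQueue:
		return "queue"
	default:
		return "unknown"
	}
}

// CacheNode extends the existing Key/Value/TTL triple with the
// proposed Type field.
type CacheNode struct {
	Key   string
	Value interface{}
	TTL   time.Duration
	Type  ValueType
}

func main() {
	n := CacheNode{Key: "jobs", Value: []string{"a", "b"}, TTL: time.Minute, Type: TypeQueue}
	fmt.Println(n.Key, n.Type) // jobs queue
}
```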
Updated Aug 22, 2020 - Java

Today, etcdctl endpoint health only performs a quorum GET. If the server is corrupted or has exceeded its quota (which puts the server into read-only mode), this command incorrectly returns healthy.
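To illustrate why a quorum read alone is insufficient, here is a self-contained Go sketch of a stricter check that also consults active alarms. This is a simplified model, not the real etcd client API: a server in read-only mode after exceeding its quota can still serve the quorum GET, so only the alarm check catches it.

```go
package main

import (
	"errors"
	"fmt"
)

// clusterView is a hypothetical abstraction over what a health probe
// can learn from an endpoint: whether a quorum read succeeds, and
// whether any alarms (e.g. NOSPACE after exceeding the quota) are
// raised.
type clusterView struct {
	quorumReadOK bool
	alarms       []string
}

// checkHealth performs the quorum-read check and additionally fails
// when any alarm is active, instead of reporting healthy for a
// read-only server.
func checkHealth(v clusterView) error {
	if !v.quorumReadOK {
		return errors.New("unhealthy: quorum read failed")
	}
	if len(v.alarms) > 0 {
		return fmt.Errorf("unhealthy: active alarms: %v", v.alarms)
	}
	return nil
}

func main() {
	// Read-only server: the quorum GET succeeds, but a NOSPACE alarm
	// is active, so the stricter check reports unhealthy.
	err := checkHealth(clusterView{quorumReadOK: true, alarms: []string{"NOSPACE"}})
	fmt.Println(err != nil) // true
}
```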