hive
Here are 800 public repositories matching this topic...
If I were to deploy Cube.js using AWS serverless architecture, is Athena required?
The docs https://cube.dev/docs/deployment#serverless do not say whether Athena is optional or required, but since the serverless.yml config contains Athena keys, I assume it is required. I'm evaluating using Postgres RDS as the sole data source for Cube.js.
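For reference, a Postgres-only setup would presumably just swap the Athena keys for Postgres connection settings. This is a hedged sketch, not taken from Cube.js's serverless template: the variable names follow Cube.js's documented environment variables, but the surrounding serverless.yml structure and the host/database values are assumptions.

```yaml
# Sketch only: Cube.js connection settings for a hypothetical Postgres-only
# serverless deployment. Env var names follow the Cube.js docs; the host,
# database name, and yml structure here are illustrative assumptions.
provider:
  environment:
    CUBEJS_DB_TYPE: postgres
    CUBEJS_DB_HOST: my-rds-instance.example.us-east-1.rds.amazonaws.com  # hypothetical RDS endpoint
    CUBEJS_DB_NAME: analytics
    CUBEJS_DB_USER: cubejs
    CUBEJS_DB_PASS: ${env:CUBEJS_DB_PASS}
```

Whether the serverless driver path actually works without any Athena keys present is exactly the question being asked here.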
Steps to Reproduce
See below.
Code sample
This is an extract from the Relationships section of the Hive docs.
void main() async {
  Hive.registerAdapter(PersonAdapter());
  var persons = await Hive.openBox<Person>('personsWithLists');
  await persons.clear();
  var mario = Person('Mario');
  var luna = Person('Luna');
  // ... (remainder of the docs example truncated in this excerpt)
}

I think the new argument in the init function in presto.py was meant to be principal_username, not principle_username.
This might need to wait for a major version change, since it's a breaking one. Alternatively, add the principal_username argument, deprecate the old one, and remove it in the next major version.
https://github.com/dropbox/PyHive/blob/bbf79418cfd2806fd9079893e70acc50a727a65d/pyhi
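The deprecate-then-remove path suggested above usually looks something like this. A minimal sketch, not PyHive's actual code: the function name and return value are placeholders, and only the two parameter names mirror the issue.

```python
# Hypothetical sketch of accepting a corrected kwarg while still honoring the
# misspelled one; not PyHive's real connect() signature.
import warnings

def connect(host, principal_username=None, principle_username=None, **kwargs):
    """Prefer 'principal_username'; warn if the deprecated spelling is used."""
    if principle_username is not None:
        warnings.warn(
            "'principle_username' is deprecated; use 'principal_username'",
            DeprecationWarning,
        )
        if principal_username is None:
            principal_username = principle_username
    # Placeholder return so the sketch is self-contained.
    return {"host": host, "principal_username": principal_username}

conn = connect("example.com", principle_username="alice")  # still works, but warns
```

This keeps existing callers working for one release cycle while steering them toward the corrected spelling.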
CHAR is mapped to Kudu string
https://github.com/prestosql/presto/blob/ec549ac8a1192b18c3667203974f453d5fe5a9fa/presto-kudu/src/main/java/io/prestosql/plugin/kudu/TypeHelper.java#L88-L89
but a char(3) value "ab " (with a trailing space, shown as ab⎵) is not persisted as such; it is stored as "ab" instead.
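To make the semantics concrete: SQL CHAR(n) values are space-padded to exactly n characters, so dropping the pad changes the stored value. A toy illustration of that padding rule, not Presto or Kudu code:

```python
# Toy illustration of SQL CHAR(n) semantics (not Presto/Kudu connector code):
# a CHAR(3) value "ab" is logically "ab " (space-padded to length 3), so
# persisting the raw string "ab" loses the trailing pad.
def char_n(value: str, n: int) -> str:
    """Pad a value with trailing spaces to length n, as CHAR(n) requires."""
    return value.ljust(n)

assert char_n("ab", 3) == "ab "   # what char(3) should persist
assert char_n("ab", 3) != "ab"    # what gets stored today, per the report
```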
- Installing linkis jobtypes
Following the official install docs for an automated installation, the last step of sh install.sh fails with: {"error":"Missing required parameter 'execid'."}. The {"status":"success"} output that the docs say is printed on success never appears, yet the installed linkis job plugin is visible under azkaban's /plugins/jobtypes directory. Investigation shows the install script's last step calls "curl http://azkaban_ip:executor_port/executor?action=reloadJobTypePlugins" to refresh the plugins. After restarting the azkaban executor, the log shows the plugin was loaded: `INFO [JobTypeManager][Azkaban] Loaded jobtype linkis
Describe the bug
If the webui is opened as different users, viewing one user's executed SQL records can show another user's SQL records instead.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
For documentation?
Spark is great at parallel processing of data already in a distributed store like HDFS, but it is not really designed for ingesting data at rest from a non-distributed store such as a local file system, though there is support for it (i.e. local mode).
The disadvantages of ingesting data at rest from a local file system:
- There's no advantage in using YARN o

We have a value_at_quantile function for QDigest types. Occasionally, it's useful to also retrieve a quantile given a value (say, for X, which percentile in the QDigest does it fall under). The signature would look like the following:

quantile_at_value(qdigest(T), DOUBLE) -> T

where T is one of DOUBLE, REAL or BIGINT.
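The proposed semantics can be sketched on an exact sorted sample instead of a qdigest. This is an illustrative toy, not Presto's qdigest implementation: the function name mirrors the proposal, but the exact-sample approach is an assumption for demonstration.

```python
# Toy sketch of quantile_at_value semantics over an exact sorted sample
# (a real qdigest answers this approximately over a compressed summary).
from bisect import bisect_right

def quantile_at_value(sorted_sample, value):
    """Return the fraction of sample values <= value (empirical quantile)."""
    if not sorted_sample:
        return None
    return bisect_right(sorted_sample, value) / len(sorted_sample)

sample = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(quantile_at_value(sample, 5))  # 0.5 -- the value 5 sits at the median
```

This is the inverse lookup of value_at_quantile, which maps a quantile back to a value.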