Branch: master
-
[SPARK-27029][BUILD] Update Thrift to 0.12.0
## What changes were proposed in this pull request? Update Thrift to 0.12.0 to pick up bug and security fixes. Changes: https://github.com/apache/thrift/blob/master/CHANGES.md The important one is for https://issues.apache.org/jira/browse/THRIFT-4506 ## How was this patch tested? Existing tests. A quick local test suggests this works. Closes #23935 from srowen/SPARK-27029. Authored-by: Sean Owen <sean.owen@databricks.com> Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
-
[SPARK-26048][BUILD] Enable flume profile when creating 2.x releases.
Closes #23931 from vanzin/SPARK-26048. Authored-by: Marcelo Vanzin <vanzin@cloudera.com> Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
-
[SPARK-27007][PYTHON] add rawPrediction to OneVsRest in PySpark
## What changes were proposed in this pull request? Add rawPrediction to OneVsRest in PySpark to make it consistent with the Scala implementation ## How was this patch tested? Added a doctest. Closes #23910 from huaxingao/spark-27007. Authored-by: Huaxin Gao <huaxing@us.ibm.com> Signed-off-by: Sean Owen <sean.owen@databricks.com>
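As a rough illustration of the idea (not PySpark's internals), the combined raw prediction can be viewed as the vector of per-class "class k vs rest" margins, with the predicted label being its argmax; the helper name below is hypothetical:

```python
# Hypothetical sketch: each binary classifier k produces a raw margin for
# "class k vs rest"; the combined rawPrediction is the vector of those
# margins, and the predicted label is the index of the largest margin.

def one_vs_rest_raw_prediction(margins):
    """margins: list of raw scores, one per binary 'class k vs rest' model."""
    prediction = max(range(len(margins)), key=lambda k: margins[k])
    return margins, prediction

raw, label = one_vs_rest_raw_prediction([-1.2, 0.3, 2.5])
```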
-
[SPARK-26807][DOCS] Clarify that Pyspark is on PyPi now
## What changes were proposed in this pull request? Docs still say that Spark will be available on PyPi "in the future"; just needs to be updated. ## How was this patch tested? Doc build Closes #23933 from srowen/SPARK-26807. Authored-by: Sean Owen <sean.owen@databricks.com> Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
-
[SPARK-26982][SQL] Enhance describe framework to describe the output of a query
## What changes were proposed in this pull request? Currently we can use `df.printSchema` to discover the schema information for a query. We should have a way to describe the output schema of a query using the SQL interface. Example: `DESCRIBE SELECT * FROM desc_table` or `DESCRIBE QUERY SELECT * FROM desc_table`. ```SQL spark-sql> create table desc_table (c1 int comment 'c1-comment', c2 decimal comment 'c2-comment', c3 string); spark-sql> desc select * from desc_table; c1 int c1-comment c2 decimal(10,0) c2-comment c3 string NULL ``` ## How was this patch tested? Added new tests under SQLQueryTestSuite and SparkSqlParserSuite Closes #23883 from dilipbiswal/dkb_describe_query. Authored-by: Dilip Biswal <dbiswal@us.ibm.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com>
-
[SPARK-26977][CORE] Fix warning against subclassing scala.App
## What changes were proposed in this pull request? Fix the warning against subclassing scala.App ## How was this patch tested? Manual test Closes #23903 from manuzhang/fix_submit_warning. Authored-by: manuzhang <owenzhang1990@gmail.com> Signed-off-by: Sean Owen <sean.owen@databricks.com>
-
[SPARK-26215][SQL][FOLLOW-UP][MINOR] Fix the warning from ANTLR4
## What changes were proposed in this pull request? I see the following new warning from ANTLR4 after SPARK-26215 added the `SCHEMA` keyword to the reserved/unreserved list. This is a minor PR to clean up the warning. ``` [WARNING] warning(125): org/apache/spark/sql/catalyst/parser/SqlBase.g4:784:90: implicit definition of token SCHEMA in parser [WARNING] .../apache/spark/org/apache/spark/sql/catalyst/parser/SqlBase.g4 [784:90]: implicit definition of token SCHEMA in parser ``` ## How was this patch tested? Manually built catalyst after the fix to verify. Closes #23897 from dilipbiswal/minor_parser_token. Authored-by: Dilip Biswal <dbiswal@us.ibm.com> Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
-
[K8S][MINOR] Log minikube version when running integration tests.
vanzin committed Mar 1, 2019 Closes #23893 from vanzin/minikube-version. Authored-by: Marcelo Vanzin <vanzin@cloudera.com> Signed-off-by: Marcelo Vanzin <vanzin@cloudera.com>
-
[SPARK-26967][CORE] Put MetricsSystem instance names together for clearer management
## What changes were proposed in this pull request? `MetricsSystem` instance creations are scattered across the project code, and so are their names. This can make browsing and management inconvenient. This PR tries to put them together. This way, we have a uniform location for adding or removing them, and an overall view of `MetricsSystem` instances in the current project. It is also helpful for maintaining user documents by avoiding missing something. ## How was this patch tested? Existing unit tests. Closes #23869 from SongYadong/metrics_system_inst_manage. Lead-authored-by: SongYadong <song.yadong1@zte.com.cn> Co-authored-by: walter2001 <ydsong2007@163.com> Signed-off-by: Sean Owen <sean.owen@databricks.com>
-
[MINOR] Remove unnecessary gets when getting a value from map.
## What changes were proposed in this pull request? Remove a redundant `get` when getting a value from a `Map` given a key. ## How was this patch tested? N/A Closes #23901 from 10110346/removegetfrommap. Authored-by: liuxian <liu.xian3@zte.com.cn> Signed-off-by: Sean Owen <sean.owen@databricks.com>
-
[SPARK-26986][ML][FOLLOWUP] Add JAXB reference impl to build for Java 9+
srowen committed Mar 1, 2019 ## What changes were proposed in this pull request? Remove a few new JAXB dependencies that shouldn't be necessary now. See #23890 (comment) ## How was this patch tested? Existing tests Closes #23923 from srowen/SPARK-26986.2. Authored-by: Sean Owen <sean.owen@databricks.com> Signed-off-by: Sean Owen <sean.owen@databricks.com>
-
[SPARK-27009][TEST] Add Standard Deviation to benchmark results
## What changes were proposed in this pull request? Add standard deviation to the stats taken during benchmark testing. ## How was this patch tested? Manually ran a few benchmark tests locally and visually inspected the output Closes #23914 from yifeih/spark-27009-stdev. Authored-by: Yifei Huang <yifeih@palantir.com> Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
-
[SPARK-26420][K8S] Generate more unique IDs when creating k8s resource names
Using the current time as an ID is more prone to clashes than people generally realize, so try to make things a bit more unique without necessarily using a UUID, which would eat too much space in the names otherwise. The implemented approach uses some bits from the current time, plus some random bits, which should be more resistant to clashes. Closes #23805 from vanzin/SPARK-26420. Authored-by: Marcelo Vanzin <vanzin@cloudera.com> Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
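A minimal sketch of the described approach (time bits plus random bits); the function name and the bit widths are illustrative assumptions, not Spark's actual code:

```python
import secrets
import string
import time

ALPHANUM = string.ascii_lowercase + string.digits  # k8s names must be lowercase

def unique_resource_id(time_bits=40, random_bits=32):
    """Illustrative sketch: combine some bits of the current time with random
    bits, then base-36 encode, so IDs stay short but are far less
    collision-prone than a bare timestamp."""
    now = int(time.time() * 1000) & ((1 << time_bits) - 1)
    value = (now << random_bits) | secrets.randbits(random_bits)
    out = []
    while value > 0:
        value, digit = divmod(value, 36)
        out.append(ALPHANUM[digit])
    return "".join(reversed(out)) or "0"
```

With 40 time bits and 32 random bits the result stays within 14 base-36 characters, far shorter than a 32-character UUID.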
-
[SPARK-27008][SQL] Support java.time.LocalDate as an external type of DateType
## What changes were proposed in this pull request? In the PR, I propose to add a new Catalyst type converter for `DateType`. It should be able to convert `java.time.LocalDate` to/from `DateType`. Main motivations for the changes: - Smoothly support the Java 8 time API - Avoid inconsistency between the calendars used inside Spark 3.0 (Proleptic Gregorian calendar) and `java.sql.Date` (hybrid calendar - Julian + Gregorian). - Make conversion independent of the current system timezone. By default, Spark converts values of `DateType` to `java.sql.Date` instances, but the SQL config `spark.sql.datetime.java8API.enabled` can change the behavior. If it is set to `true`, Spark uses `java.time.LocalDate` as the external type for `DateType`. ## How was this patch tested? Added new tests to `CatalystTypeConvertersSuite` to check conversion of `DateType` to/from `java.time.LocalDate`, and to `JavaUDFSuite`/`UDFSuite` to test usage of the `LocalDate` type in Java/Scala UDFs. Closes #23913 from MaxGekk/date-localdate. Authored-by: Maxim Gekk <maxim.gekk@databricks.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com>
-
[SPARK-26774][CORE] Update some docs on TaskSchedulerImpl.
A couple of places in TaskSchedulerImpl could use a minor doc update on threading concerns. There is one bug fix here, but only in sc.killTaskAttempt() which is probably not used much. Closes #23874 from squito/SPARK-26774. Authored-by: Imran Rashid <irashid@cloudera.com> Signed-off-by: Marcelo Vanzin <vanzin@cloudera.com>
-
[SPARK-19591][ML][PYSPARK][FOLLOWUP] Add sample weights to decision trees
## What changes were proposed in this pull request? Add sample weights to decision trees ## How was this patch tested? Updated test suites. Closes #23818 from zhengruifeng/py_tree_support_sample_weight. Authored-by: zhengruifeng <ruifengz@foxmail.com> Signed-off-by: Sean Owen <sean.owen@databricks.com>
-
[SPARK-26895][CORE][FOLLOW-UP] Uninitialize log after `prepareSubmitEnvironment` in SparkSubmit
## What changes were proposed in this pull request? Currently, if I run `spark-shell` locally, it starts to show logs as below: ``` $ ./bin/spark-shell ... 19/02/28 04:42:43 INFO SecurityManager: Changing view acls to: hkwon 19/02/28 04:42:43 INFO SecurityManager: Changing modify acls to: hkwon 19/02/28 04:42:43 INFO SecurityManager: Changing view acls groups to: 19/02/28 04:42:43 INFO SecurityManager: Changing modify acls groups to: 19/02/28 04:42:43 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hkwon); groups with view permissions: Set(); users with modify permissions: Set(hkwon); groups with modify permissions: Set() 19/02/28 04:42:43 INFO SignalUtils: Registered signal handler for INT 19/02/28 04:42:48 INFO SparkContext: Running Spark version 3.0.0-SNAPSHOT 19/02/28 04:42:48 INFO SparkContext: Submitted application: Spark shell 19/02/28 04:42:48 INFO SecurityManager: Changing view acls to: hkwon ``` The cause seems to be #23806; `prepareSubmitEnvironment` actually reinitializes the logging. This PR proposes to uninitialize the log after `prepareSubmitEnvironment`. ## How was this patch tested? Manually tested. Closes #23911 from HyukjinKwon/SPARK-26895. Authored-by: Hyukjin Kwon <gurwls223@apache.org> Signed-off-by: Marcelo Vanzin <vanzin@cloudera.com>
-
[SPARK-27002][SS] Get kafka delegation tokens right before consumer/producer creation
## What changes were proposed in this pull request? Spark does not always pick up the latest Kafka delegation tokens even if a new one is properly obtained. In this PR I'm setting delegation tokens right before `KafkaConsumer` and `KafkaProducer` creation to be on the safe side. ## How was this patch tested? Long-running Kafka-to-Kafka tests on a 4 node cluster with randomly thrown artificial exceptions. Test scenario: * 4 node cluster * Yarn * Kafka broker version 2.1.0 * security.protocol = SASL_SSL * sasl.mechanism = SCRAM-SHA-512 Kafka broker settings: * delegation.token.expiry.time.ms=600000 (10 min) * delegation.token.max.lifetime.ms=1200000 (20 min) * delegation.token.expiry.check.interval.ms=300000 (5 min) Every 7.5 minutes a new delegation token is obtained from the Kafka broker (10 min * 0.75). But when a token expired after 10 minutes (Spark obtains a new one and doesn't renew the old one), the broker's expiry thread ran every 5 minutes (invalidating expired tokens) and an artificial exception was thrown inside the Spark application (in which case Spark closes the connection); then the latest delegation token was not always picked up. Closes #23906 from gaborgsomogyi/SPARK-27002. Authored-by: Gabor Somogyi <gabor.g.somogyi@gmail.com> Signed-off-by: Marcelo Vanzin <vanzin@cloudera.com>
-
[SPARK-24063][SS] Add maximum epoch queue threshold for ContinuousExecution
## What changes were proposed in this pull request? Continuous processing waits on epochs which are not yet complete (for example, one partition is not making progress) and stores pending items in queues. These queues are unbounded and can easily consume all the memory. In this PR I've added the `spark.sql.streaming.continuous.epochBacklogQueueSize` configuration to make them bounded. If the threshold is reached, the query stops with an `IllegalStateException`. ## How was this patch tested? Existing + additional unit tests. Closes #23156 from gaborgsomogyi/SPARK-24063. Authored-by: Gabor Somogyi <gabor.g.somogyi@gmail.com> Signed-off-by: Marcelo Vanzin <vanzin@cloudera.com>
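The backlog-bound idea can be sketched as follows (a toy model, not Spark's implementation; `RuntimeError` stands in for the `IllegalStateException` the query would raise):

```python
from collections import deque

class BoundedEpochQueue:
    """Toy sketch of spark.sql.streaming.continuous.epochBacklogQueueSize:
    pending epochs are buffered, and once the backlog exceeds the configured
    threshold the query fails fast instead of consuming unbounded memory."""

    def __init__(self, max_backlog):
        self.max_backlog = max_backlog
        self._queue = deque()

    def enqueue(self, epoch):
        if len(self._queue) >= self.max_backlog:
            raise RuntimeError(
                "epoch backlog exceeded %d; stopping query" % self.max_backlog)
        self._queue.append(epoch)
```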
-
[SPARK-24736][K8S] Let spark-submit handle dependency resolution.
vanzin committed Feb 27, 2019 Before this change, there was some code in the k8s backend to deal with how to resolve dependencies and make them available to the Spark application. It turns out that none of that code is necessary, since spark-submit already handles all that for applications started in client mode - like the k8s driver that is run inside a Spark-created pod. For that reason, specifically for pyspark, there's no need for the k8s backend to deal with PYTHONPATH; or, in general, to change the URIs provided by the user at all. spark-submit takes care of that. For testing, I created a pyspark script that depends on another module that is shipped with --py-files. Then I used: - --py-files http://.../dep.py http://.../test.py - --py-files http://.../dep.zip http://.../test.py - --py-files local:/.../dep.py local:/.../test.py - --py-files local:/.../dep.zip local:/.../test.py Without this change, all of the above commands fail. With the change, the driver is able to see the dependencies in all the above cases; but executors don't see the dependencies in the last two. That's a bug in shared Spark code that deals with local: dependencies in pyspark (SPARK-26934). I also tested a Scala app using the main jar from an http server. Closes #23793 from vanzin/SPARK-24736. Authored-by: Marcelo Vanzin <vanzin@cloudera.com> Signed-off-by: Marcelo Vanzin <vanzin@cloudera.com>
-
[SPARK-27000][PYTHON] Upgrades cloudpickle to v0.8.0
HyukjinKwon committed Feb 27, 2019 ## What changes were proposed in this pull request? After upgrading cloudpickle to 0.6.1 at #20691, one regression was found. Cloudpickle had a critical fix for it, cloudpipe/cloudpickle#240. Basically, existing globals would currently override the globals shipped with a function, meaning: **Before:** ```python >>> def hey(): ... return "Hi" ... >>> spark.range(1).rdd.map(lambda _: hey()).collect() ['Hi'] >>> def hey(): ... return "Yeah" ... >>> spark.range(1).rdd.map(lambda _: hey()).collect() ['Hi'] ``` **After:** ```python >>> def hey(): ... return "Hi" ... >>> spark.range(1).rdd.map(lambda _: hey()).collect() ['Hi'] >>> >>> def hey(): ... return "Yeah" ... >>> spark.range(1).rdd.map(lambda _: hey()).collect() ['Yeah'] ``` Therefore, this PR upgrades cloudpickle to 0.8.0. Note that cloudpickle's release cycle is quite short. Between 0.6.1 and 0.7.0, it contains minor bug fixes. I don't see notable changes to double check and/or avoid. There is virtually only this fix between 0.7.0 and 0.8.1 - other fixes are about testing. ## How was this patch tested? Manually tested, tests were added. Verified unit tests were added in cloudpickle. Closes #23904 from HyukjinKwon/SPARK-27000. Authored-by: Hyukjin Kwon <gurwls223@apache.org> Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
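The fixed ("After") behavior matches plain Python late-binding semantics, which this self-contained sketch demonstrates without Spark or cloudpickle — a lambda looks up its free names in the current globals at call time, so redefining `hey()` changes what the lambda returns:

```python
# Demonstrates the late-binding behavior the cloudpickle fix restores for
# shipped UDFs: the lambda resolves hey() at call time, not at definition
# time, so a redefinition is picked up.

def hey():
    return "Hi"

f = lambda _: hey()
first = f(0)

def hey():
    return "Yeah"

second = f(0)
```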
-
[SPARK-26890][DOC] Add list of available Dropwizard metrics in Spark and add additional configuration details to the monitoring documentation
## What changes were proposed in this pull request? This PR proposes to extend the documentation of the Spark metrics system in the monitoring guide. In particular by: - adding a list of the available metrics grouped per component instance - adding information on configuration parameters that can be used to configure the metrics system as an alternative to the metrics.properties file - adding information on the configuration parameters needed to enable certain metrics - it also proposes to add an example of Graphite sink configuration in metrics.properties.template Closes #23798 from LucaCanali/metricsDocUpdate. Authored-by: Luca Canali <luca.canali@cern.ch> Signed-off-by: Sean Owen <sean.owen@databricks.com>
-
[SPARK-26803][PYTHON] Add sbin subdirectory to pyspark
## What changes were proposed in this pull request? Modifies `setup.py` so that `sbin` subdirectory is included in pyspark ## How was this patch tested? Manually tested with python 2.7 and python 3.7 ```sh $ ./build/mvn -D skipTests -P hive -P hive-thriftserver -P yarn -P mesos clean package $ cd python $ python setup.py sdist $ pip install dist/pyspark-2.1.0.dev0.tar.gz ``` Checked manually that `sbin` is now present in install directory. srowen holdenk Closes #23715 from oulenz/pyspark_sbin. Authored-by: Oliver Urs Lenz <oliver.urs.lenz@gmail.com> Signed-off-by: Sean Owen <sean.owen@databricks.com>
-
[MINOR] Simplify boolean expression
## What changes were proposed in this pull request? Comparing whether a Boolean expression is equal to `true` is redundant. For example, if the datatype of `a` is boolean: before, `if (a == true)`; after, `if (a)`. ## How was this patch tested? N/A Closes #23884 from 10110346/simplifyboolean. Authored-by: liuxian <liu.xian3@zte.com.cn> Signed-off-by: Sean Owen <sean.owen@databricks.com>
-
[SPARK-26902][SQL] Support java.time.Instant as an external type of TimestampType
## What changes were proposed in this pull request? In the PR, I propose to add a new Catalyst type converter for `TimestampType`. It should be able to convert `java.time.Instant` to/from `TimestampType`. Main motivations for the changes: - Smoothly support the Java 8 time API - Avoid inconsistency between the calendars used inside Spark 3.0 (Proleptic Gregorian calendar) and `java.sql.Timestamp` (hybrid calendar - Julian + Gregorian). - Make conversion independent of the current system timezone. By default, Spark converts values of `TimestampType` to `java.sql.Timestamp` instances, but the SQL config `spark.sql.catalyst.timestampType` can change the behavior. It accepts two values, `Timestamp` (default) and `Instant`. If the latter is set, Spark returns `java.time.Instant` instances for timestamp values. ## How was this patch tested? Added new tests to `CatalystTypeConvertersSuite` to check conversion of `TimestampType` to/from `java.time.Instant`. Closes #23811 from MaxGekk/timestamp-instant. Lead-authored-by: Maxim Gekk <maxim.gekk@databricks.com> Co-authored-by: Maxim Gekk <max.gekk@gmail.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com>
-
[SPARK-26990][SQL] FileIndex: use user specified field names if possible
## What changes were proposed in this pull request? With the following file structure: ``` /tmp/data └── a=5 ``` In the previous release: ``` scala> spark.read.schema("A int, ID long").parquet("/tmp/data/").printSchema root |-- ID: long (nullable = true) |-- A: integer (nullable = true) ``` While in the current code: ``` scala> spark.read.schema("A int, ID long").parquet("/tmp/data/").printSchema root |-- ID: long (nullable = true) |-- a: integer (nullable = true) ``` We can see that the partition column name `a` is different from `A` as the user specified. This PR fixes this case and makes it more user-friendly. ## How was this patch tested? Unit test Closes #23894 from gengliangwang/fileIndexSchema. Authored-by: Gengliang Wang <gengliang.wang@databricks.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com>
-
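The intent of the fix can be sketched in a few lines (an illustrative helper, not Spark's `FileIndex` code): match the partition column names inferred from the directory layout against the user schema case-insensitively, and prefer the user's spelling.

```python
def resolve_partition_columns(inferred, user_schema):
    """Sketch: when a partition column inferred from the path (e.g. 'a' from
    /tmp/data/a=5) matches a user-specified field case-insensitively,
    return the user's spelling ('A') instead of the inferred one."""
    by_lower = {name.lower(): name for name in user_schema}
    return [by_lower.get(col.lower(), col) for col in inferred]
```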
[SPARK-22000][SQL] Address missing Upcast in JavaTypeInference.deserializerFor
## What changes were proposed in this pull request? Spark expects the type of a column and the type of the matching field to be the same when deserializing to an Object, but Spark hasn't actually restricted it (at least for the Java bean encoder), and some users just do it and experience undefined behavior (in SPARK-22000, Spark throws a compilation failure on generated code because it calls `.toString()` against a primitive type). It doesn't produce an error on the Scala side because `ScalaReflection.deserializerFor` properly injects an Upcast if necessary. This patch proposes applying the same thing to `JavaTypeInference.deserializerFor` as well. Credit to srowen, maropu, and cloud-fan since they provided various approaches to solve this. ## How was this patch tested? Added a UT whose query is slightly modified based on the sample code in the attachment on the JIRA issue. Closes #23854 from HeartSaVioR/SPARK-22000. Authored-by: Jungtaek Lim (HeartSaVioR) <kabhwan@gmail.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com>
-
[SPARK-26830][SQL][R] Vectorized R dapply() implementation
## What changes were proposed in this pull request? This PR targets to add vectorized `dapply()` in R, as an Arrow optimization. This can be tested as below: ```bash $ ./bin/sparkR --conf spark.sql.execution.arrow.enabled=true ``` ```r df <- createDataFrame(mtcars) collect(dapply(df, function(rdf) { data.frame(rdf$gear + 1) }, structType("gear double"))) ``` ### Requirements - R 3.5.x - Arrow package 0.12+ ```bash Rscript -e 'remotes::install_github("apache/arrow@apache-arrow-0.12.0", subdir = "r")' ``` **Note:** currently, the Arrow R package is not in CRAN. Please take a look at ARROW-3204. **Note:** currently, the Arrow R package does not seem to support Windows. Please take a look at ARROW-3204. ### Benchmarks **Shell** ```bash sync && sudo purge ./bin/sparkR --conf spark.sql.execution.arrow.enabled=false --driver-memory 4g ``` ```bash sync && sudo purge ./bin/sparkR --conf spark.sql.execution.arrow.enabled=true --driver-memory 4g ``` **R code** ```r rdf <- read.csv("500000.csv") df <- cache(createDataFrame(rdf)) count(df) test <- function() { options(digits.secs = 6) # milliseconds start.time <- Sys.time() count(cache(dapply(df, function(rdf) { rdf }, schema(df)))) end.time <- Sys.time() time.taken <- end.time - start.time print(time.taken) } test() ``` **Data (350 MB):** ```r object.size(read.csv("500000.csv")) 350379504 bytes ``` "500000 Records" http://eforexcel.com/wp/downloads-16-sample-csv-files-data-sets-for-testing/ **Results** ``` Time difference of 13.42037 mins ``` ``` Time difference of 30.64156 secs ``` The performance improvement was around **2627%**. ### Limitations - For now, Arrow optimization with R does not support when the data is `raw`, and when the user explicitly gives a float type in the schema. They produce corrupt values. - Due to ARROW-4512, it cannot send and receive batch by batch. It has to send all batches in Arrow stream format at once. It needs improvement later. ## How was this patch tested? Unit tests were added, and manually tested.
Closes #23787 from HyukjinKwon/SPARK-26830-1. Authored-by: Hyukjin Kwon <gurwls223@apache.org> Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
-
[SPARK-26837][SQL] Pruning nested fields from object serializers
## What changes were proposed in this pull request? In SPARK-26619, we made a change to prune unnecessary individual serializers when serializing objects. This is an extension to SPARK-26619. We can further prune nested fields from object serializers if they are not used. For example, in the following query, we only use one field in a struct column: ```scala val data = Seq((("a", 1), 1), (("b", 2), 2), (("c", 3), 3)) val df = data.toDS().map(t => (t._1, t._2 + 1)).select("_1._1") ``` So, instead of having a serializer create a two-field struct, we can prune the unnecessary field from it. This is what this PR proposes to do. In order to make this change conservative and safer, a SQL config is added to control it. It is disabled by default. TODO: Support pruning nested fields inside MapType's key and value. ## How was this patch tested? Added tests. Closes #23740 from viirya/nested-pruning-serializer-2. Authored-by: Liang-Chi Hsieh <viirya@gmail.com> Signed-off-by: Wenchen Fan <wenchen@databricks.com>
-
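A toy sketch of nested-field pruning on dict-shaped rows (purely illustrative; Spark performs this on serializer expressions, not on dicts):

```python
def prune_struct(value, used_paths, prefix=""):
    """Keep only the fields whose dotted path appears in used_paths,
    recursing into nested structs and dropping structs that end up empty."""
    pruned = {}
    for key, child in value.items():
        path = prefix + key
        if isinstance(child, dict):
            sub = prune_struct(child, used_paths, path + ".")
            if sub:
                pruned[key] = sub
        elif path in used_paths:
            pruned[key] = child
    return pruned

# Mirrors the query above: only _1._1 of the struct column is used.
row = {"_1": {"_1": "a", "_2": 1}, "_2": 2}
kept = prune_struct(row, {"_1._1"})
```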
[SPARK-26986][ML] Add JAXB reference impl to build for Java 9+
srowen committed Feb 27, 2019 ## What changes were proposed in this pull request? Add reference JAXB impl for Java 9+ from Glassfish. Right now it's only apparently necessary in MLlib but can be expanded later. ## How was this patch tested? Existing tests, particularly PMML-related ones, which use JAXB. This works on Java 11. Closes #23890 from srowen/SPARK-26986. Authored-by: Sean Owen <sean.owen@databricks.com> Signed-off-by: Sean Owen <sean.owen@databricks.com>
-
[SPARK-26449][PYTHON] Add transform method to DataFrame API
## What changes were proposed in this pull request? Added .transform() method to Python DataFrame API to be in sync with Scala API. ## How was this patch tested? Addition has been tested manually. Closes #23877 from Hellsen83/pyspark-dataframe-transform. Authored-by: Hellsen83 <erik.christiansen83@gmail.com> Signed-off-by: Sean Owen <sean.owen@databricks.com>
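The chaining pattern the new method enables can be shown with a tiny stand-in class (not PySpark; all names here are illustrative):

```python
class MiniFrame:
    """Toy stand-in showing the .transform() pattern: func takes a frame and
    returns a frame, so custom transformations compose left-to-right."""

    def __init__(self, rows):
        self.rows = rows

    def transform(self, func):
        return func(self)

def double(df):
    return MiniFrame([r * 2 for r in df.rows])

def add_one(df):
    return MiniFrame([r + 1 for r in df.rows])

result = MiniFrame([1, 2, 3]).transform(double).transform(add_one)
```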
-
[SPARK-22860][CORE][YARN] Redact command line arguments for running Driver and Executor before logging (standalone and YARN)
## What changes were proposed in this pull request? This patch applies redaction to command line arguments before logging them. This applies to two resource managers: standalone cluster and YARN. This patch only concerns arguments starting with `-D`, since Spark is likely passing the Spark configuration to command line arguments as `-Dspark.blabla=blabla`. More change is necessary if we also want to handle the case of `--conf spark.blabla=blabla`. ## How was this patch tested? Added a UT for the redact logic. This patch only changes how arguments are logged, so it is not easy to add a UT for that part. Closes #23820 from HeartSaVioR/MINOR-redact-command-line-args-for-running-driver-executor. Authored-by: Jungtaek Lim (HeartSaVioR) <kabhwan@gmail.com> Signed-off-by: Marcelo Vanzin <vanzin@cloudera.com>
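The redaction idea can be sketched as below (illustrative only; the sensitive-key pattern here is an assumption, since Spark's actual pattern is configurable via `spark.redaction.regex`):

```python
import re

# Hedged sketch: mask the value of any -Dkey=value command-line argument
# whose key matches a sensitive pattern before the command is logged.
SENSITIVE = re.compile(r"(password|secret|token)", re.IGNORECASE)

def redact_command(args):
    out = []
    for arg in args:
        m = re.match(r"(-D[^=]+)=(.*)", arg)
        if m and SENSITIVE.search(m.group(1)):
            out.append(m.group(1) + "=*********(redacted)")
        else:
            out.append(arg)
    return out
```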
-
Revert "[SPARK-26742][K8S] Update Kubernetes-Client version to 4.1.2"
vanzin committed Feb 26, 2019 This reverts commit a3192d9.
-
[SPARK-26978][CORE][SQL] Avoid magic time constants
## What changes were proposed in this pull request? In the PR, I propose to refactor existing code related to date/time conversions, and replace constants like `1000` and `1000000` by `DateTimeUtils` constants and transformation functions from `java.util.concurrent.TimeUnit._`. ## How was this patch tested? The changes are tested by existing test suites. Closes #23878 from MaxGekk/magic-time-constants. Lead-authored-by: Maxim Gekk <max.gekk@gmail.com> Co-authored-by: Maxim Gekk <maxim.gekk@databricks.com> Signed-off-by: Sean Owen <sean.owen@databricks.com>
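The same refactoring in miniature (Python stand-ins for the `DateTimeUtils`-style constants; names are illustrative): replacing magic literals like `1000` and `1000000` with named constants makes each unit conversion self-evident.

```python
# Named constants instead of magic numbers for time-unit conversions.
MICROS_PER_MILLIS = 1000
MILLIS_PER_SECOND = 1000
MICROS_PER_SECOND = MICROS_PER_MILLIS * MILLIS_PER_SECOND

def millis_to_micros(ms):
    return ms * MICROS_PER_MILLIS  # clearer than ms * 1000

def seconds_to_micros(s):
    return s * MICROS_PER_SECOND   # clearer than s * 1000000
```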
-
[MINOR][DOCS] Remove Akka leftover
## What changes were proposed in this pull request? Since Spark 2.0, Akka is not used anymore and Akka-related stuff was removed. However, some leftovers remain. This PR aims to remove them. * `/pom.xml` has a comment about Akka, which is not needed anymore. ## How was this patch tested? Existing tests. Closes #23885 from seancxmao/remove-akka-leftover. Authored-by: seancxmao <seancxmao@gmail.com> Signed-off-by: Sean Owen <sean.owen@databricks.com>