fix: Remove userClassPathFirst properties #355
Conversation
This fails with:

```
org/apache/spark/sql/delta/stats/StatisticsCollection$SqlParser$$anon$1.visitMultipartIdentifierList(Lorg/apache/spark/sql/catalyst/parser/SqlBaseParser$MultipartIdentifierListContext;)Lscala/collection/Seq; @17: invokevirtual
Reason:
    Type 'org/apache/spark/sql/catalyst/parser/SqlBaseParser$MultipartIdentifierListContext' (current frame, stack[1]) is not assignable to 'org/antlr/v4/runtime/ParserRuleContext'
Current Frame:
    bci: @17
    flags: { }
    locals: { 'org/apache/spark/sql/delta/stats/StatisticsCollection$SqlParser$$anon$1', 'org/apache/spark/sql/catalyst/parser/SqlBaseParser$MultipartIdentifierListContext' }
    stack: { 'org/apache/spark/sql/catalyst/parser/ParserUtils$', 'org/apache/spark/sql/catalyst/parser/SqlBaseParser$MultipartIdentifierListContext', 'scala/Option', 'scala/Function0' }
Bytecode:
    0000000: b200 232b b200 23b6 0027 2a2b ba00 3f00
    0000010: 00b6 0043 c000 45b0

    at org.apache.spark.sql.delta.stats.StatisticsCollection$SqlParser.<init>(StatisticsCollection.scala:409)
    at org.apache.spark.sql.delta.stats.StatisticsCollection$.<init>(StatisticsCollection.scala:422)
    at org.apache.spark.sql.delta.stats.StatisticsCollection$.<clinit>(StatisticsCollection.scala)
    at org.apache.spark.sql.delta.OptimisticTransactionImpl.updateMetadataInternal(OptimisticTransaction.scala:429)
    at org.apache.spark.sql.delta.OptimisticTransactionImpl.updateMetadataInternal$(OptimisticTransaction.scala:424)
    at org.apache.spark.sql.delta.OptimisticTransaction.updateMetadataInternal(OptimisticTransaction.scala:142)
    at org.apache.spark.sql.delta.OptimisticTransactionImpl.updateMetadata(OptimisticTransaction.scala:400)
    at org.apache.spark.sql.delta.OptimisticTransactionImpl.updateMetadata$(OptimisticTransaction.scala:393)
    at org.apache.spark.sql.delta.OptimisticTransaction.updateMetadata(OptimisticTransaction.scala:142)
    at org.apache.spark.sql.delta.schema.ImplicitMetadataOperation.updateMetadata(ImplicitMetadataOperation.scala:97)
    at org.apache.spark.sql.delta.schema.ImplicitMetadataOperation.updateMetadata$(ImplicitMetadataOperation.scala:56)
    at org.apache.spark.sql.delta.commands.WriteIntoDelta.updateMetadata(WriteIntoDelta.scala:76)
    at org.apache.spark.sql.delta.commands.WriteIntoDelta.write(WriteIntoDelta.scala:162)
    at org.apache.spark.sql.delta.commands.WriteIntoDelta.$anonfun$run$1(WriteIntoDelta.scala:105)
```
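The verifier error suggests the classic `userClassPathFirst` failure mode: with child-first class loading, two copies of a library (here, presumably the ANTLR runtime) are visible, so a class loaded by the user class loader is not assignable to its "same" parent type loaded by the framework class loader. The following is a small Python analogy, not the actual Spark mechanism: it loads one source file twice as two separate modules, standing in for the two class loaders, and shows that the resulting classes are not interchangeable.

```python
import importlib.util
import os
import tempfile

# A stand-in for an ANTLR class that exists both in Spark's jars and on
# the user classpath when userClassPathFirst is enabled.
src = "class ParserRuleContext:\n    pass\n"

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "antlr_stub.py")
    with open(path, "w") as f:
        f.write(src)

    def load_copy(name):
        # Each call builds an independent module object from the same file,
        # analogous to two class loaders each defining the class.
        spec = importlib.util.spec_from_file_location(name, path)
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)
        return mod

    framework = load_copy("antlr_framework")  # like Spark's built-in copy
    user = load_copy("antlr_user")            # like the user-classpath copy

    ctx = user.ParserRuleContext()
    # Identical source, but two distinct class objects -> an instance of one
    # is not an instance of the other, analogous to the JVM's
    # "is not assignable to" verifier error above.
    print(isinstance(ctx, framework.ParserRuleContext))  # False
```

Removing the `userClassPathFirst` properties (on the JVM, `spark.driver.userClassPathFirst` and `spark.executor.userClassPathFirst`) restores parent-first delegation, so only one copy of each class is ever defined.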
Tests pass on OpenShift 4.13:
this is super sexy :)
LGTM
re-running the failed job (503 from
I would be ok with merging this, but I would really like us to solve the classpath problem with the logging libs instead of ripping out logging :)
Co-authored-by: Sebastian Bernauer <[email protected]>
What do you mean by "ripping out logging"? Logs from applications are not affected by this change.
> What do you mean "ripping out logging"
That the logs of the spark-submit job will not end up in your logging sink, if I understood correctly. Imagine a job that runs every hour did not start properly last night at 02:00. You don't have any insight into what happened, e.g. the Kerberos server not being reachable.
But if you did not get it to work, it is what it is, I guess.
Fixes #354