We want an `s3` section in the CRD; for now we can use the same structure as in Druid:
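Roughly something like this (a sketch only; the field names below are assumptions modeled on the description, not the verbatim Druid structure):

```yaml
# Hypothetical s3 section in the SparkApplication spec (sketch only;
# field names are assumptions, not the final Druid structure)
spec:
  s3:
    endpoint: http://minio:9000        # config option for the S3 endpoint
    credentialsSecret: s3-credentials  # Secret holding the access key and secret key
```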
For the endpoint there is a config option; the access key and secret key should be mounted from the referenced secret. The secret structure used in Druid is:
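A sketch of such a Secret (the name and key names here are assumptions, not necessarily the verbatim Druid layout):

```yaml
# Sketch of the referenced credentials Secret; the key names
# (accessKey/secretKey) are an assumption for illustration
apiVersion: v1
kind: Secret
metadata:
  name: s3-credentials
stringData:
  accessKey: <your-access-key>
  secretKey: <your-secret-key>
```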
The secret should be mounted, and the env vars for that are `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`. The secret also needs to be mounted in the executors so they can read from S3 too, which means the pod template needs to be adjusted accordingly; see the sketch below.
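As a sketch, the executor pod template could inject the credentials like this (the secret and key names follow the hypothetical Secret above):

```yaml
# Sketch: executor pod template wiring the Secret into the env vars
# that the AWS SDK reads; secret/key names are the assumptions from above
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: spark-executor
      env:
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: s3-credentials
              key: accessKey
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: s3-credentials
              key: secretKey
```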
Update:

We don't know in which namespaces the `SparkApplication`s will be created, so we will need to create the `ServiceAccount` and `RoleBinding` on demand in the namespace of the `SparkApplication`. Our Kafka Operator is already creating service accounts; we can have a look there.

The role can be a `ClusterRole`, created in the Helm Chart.
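For illustration, the objects created on demand could look like this (all names are placeholders; only the `ClusterRole` ships with the Helm chart):

```yaml
# Sketch: per-namespace objects the operator would create on demand.
# All names are placeholders; the ClusterRole is installed once by the Helm chart.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spark-driver
  namespace: <sparkapplication-namespace>
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spark-driver
  namespace: <sparkapplication-namespace>
subjects:
  - kind: ServiceAccount
    name: spark-driver
    namespace: <sparkapplication-namespace>
roleRef:
  kind: ClusterRole
  name: spark-driver-clusterrole  # placeholder; created by the Helm chart
  apiGroup: rbac.authorization.k8s.io
```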