Upgrading the Hadoop version on the s2i Spark notebook
by pimilosevic@croz.net
Hi all,
I'm a little new to OpenShift and I need to deploy a whole ODH stack. We have OCS set up for S3 storage, but the s2i-spark-notebook that comes with the Operator uses a Hadoop version that ignores hadoopConf.set("fs.s3a.path.style.access", "true"), so I can't switch the URL style.
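For context, here is roughly what I'm running in the notebook; PySpark is assumed, and the endpoint and keys below are just placeholders for our OCS setup:

# Minimal sketch of the notebook code (PySpark assumed; endpoint/keys are placeholders).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3a-path-style-test").getOrCreate()
hadoopConf = spark.sparkContext._jsc.hadoopConfiguration()

hadoopConf.set("fs.s3a.endpoint", "https://s3.storage.whatever")  # placeholder endpoint
hadoopConf.set("fs.s3a.access.key", "REPLACE_ME")
hadoopConf.set("fs.s3a.secret.key", "REPLACE_ME")
hadoopConf.set("fs.s3a.path.style.access", "true")  # this setting seems to be ignored

# The read below is what triggers the error:
df = spark.read.parquet("s3a://my-bucket/some/path")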
I get a big error log saying my bucket's URL is unreachable; the URL being used is the virtual-hosted style bucket.s3.storage.whatever when it should be the path style s3.storage.whatever/bucket.
Looking around online, I found that this could be a bug that was fixed in Hadoop 2.8, so I'd like to upgrade to that version if at all possible, but I don't really understand how to do it. I'd appreciate any advice.
Stay good,
Petar
3 years, 7 months
Suggestion on managing secrets like aws-secret
by Ke Zhu
I'm trying to adopt the opendatahub/kubeflow operator with kustomize to
provision components like Hive/Trino, but one common requirement for
running a production system is credential management.
What's the general suggestion for managing the credentials opendatahub
needs? Kubernetes Secrets?
For example:
https://github.com/opendatahub-io/odh-manifests/blob/master/trino/base/aw...
I'd prefer not to manage credentials in code via kustomize. Using an
environment variable plus the kustomize secret generator is fine, but that
won't work when the operator pulls and then provisions opendatahub
via kfctl/kustomize.
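Ideally the workload itself would only read credentials injected at runtime, e.g. from a Secret exposed as environment variables. A rough sketch of what I have in mind (the env var names and the boto3 client are just illustrative):

# Illustrative only: credentials come from env vars that a Kubernetes Secret
# injects into the pod, so nothing is committed alongside the kustomize manifests.
import os
import boto3

access_key = os.environ["AWS_ACCESS_KEY_ID"]      # assumed env var name
secret_key = os.environ["AWS_SECRET_ACCESS_KEY"]  # assumed env var name

s3 = boto3.client(
    "s3",
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
)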
3 years, 8 months