Hello,

You can control the secret name used in the Trino deployment with the "s3_credentials_secret" parameter: https://github.com/AICoE/idh-manifests/tree/production/trino#s3_credentials_secret

You don't need kfctl/kustomize to control the credentials deployment, but you must specify, in the parameters, the name of the secret that contains the right credentials. Otherwise, Trino falls back to the default "aws_secret" secret, which contains only hard-coded "changeme" values that you don't want in a real environment. Unfortunately, that was the only way to parameterize the credentials in Trino and Data Catalog.
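As a sketch, overriding the parameter in a KfDef application entry might look like the following (the secret name "my-trino-s3-creds" and the repoRef path are hypothetical placeholders; adjust them to your actual manifest repo layout):

```yaml
# Hypothetical KfDef fragment: point Trino at your own secret
# instead of the default "aws_secret".
- kustomizeConfig:
    parameters:
      - name: s3_credentials_secret
        value: my-trino-s3-creds   # assumed secret name; create it separately
    repoRef:
      name: manifests
      path: trino
  name: trino
```

The secret itself would then be created out of band (e.g. with kubectl or a secrets operator) rather than committed to the manifest repo.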

Let me know if you have any other questions.

On Tue, May 4, 2021 at 9:49 AM Ke Zhu <kzhu@us.ibm.com> wrote:
I'm trying to adopt the opendatahub/kubeflow operator with kustomize to
provision components like Hive/Trino, but one common requirement for
running a production system is credential management.

What's the general suggestion on managing credentials required by
opendatahub? k8s secrets?

For example:
https://github.com/opendatahub-io/odh-manifests/blob/master/trino/base/aws-secret.yaml

I'd prefer not to manage credentials in code via kustomize. It's OK to
use environment variables plus the kustomize secret generator, but that
won't work if the operator pulls and provisions opendatahub via
kfctl/kustomize.
_______________________________________________
Users mailing list -- users@lists.opendatahub.io
To unsubscribe send an email to users-leave@lists.opendatahub.io


--

Ricardo Martinelli De Oliveira

Senior Software Engineer, AI CoE

Red Hat Brazil

Av. Brigadeiro Faria Lima, 3900

8th floor

rmartine@redhat.com    T: +551135426125    
M: +5511970696531