Hi Kidong,
For installing ODH, here are the steps:
1. Install the Open Data Hub operator from the OperatorHub on OpenShift.
2. Create a namespace called opendatahub, go to Installed Operators, and
install the default KfDef from the ODH operator. This will install the ODH
components but not Kubeflow. Once all pods are running, you can access the
components from the ODH dashboard route under Routes.
3. To install Kubeflow, create another namespace called kubeflow, and from
the ODH operator create this KfDef:
https://github.com/kubeflow/manifests/blob/v1.3-branch/distributions/kfde...
. Once all pods are running, you can access the Kubeflow dashboard from the
Istio ingress route in the istio-system namespace.
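As a quick CLI reference, the namespace creation and route lookups in the
steps above might look roughly like this (a sketch only; the KfDef itself is
created through the operator UI, and the exact route names depend on your
KfDef):

```shell
# Create the namespaces used in steps 2 and 3 (names from the steps above)
oc new-project opendatahub
oc new-project kubeflow

# Watch pod status until everything is Running
oc get pods -n opendatahub -w

# After the pods are running, list the routes that expose the dashboards.
# These lookups just list whatever routes the KfDef created.
oc get routes -n opendatahub     # ODH dashboard route
oc get routes -n istio-system    # Kubeflow dashboard via the Istio ingress
```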
Hope this helps
Juana
On Tue, Oct 12, 2021 at 8:35 AM Ki Dong Lee <kidlee(a)redhat.com> wrote:
Hi everyone,
I am new to ODH.
It seems that all the components in ODH are deployed with the Kubeflow
operator and Kustomize manifests.
Could you tell me in detail how to deploy these components on OCP with ODH?
Another question is about Spark on Kubernetes. I have noticed that in ODH,
if you want to deploy the Spark Thrift Server as HiveServer2, a Spark
cluster needs to be deployed on OCP beforehand.
I think there is a way to submit the Spark Thrift Server to Kubernetes/OCP
<https://spark.apache.org/docs/3.0.3/running-on-kubernetes.html> directly,
without having a Spark cluster deployed on OCP.
Is there any reason to do so?
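For context, a direct submission as described in the linked
running-on-kubernetes docs might look roughly like this (a sketch; the API
server address, namespace, container image, and jar path in angle brackets
are placeholders, not values from ODH):

```shell
# Hypothetical direct spark-submit of the Thrift Server to Kubernetes,
# per the Spark 3.0.3 running-on-kubernetes documentation linked above.
# Everything in <...> is a placeholder for illustration.
spark-submit \
  --master k8s://https://<api-server-host>:6443 \
  --deploy-mode cluster \
  --name spark-thrift-server \
  --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 \
  --conf spark.kubernetes.namespace=<namespace> \
  --conf spark.kubernetes.container.image=<spark-image> \
  --conf spark.executor.instances=2 \
  <path-to-application-jar>
```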
Cheers,
- Kidong Lee.
_______________________________________________
Users mailing list -- users(a)lists.opendatahub.io
To unsubscribe send an email to users-leave(a)lists.opendatahub.io