Don't see Spark or JupyterHub pods
by Ravi Gupta
Following a manual install on OpenShift 3.11, any idea what I am missing?
ravis-MacBook-Pro:opendatahub-operator rgupta$ oc get pods
NAME READY STATUS RESTARTS AGE
opendatahub-operator-7756f6cb5f-zv5ql 1/1 Running 0 5m
ravis-MacBook-Pro:opendatahub-operator rgupta$
I don't see this:
oc get pods
NAME READY STATUS RESTARTS AGE
jupyterhub-1-bx5ks 1/1 Running 0 2m7s
jupyterhub-1-deploy 0/1 Completed 0 2m16s
jupyterhub-db-1-deploy 0/1 Completed 0 2m9s
jupyterhub-db-1-wfvl6 1/1 Running 1 2m1s
opendatahub-operator-6fb66fc5b9-z9xb8 1/1 Running 0 2m58s
spark-cluster-opendatahub-m-ft92h 1/1 Running 0 72s
spark-cluster-opendatahub-w-7mnsl 1/1 Running 0 72s
spark-cluster-opendatahub-w-9g8hm 1/1 Running 0 72s
spark-operator-7c67cb6f8f-6xpvs 1/1 Running 0 2m7s
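For what it's worth, here is what I plan to check next, on the assumption that the operator only creates the JupyterHub and Spark pods after an OpenDataHub custom resource is applied (the CR kind and the example CR path below are my guesses from the usual operator repo layout, so they may differ):

# Is the OpenDataHub CRD registered, and does a CR instance exist in this project?
oc get crd | grep -i opendatahub
oc get opendatahub
# If no CR exists yet, create one from the example shipped in the operator repo
# (path guessed from the usual operator-sdk layout; adjust to match the repo)
oc apply -f deploy/crds/opendatahub_v1alpha1_opendatahub_cr.yaml
# Follow the operator logs while it reconciles the CR
oc logs -f deployment/opendatahub-operator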
Using custom spark images in opendatahub operator
by Ricardo Martinelli de Oliveira
Hi,
I'm integrating the Spark SQL Thrift server into the ODH operator, and I need
to use a custom Spark image (other than the radanalytics image) with
additional JARs to access Ceph/S3 buckets. Both the Thrift server and the
Spark cluster will need this custom image in order to access the buckets.
With that in mind, I'd like to discuss some options for getting this done.
I am thinking about these:
1) Let the customer specify the custom image in the yaml file (this is
already possible; see the sketch right after this list)
2) Create that custom Spark image and publish it on the quay.io opendatahub
organization
3) Add a buildconfig object and have the operator create the custom build and
set the image location in the deploymentconfig objects
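As a rough illustration of option 1, something along these lines should already work, assuming the radanalytics.io SparkCluster CR and its customImage field (the apiVersion, field names, and the image name here are from memory / hypothetical, so please check them against the CRD actually deployed):

oc apply -f - <<EOF
apiVersion: radanalytics.io/v1
kind: SparkCluster
metadata:
  name: spark-cluster-opendatahub
spec:
  # hypothetical image that already bundles the hadoop-aws / Ceph S3 JARs
  customImage: quay.io/example/openshift-spark-s3:latest
  master:
    instances: 1
  worker:
    instances: 2
EOF

The Thrift server deploymentconfig would then point at that same image.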
Although the third option automates everything and delivers the whole set
with the custom image, there is the open question of supporting custom image
builds within operators. We would need to add a spark_version variable so that
the build could download the Spark distribution corresponding to that version,
plus the related artifacts, and then run the build (see the rough buildconfig
sketch below). With the first option, we simply don't create the build objects
and document that, in order to use the Thrift server in the ODH operator, both
the Spark cluster and the Thrift server must use a custom Spark image
containing the JARs needed to access Ceph/S3. Finally, option two is the
middle ground between the two, so we don't need to worry about delegating
this task to either the user or the operator.
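For option 3, the objects the operator would create could look roughly like this (a sketch only: the git repo, its Dockerfile, and the way spark_version is wired in as a build argument are all assumptions for illustration):

oc apply -f - <<EOF
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: custom-spark
---
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: custom-spark
spec:
  source:
    git:
      # hypothetical repo holding the Dockerfile that downloads the Spark distribution
      uri: https://github.com/example/custom-spark-image.git
  strategy:
    dockerStrategy:
      buildArgs:
        # the spark_version value from the ODH CR would be injected here
        - name: SPARK_VERSION
          value: "2.4.0"
  output:
    to:
      kind: ImageStreamTag
      name: custom-spark:latest
EOF

The operator would then set the resulting ImageStreamTag as the image in the Spark cluster and Thrift server deploymentconfig objects.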
What do you think? What could be the best option for this scenario?
--
Ricardo Martinelli De Oliveira
Data Engineer, AI CoE
Red Hat Brazil <https://www.redhat.com/>
Av. Brigadeiro Faria Lima, 3900
8th floor
rmartine@redhat.com T: +551135426125
M: +5511970696531