
Run E2E tests for codeflare operator #40441

Closed

@@ -28,6 +28,72 @@ tests:
    requests:
      cpu: 100m
      memory: 200Mi
- as: codeflare-operator-e2e
  commands: |
    podman run -d -p 5000:5000 --name registry registry:2.8.1

Member commented:

If possible, we'd want to rely on the OpenShift internal container image registry. So instead of starting a local one, we'd only need to expose and log in to the internal registry, e.g.:

$ oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
$ podman login -u kubeadmin -p $(oc whoami -t) $(oc registry info)
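
For illustration, the registry setup at the start of the commands block might then look roughly like this (a sketch only, assuming the kubeadmin token and the default registry route are usable from the test environment; TLS settings for the route may still need adjusting):

# Expose the internal registry instead of running a local one (sketch)
oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
# Log in to the exposed route so the image push can go through it
podman login -u kubeadmin -p $(oc whoami -t) $(oc registry info)
# Reuse the route hostname as the registry address for the image reference
export REGISTRY_ADDRESS=$(oc registry info)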

    export REGISTRY_ADDRESS=$(hostname -i):5000

    KUBERAY_VERSION=$(make get-kuberay-version)
    echo Deploying KubeRay ${KUBERAY_VERSION}
    kubectl create -k "github.com/ray-project/kuberay/ray-operator/config/default?ref=${KUBERAY_VERSION}&timeout=90s"
    echo Deploying CodeFlare operator
    IMG="${REGISTRY_ADDRESS}"/codeflare-operator

Contributor commented:

A nitpick: could you add a version to the image tag?
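
For instance (just an illustration; the CODEFLARE_VERSION variable and its git-describe source are assumptions, the version could equally come from a VERSION file or a release tag):

# Tag the image with an explicit version instead of the implicit default tag (sketch)
CODEFLARE_VERSION=$(git describe --tags --always)
IMG="${REGISTRY_ADDRESS}/codeflare-operator:${CODEFLARE_VERSION}"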

    make image-push -e IMG="${IMG}"
    make deploy -e IMG="${IMG}"

Member commented:

If we rely on the OpenShift internal container registry, it'd need to be the internally addressable image.
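
Concretely (a sketch; the target namespace below is an assumption), the push could still go through the exposed route while the deployment references the registry's in-cluster service address:

# Push via the externally exposed route (sketch)
EXTERNAL_IMG="$(oc registry info)/openshift/codeflare-operator"
# Deploy with the internally addressable form so the cluster can pull it
INTERNAL_IMG="image-registry.openshift-image-registry.svc:5000/openshift/codeflare-operator"
make image-push -e IMG="${EXTERNAL_IMG}"
make deploy -e IMG="${INTERNAL_IMG}"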

    kubectl wait --timeout=120s --for=condition=Available=true deployment -n openshift-operators codeflare-operator-manager
    echo Deploying MCAD controller
    kubectl create ns codeflare-system
    cat <<EOF | kubectl apply -n codeflare-system -f -
    apiVersion: codeflare.codeflare.dev/v1alpha1
    kind: MCAD
    metadata:
      name: mcad
    spec:
      controllerResources: {}
    EOF
    cat <<EOF | kubectl apply -n codeflare-system -f -
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: mcad-controller-rayclusters
    rules:
    - apiGroups:
      - ray.io
      resources:
      - rayclusters
      - rayclusters/finalizers
      - rayclusters/status
      verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
    EOF
    cat <<EOF | kubectl apply -n codeflare-system -f -
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: mcad-controller-rayclusters
    subjects:
    - kind: ServiceAccount
      name: mcad-controller-mcad
      namespace: codeflare-system
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: mcad-controller-rayclusters
    EOF
    kubectl wait --timeout=120s --for=condition=Available=true deployment -n codeflare-system mcad-controller-mcad

    GOFLAGS="" make test-e2e
  from: test-bin
  resources:
    requests:
      cpu: 100m
      memory: 200Mi

Contributor commented on lines +95 to +96:

Based on https://github.com/opendatahub-io/distributed-workloads/blob/main/Quick-Start.md, I would consider 4 CPU and at least 4Gi of memory (considering just MCAD, Ray, and the CodeFlare operator for now).
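
With that suggestion applied, the requests on the new test entry would become something like (values from the suggestion above; whether limits are also wanted is left open):

  resources:
    requests:
      cpu: "4"
      memory: 4Gi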

  workflow: hypershift-hostedcluster-workflow
zz_generated_metadata:
  branch: main