
gsed support for macOS #298


Merged
merged 2 commits into from
Sep 21, 2023

Conversation

tedhtchang
Member

Issue link

Users would like to run make install on macOS. Refer to the linked conversation.

What changes have been made

Let the user define which sed binary to use.

Verification steps

For macOS only:

brew install gnu-sed
make install -e SED=/usr/local/bin/gsed
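Since the PR makes SED an overridable variable, the right binary can also be auto-detected before calling make. A minimal shell sketch (the detection logic below is an illustration, not part of this PR; the /usr/local/bin/gsed fallback assumes an Intel-Mac Homebrew prefix):

```shell
#!/bin/sh
# Pick a GNU-compatible sed: GNU sed answers to --version, while the BSD sed
# shipped with macOS does not, so fall back to Homebrew's gsed in that case.
if sed --version >/dev/null 2>&1; then
  SED=sed
else
  SED="$(command -v gsed || echo /usr/local/bin/gsed)"
fi
echo "Using SED=$SED"
```

The detected value can then be passed straight through with make install -e SED="$SED".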

Checks

  • I've made sure the tests are passing.
  • Testing Strategy
    • Unit tests
    • [x] Manual tests
    • Testing is not required for this change

@astefanutti
Contributor

/lgtm

@astefanutti
Contributor

/cc @sutaakar, you may want to double check.

@astefanutti
Contributor

@tedhtchang thanks. Maybe we can update the README with an example on how to use it here:

- GNU sed - sed is used in several Makefile commands. The default sed on macOS is incompatible with these, so GNU sed is needed for them to execute correctly.
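The incompatibility is easy to reproduce with in-place editing, which is where the two implementations diverge most visibly. A small illustration (the file path is arbitrary):

```shell
#!/bin/sh
# GNU sed accepts -i with no argument; BSD sed (the macOS default) requires
# a backup-suffix argument after -i, so this invocation fails there.
printf 'foo\n' > /tmp/sed-demo.txt
sed -i 's/foo/bar/' /tmp/sed-demo.txt
cat /tmp/sed-demo.txt
```

With GNU sed this prints bar; with BSD sed the -i invocation errors out, which is why the Makefile needs gsed on macOS.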

Contributor

@sutaakar left a comment


/lgtm

@openshift-ci openshift-ci bot removed the lgtm label Sep 21, 2023
@astefanutti
Contributor

/lgtm

@openshift-ci openshift-ci bot added the lgtm label Sep 21, 2023
@astefanutti
Contributor

/approve

@openshift-ci

openshift-ci bot commented Sep 21, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: astefanutti

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-merge-robot openshift-merge-robot merged commit c8572e1 into project-codeflare:main Sep 21, 2023
@tedhtchang
Member Author

tedhtchang commented Sep 21, 2023

@jbusche Could you test this on your M1 macOS?

brew install gnu-sed
podman machine start
kind create cluster

# Deploy codeflare-operator on kind on macOS
export IMG=quay.io/project-codeflare/codeflare-operator:v1.0.0-rc.1
export SED=/usr/local/bin/gsed
make deploy

# Kuberay
helm repo add kuberay https://ray-project.github.io/kuberay-helm/
helm repo update
helm install kuberay-operator kuberay/kuberay-operator --version 1.0.0-rc.0
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: mcad-controller-rayclusters
rules:
  - apiGroups:
      - ray.io
    resources:
      - rayclusters
      - rayclusters/finalizers
      - rayclusters/status
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: mcad-controller-rayclusters
subjects:
  - kind: ServiceAccount
    name: codeflare-operator-controller-manager
    namespace: openshift-operators
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: mcad-controller-rayclusters
EOF

# Deploy a RayCluster CR with appwrapper
cat <<EOF | kubectl apply -f -
apiVersion: workload.codeflare.dev/v1beta1
kind: AppWrapper
metadata:
  name: raycluster-complete
  namespace: default
spec:
  resources:
    GenericItems:
    - custompodresources:
      - limits:
          cpu: 1
          memory: 2G
          nvidia.com/gpu: 0
        replicas: 1
        requests:
          cpu: 500m
          memory: 2G
          nvidia.com/gpu: 0
      - limits:
          cpu: 1
          memory: 1G
          nvidia.com/gpu: 0
        replicas: 1
        requests:
          cpu: 500m
          memory: 1G
          nvidia.com/gpu: 0
      generictemplate:
        apiVersion: ray.io/v1alpha1
        kind: RayCluster
        metadata:
          labels:
            controller-tools.k8s.io: "1.0"
          name: raycluster-complete
        spec:
          headGroupSpec:
            rayStartParams:
              dashboard-host: 0.0.0.0
            serviceType: ClusterIP
            template:
              metadata:
                labels: {}
              spec:
                containers:
                - image: rayproject/ray:2.6.3
                  lifecycle:
                    preStop:
                      exec:
                        command:
                        - /bin/sh
                        - -c
                        - ray stop
                  name: ray-head
                  ports:
                  - containerPort: 6379
                    name: gcs
                  - containerPort: 8265
                    name: dashboard
                  - containerPort: 10001
                    name: client
                  resources:
                    limits:
                      cpu: "1"
                      memory: 2G
                    requests:
                      cpu: "500m"
                      memory: 2G
                  volumeMounts:
                  - mountPath: /tmp/ray
                    name: ray-logs
                volumes:
                - emptyDir: {}
                  name: ray-logs
          rayVersion: 2.5.0
          workerGroupSpecs:
          - groupName: small-group
            maxReplicas: 10
            minReplicas: 1
            rayStartParams: {}
            replicas: 1
            template:
              spec:
                containers:
                - image: rayproject/ray:2.6.3
                  lifecycle:
                    preStop:
                      exec:
                        command:
                        - /bin/sh
                        - -c
                        - ray stop
                  name: ray-worker
                  resources:
                    limits:
                      cpu: "1"
                      memory: 1G
                    requests:
                      cpu: "500m"
                      memory: 1G
                  volumeMounts:
                  - mountPath: /tmp/ray
                    name: ray-logs
                volumes:
                - emptyDir: {}
                  name: ray-logs
EOF

@jbusche
Collaborator

jbusche commented Sep 21, 2023

Hey @tedhtchang, I tried it out... it worked for me except for the following changes:

  1. My gsed was installed elsewhere, so I ended up using this command to export SED:
export SED=/opt/homebrew/bin/gsed
  2. It wasn't clear to me where to run make deploy from, so I ended up doing this:
mkdir TED ; cd TED
git clone https://github.com/tedhtchang/codeflare-operator.git
cd codeflare-operator
git checkout gsed-support
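On the gsed-path point: Homebrew installs under /usr/local on Intel Macs but under /opt/homebrew on Apple Silicon, so a prefix-independent export avoids hard-coding either path. A small sketch (the fallback to the system sed is only so the line still resolves on non-macOS machines):

```shell
#!/bin/sh
# Locate gsed wherever Homebrew put it; fall back to the system sed
# so the export still resolves where gsed is absent.
SED="$(command -v gsed || command -v sed)"
export SED
echo "SED=$SED"
```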

Then, it worked:

oc get pods,appwrappers
NAME                                               READY   STATUS    RESTARTS   AGE
pod/kuberay-operator-58c98b495b-m8qqv              1/1     Running   0          8m3s
pod/raycluster-complete-head-mzzxq                 1/1     Running   0          7m9s
pod/raycluster-complete-worker-small-group-2pt2c   1/1     Running   0          7m9s

NAME                                                    AGE
appwrapper.workload.codeflare.dev/raycluster-complete   7m10s

I guess the only thing I'm confused about is how it's dispatching an AppWrapper when there's no MCAD?

oc get pods -A
NAMESPACE             NAME                                           READY   STATUS    RESTARTS   AGE
default               kuberay-operator-58c98b495b-m8qqv              1/1     Running   0          7m16s
default               raycluster-complete-head-mzzxq                 1/1     Running   0          6m22s
default               raycluster-complete-worker-small-group-2pt2c   1/1     Running   0          6m22s
kube-system           coredns-5d78c9869d-58kqp                       1/1     Running   0          11m
kube-system           coredns-5d78c9869d-chstd                       1/1     Running   0          11m
kube-system           etcd-kind-control-plane                        1/1     Running   0          11m
kube-system           kindnet-qdjgj                                  1/1     Running   0          11m
kube-system           kube-apiserver-kind-control-plane              1/1     Running   0          11m
kube-system           kube-controller-manager-kind-control-plane     1/1     Running   0          11m
kube-system           kube-proxy-np5mx                               1/1     Running   0          11m
kube-system           kube-scheduler-kind-control-plane              1/1     Running   0          11m
local-path-storage    local-path-provisioner-6bc4bddd6b-qpmqw        1/1     Running   0          11m
openshift-operators   codeflare-operator-manager-8548dc89bb-j9grs    1/1     Running   0          7m52s

5 participants