Add openEuler support for AudioQnA #2191 (Open)
zhihangdeng wants to merge 1 commit into opea-project:main from zhihangdeng:main.
Changes from all commits

New file (10 lines): an openEuler variant of the AudioQnA application Dockerfile.

```dockerfile
# Copyright (C) 2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: Apache-2.0

ARG IMAGE_REPO=opea
ARG BASE_TAG=latest
FROM $IMAGE_REPO/comps-base:$BASE_TAG-openeuler

COPY ./audioqna.py $HOME/audioqna.py

ENTRYPOINT ["python", "audioqna.py"]
```
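The two `ARG` lines above default to `opea` and `latest`, so the base image resolves to `opea/comps-base:latest-openeuler` unless overridden with `--build-arg`. A minimal sketch of how those defaults combine into a build command (the target image name here is an assumption, not from the PR); the command is printed rather than executed:

```shell
# Mirror the Dockerfile's ARG defaults in plain shell (assumed values).
IMAGE_REPO=${IMAGE_REPO:-opea}
BASE_TAG=${BASE_TAG:-latest}
# Compose and print the hypothetical build invocation:
build_cmd="docker build --build-arg IMAGE_REPO=$IMAGE_REPO --build-arg BASE_TAG=$BASE_TAG -t $IMAGE_REPO/audioqna:$BASE_TAG-openeuler ."
echo "$build_cmd"
```

Overriding either variable in the environment changes both the base image pulled by `FROM` and (in this sketch) the output tag.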
New file (92 lines, 92 additions, 0 deletions): AudioQnA/docker_compose/intel/cpu/xeon/compose_openeuler.yaml

```yaml
# Copyright (C) 2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: Apache-2.0

services:
  whisper-service:
    image: ${REGISTRY:-opea}/whisper:${TAG:-latest}-openeuler
    container_name: whisper-service
    ports:
      - ${WHISPER_SERVER_PORT:-7066}:7066
    ipc: host
    environment:
      no_proxy: ${no_proxy}
      http_proxy: ${http_proxy}
      https_proxy: ${https_proxy}
    restart: unless-stopped
  speecht5-service:
    image: ${REGISTRY:-opea}/speecht5:${TAG:-latest}-openeuler
    container_name: speecht5-service
    ports:
      - ${SPEECHT5_SERVER_PORT:-7055}:7055
    ipc: host
    environment:
      no_proxy: ${no_proxy}
      http_proxy: ${http_proxy}
      https_proxy: ${https_proxy}
    restart: unless-stopped
  vllm-service:
    image: openeuler/vllm-cpu:0.9.1-oe2403lts
    container_name: vllm-service
    ports:
      - ${LLM_SERVER_PORT:-3006}:80
    volumes:
      - "${MODEL_CACHE:-./data}:/root/.cache/huggingface/hub"
    shm_size: 128g
    privileged: true
    environment:
      no_proxy: ${no_proxy}
      http_proxy: ${http_proxy}
      https_proxy: ${https_proxy}
      HF_TOKEN: ${HF_TOKEN}
      LLM_MODEL_ID: ${LLM_MODEL_ID}
      VLLM_TORCH_PROFILER_DIR: "/mnt"
      LLM_SERVER_PORT: ${LLM_SERVER_PORT}
      VLLM_CPU_OMP_THREADS_BIND: all
      VLLM_CPU_KVCACHE_SPACE: 30
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://$host_ip:${LLM_SERVER_PORT}/health || exit 1"]
      interval: 10s
      timeout: 10s
      retries: 100
    command: --model ${LLM_MODEL_ID} --host 0.0.0.0 --port 80
  audioqna-xeon-backend-server:
    image: ${REGISTRY:-opea}/audioqna:${TAG:-latest}-openeuler
    container_name: audioqna-xeon-backend-server
    depends_on:
      - whisper-service
      - vllm-service
      - speecht5-service
    ports:
      - "3008:8888"
    environment:
      - no_proxy=${no_proxy}
      - https_proxy=${https_proxy}
      - http_proxy=${http_proxy}
      - MEGA_SERVICE_HOST_IP=${MEGA_SERVICE_HOST_IP}
      - WHISPER_SERVER_HOST_IP=${WHISPER_SERVER_HOST_IP}
      - WHISPER_SERVER_PORT=${WHISPER_SERVER_PORT}
      - LLM_SERVER_HOST_IP=${LLM_SERVER_HOST_IP}
      - LLM_SERVER_PORT=${LLM_SERVER_PORT}
      - LLM_MODEL_ID=${LLM_MODEL_ID}
      - SPEECHT5_SERVER_HOST_IP=${SPEECHT5_SERVER_HOST_IP}
      - SPEECHT5_SERVER_PORT=${SPEECHT5_SERVER_PORT}
    ipc: host
    restart: always
  audioqna-xeon-ui-server:
    image: ${REGISTRY:-opea}/audioqna-ui:${TAG:-latest}-openeuler
    container_name: audioqna-xeon-ui-server
    depends_on:
      - audioqna-xeon-backend-server
    ports:
      - "5173:5173"
    environment:
      - no_proxy=${no_proxy}
      - https_proxy=${https_proxy}
      - http_proxy=${http_proxy}
      - CHAT_URL=${BACKEND_SERVICE_ENDPOINT}
    ipc: host
    restart: always

networks:
  default:
    driver: bridge
```
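The image references above rely on Compose's `${VAR:-default}` interpolation, which follows POSIX shell parameter-expansion syntax: unset `REGISTRY`/`TAG` fall back to `opea`/`latest`, and setting them rewrites every image tag at once. A small sketch of that resolution in plain shell (the `myrepo`/`1.3` values are illustrative assumptions):

```shell
# With REGISTRY and TAG unset, the defaults apply:
unset REGISTRY TAG
whisper_image="${REGISTRY:-opea}/whisper:${TAG:-latest}-openeuler"
echo "$whisper_image"

# With both set (hypothetical values), the defaults are ignored:
override=$(REGISTRY=myrepo TAG=1.3 sh -c 'echo "${REGISTRY:-opea}/whisper:${TAG:-latest}-openeuler"')
echo "$override"
```

This is why the test script below only has to `export REGISTRY` and `TAG` once to retarget every `-openeuler` image in the compose file.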
New file (103 lines): the CI test script for the openEuler compose deployment.

```bash
#!/bin/bash
# Copyright (C) 2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: Apache-2.0

set -e
IMAGE_REPO=${IMAGE_REPO:-"opea"}
IMAGE_TAG=${IMAGE_TAG:-"latest"}
echo "REGISTRY=IMAGE_REPO=${IMAGE_REPO}"
echo "TAG=IMAGE_TAG=${IMAGE_TAG}"

export REGISTRY=${IMAGE_REPO}
export TAG=${IMAGE_TAG}
export MODEL_CACHE=${model_cache:-"./data"}

WORKPATH=$(dirname "$PWD")
LOG_PATH="$WORKPATH/tests"
ip_address=$(hostname -I | awk '{print $1}')

function build_docker_images() {
    opea_branch=${opea_branch:-"main"}

    cd $WORKPATH/docker_image_build
    git clone --depth 1 --branch ${opea_branch} https://github.com/opea-project/GenAIComps.git
    pushd GenAIComps
    echo "GenAIComps test commit is $(git rev-parse HEAD)"
    docker build --no-cache -t ${REGISTRY}/comps-base:${TAG}-openeuler --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f Dockerfile.openEuler .
    popd && sleep 1s

    echo "Build all the images with --no-cache, check docker_image_build.log for details..."
    service_list="audioqna-openeuler audioqna-ui-openeuler whisper-openeuler speecht5-openeuler"
    docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log

    docker images && sleep 1s
}

function start_services() {
    cd $WORKPATH/docker_compose/intel/cpu/xeon/
    export host_ip=${ip_address}
    source set_env.sh
    # sed -i "s/backend_address/$ip_address/g" $WORKPATH/ui/svelte/.env

    # Start Docker Containers
    docker compose -f compose_openeuler.yaml up -d > ${LOG_PATH}/start_services_with_compose.log
    n=0
    until [[ "$n" -ge 200 ]]; do
        docker logs vllm-service > $LOG_PATH/vllm_service_start.log 2>&1
        if grep -q complete $LOG_PATH/vllm_service_start.log; then
            break
        fi
        sleep 5s
        n=$((n+1))
    done
}

function validate_megaservice() {
    response=$(http_proxy="" curl http://${ip_address}:3008/v1/audioqna -XPOST -d '{"audio": "UklGRigAAABXQVZFZm10IBIAAAABAAEARKwAAIhYAQACABAAAABkYXRhAgAAAAEA", "max_tokens":64}' -H 'Content-Type: application/json')
    # always print the log
    docker logs whisper-service > $LOG_PATH/whisper-service.log
    docker logs speecht5-service > $LOG_PATH/tts-service.log
    docker logs vllm-service > $LOG_PATH/vllm-service.log
    docker logs audioqna-xeon-backend-server > $LOG_PATH/audioqna-xeon-backend-server.log
    echo "$response" | sed 's/^"//;s/"$//' | base64 -d > speech.mp3

    if [[ $(file speech.mp3) == *"RIFF"* ]]; then
        echo "Result correct."
    else
        echo "Result wrong."
        exit 1
    fi
}

function stop_docker() {
    cd $WORKPATH/docker_compose/intel/cpu/xeon/
    docker compose -f compose_openeuler.yaml stop && docker compose rm -f
}

function main() {
    echo "::group::stop_docker"
    stop_docker
    echo "::endgroup::"

    echo "::group::build_docker_images"
    if [[ "$IMAGE_REPO" == "opea" ]]; then build_docker_images; fi
    echo "::endgroup::"

    echo "::group::start_services"
    start_services
    echo "::endgroup::"

    echo "::group::validate_megaservice"
    validate_megaservice
    echo "::endgroup::"

    echo "::group::stop_docker"
    stop_docker
    docker system prune -f
    echo "::endgroup::"
}

main
```
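The `validate_megaservice` step posts a base64-encoded WAV fixture and then checks that the service's base64 response decodes to a file whose `file` output mentions `RIFF`. The request fixture itself can be inspected locally, with no services running, to see the header the script is checking for:

```shell
# Decode the sample "audio" payload from validate_megaservice and read its
# first four bytes; a WAV file starts with the ASCII marker "RIFF".
payload="UklGRigAAABXQVZFZm10IBIAAAABAAEARKwAAIhYAQACABAAAABkYXRhAgAAAAEA"
header=$(echo "$payload" | base64 -d | head -c 4)
echo "$header"   # -> RIFF
```

Note the response file is saved as `speech.mp3` but validated against the RIFF (WAV) signature, so the name is cosmetic; the check only cares about the container format.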
New file (30 lines): the openEuler variant of the AudioQnA UI Dockerfile.

```dockerfile
# Copyright (C) 2025 Huawei Technologies Co., Ltd.
# SPDX-License-Identifier: Apache-2.0

# Use node 20.11.1 as the base image
FROM openeuler/node:20.11.1-oe2403lts

# Update package manager and install Git
RUN yum update -y && \
    yum install -y \
    git && \
    yum clean all && \
    rm -rf /var/cache/yum

# Copy the front-end code repository
COPY svelte /home/user/svelte

# Set the working directory
WORKDIR /home/user/svelte

# Install front-end dependencies
RUN npm install

# Build the front-end application
RUN npm run build

# Expose the port of the front-end application
EXPOSE 5173

# Run the front-end application in preview mode
CMD ["npm", "run", "preview", "--", "--port", "5173", "--host", "0.0.0.0"]
```
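Outside of Compose, this UI image would be run with the port and `CHAT_URL` wiring that the compose file supplies. A sketch of such an invocation (image name and environment variable mirror the compose file; everything else is an assumption, and the command is printed rather than executed):

```shell
# Hypothetical standalone run command for the UI container; CHAT_URL is left
# as a reference to $BACKEND_SERVICE_ENDPOINT, as in the compose file.
REGISTRY=${REGISTRY:-opea}
TAG=${TAG:-latest}
run_cmd="docker run -d -p 5173:5173 -e CHAT_URL=\$BACKEND_SERVICE_ENDPOINT $REGISTRY/audioqna-ui:$TAG-openeuler"
echo "$run_cmd"
```

Port 5173 on the host maps to the Vite preview server exposed by the `CMD` above.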