llama.cpp failed to build; the full log is attached below as build_log.txt.
# Environment and Context
Windows 11, full ROCm SDK 5.5
cmake version 3.26.4
GNU Make 4.4.1
Built for x86_64-w64-mingw32
g++.exe (MinGW-W64 x86_64-msvcrt-posix-seh, built by Brecht Sanders) 13.1.0
Copyright (C) 2023 Free Software Foundation, Inc.
# Failure Information (for bugs)
# Steps to Reproduce
1. Configure with CMake:
cmake .. -G "Ninja" -DCMAKE_BUILD_TYPE=Release -DLLAMA_HIPBLAS=ON -DLLAMA_CUDA_DMMV_X=64 -DLLAMA_CUDA_MMV_Y=4 -DCMAKE_C_COMPILER="C:/Program Files/AMD/ROCm/5.5/bin/clang.exe" -DCMAKE_CXX_COMPILER="C:/Program Files/AMD/ROCm/5.5/bin/clang++.exe" -DAMDGPU_TARGETS="gfx1100"
2. Build:
cmake --build . --config Release
# Failure Logs
FAILED: CMakeFiles/ggml-rocm.dir/ggml-cuda.cu.obj
C:\PROGRA~1\AMD\ROCm\5.5\bin\CLANG_~1.EXE -DGGML_CUDA_DMMV_X=128 -DGGML_CUDA_MMV_Y=4 -DGGML_USE_CUBLAS -DGGML_USE_HIPBLAS -DK_QUANTS_PER_ITERATION=2 -D_CRT_SECURE_NO_WARNINGS -D_XOPEN_SOURCE=600 -D__HIP_PLATFORM_AMD__=1 -D__HIP_PLATFORM_HCC__=1 -isystem "C:/Program Files/AMD/ROCm/5.5/include" -O3 -DNDEBUG -D_DLL -D_MT -Xclang --dependent-lib=msvcrt -std=gnu++14 -mllvm -amdgpu-early-inline-all=true -mllvm -amdgpu-function-calls=false -x hip --offload-arch=gfx1100 -MD -MT CMakeFiles/ggml-rocm.dir/ggml-cuda.cu.obj -MF CMakeFiles\ggml-rocm.dir\ggml-cuda.cu.obj.d -o CMakeFiles/ggml-rocm.dir/ggml-cuda.cu.obj -c "W:/git/New folder/llama.cpp/ggml-cuda.cu"
W:/git/New folder/llama.cpp/ggml-cuda.cu:8394:5: error: unknown type name 'cublasComputeType_t'
cublasComputeType_t cu_compute_type = CUBLAS_COMPUTE_16F;
^
W:/git/New folder/llama.cpp/ggml-cuda.cu:8395:5: error: unknown type name 'cudaDataType_t'
cudaDataType_t cu_data_type = CUDA_R_16F;
^
2 errors generated when compiling for gfx1100.
[build_log.txt](https://github.com/ggerganov/llama.cpp/files/13707336/build_log.txt)
@slaren Can you explain in more detail what this means? I'm a newbie at this. Does the problem have a solution, or is the card not usable for inference? I'm not a developer, just a user. Thank you very much for helping me! Hugs.
I'm sorry for my ignorance; I know I'm in the wrong place. But if you're saying the problem has been fixed, that means I can run inference on this card. Thank you very much. Now I need to find a step-by-step guide somewhere. I know this isn't the place for that, but you've given me hope, so I'll go bother other people on Discord and Reddit. Again, thank you for your promptness! You are innovating the world!
# Prerequisites
# Expected Behavior
I have compiled successfully many times, but after the changes in b1658 I am no longer able to compile.
# Current Behavior
Build instructions:
cmake .. -G "Ninja" -DCMAKE_BUILD_TYPE=Release -DLLAMA_HIPBLAS=ON -DLLAMA_CUDA_DMMV_X=64 -DLLAMA_CUDA_MMV_Y=4 -DCMAKE_C_COMPILER="C:/Program Files/AMD/ROCm/5.5/bin/clang.exe" -DCMAKE_CXX_COMPILER="C:/Program Files/AMD/ROCm/5.5/bin/clang++.exe" -DAMDGPU_TARGETS="gfx1100"
cmake --build . -j 16