Failed install on Apple silicon #1258
Comments
Please check this comment: #1197 (comment)
I was able to solve it by linking protoc again.
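The "linking protoc again" fix is not spelled out in this thread; a plausible sketch, assuming protoc was installed through Homebrew and its symlink went stale or points at the wrong architecture, would be:

```shell
# Check which protoc (if any) is currently on PATH.
command -v protoc && protoc --version

# Re-create Homebrew's protoc symlinks (assumes `brew install protobuf`
# was run earlier; --overwrite replaces any conflicting existing links).
brew unlink protobuf
brew link --overwrite protobuf
```

The exact package name (`protobuf`) and the use of `brew` are assumptions based on the macOS context of this issue, not something the commenter stated.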
Did the build succeed? Is it OK that we get the |
No.
Built with BUILD_GRPC_FOR_BACKEND_LLAMA=on make backend/cpp/llama/grpc-server, and then reran the |
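The workaround above, as reported, boils down to building the llama gRPC backend target explicitly before retrying the build. A minimal sketch, assuming a checkout of the LocalAI repository (the follow-up command is truncated in the original comment, so only the documented step is shown):

```shell
# From the root of a LocalAI checkout: build the llama.cpp gRPC backend
# with the flag the commenter reported, then rerun the original build step.
BUILD_GRPC_FOR_BACKEND_LLAMA=on make backend/cpp/llama/grpc-server
```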
@NinjAiBot my output looks just like yours, and it's working for me. I just followed the next steps in "Example: Build on mac" to download ggml-gpt4all-j.bin and asked it how it was. Try it! Thanks @renzo4web for the |
I just used LM-Studio instead. It was the easiest way to spin up a server to chat with a model, which is what I needed to do.
LocalAI version:
Most recent as of this report
Environment, CPU architecture, OS, and Version:
Describe the bug
Running the installer from the official documentation on macOS ARM64 fails at this step:
cd llama.cpp && mkdir -p build && cd build && cmake .. -DLLAMA_METAL=OFF && cmake --build . --config Release
To Reproduce
Follow these steps on an M2 Max MacBook Pro.
Expected behavior
A successful install
Logs
Full error: