This repository was archived by the owner on Sep 17, 2024. It is now read-only.

Conversation


@keen99 keen99 commented Mar 30, 2023

actually check for its existence before setting it

fixes

main: seed = 1680128377
llama_model_load: loading model from 'gpt4all-lora-quantized.bin' - please wait ...
Illegal instruction: 4

keen99 added a commit to keen99/gpt4all.cpp that referenced this pull request Mar 30, 2023
there was no check. ported from zanussbaum#2
keen99 added a commit to keen99/gpt4all.cpp that referenced this pull request Mar 30, 2023
...there was no check. ported upstream from zanussbaum#2 (I don't see any clean path for upstream patches)
ggerganov pushed a commit to ggml-org/llama.cpp that referenced this pull request Mar 30, 2023
...there was no check. ported upstream from zanussbaum#2 (I don't see any clean path for upstream patches)