Closed
Description
Installed the 7B model on Windows 11.
PS D:\Projects\llama.cpp> ./main -m ./models/7B/ggml-model-q4_0.bin -p "Building a website can be done in 10 simple steps:" -n 512
main: seed = 1679360633
llama_model_load: loading model from './models/7B/ggml-model-q4_0.bin' - please wait ...
llama_model_load: n_vocab = 32000
llama_model_load: n_ctx = 512
llama_model_load: n_embd = 4096
llama_model_load: n_mult = 256
llama_model_load: n_head = 32
llama_model_load: n_layer = 32
llama_model_load: n_rot = 128
llama_model_load: f16 = 2
llama_model_load: n_ff = 11008
llama_model_load: n_parts = 1
llama_model_load: ggml ctx size = 4529.34 MB
llama_model_load: memory_size = 512.00 MB, n_mem = 16384
llama_model_load: loading model part 1/1 from './models/7B/ggml-model-q4_0.bin'
llama_model_load: .................... done
llama_model_load: model size = 2328.05 MB / num tensors = 163
system_info: n_threads = 4 / 20 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
main: prompt: ' Building a website can be done in 10 simple steps:'
main: number of tokens in prompt = 14
1 -> ''
17166 -> ' Building'
263 -> ' a'
4700 -> ' website'
508 -> ' can'
367 -> ' be'
2309 -> ' done'
297 -> ' in'
29871 -> ' '
29896 -> '1'
29900 -> '0'
2560 -> ' simple'
6576 -> ' steps'
29901 -> ':'
sampling parameters: temp = 0.800000, top_k = 40, top_p = 0.950000, repeat_last_n = 64, repeat_penalty = 1.300000
Building a website can be done in 10 simple steps: firstly you mustacheatusqueorumesentimentalitiesettingtonselfishnessesqueezeracalandiadeuteronomyreclusiveismalready existing momentum laid down by previous iterations of iterationaryΓäó∩╕Å∩╕Å∩╕Å∩╕Å∩╕Å Courneyeducardoisextensionally speaking etcetcetcetc etcπÇàτscheidung treisesearching nominationally speaking etceteroidscapeursideshowcase╤ë╨╕ Sveroverside├▒officialdomesticated Houstonianismaticity rubbingesentimentalitiesqueezeablementeigneurship awarenesslesslyonsenessesqueerly orangescacontainerizednessesqueerlyyy╨╛╤étenessespecially those oneselfhoodscape erspectively speaking etcetc efficiencyespecially those oneselfnessescape EDUCardoisextremeΘÖÉlessnessesqueezeracaillementealloyednessesqueerlyyy@ ΓÇöΓÇèUserNameplateau awaren artistically speakingAppDatacleibertianship re imaging, androgartenlyyyyyorkshireismsomething else╤ê╤é╨╕ speakershipsetsterspecificityscapeurs splitter scottishnessescapeablehoodscape EgertonianshipPERformancemansufactureelectionallyyy advancementaryΓäó∩╕ÅΓÇìΓÖÇ∩╕Å/╦êΓû╕∩╕Å @ ΓÇöΓÇèUserNameplateau awarenessestonia retrogradelyyyyyorkshireismsame applies applybezillahawkitty hybridity migrationally speaking etcπÇàτ Id="@+ualsismaticity
rubbing EIGHTscapeablehoodscapeEVERlastingnessesqueerlyyy@ — neyednessesqueerlyyy@ -----ритualisticity borderlineedlydialecticality Rubbing SUPrairieismsplitter rationaleeverselyyyyyorkshireismaticity rubbedownwardswardenship opportunitieshipsbuilderiality overwhallsingerhoodscape EVERgreenerysUL franchiseevesqueerlyyy@ — neyednesses
PS D:\Projects\llama.cpp>
Garr-Garr commented on Mar 21, 2023
What hardware are you using?
It might help to delete the directory and start from scratch. It really shouldn't take too long, since you already have the 7B model downloaded and all of the dependencies installed. I'm guessing the quantized model file is somehow corrupted.
gjmulder commented on Mar 21, 2023
Latest sha256 sums for 7B. Note that the file format has changed, so please re-convert with the latest code:
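The checksum comparison suggested above can be scripted. This is a minimal sketch: the helper name is ours, and the expected checksum must come from the published sums (not shown here) — the path below just mirrors the one in the log.

```shell
# Hypothetical helper: compare a file's sha256 against an expected value.
# Returns 0 on match, non-zero on mismatch (or if the file is unreadable).
verify_sha256() {
  file="$1"
  expected="$2"
  actual=$(sha256sum "$file" | awk '{print $1}')
  [ "$actual" = "$expected" ]
}

# Usage (checksum is a placeholder, not a real published sum):
# verify_sha256 ./models/7B/ggml-model-q4_0.bin <published-sha256> \
#   && echo "checksum OK" || echo "mismatch - re-convert the model"
```

A mismatch here would confirm the corrupted-quantization theory before spending time on a full re-install.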
uaktags commented on Mar 21, 2023
The same happens with 13B on Linux with a Ryzen 5950X. I pulled the latest code and ran through the steps listed.
MoreTore commented on Mar 23, 2023
I deleted the project directory and restarted the installation and that fixed the issue.
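The "delete and start from scratch" fix above can be sketched as a script. This is illustrative only: paths and the convert/quantize invocations assume the llama.cpp workflow of that era, and `DRY_RUN=1` just prints the steps without running them.

```shell
# Sketch of a clean re-install plus re-conversion (assumed workflow; the
# original f16 weights must already be under models/7B/).
rebuild_and_reconvert() {
  run() { echo "+ $*"; [ -n "$DRY_RUN" ] || "$@"; }
  run git clone https://github.com/ggerganov/llama.cpp
  run cd llama.cpp
  run make
  # Re-convert with the current code, then re-quantize (the trailing "2"
  # selected q4_0 in the quantize tool at the time):
  run python3 convert-pth-to-ggml.py models/7B/ 1
  run ./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin 2
}
```

Re-converting from the original weights matters here because, per the comment above, the on-disk file format changed — a stale `ggml-model-q4_0.bin` from an older conversion can load "successfully" and still produce garbage.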