Description
Use main to decode sample audio such as samples/gb1.wav and samples/gb0.wav.
Once the timestamp exceeds 00:01:22, it crashes.
whisper.cpp$ ./main -m models/ggml-base.en.bin -f "samples/gb0.wav" -pc
whisper_init_from_file_no_state: loading model from 'models/ggml-base.en.bin'
whisper_model_load: loading model
whisper_model_load: n_vocab = 51864
whisper_model_load: n_audio_ctx = 1500
whisper_model_load: n_audio_state = 512
whisper_model_load: n_audio_head = 8
whisper_model_load: n_audio_layer = 6
whisper_model_load: n_text_ctx = 448
whisper_model_load: n_text_state = 512
whisper_model_load: n_text_head = 8
whisper_model_load: n_text_layer = 6
whisper_model_load: n_mels = 80
whisper_model_load: f16 = 1
whisper_model_load: type = 2
whisper_model_load: mem required = 215.00 MB (+ 6.00 MB per decoder)
whisper_model_load: adding 1607 extra tokens
whisper_model_load: model ctx = 140.60 MB
whisper_model_load: model size = 140.54 MB
whisper_init_state: kv self size = 5.25 MB
whisper_init_state: kv cross size = 17.58 MB
system_info: n_threads = 4 / 104 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 |
main: processing 'samples/gb0.wav' (2037760 samples, 127.4 sec), 4 threads, 1 processors, lang = en, task = transcribe, timestamps = 1 ...
[00:00:00.000 --> 00:00:03.240] Good morning. This Tuesday is Election Day.
[00:00:03.240 --> 00:00:06.000] After months of spirited debate and vigorous campaigning,
[00:00:06.000 --> 00:00:08.640] the time has come for Americans to make important decisions
[00:00:08.640 --> 00:00:10.120] about our nation's future.
[00:00:10.120 --> 00:00:13.760] I encourage all Americans to go to the polls and vote.
[00:00:13.760 --> 00:00:16.120] Election season brings out the spirit of competition
[00:00:16.120 --> 00:00:18.080] between our political parties.
[00:00:18.080 --> 00:00:20.260] And that competition is an essential part
[00:00:20.260 --> 00:00:21.760] of a healthy democracy.
[00:00:21.760 --> 00:00:23.520] But as the campaigns come to a close,
[00:00:23.520 --> 00:00:26.000] Republicans, Democrats, and independents
[00:00:26.000 --> 00:00:29.120] can find common ground on at least one point.
[00:00:29.120 --> 00:00:31.560] Our system of representative democracy
[00:00:31.560 --> 00:00:34.440] is one of America's greatest strengths.
[00:00:34.440 --> 00:00:36.240] The United States was founded on the belief
[00:00:36.240 --> 00:00:38.240] that all men are created equal.
[00:00:38.240 --> 00:00:41.440] Every election day, millions of Americans of all races,
[00:00:41.440 --> 00:00:43.440] religions, and backgrounds step into voting
[00:00:43.440 --> 00:00:45.280] booths throughout the nation.
[00:00:45.280 --> 00:00:47.780] Whether they are rich or poor or old or young,
[00:00:47.780 --> 00:00:50.680] each of them has an equal share in choosing the path
[00:00:50.680 --> 00:00:52.440] that our country will take.
[00:00:52.440 --> 00:00:54.880] And every ballot they cast is a reminder
[00:00:54.880 --> 00:00:58.280] that our founding principles are alive and well.
ggml_new_tensor_impl: not enough space in the scratch memory
Segmentation fault (core dumped)
gdb info (built with the debug compile flag)
[00:00:57.440 --> 00:01:04.120] In an age when spaceflight has come to seem almost routine, it is easy to overlook the
[00:01:04.120 --> 00:01:10.400] dangers of travel by rocket and the difficulties of navigating the fierce outer atmosphere of
[00:01:10.400 --> 00:01:12.600] the Earth.
[00:01:12.600 --> 00:01:19.280] These astronauts knew the dangers, and they faced them willingly, knowing they had a high
[00:01:19.280 --> 00:01:22.960] and noble purpose in life.
[New Thread 0x7fffe2820640 (LWP 259196)]
[New Thread 0x7fffe201f640 (LWP 259197)]
[New Thread 0x7fffe181e640 (LWP 259198)]
[Thread 0x7fffe181e640 (LWP 259198) exited]
[New Thread 0x7fffe181e640 (LWP 259199)]
[Thread 0x7fffe201f640 (LWP 259197) exited]
[Thread 0x7fffe2820640 (LWP 259196) exited]
[New Thread 0x7fffe201f640 (LWP 259200)]
[New Thread 0x7fffe2820640 (LWP 259201)]
ggml_new_tensor_impl: not enough space in the scratch memory
main: ggml.c:2788: ggml_new_tensor_impl: Assertion `false' failed.
[Thread 0x7fffe2820640 (LWP 259201) exited]
[Thread 0x7fffe201f640 (LWP 259200) exited]
[Thread 0x7fffe181e640 (LWP 259199) exited]
Thread 1 "main" received signal SIGABRT, Aborted.
__pthread_kill_implementation (no_tid=0, signo=6, threadid=140737352644416) at ./nptl/pthread_kill.c:44
44 ./nptl/pthread_kill.c: No such file or directory.
(gdb) bt
#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=140737352644416) at ./nptl/pthread_kill.c:44
#1 __pthread_kill_internal (signo=6, threadid=140737352644416) at ./nptl/pthread_kill.c:78
#2 __GI___pthread_kill (threadid=140737352644416, signo=signo@entry=6) at ./nptl/pthread_kill.c:89
#3 0x00007ffff7842476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26
#4 0x00007ffff78287f3 in __GI_abort () at ./stdlib/abort.c:79
#5 0x00007ffff782871b in __assert_fail_base (fmt=0x7ffff79dd150 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=0x55555560b7cf "false", file=0x55555560b348 "ggml.c", line=2788, function=)
at ./assert/assert.c:92
#6 0x00007ffff7839e96 in __GI___assert_fail (assertion=0x55555560b7cf "false", file=0x55555560b348 "ggml.c", line=2788, function=0x55555560cbf0 <PRETTY_FUNCTION.36> "ggml_new_tensor_impl")
at ./assert/assert.c:101
#7 0x00005555555ae477 in ggml_new_tensor_impl (ctx=0x5555556e8fe8 <g_state+680>, type=GGML_TYPE_F32, n_dims=3, ne=0x7ffffffe4fc0, data=0x0) at ggml.c:2788
#8 0x00005555555ae8ad in ggml_new_tensor (ctx=0x5555556e8fe8 <g_state+680>, type=GGML_TYPE_F32, n_dims=3, ne=0x7ffffffe4fc0) at ggml.c:2868
#9 0x00005555555b179a in ggml_mul_mat (ctx=0x5555556e8fe8 <g_state+680>, a=0x7fffe715ca90, b=0x7fffe715c9d0) at ggml.c:3913
#10 0x00005555555cdd52 in whisper_decode_internal (wctx=..., wstate=..., decoder=..., tokens=0x555555701cf0, n_tokens=226, n_past=0, n_threads=4) at whisper.cpp:2011
#11 0x00005555555d6a14 in whisper_full_with_state (ctx=0x555555702fe0, state=0x555555f0afa0, params=..., samples=0x7fffe3022010, n_samples=3179927) at whisper.cpp:3958
#12 0x00005555555d89e3 in whisper_full (ctx=0x555555702fe0, params=..., samples=0x7fffe3022010, n_samples=3179927) at whisper.cpp:4409
#13 0x00005555555d8aab in whisper_full_parallel (ctx=0x555555702fe0, params=..., samples=0x7fffe3022010, n_samples=3179927, n_processors=1) at whisper.cpp:4419
#14 0x000055555556224a in main (argc=6, argv=0x7fffffffddd8) at examples/main/main.cpp:752
(gdb) li
39 in ./nptl/pthread_kill.c
(gdb) li
39 in ./nptl/pthread_kill.c
(gdb) li ggml.c:2788
2783 .next = NULL,
2784 };
2785 } else {
2786 if (ctx->scratch.offs + size_needed > ctx->scratch.size) {
2787 GGML_PRINT("%s: not enough space in the scratch memory\n", func);
2788 assert(false);
2789 return NULL;
2790 }
2791
2792 if (cur_end + sizeof(struct ggml_tensor) + GGML_OBJECT_SIZE > ctx->mem_size) {
## git log
It is from the latest code:
whisper.cpp$ git log
commit 7cd1d3b (HEAD -> master, origin/master, origin/HEAD)
Author: Georgi Gerganov [email protected]
Date: Mon Mar 27 21:28:00 2023 +0300
talk-llama : try to fix windows build ..
commit 82637b8
Author: Georgi Gerganov [email protected]
Date: Mon Mar 27 21:02:35 2023 +0300
readme : add talk-llama example to the table
commit 4a0deb8
Author: Georgi Gerganov [email protected]
Date: Mon Mar 27 21:00:32 2023 +0300
talk-llama : add new example + sync ggml from llama.cpp (#664)
* talk-llama : talk with LLaMA AI
* talk.llama : disable EOS token
* talk-llama : add README instructions
* ggml : fix build in debug