- [2024 Mar 13] Add `llama_synchronize()` + `llama_context_params.n_ubatch` https://github.com/ggerganov/llama.cpp/pull/6017
- [2024 Mar 8] `llama_kv_cache_seq_rm()` returns a `bool` instead of `void`, and new `llama_n_seq_max()` returns the upper limit of acceptable `seq_id` in batches (relevant when dealing with multiple sequences) https://github.com/ggerganov/llama.cpp/pull/5328
 * (made change in EVT_USE)
 *
 * 33    5/08/98  1:31p  Jeff
 * if EVT_USE returns true, it won't remove item from inventory on use
 *
 * 32    4/24/98  7:09p  Jeff
 * added a flag for non-useable inventory items
 *
 * 31    4/23/98 12:02p  Jeff
 * added a limit to how...
            // At this point we know that we aren't bailing, and will
            // continue to resolve seed hits.
        } // while(!gws_[i].done())
    }
}
// Short-circuited because a limit, e.g. -k, -m or -M, was exceeded
return EXTEND_EXHAUSTED_CANDIDATES;
}

/**
 * Given a collection...
        true : false;
    }
    return &m_L[loop.m_loop_index];
}

ON_Brep* ON_BrepBox( const ON_3dPoint* box_corners, ON_Brep* pBrep )
{
  ON_Brep* brep = 0;
  int vi, ei, fi, si, c2i;
  if (box_corners)
  {
    if ( pBrep )
    {
      pBrep->Destroy();
      brep = pBrep;
    }
    else
      brep = new ON_...
    _T("解像度が上限を超えています。") : // Japanese: "Resolution is over limit."
    _T("Resolution is over limit.");
if (nullptr == feature)
    PrintMes(RGY_LOG_ERROR, _T("%s: %dx%d [上限: %dx%d]\n"), // "上限" = "upper limit"
        error_mes, m_uEncWidth, m_uEncHeight,
        codecFeature->getCapLimit(NV_ENC_CAPS_WIDTH_MAX),
        codecFeature->getCapLimi...
sv_ipratelimit.h sv_log.cpp sv_log.h sv_logofile.cpp sv_logofile.h sv_main.cpp sv_main.h sv_master.cpp sv_master_legacy.cpp sv_master_legacy.h sv_packedentities.cpp sv_packedentities.h sv_plugin.cpp sv_plugin.h sv_precache.cpp sv_precache.h sv_rcon.cpp sv_rcon.h ...
- [2024 Mar 4] Embeddings API updated #5796
- [2024 Mar 3] struct llama_context...
- server : add option to time limit the generation phase (ggerganov#9865), Oct 12, 2024
- ggml : move more prints to the ggml log system (ggerganov#9839), Oct 11, 2024
- llama : improve infill support and special token detection (ggerganov…), Oct 12, 2024