```python
    distributed_executor_backend="ray",
    enforce_eager=True,  # reduce test time
)
engine = LLMEngine.from_engine_args(engine_args)
assert engine.model_executor.uses_ray
```

vllm/config.py (+3)
```python
ray_workers_use_nsight,
worker_use_ray=args.worker_use_ray,
use_v2_block_manager=args.use_v2_block_manager,
enable_chunked_prefill=args.enable_chunked_prefill,
download_dir=args.download_dir,
block_size=args.block_size,
disable_custom_all_reduce=args.disable_custom_all_reduce,
gpu_memory_...
```
Fix `ValueError: Unrecognized distributed executor backend tp. Supported values are 'ray', 'mp', 'uni', 'external_launcher' or custom ExecutorBase subclass.`

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Tested on my local node

Signed-off-by: ...
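The error above comes from the backend string being checked against a fixed set of supported values. A minimal sketch of that kind of validation, with a hypothetical helper name (`validate_executor_backend` is not vLLM's actual function, and custom `ExecutorBase` subclasses are elided for brevity):

```python
# Supported string values for distributed_executor_backend,
# per the error message quoted in the PR description.
SUPPORTED_BACKENDS = ("ray", "mp", "uni", "external_launcher")


def validate_executor_backend(backend):
    """Raise ValueError for an unrecognized backend string.

    Hypothetical sketch of the check; vLLM additionally accepts a
    custom ExecutorBase subclass, which is omitted here.
    """
    if backend is not None and backend not in SUPPORTED_BACKENDS:
        raise ValueError(
            f"Unrecognized distributed executor backend {backend}. "
            "Supported values are 'ray', 'mp', 'uni', "
            "'external_launcher' or custom ExecutorBase subclass.")
    return backend
```

Passing `"tp"` (tensor parallelism is a size setting, not an executor backend) trips this check, which is exactly the ValueError this PR fixes on the caller's side.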
Merged: dtrifiro merged 1 commit into opendatahub-io:main from dtrifiro:entrypoint-cleanup on Jul 10, 2024 (+2 −2)

**Dockerfile.ubi: get rid of `--distributed-executor-backend=mp`** ...

dtrifiro commented on Jul 10, 2024:

> this is the default when `--worker-use-ray` is not provided and more than 1 GPU is available
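The default described in that comment can be sketched as a tiny resolver. The function name and the single-GPU fallback are assumptions for illustration; vLLM's real selection logic lives in its config code and also considers whether Ray is initialized:

```python
def resolve_executor_backend(worker_use_ray: bool, world_size: int) -> str:
    """Pick a distributed executor backend when none is set explicitly.

    Hypothetical sketch: per the PR comment, "mp" is the default when
    --worker-use-ray is not provided and more than 1 GPU is in use.
    """
    if worker_use_ray:
        return "ray"
    if world_size > 1:
        return "mp"  # default per the PR comment
    return "uni"  # single-process case; assumption, not from the PR
```

This is why the `--distributed-executor-backend=mp` flag could be dropped from Dockerfile.ubi: the same backend is chosen anyway.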
Referenced by:

- ray-project/ray#50275 (Closed)
- lulmer pushed a commit to lulmer/vllm that referenced this pull request on Apr 7, 2025: [bugfix] respect distributed_executor_backend in world_size=1 (vllm-p… (0666250)