
BentoML has released llm-optimizer, an open-source tool for benchmarking and optimizing LLM inference. It supports multiple inference frameworks and works with any open-source large language model.
llm-optimizer aims to automate the traditionally tedious process of manual performance tuning. With just a few commands, users can run structured experiments, apply constraints, and visualize the results in one place.
Example usage:
llm-optimizer estimate \
  --model meta-llama/Llama-3.1-8B-Instruct \
  --input-len 1024 \
  --output-len 512 \
  --gpu A100 \
  --num-gpus 2
Expected output:
=== Configuration ===
Model: meta-llama/Llama-3.1-8B-Instruct
GPU: 2x A100
Precision: fp16
Input/Output: 1024/512 tokens
Target: throughput
Fetching model configuration...
Model: 8029995008.0B parameters, 32 layers

=== Performance Analysis ===
Best Latency (concurrency=1):
  TTFT: 43.1 ms
  ITL: 2.6 ms
  E2E: 1.39 s
Best Throughput (concurrency=512):
  Output: 18873.3 tokens/s
  Input: 23767.8 tokens/s
  Requests: 14.24 req/s
  Bottleneck: Memory

=== Roofline Analysis ===
Hardware Ops/Byte Ratio: 142.5 ops/byte
Prefill Arithmetic Intensity: 52205.5 ops/byte
Decode Arithmetic Intensity: 50.9 ops/byte
Prefill Phase: Compute Bound
Decode Phase: Memory Bound

=== Concurrency Analysis ===
KV Cache Memory Limit: 688 concurrent requests
Prefill Compute Limit: 8 concurrent requests
Decode Capacity Limit: 13 concurrent requests
Theoretical Overall Limit: 8 concurrent requests
Empirical Optimal Concurrency: 16 concurrent requests

=== Tuning Commands ===
--- SGLANG ---
Simple (concurrency + TP/DP):
  llm-optimizer --framework sglang --model meta-llama/Llama-3.1-8B-Instruct --gpus 2 --host 127.0.0.1 --server-args "tp_size*dp_size=[(1, 2), (2, 1)]" --client-args "num_prompts=1000;dataset_name=sharegpt;random_input=1024;random_output=512;num_prompts=1000;max_concurrency=[256, 512, 768]" --output-dir tuning_results --output-json tuning_results/config_1_sglang.json
Advanced (additional parameters):
  llm-optimizer --framework sglang --model meta-llama/Llama-3.1-8B-Instruct --gpus 2 --host 127.0.0.1 --server-args "tp_size*dp_size=[(1, 2), (2, 1)];chunked_prefill_size=[1434, 2048, 2662];schedule_conservativeness=[0.3, 0.6, 1.0];schedule_policy=fcfs" --client-args "num_prompts=1000;dataset_name=sharegpt;random_input=1024;random_output=512;num_prompts=1000;max_concurrency=[256, 512, 768]" --output-dir tuning_results --output-json tuning_results/config_1_sglang.json
--- VLLM ---
Simple (concurrency + TP/DP):
  llm-optimizer --framework vllm --model meta-llama/Llama-3.1-8B-Instruct --gpus 2 --host 127.0.0.1 --server-args "tensor_parallel_size*data_parallel_size=[(1, 2), (2, 1)]" --client-args "num_prompts=1000;dataset_name=sharegpt;random_input=1024;random_output=512;num_prompts=1000;max_concurrency=[256, 512, 768]" --output-dir tuning_results --output-json tuning_results/config_1_vllm.json
Advanced (additional parameters):
  llm-optimizer --framework vllm --model meta-llama/Llama-3.1-8B-Instruct --gpus 2 --host 127.0.0.1 --server-args "tensor_parallel_size*data_parallel_size=[(1, 2), (2, 1)];max_num_batched_tokens=[1024, 1177, 1331]" --client-args "num_prompts=1000;dataset_name=sharegpt;random_input=1024;random_output=512;num_prompts=1000;max_concurrency=[256, 512, 768]" --output-dir tuning_results --output-json tuning_results/config_1_vllm.json
The tool addresses a common challenge in LLM deployment: finding the optimal balance between latency, throughput, and cost without relying on trial and error. llm-optimizer offers a systematic way to explore a model's performance space, automating benchmarking and configuration search and thereby cutting out much of the guesswork and repetitive manual work.
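Each tuning run above writes its measurements to a JSON file (via --output-json). As a minimal sketch of how such results could be post-processed to pick a configuration under a latency constraint, assuming a hypothetical schema in which each run records a "config" label plus "ttft_ms" and "output_tokens_per_s" metrics (the actual field names depend on llm-optimizer's output format):

```python
import json

def best_config(path, max_ttft_ms=200.0):
    """Return the highest-throughput run whose TTFT meets the constraint.

    Assumes a hypothetical schema: a JSON list of runs, each a dict with
    "config", "ttft_ms", and "output_tokens_per_s" keys. Returns None if
    no run satisfies the latency constraint.
    """
    with open(path) as f:
        runs = json.load(f)
    # Keep only runs that meet the TTFT constraint, then maximize throughput.
    eligible = [r for r in runs if r["ttft_ms"] <= max_ttft_ms]
    if not eligible:
        return None
    return max(eligible, key=lambda r: r["output_tokens_per_s"])
```

For example, best_config("tuning_results/config_1_sglang.json", max_ttft_ms=200.0) would return the fastest eligible configuration from the sweep, mirroring the latency/throughput trade-off the tool is designed to navigate.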
Project repository: https://www.php.cn/link/c11a6c8821cdb24676ff61d9b59c10a0