1 parent d4ead4e commit a1d796e
README.md
@@ -1,5 +1,5 @@
 # Inference Benchmarker
-## The best tool for benchmarking inference engines and LLM performance
+**The best tool for benchmarking inference engines and LLM performance**
 
 
 Benchmarking inference servers for text generation models presents unique challenges.
@@ -27,7 +27,6 @@ It can be used to benchmark any text generation server that exposes an OpenAI-co
 
 <!-- TOC -->
 * [Inference Benchmarker](#inference-benchmarker)
-  * [The best tool for benchmarking inference engines and LLM performance](#the-best-tool-for-benchmarking-inference-engines-and-llm-performance)
   * [Features](#features)
 * [Table of contents](#table-of-contents)
 * [Get started](#get-started)