
Speculative Decoding

Speculative decoding is a technique for speeding up inference of large language models by pairing them with a smaller draft model. The idea is to have the smaller model cheaply generate a set of candidate tokens, which the larger model then scores and either accepts or rejects. For a good introduction, see Andrej Karpathy's X post on the subject.
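
To make the draft-and-verify loop concrete, here is a minimal, illustrative sketch of the greedy variant in plain PyTorch. This is not Aphrodite's implementation: `draft_model` and `target_model` are assumed to be Hugging Face style causal LMs whose outputs expose `.logits`, batch size is assumed to be 1, and the sampled variant (which uses rejection sampling to preserve the target distribution) is omitted for brevity.

```python
import torch


@torch.no_grad()
def speculative_decode_step(draft_model, target_model, input_ids, k=4):
    """One draft-and-verify step of greedy speculative decoding.

    The small draft model proposes k tokens autoregressively; the large
    target model then scores the whole proposal in a single forward pass,
    and we keep the longest prefix on which both models agree, plus one
    token taken from the target model itself.
    """
    # Draft phase: the small model cheaply guesses k future tokens.
    draft_ids = input_ids
    for _ in range(k):
        logits = draft_model(draft_ids).logits[:, -1, :]
        next_id = logits.argmax(dim=-1, keepdim=True)
        draft_ids = torch.cat([draft_ids, next_id], dim=-1)

    # Verify phase: ONE forward pass of the large model scores all k
    # guesses (plus one bonus position) in parallel.
    target_logits = target_model(draft_ids).logits
    prompt_len = input_ids.shape[1]
    # Logits at position i predict token i+1, so this slice yields the
    # target model's choice for each of the k proposed positions and one
    # extra "bonus" position after them.
    target_choices = target_logits[:, prompt_len - 1 :, :].argmax(dim=-1)
    proposed = draft_ids[:, prompt_len:]

    # Accept the longest prefix where draft and target agree, then append
    # the target's own token at the first disagreement (or the bonus token
    # if every guess was accepted). The result is therefore identical to
    # plain greedy decoding with the target model alone.
    n_accepted = 0
    for i in range(k):
        if proposed[0, i] != target_choices[0, i]:
            break
        n_accepted += 1
    return torch.cat(
        [input_ids, proposed[:, :n_accepted],
         target_choices[:, n_accepted : n_accepted + 1]],
        dim=-1,
    )
```

In a real engine the verify pass reuses the KV cache and acceptance is probabilistic rather than an exact token match, but the control flow is the same: the more often the draft model agrees with the target, the more tokens are produced per expensive forward pass.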

In short, speculative decoding is an inference optimization that makes educated guesses about upcoming tokens using a cheap draft model, then verifies all of those guesses in a single forward pass of the larger target model. The verification mechanism ensures the correctness of the speculated tokens, guaranteeing that the overall output of speculative decoding is identical to that of vanilla decoding.

Optimizing the inference cost of large language models (LLMs) is arguably one of the most critical factors in reducing the cost of generative AI and increasing its adoption. Toward this goal, a variety of inference optimization techniques are available, including custom kernels, dynamic batching of input requests, and quantization of large models. Aphrodite implements many of these techniques, which you will find in the following pages.
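
As a rough sketch of what this looks like from the user's side, the snippet below configures a draft model at engine start-up. The parameter names (`speculative_model`, `num_speculative_tokens`) and the model names shown here are assumptions based on the vLLM-style engine arguments Aphrodite inherited; consult the engine arguments reference for the exact names in your version.

```python
from aphrodite import LLM, SamplingParams

# Illustrative configuration: a large target model paired with a small
# draft model that proposes 5 tokens per step. The speculative parameter
# names are assumptions based on vLLM-style engine args; check your
# version's documentation for the exact spelling.
llm = LLM(
    model="meta-llama/Llama-2-13b-chat-hf",
    speculative_model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    num_speculative_tokens=5,
)

# Sampling is configured exactly as in vanilla decoding; the speedup is
# transparent and the outputs match non-speculative decoding.
outputs = llm.generate(
    ["The capital of France is"],
    SamplingParams(temperature=0.0, max_tokens=64),
)
print(outputs[0].outputs[0].text)
```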


The next sections will explain each method and how to use it with Aphrodite.