vllm.renderers.protocol

RendererLike

Bases: Protocol

Source code in vllm/renderers/protocol.py
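Taken together, the method entries below describe a structural protocol. A minimal reconstruction from the listed signatures (the `self`/`cls` parameters, ellipsis bodies, and omitted imports are readability assumptions, not the actual source) might look like:

```python
from __future__ import annotations

from typing import Any, Protocol


class RendererLike(Protocol):
    """Sketch reconstructed from the signatures documented below.

    ModelConfig, TokenizerLike, ChatCompletionMessageParam,
    ConversationMessage, TextPrompt and TokensPrompt are the vLLM types
    named in those signatures; their imports are omitted here.
    """

    @classmethod
    def from_config(
        cls, config: ModelConfig, tokenizer_kwargs: dict[str, Any]
    ) -> RendererLike: ...

    def get_tokenizer(self) -> TokenizerLike: ...

    def render_messages(
        self, messages: list[ChatCompletionMessageParam], **kwargs
    ) -> tuple[list[ConversationMessage], TextPrompt | TokensPrompt]: ...

    async def render_messages_async(
        self, messages: list[ChatCompletionMessageParam], **kwargs
    ) -> tuple[list[ConversationMessage], TextPrompt | TokensPrompt]: ...
```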
from_config classmethod

from_config(
    config: ModelConfig, tokenizer_kwargs: dict[str, Any]
) -> RendererLike
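For example, a caller that already holds a `ModelConfig` (say, from an engine's configuration) could construct a renderer through any concrete class satisfying the protocol; the specific `tokenizer_kwargs` key below is a Hugging Face style illustration, not a value required by this interface:

```python
from typing import Any

from vllm.config import ModelConfig
from vllm.renderers.protocol import RendererLike


def build_renderer(
    renderer_cls: type[RendererLike],
    model_config: ModelConfig,
) -> RendererLike:
    # tokenizer_kwargs is forwarded to tokenizer construction by the
    # concrete renderer; this particular key is illustrative only.
    tokenizer_kwargs: dict[str, Any] = {"truncation_side": "left"}
    return renderer_cls.from_config(model_config, tokenizer_kwargs)
```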
get_tokenizer
get_tokenizer() -> TokenizerLike
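The tokenizer backing the renderer can be used directly, for instance to count tokens before rendering; this sketch assumes the returned `TokenizerLike` exposes a Hugging Face style `encode` method, which is an assumption about that protocol rather than something stated here:

```python
from vllm.renderers.protocol import RendererLike


def count_tokens(renderer: RendererLike, text: str) -> int:
    tokenizer = renderer.get_tokenizer()
    # encode() is assumed to behave like a Hugging Face tokenizer's encode().
    return len(tokenizer.encode(text))
```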
render_messages

render_messages(
    messages: list[ChatCompletionMessageParam], **kwargs
) -> tuple[
    list[ConversationMessage], TextPrompt | TokensPrompt
]
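A usage sketch, assuming OpenAI-style message dicts for `ChatCompletionMessageParam`, and assuming `TokensPrompt` carries a `prompt_token_ids` field while `TextPrompt` carries a `prompt` field (treat those field names as assumptions here):

```python
from vllm.renderers.protocol import RendererLike


def render_chat(renderer: RendererLike) -> None:
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain paged attention in one sentence."},
    ]
    # Returns the normalized conversation plus a prompt ready for the engine.
    conversation, prompt = renderer.render_messages(messages)

    if "prompt_token_ids" in prompt:
        # TokensPrompt: the renderer produced token IDs directly.
        print(f"{len(prompt['prompt_token_ids'])} prompt tokens")
    else:
        # TextPrompt: the rendered prompt is still plain text.
        print(f"prompt text: {prompt['prompt'][:60]}...")
    print(f"{len(conversation)} conversation messages")
```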
render_messages_async async

render_messages_async(
    messages: list[ChatCompletionMessageParam], **kwargs
) -> tuple[
    list[ConversationMessage], TextPrompt | TokensPrompt
]
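The asynchronous variant has the same inputs and return shape; a minimal sketch under the same assumptions as above, presumably useful where rendering (tokenization, multimodal processing) should not block an event loop:

```python
import asyncio

from vllm.renderers.protocol import RendererLike


async def render_chat_async(renderer: RendererLike) -> None:
    messages = [{"role": "user", "content": "Hello!"}]
    # Same return shape as render_messages, but awaitable.
    conversation, prompt = await renderer.render_messages_async(messages)
    print(type(prompt).__name__, len(conversation))


# Example (my_renderer is any object satisfying RendererLike):
# asyncio.run(render_chat_async(my_renderer))
```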