NVIDIA Dynamo
The NVIDIA Dynamo Platform is a high-performance, low-latency inference platform designed to serve all AI models—across any framework, architecture, or deployment scale.
This document provides step-by-step instructions for distributed inference with Dynamo through various examples and guides.
Welcome to our GitHub. NVIDIA Dynamo is a flexible, component-based, data-center-scale inference serving framework designed to meet the demands of complex use cases, including those of generative AI. It enables developers to implement and customize routing, load balancing, scaling, and workflow definitions at data center scale without sacrificing performance or ease of use.
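To make the component-based customization idea concrete, the sketch below shows a pluggable routing policy in Python. This is a minimal, purely illustrative example: the names `Worker`, `LeastLoadedRouter`, and `route` are assumptions for the sake of the sketch, not Dynamo's actual API.

```python
# Illustrative sketch only: a pluggable routing policy for a component-based
# serving framework. Worker, LeastLoadedRouter, and route() are hypothetical
# names, not Dynamo's actual API.
from dataclasses import dataclass


@dataclass
class Worker:
    """A model-serving endpoint with a simple load counter."""
    name: str
    inflight: int = 0  # requests currently being processed


class LeastLoadedRouter:
    """Routes each request to the worker with the fewest in-flight requests.

    Swapping this class for another policy is the kind of customization a
    component-based framework allows without touching the serving engines.
    """

    def __init__(self, workers: list[Worker]) -> None:
        self.workers = workers

    def route(self, request_id: str) -> Worker:
        worker = min(self.workers, key=lambda w: w.inflight)
        worker.inflight += 1
        return worker

    def complete(self, worker: Worker) -> None:
        worker.inflight -= 1


if __name__ == "__main__":
    router = LeastLoadedRouter([Worker("gpu-0"), Worker("gpu-1")])
    chosen = router.route("req-123")
    print(f"req-123 -> {chosen.name}")  # whichever worker is least loaded
    router.complete(chosen)
```

In a real deployment the routing decision would also consider KV cache locality and model placement, but the structural point is the same: the policy is an isolated, replaceable component.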
Dynamo also includes an abstraction library for the memory of heterogeneous devices (CPU, GPU, and storage) that enables efficient, low-latency communication among them, integrated with distributed inference servers such as Triton. This library targets distributed inference communication patterns to efficiently transfer the KV cache in disaggregated LLM serving platforms.
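The sketch below illustrates what such a device-agnostic transfer interface might look like conceptually: a memory descriptor that hides where a buffer physically lives, plus a transfer call that dispatches to a backend based on the device pair. The names `MemoryRegion`, `Device`, and `transfer` are hypothetical, chosen only to illustrate the abstraction, and do not reflect the library's real API.

```python
# Hypothetical sketch of a device-agnostic memory descriptor and transfer
# call for moving KV cache blocks between prefill and decode workers.
# MemoryRegion, Device, and transfer() are illustrative names, not the
# real API of the library described above.
from dataclasses import dataclass
from enum import Enum


class Device(Enum):
    CPU = "cpu"
    GPU = "gpu"
    STORAGE = "storage"


@dataclass(frozen=True)
class MemoryRegion:
    """Describes a buffer without exposing where it physically lives."""
    device: Device
    address: int     # device pointer or storage offset
    num_bytes: int


def transfer(src: MemoryRegion, dst: MemoryRegion) -> None:
    """Pick a transport based on the (src, dst) device pair.

    A real implementation would dispatch to fast paths such as NVLink,
    RDMA, or GPUDirect Storage; here we only show the dispatch structure.
    """
    if src.num_bytes != dst.num_bytes:
        raise ValueError("source and destination sizes must match")
    route = (src.device.value, dst.device.value)
    print(f"copy {src.num_bytes} bytes via backend for {route}")


if __name__ == "__main__":
    # e.g. ship a prefill worker's KV cache block to a decode worker's GPU
    kv_src = MemoryRegion(Device.GPU, address=0x7F00, num_bytes=2 << 20)
    kv_dst = MemoryRegion(Device.GPU, address=0x8F00, num_bytes=2 << 20)
    transfer(kv_src, kv_dst)
```

The value of this kind of abstraction in disaggregated serving is that the prefill and decode stages can exchange KV cache data without either side hard-coding the transport or the memory tier on the other end.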