Version: v0.5.1

Presets

The current supported model families with preset configurations are listed below.

| Model Family | Compatible KAITO Versions |
| ------------ | ------------------------- |
| falcon       | v0.0.1+                   |
| mistral      | v0.2.0+                   |
| phi2         | v0.2.0+                   |
| phi3         | v0.3.0+                   |
| phi4         | v0.4.5+                   |
| qwen7b       | v0.4.1+                   |
| qwen32b      | v0.4.5+                   |
| llama3       | v0.4.6+                   |

Validation

Each preset model defines its own hardware requirements, in terms of GPU count and GPU memory, in the corresponding model.go file. The KAITO controller validates that the specified SKU and node count are sufficient to run the model. If the provided SKU is not in the known list, the controller bypasses the validation check, and users must ensure the model can run on the provided SKU themselves.
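As a rough illustration, a Workspace manifest specifies the SKU and preset that the controller validates against. The following is a sketch, not a canonical example: the instance type, label, and apiVersion shown here are assumptions, so consult the quick-start guide for your KAITO version before using it.

```yaml
apiVersion: kaito.sh/v1beta1   # assumed API version; older releases use v1alpha1
kind: Workspace
metadata:
  name: workspace-falcon-7b
resource:
  instanceType: "Standard_NC12s_v3"   # illustrative GPU SKU; validated against the model's requirements
  labelSelector:
    matchLabels:
      apps: falcon-7b                 # illustrative label
inference:
  preset:
    name: falcon-7b                   # must match a supported preset from the table above
```

If the `instanceType` is not in the controller's known SKU list, this validation is skipped and the workspace is admitted as-is.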

Distributed inference

For models that support distributed inference, when the node count is greater than one, torch distributed elastic is configured with master and worker pods running across multiple nodes, and the service endpoint is the master pod.
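A minimal sketch of a multi-node Workspace follows; setting `count` above one is what triggers the distributed setup described above. The instance type, label, and apiVersion are assumptions for illustration only.

```yaml
apiVersion: kaito.sh/v1beta1   # assumed API version; check your KAITO release
kind: Workspace
metadata:
  name: workspace-llama-3-3-70b
resource:
  instanceType: "Standard_ND96isr_H100_v5"   # illustrative multi-GPU SKU
  count: 2                                   # >1 node: torch distributed elastic with master/worker pods
  labelSelector:
    matchLabels:
      apps: llama-3-3-70b                    # illustrative label
inference:
  preset:
    name: llama-3.3-70b-instruct             # preset with multi-node support (see table below)
```

The service endpoint for this workspace is the master pod.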

The following preset models support multi-node distributed inference:

| Model Family | Models                 | Multi-Node Support |
| ------------ | ---------------------- | ------------------ |
| llama3       | llama-3.3-70b-instruct | Yes                |

For detailed information on configuring and using multi-node inference, see the Multi-Node Inference documentation.