Model Type
LLM
Framework
OpenVINO
Algorithm
Transformer-based LLM (Phi)
Technical Details
Runs a compact Phi-family LLM entirely on-device, using OpenVINO to accelerate inference on Intel hardware. Maintains conversational context, follows basic instructions, and can be adapted to custom knowledge domains. Well suited for on-device assistants with no cloud dependency.
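As a rough illustration of the local-inference setup described above, the sketch below uses the OpenVINO GenAI LLMPipeline API. It assumes the Phi model has already been converted to OpenVINO IR (for example with the optimum-cli export openvino tool) and that the resulting files live in a local directory; the path "phi-ov" and the prompt text are placeholders, not part of this project.

```python
# Minimal local-inference sketch with OpenVINO GenAI (assumptions noted above).
import openvino_genai as ov_genai

# Hypothetical path to a Phi model exported to OpenVINO IR; adjust to your setup.
model_dir = "phi-ov"

# Load the model onto a local Intel device ("CPU", "GPU", or "NPU").
pipe = ov_genai.LLMPipeline(model_dir, "CPU")

# Multi-turn chat: start_chat() keeps conversational context between calls.
pipe.start_chat()
reply = pipe.generate("Summarize what OpenVINO does in one sentence.",
                      max_new_tokens=128)
print(reply)
pipe.finish_chat()
```

Because all weights and inference stay on the local machine, no network access is required once the model has been exported.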