Example
To use the HuggingFaceRunner, first initialize the runner and load a pipeline configuration. The runner automatically caches pipelines keyed by task, model, device, and data type, so repeated loads with the same configuration reuse the existing pipeline.

Running Inference
Run Model Inference: Executes the loaded pipeline with the given inputs and optional parameters.
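The caching and inference steps above can be sketched as follows. This is a minimal, dependency-free illustration: the `HuggingFaceRunner` method names and signatures here are assumptions, not the project's actual API, and the stub callable stands in for what a real implementation would build with `transformers.pipeline(...)`.

```python
from typing import Any, Callable, Dict, Tuple

class HuggingFaceRunner:
    """Sketch: caches pipelines keyed by (task, model, device, dtype)."""

    def __init__(self) -> None:
        self._cache: Dict[Tuple[str, str, str, str], Callable[..., Any]] = {}

    def load_pipeline(self, task: str, model: str,
                      device: str = "cpu", dtype: str = "float32") -> Callable[..., Any]:
        key = (task, model, device, dtype)
        if key not in self._cache:
            # A real implementation would call transformers.pipeline(...) here;
            # a stub callable keeps this sketch self-contained.
            self._cache[key] = lambda inputs, **params: {"inputs": inputs,
                                                         "params": params}
        return self._cache[key]

    def run(self, task: str, model: str, inputs: Any, **params: Any) -> Any:
        # Reuses the cached pipeline when the cache key matches.
        pipeline = self.load_pipeline(task, model)
        return pipeline(inputs, **params)

runner = HuggingFaceRunner()
first = runner.load_pipeline("text-classification", "distilbert-base-uncased")
second = runner.load_pipeline("text-classification", "distilbert-base-uncased")
assert first is second  # identical key -> cached pipeline is reused

result = runner.run("text-classification", "distilbert-base-uncased",
                    "great movie!", top_k=1)
```

Optional parameters (here the illustrative `top_k`) are forwarded to the pipeline unchanged, which is the pattern the inference step describes.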
GPU & Precision Support
Enable GPU Execution: If CUDA is available, the runner automatically switches to GPU execution.
Running in a reduced-precision data type (e.g. float16) lowers memory usage and improves performance on supported GPUs.
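The device and precision selection described above can be sketched as a small helper. The availability flag is passed in explicitly so the example has no dependencies; a real runner would obtain it from `torch.cuda.is_available()`, and the function name here is illustrative, not part of the actual API.

```python
from typing import Tuple

def select_device_and_dtype(cuda_available: bool) -> Tuple[str, str]:
    """Pick GPU plus half precision when CUDA is available, else CPU plus full precision."""
    if cuda_available:
        # float16 reduces memory usage and improves throughput on supported GPUs.
        return "cuda", "float16"
    return "cpu", "float32"

print(select_device_and_dtype(True))
print(select_device_and_dtype(False))
```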

