ort_runtime_setup

Functions

configure_onnxruntime_acceleration: Configure ONNX Runtime library loading as early as possible.

configure_onnxruntime_acceleration

ort_runtime_setup.configure_onnxruntime_acceleration(
    prefer_cuda=True,
    allow_coreml_on_macos=True,
    verbose=True,
)

Configure ONNX Runtime library loading as early as possible.

Call this before importing libraries that create ONNX Runtime sessions, e.g. before importing forensicface / insightface wrappers.

Returns a providers list suitable for passing as the providers argument to ort.InferenceSession(…).
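An illustrative call order, hedged: this sketch assumes ort_runtime_setup and onnxruntime are importable, and degrades to a CPU-only providers list when they are not. The model path shown is a placeholder.

```python
# Safe fallback if nothing below succeeds.
providers = ["CPUExecutionProvider"]

try:
    # Configure library loading BEFORE importing anything that
    # creates ONNX Runtime sessions (forensicface, insightface, ...).
    from ort_runtime_setup import configure_onnxruntime_acceleration

    providers = configure_onnxruntime_acceleration(
        prefer_cuda=True,
        allow_coreml_on_macos=True,
        verbose=False,
    )
except ImportError:
    # ort_runtime_setup not installed in this environment; keep the fallback.
    pass

# Only now import libraries that build InferenceSessions, e.g.:
# import onnxruntime as ort
# session = ort.InferenceSession("model.onnx", providers=providers)
```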

Windows

Registers DLL directories with os.add_dll_directory, prepends them to PATH, and calls ort.preload_dlls(directory="").
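A minimal sketch of the Windows step, assuming you already know the DLL directories to register; add_windows_dll_dirs is a hypothetical helper, not part of the module's public API.

```python
import os

def add_windows_dll_dirs(dll_dirs, verbose=False):
    """Make DLL directories visible both to the in-process DLL loader
    (os.add_dll_directory) and to child processes (PATH)."""
    handles = []
    for d in dll_dirs:
        if not os.path.isdir(d):
            continue
        # os.add_dll_directory exists only on Windows (Python 3.8+),
        # so guard it for cross-platform code paths.
        if hasattr(os, "add_dll_directory"):
            handles.append(os.add_dll_directory(d))
        # Prepending to PATH covers processes spawned later.
        os.environ["PATH"] = d + os.pathsep + os.environ.get("PATH", "")
        if verbose:
            print(f"registered DLL directory: {d}")
    # Keep the returned handles alive; closing them un-registers the dirs.
    return handles
```

After this, calling ort.preload_dlls(directory="") lets ONNX Runtime resolve CUDA/cuDNN DLLs from the registered locations.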

Linux

Adds the NVIDIA wheel lib directories to LD_LIBRARY_PATH for child processes, preloads the shared libraries by absolute path with ctypes, then calls ort.preload_dlls(directory="") where available.
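A sketch of the Linux step, under the assumption that the NVIDIA pip wheels (nvidia-cudnn-cu12, nvidia-cublas-cu12, ...) unpack their shared libraries under site-packages/nvidia/&lt;pkg&gt;/lib; preload_linux_cuda_libs is a hypothetical helper named for illustration.

```python
import ctypes
import glob
import os
import site
import sys

def _nvidia_lib_dirs():
    # Assumed wheel layout: site-packages/nvidia/<package>/lib/*.so*
    dirs = []
    for sp in site.getsitepackages():
        dirs.extend(sorted(glob.glob(os.path.join(sp, "nvidia", "*", "lib"))))
    return [d for d in dirs if os.path.isdir(d)]

def preload_linux_cuda_libs(verbose=False):
    """Extend LD_LIBRARY_PATH (for child processes) and dlopen() each
    library by absolute path so the current process resolves it too."""
    dirs = _nvidia_lib_dirs()
    if dirs:
        existing = os.environ.get("LD_LIBRARY_PATH", "")
        parts = dirs + ([existing] if existing else [])
        os.environ["LD_LIBRARY_PATH"] = os.pathsep.join(parts)
    loaded = []
    for d in dirs:
        for lib in sorted(glob.glob(os.path.join(d, "lib*.so*"))):
            try:
                # RTLD_GLOBAL makes the symbols visible to libraries
                # loaded afterwards (e.g. onnxruntime's CUDA provider).
                ctypes.CDLL(lib, mode=ctypes.RTLD_GLOBAL)
                loaded.append(lib)
            except OSError:
                if verbose:
                    print(f"skipped {lib}", file=sys.stderr)
    return loaded
```

Mutating LD_LIBRARY_PATH inside a running process does not affect that process's own loader, which is why the ctypes preload by absolute path is needed in addition to the environment change.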

macOS

CUDA is not available through onnxruntime-gpu on macOS. Returns CoreMLExecutionProvider ahead of CPUExecutionProvider when CoreML is allowed and present; otherwise CPUExecutionProvider alone.
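The macOS provider choice can be sketched as follows; pick_macos_providers is a hypothetical helper, and the available parameter stands in for what onnxruntime.get_available_providers() would report.

```python
def pick_macos_providers(allow_coreml=True, available=None):
    """Prefer CoreML when allowed and present; always keep CPU as a
    fallback so session creation cannot fail on provider selection."""
    if available is None:
        try:
            import onnxruntime as ort  # optional in this sketch
            available = ort.get_available_providers()
        except ImportError:
            available = ["CPUExecutionProvider"]
    providers = []
    if allow_coreml and "CoreMLExecutionProvider" in available:
        providers.append("CoreMLExecutionProvider")
    providers.append("CPUExecutionProvider")
    return providers
```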