Python SDK Users: Looking for Python documentation? See Python SDK API Reference. This page documents the Julia SDK (NimbusSDK.jl).
The NimbusSDK.jl Julia package provides production-ready Bayesian inference for Brain-Computer Interface (BCI) applications. Built on RxInfer.jl, it offers three models (NimbusLDA, NimbusQDA, and NimbusProbit) with batch and streaming inference capabilities. Note: NimbusSTS is currently available in the Python SDK only.
Preprocessed EEG features (CSP, bandpower, etc.) - not raw EEG
What changed? NimbusSDK.jl is now a public wrapper package in the Julia General Registry. The proprietary inference core (NimbusSDKCore) is automatically installed when you provide your API key. No more private registry setup needed!
Install the proprietary NimbusSDKCore with your API key. This is a one-time setup that downloads and configures the commercial inference engine.
install_core(api_key::String) -> Bool
Parameters:
api_key::String - Your NimbusSDK API key (format: nbci_live_... or nbci_test_...)
Returns: `true` if installation succeeded

Example:

```julia
using NimbusSDK

# One-time setup (downloads and installs core)
NimbusSDK.install_core("nbci_live_...")

# After installation, you can use the SDK in any project
using NimbusSDK
model = load_model(NimbusLDA, "motor_imagery_4class_v1")
```
The core installation is persistent. You only need to run install_core() once per machine. After that, simply using NimbusSDK will work in any Julia project.
Verify that the core is installed and working correctly.
check_installation() -> Bool
Returns: `true` if the core is installed and operational
Note: check_installation() is provided by the NimbusSDK wrapper package. For direct NimbusSDKCore usage, check authentication status using NimbusSDKCore.AUTH_STATE[].
For most users: Authentication is handled automatically by NimbusSDK.install_core(). The functions below are for advanced users working directly with NimbusSDKCore.
Primary Name: Bayesian LDA (Bayesian Linear Discriminant Analysis)
API Name: NimbusLDA
Mathematical Model: Pooled Gaussian Classifier (PGC)

Linear Discriminant Analysis with a shared precision matrix. Fast inference with good performance for well-separated classes.

Fields:
mean_posteriors::Vector - Full posterior distributions for class means (MvNormal objects, one per class)
precision_posterior - Full posterior distribution for shared precision matrix (Wishart object, shared across all classes)
priors::Vector{Float64} - Empirical class priors from training data (must sum to 1.0)
metadata::ModelMetadata - Model metadata
dof_offset::Int - Degrees of freedom offset used during training (default: 2)
mean_prior_precision::Float64 - Mean prior precision strength used during training (default: 0.01)
Accessing model parameters: To get point estimates from posterior distributions, use mean(model.mean_posteriors[k]) for class means and mean(model.precision_posterior) for the precision matrix. The SDK stores full posterior distributions (not just point estimates) for proper Bayesian inference.
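As a minimal sketch of this pattern, here are stand-in posteriors built directly with Distributions.jl (a trained model would supply these as its fields; the values below are illustrative, not SDK output):

```julia
using Distributions, LinearAlgebra

# Stand-in posteriors with the same distribution types the SDK stores
mean_posterior = MvNormal([0.5, -0.2], Matrix(0.1 * I, 2, 2))  # one class mean posterior
precision_posterior = Wishart(4.0, Matrix(1.0 * I, 2, 2))      # shared precision posterior

# Point estimates are the posterior means
mu_hat = mean(mean_posterior)            # class-mean point estimate
Lambda_hat = mean(precision_posterior)   # precision point estimate (df * scale for a Wishart)
```

For a real model, the same calls apply to `model.mean_posteriors[k]` and `model.precision_posterior`.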
Primary Name: Bayesian QDA (Bayesian Quadratic Discriminant Analysis)
API Name: NimbusQDA
Mathematical Model: Heteroscedastic Gaussian Classifier (HGC)

Gaussian classifier with class-specific covariance matrices. More flexible than LDA; handles overlapping class distributions.

Fields:
mean_posteriors::Vector - Full posterior distributions for class means (MvNormal objects, one per class)
precision_posteriors::Vector - Full posterior distributions for precision matrices (Wishart objects, one per class)
priors::Vector{Float64} - Empirical class priors from training data (must sum to 1.0)
metadata::ModelMetadata - Model metadata
dof_offset::Int - Degrees of freedom offset used during training (default: 2)
mean_prior_precision::Float64 - Mean prior precision strength used during training (default: 0.01)
Accessing model parameters: To get point estimates from posterior distributions, use mean(model.mean_posteriors[k]) for class means and mean(model.precision_posteriors[k]) for class-specific precision matrices. The SDK stores full posterior distributions (not just point estimates) for proper Bayesian inference.
Fine-tune a pre-trained model with subject-specific data (faster than training from scratch).
calibrate_model(base_model, calib_data::BCIData; iterations::Int = 20) -> Model
Parameters:
base_model - Pre-trained model to calibrate
calib_data::BCIData - Calibration data with labels
iterations::Int - Number of calibration iterations (default: 20)
Hyperparameters preserved (v0.2.0+): calibrate_model() automatically uses the same hyperparameters (dof_offset, mean_prior_precision, etc.) as the base model. You cannot override them during calibration.
Example:
```julia
base_model = load_model(NimbusLDA, "motor_imagery_baseline_v1")
personalized_model = calibrate_model(base_model, calib_data; iterations=20)
# The personalized model inherits all hyperparameters from base_model
```
iterations::Int - Number of inference iterations (default: 10)
Returns:
```julia
struct BatchResult
    predictions::Vector{Int}                # Predicted class for each trial
    confidences::Vector{Float64}            # Confidence (max posterior) for each trial
    posteriors::Matrix{Float64}             # Full posterior distributions (n_trials × n_classes)
    free_energy::Union{Float64, Nothing}    # Mean RxInfer free energy if available
    entropy::Vector{Float64}                # Shannon entropy per trial (bits)
    mean_entropy::Float64                   # Average entropy across trials
    mahalanobis_distances::Matrix{Float64}  # Distances to each class center (n_trials × n_classes)
    outlier_scores::Vector{Float64}         # Minimum distance to any class (per trial)
    latency_ms::Int                         # Total batch latency in milliseconds
    per_trial_latency_ms::Vector{Float64}   # Latency per trial in milliseconds
    balance::Float64                        # Class distribution balance (0–1)
    confidence_calibration::Union{CalibrationMetrics, Nothing}  # Calibration metrics if labels available
end
```
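For intuition about the `entropy` and `mahalanobis_distances` fields, the underlying quantities can be computed in plain Julia with no SDK involved (a simplified sketch; the SDK's internal computation may differ in detail):

```julia
using LinearAlgebra

# Shannon entropy in bits of one posterior row (length n_classes)
shannon_entropy(p::AbstractVector) = -sum(pi -> pi > 0 ? pi * log2(pi) : 0.0, p)

# Mahalanobis distance from feature vector x to a class center mu
# under precision matrix L (inverse covariance)
mahalanobis(x, mu, L) = sqrt(dot(x - mu, L * (x - mu)))

posteriors = [0.7  0.2  0.1;     # trial 1: confident
              0.34 0.33 0.33]    # trial 2: near-uniform
entropy = [shannon_entropy(posteriors[i, :]) for i in axes(posteriors, 1)]
# entropy[2] ≈ 1.585 bits, near the log2(3) maximum for 3 classes
```

High entropy and a large minimum Mahalanobis distance on the same trial are a useful rejection signal: the trial resembles no known class.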
session::StreamingSession - Active streaming session
chunk::Array{Float64, 2} - Chunk data (n_features × chunk_size)
iterations::Int - Number of inference iterations for this chunk (default: 10)
Returns: ChunkResult

```julia
struct ChunkResult
    prediction::Int             # Predicted class for this chunk
    confidence::Float64         # Confidence for this chunk
    posterior::Vector{Float64}  # Posterior distribution for this chunk
    latency_ms::Float64         # Processing time for this chunk (ms)
end
```
Example:
```julia
for chunk in eeg_stream
    result = process_chunk(session, chunk)
    println("Prediction: $(result.prediction), Confidence: $(result.confidence)")
end
```
Returns: StreamingResult with the final prediction and diagnostics

```julia
struct StreamingResult
    prediction::Int                    # Aggregated prediction
    confidence::Float64                # Aggregated confidence
    posterior::Vector{Float64}         # Aggregated posterior
    entropy::Float64                   # Entropy of final posterior (bits)
    aggregation_method::Symbol         # Aggregation method used
    n_chunks::Int                      # Number of chunks in trial
    latency_ms::Float64                # Total latency (ms)
    chunk_latencies_ms::Vector{Float64}  # Latency per chunk
    balance::Float64                   # Class distribution balance across chunks
    confidence_calibration::Union{CalibrationMetrics, Nothing}  # Calibration metrics if label provided
end
```
Example:
```julia
# Process trial
for chunk in trial_chunks
    process_chunk(session, chunk)
end

# Get final prediction
final_result = finalize_trial(session; method=:weighted_vote, temporal_weighting=true)
println("Final prediction: $(final_result.prediction)")
println("Confidence: $(final_result.confidence)")
println("Aggregation method: $(final_result.aggregation_method)")
```
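The SDK's aggregation logic is internal; purely for intuition, here is one plausible sketch of confidence-weighted voting with temporal weighting (an illustrative assumption, not the SDK's actual `:weighted_vote` implementation):

```julia
# Hypothetical aggregation over chunk posteriors (each a length-n_classes vector).
# With temporal weighting, later chunks contribute more, reflecting the idea
# that evidence accumulates as the trial unfolds.
function weighted_vote(chunk_posteriors::Vector{Vector{Float64}}; temporal_weighting::Bool=true)
    n = length(chunk_posteriors)
    w = temporal_weighting ? collect(1:n) ./ sum(1:n) : fill(1.0 / n, n)
    agg = sum(w[i] .* chunk_posteriors[i] for i in 1:n)
    agg ./= sum(agg)   # renormalize to a probability distribution
    return (prediction = argmax(agg), posterior = agg)
end

# A confident early chunk can be outvoted by consistent later evidence
weighted_vote([[0.9, 0.1], [0.2, 0.8], [0.1, 0.9]])  # prediction = 2
```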
paradigm - Optional filter by BCI paradigm (:motor_imagery, :p300, :ssvep)
model_type - Optional filter by model type (:NimbusLDA, :NimbusQDA, :NimbusProbit)
Returns: Vector of model information dictionaries with keys: name, version, type, paradigm, n_features, n_classes, requires_license

Example:

```julia
using NimbusSDKCore

# List all available models
all_models = list_available_models()

# Filter by paradigm
mi_models = list_available_models(paradigm=:motor_imagery)

# Filter by model type
lda_models = list_available_models(model_type=:NimbusLDA)

# Print model information
for model in all_models
    println("$(model.name): $(model.type) - $(model.paradigm)")
end
```
Returns: `true` if your license allows access, `false` otherwise

Example:

```julia
if check_model_license_compatibility("motor_imagery_4class_v1")
    model = load_model(NimbusLDA, "motor_imagery_4class_v1")
else
    @warn "Your license does not allow access to this model"
end
```
Validate BCI data for common issues before inference.
validate_data(data::BCIData) -> Bool
Description:
Validates data for NaN/Inf values and correct dimensions, and warns about suspicious data patterns.

Returns: `true` if validation passes

Throws: Error if validation fails

Example:

```julia
# Validate data before inference
try
    validate_data(data)
    println("✓ Data validation passed")
    results = predict_batch(model, data)
catch e
    if isa(e, DataValidationError)
        @error "Data validation failed: $(error_msg(e))"
    else
        rethrow(e)
    end
end
```
Returns: `true` if the model and data are compatible

Example:

```julia
model = load_model(NimbusLDA, "motor_imagery_4class_v1")
data = BCIData(features, metadata, labels)

if check_model_compatibility(model, data)
    results = predict_batch(model, data)
else
    @error "Model and data are incompatible"
end
```
Critical for cross-session BCI! EEG amplitude varies 50-200% across sessions. Proper normalization improves accuracy by 15-30%.
Feature normalization is essential for BCI models used across different sessions or subjects. NimbusSDK provides comprehensive normalization utilities.
:robust - Robust normalization using median and MAD (outlier-resistant)
:none - No normalization
Returns: NormalizationParams object with computed statistics

Example:

```julia
# Compute normalization params from training data
train_features = randn(16, 250, 100)
norm_params = estimate_normalization_params(train_features; method=:zscore)

# Save params with model for consistent test-time normalization
@save "model.jld2" model norm_params
```
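For intuition about the difference between `:zscore` and `:robust`, the per-feature statistics can be sketched in plain Julia (a simplified illustration, not the SDK's NormalizationParams machinery):

```julia
using Statistics

x = [1.0, 2.0, 3.0, 4.0, 100.0]   # one feature with an outlier

# :zscore — mean/std; both statistics are pulled by the outlier
z = (x .- mean(x)) ./ std(x)

# :robust — median/MAD; statistics ignore the outlier
m = median(x)                      # 3.0
mad = median(abs.(x .- m))         # 1.0
r = (x .- m) ./ (1.4826 * mad)     # 1.4826 makes MAD consistent with std for Gaussian data
```

Robust normalization is useful for EEG features, where artifact trials can produce extreme values that would otherwise distort mean/std estimates.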
Normalization should be computed AFTER feature extraction but BEFORE model training. The same normalization parameters must be applied to test data.
Returns: Normalized features with same shape as input
This function computes normalization parameters from the input data itself. For proper train/test separation, use estimate_normalization_params() and apply_normalization() separately.
Example:
```julia
features = randn(16, 250, 100)
normalized = normalize_features(features; method=:zscore)
```

```julia
# 1. Estimate params from TRAINING data only
train_features = csp_features_train  # (16 × 250 × 80)
norm_params = estimate_normalization_params(train_features; method=:zscore)

# 2. Apply to BOTH training and test data
train_norm = apply_normalization(train_features, norm_params)
test_norm = apply_normalization(test_features, norm_params)

# 3. Save params with your model
@save "model_with_norm.jld2" model norm_params

# 4. Later: Load and apply same params
@load "model_with_norm.jld2" model norm_params
new_data_norm = apply_normalization(new_data, norm_params)
```
Common Pitfalls:

❌ Never normalize train and test separately
❌ Never normalize raw EEG (normalize after feature extraction)
❌ Never forget to save normalization params

See the complete Feature Normalization guide for detailed documentation.
```julia
struct BCIPerformanceMetrics
    accuracy::Float64                   # Classification accuracy (0–1)
    information_transfer_rate::Float64  # ITR in bits/minute
    false_positive_rate::Float64        # Average FPR across classes
    false_negative_rate::Float64        # Average FNR across classes
    mean_confidence::Float64            # Average confidence across trials
    mean_trial_duration::Float64        # Trial duration in seconds
    selection_rate::Float64             # Successful selections per minute
end
```
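The `information_transfer_rate` field follows the standard Wolpaw ITR formula, which can be computed independently of the SDK (a self-contained sketch; valid for accuracy P strictly between chance 1/N and 1):

```julia
# Wolpaw ITR: bits per selection, scaled to bits per minute.
# N = number of classes, P = classification accuracy, T = trial duration (s).
function itr_bits_per_minute(N::Int, P::Float64, T::Float64)
    bits = log2(N) + P * log2(P) + (1 - P) * log2((1 - P) / (N - 1))
    return bits * (60 / T)
end

itr_bits_per_minute(4, 0.90, 4.0)  # 4 classes, 90% accuracy, 4 s trials ≈ 20.6 bits/min
```

Note how ITR rewards both accuracy and speed: halving the trial duration at the same accuracy doubles the bits per minute.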