Utilities API Reference

SuperQuantX provides comprehensive utility modules for optimization, visualization, benchmarking, data handling, and command-line operations. This API reference covers all utility functions and classes.

Optimization Utilities

Circuit Optimization

superquantx.utils.optimization.optimize_circuit

optimize_circuit(cost_function: Callable[[ndarray], float], initial_params: ndarray, gradient_function: Callable[[ndarray], ndarray] | None = None, optimizer: str = 'adam', max_iterations: int = 100, tolerance: float = 1e-06, learning_rate: float = 0.01, verbose: bool = False) -> dict[str, Any]

Optimize quantum circuit parameters.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `cost_function` | `Callable[[ndarray], float]` | Function to minimize, `f(params) -> cost` | required |
| `initial_params` | `ndarray` | Initial parameter values | required |
| `gradient_function` | `Callable[[ndarray], ndarray] \| None` | Function to compute gradients (optional; finite differences are used if omitted) | `None` |
| `optimizer` | `str` | Optimizer type; only `'adam'` and `'sgd'` are implemented, any other value raises `ValueError` | `'adam'` |
| `max_iterations` | `int` | Maximum number of iterations | `100` |
| `tolerance` | `float` | Convergence tolerance | `1e-06` |
| `learning_rate` | `float` | Learning rate for gradient-based optimizers | `0.01` |
| `verbose` | `bool` | Whether to print progress | `False` |

Returns:

| Type | Description |
|------|-------------|
| `dict[str, Any]` | Dictionary with optimization results |

Source code in src/superquantx/utils/optimization.py
def optimize_circuit(
    cost_function: Callable[[np.ndarray], float],
    initial_params: np.ndarray,
    gradient_function: Callable[[np.ndarray], np.ndarray] | None = None,
    optimizer: str = 'adam',
    max_iterations: int = 100,
    tolerance: float = 1e-6,
    learning_rate: float = 0.01,
    verbose: bool = False
) -> dict[str, Any]:
    """Optimize quantum circuit parameters.

    Args:
        cost_function: Function to minimize f(params) -> cost
        initial_params: Initial parameter values
        gradient_function: Function to compute gradients (optional)
        optimizer: Optimizer type ('adam', 'sgd')
        max_iterations: Maximum number of iterations
        tolerance: Convergence tolerance
        learning_rate: Learning rate for gradient-based optimizers
        verbose: Whether to print progress

    Returns:
        Dictionary with optimization results

    """
    start_time = time.time()

    # Initialize optimizer
    if optimizer == 'adam':
        opt = AdamOptimizer(learning_rate)
    elif optimizer == 'sgd':
        opt = GradientDescentOptimizer(learning_rate)
    else:
        raise ValueError(f"Unknown optimizer: {optimizer}")

    params = initial_params.copy()
    costs = []

    # If no gradient function provided, use finite differences
    if gradient_function is None:
        def gradient_function(p):
            return finite_difference_gradient(cost_function, p)

    for iteration in range(max_iterations):
        # Compute cost and gradient
        cost = cost_function(params)
        gradients = gradient_function(params)

        costs.append(cost)

        if verbose and iteration % 10 == 0:
            print(f"Iteration {iteration}: Cost = {cost:.6f}")

        # Check convergence
        if iteration > 0 and abs(costs[-2] - cost) < tolerance:
            if verbose:
                print(f"Converged at iteration {iteration}")
            break

        # Update parameters
        params = opt.step(params, gradients)

    optimization_time = time.time() - start_time

    return {
        'optimal_params': params,
        'optimal_cost': costs[-1],
        'cost_history': costs,
        'n_iterations': len(costs),
        'converged': len(costs) < max_iterations,
        'optimization_time': optimization_time,
        'optimizer': optimizer
    }
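The control flow above can be sketched without the library: a toy quadratic cost stands in for a circuit expectation value, and a central-difference gradient fills in when no `gradient_function` is given. This is plain NumPy, not the SuperQuantX implementation:

```python
import numpy as np

def finite_difference_gradient(cost_fn, params, eps=1e-6):
    """Central-difference gradient estimate, one parameter at a time."""
    grad = np.zeros_like(params, dtype=float)
    for i in range(len(params)):
        shift = np.zeros_like(params, dtype=float)
        shift[i] = eps
        grad[i] = (cost_fn(params + shift) - cost_fn(params - shift)) / (2 * eps)
    return grad

def toy_optimize(cost_fn, params, lr=0.1, max_iterations=200, tolerance=1e-8):
    """Same loop shape as above: evaluate cost, check convergence, take a step."""
    costs = [cost_fn(params)]
    for _ in range(max_iterations):
        params = params - lr * finite_difference_gradient(cost_fn, params)
        costs.append(cost_fn(params))
        if abs(costs[-2] - costs[-1]) < tolerance:
            break
    return params, costs

# Toy quadratic "cost landscape" standing in for a circuit expectation value.
cost = lambda p: float(np.sum(p ** 2))
opt_params, history = toy_optimize(cost, np.array([1.0, -0.5]))
```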

superquantx.utils.optimization.optimize_parameters

optimize_parameters(objective_function: Callable, bounds: list[tuple[float, float]], method: str = 'scipy', max_evaluations: int = 1000, random_state: int | None = None) -> dict[str, Any]

Optimize parameters using various methods.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `objective_function` | `Callable` | Function to minimize | required |
| `bounds` | `list[tuple[float, float]]` | Parameter bounds as a list of (min, max) tuples | required |
| `method` | `str` | Optimization method (`'scipy'`, `'random_search'`, `'grid_search'`) | `'scipy'` |
| `max_evaluations` | `int` | Maximum function evaluations | `1000` |
| `random_state` | `int \| None` | Random seed | `None` |

Returns:

| Type | Description |
|------|-------------|
| `dict[str, Any]` | Optimization results dictionary |

Source code in src/superquantx/utils/optimization.py
def optimize_parameters(
    objective_function: Callable,
    bounds: list[tuple[float, float]],
    method: str = 'scipy',
    max_evaluations: int = 1000,
    random_state: int | None = None
) -> dict[str, Any]:
    """Optimize parameters using various methods.

    Args:
        objective_function: Function to minimize
        bounds: Parameter bounds as list of (min, max) tuples
        method: Optimization method ('scipy', 'random_search', 'grid_search')
        max_evaluations: Maximum function evaluations
        random_state: Random seed

    Returns:
        Optimization results dictionary

    """
    if method == 'scipy':
        return _scipy_optimize(objective_function, bounds, max_evaluations)
    elif method == 'random_search':
        return _random_search_optimize(objective_function, bounds, max_evaluations, random_state)
    elif method == 'grid_search':
        return _grid_search_optimize(objective_function, bounds, max_evaluations)
    else:
        raise ValueError(f"Unknown optimization method: {method}")
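The `'random_search'` branch can be illustrated with a self-contained sketch. This is a hypothetical stand-in for the private `_random_search_optimize` helper, whose actual implementation is not shown here:

```python
import numpy as np

def random_search(objective, bounds, max_evaluations=500, random_state=0):
    """Sample uniformly inside the box bounds and keep the best point seen."""
    rng = np.random.default_rng(random_state)
    lows = np.array([b[0] for b in bounds])
    highs = np.array([b[1] for b in bounds])
    best_x, best_cost = None, float("inf")
    for _ in range(max_evaluations):
        x = rng.uniform(lows, highs)  # one candidate per evaluation
        c = objective(x)
        if c < best_cost:
            best_x, best_cost = x, c
    return {"optimal_params": best_x, "optimal_cost": best_cost,
            "n_evaluations": max_evaluations}

result = random_search(lambda x: float(np.sum(x ** 2)),
                       bounds=[(-1.0, 1.0), (-1.0, 1.0)])
```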

Optimizers

superquantx.utils.optimization.gradient_descent

gradient_descent(cost_function: Callable[[ndarray], float], gradient_function: Callable[[ndarray], ndarray], initial_params: ndarray, learning_rate: float = 0.01, max_iterations: int = 1000, tolerance: float = 1e-06) -> tuple[np.ndarray, list[float]]

Perform gradient descent optimization.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `cost_function` | `Callable[[ndarray], float]` | Cost function to minimize | required |
| `gradient_function` | `Callable[[ndarray], ndarray]` | Function returning gradients | required |
| `initial_params` | `ndarray` | Initial parameter values | required |
| `learning_rate` | `float` | Learning rate | `0.01` |
| `max_iterations` | `int` | Maximum iterations | `1000` |
| `tolerance` | `float` | Convergence tolerance | `1e-06` |

Returns:

| Type | Description |
|------|-------------|
| `tuple[ndarray, list[float]]` | Tuple of (optimal_params, cost_history) |

Source code in src/superquantx/utils/optimization.py
def gradient_descent(
    cost_function: Callable[[np.ndarray], float],
    gradient_function: Callable[[np.ndarray], np.ndarray],
    initial_params: np.ndarray,
    learning_rate: float = 0.01,
    max_iterations: int = 1000,
    tolerance: float = 1e-6
) -> tuple[np.ndarray, list[float]]:
    """Perform gradient descent optimization.

    Args:
        cost_function: Cost function to minimize
        gradient_function: Function returning gradients
        initial_params: Initial parameter values
        learning_rate: Learning rate
        max_iterations: Maximum iterations
        tolerance: Convergence tolerance

    Returns:
        Tuple of (optimal_params, cost_history)

    """
    params = initial_params.copy()
    cost_history = []

    for i in range(max_iterations):
        cost = cost_function(params)
        cost_history.append(cost)

        if i > 0 and abs(cost_history[-2] - cost) < tolerance:
            break

        gradients = gradient_function(params)
        params = params - learning_rate * gradients

    return params, cost_history

superquantx.utils.optimization.adam_optimizer

adam_optimizer(cost_function: Callable[[ndarray], float], gradient_function: Callable[[ndarray], ndarray], initial_params: ndarray, learning_rate: float = 0.001, beta1: float = 0.9, beta2: float = 0.999, epsilon: float = 1e-08, max_iterations: int = 1000, tolerance: float = 1e-06) -> tuple[np.ndarray, list[float]]

Perform Adam optimization.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `cost_function` | `Callable[[ndarray], float]` | Cost function to minimize | required |
| `gradient_function` | `Callable[[ndarray], ndarray]` | Function returning gradients | required |
| `initial_params` | `ndarray` | Initial parameter values | required |
| `learning_rate` | `float` | Learning rate | `0.001` |
| `beta1` | `float` | Exponential decay rate for first moment | `0.9` |
| `beta2` | `float` | Exponential decay rate for second moment | `0.999` |
| `epsilon` | `float` | Small constant for numerical stability | `1e-08` |
| `max_iterations` | `int` | Maximum iterations | `1000` |
| `tolerance` | `float` | Convergence tolerance | `1e-06` |

Returns:

| Type | Description |
|------|-------------|
| `tuple[ndarray, list[float]]` | Tuple of (optimal_params, cost_history) |

Source code in src/superquantx/utils/optimization.py
def adam_optimizer(
    cost_function: Callable[[np.ndarray], float],
    gradient_function: Callable[[np.ndarray], np.ndarray],
    initial_params: np.ndarray,
    learning_rate: float = 0.001,
    beta1: float = 0.9,
    beta2: float = 0.999,
    epsilon: float = 1e-8,
    max_iterations: int = 1000,
    tolerance: float = 1e-6
) -> tuple[np.ndarray, list[float]]:
    """Perform Adam optimization.

    Args:
        cost_function: Cost function to minimize
        gradient_function: Function returning gradients
        initial_params: Initial parameter values
        learning_rate: Learning rate
        beta1: Exponential decay rate for first moment
        beta2: Exponential decay rate for second moment
        epsilon: Small constant for numerical stability
        max_iterations: Maximum iterations
        tolerance: Convergence tolerance

    Returns:
        Tuple of (optimal_params, cost_history)

    """
    optimizer = AdamOptimizer(learning_rate, beta1, beta2, epsilon)
    params = initial_params.copy()
    cost_history = []

    for i in range(max_iterations):
        cost = cost_function(params)
        cost_history.append(cost)

        if i > 0 and abs(cost_history[-2] - cost) < tolerance:
            break

        gradients = gradient_function(params)
        params = optimizer.step(params, gradients)

    return params, cost_history
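The update rule inside `AdamOptimizer.step` is not shown above; a minimal sketch of a standard Adam step with bias-corrected first and second moments, applied to a toy quadratic, looks like this (plain NumPy, an assumption about the internals rather than the library's code):

```python
import numpy as np

def adam_step(params, grads, state, lr=0.05, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: decayed moment estimates with bias correction."""
    m, v, t = state
    t += 1
    m = beta1 * m + (1 - beta1) * grads            # first-moment estimate
    v = beta2 * v + (1 - beta2) * grads ** 2       # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                   # bias correction
    v_hat = v / (1 - beta2 ** t)
    params = params - lr * m_hat / (np.sqrt(v_hat) + eps)
    return params, (m, v, t)

x = np.array([1.0])
state = (np.zeros(1), np.zeros(1), 0)
history = []
for _ in range(1000):
    history.append(float(x[0] ** 2))       # cost f(x) = x^2
    x, state = adam_step(x, 2 * x, state)  # gradient f'(x) = 2x
```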

Visualization Utilities

Result Visualization

superquantx.utils.visualization.visualize_results

visualize_results(results: dict[str, Any], plot_type: str = 'optimization', backend: str = 'matplotlib', save_path: str | None = None, **kwargs) -> None

Visualize quantum machine learning results.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `results` | `dict[str, Any]` | Results dictionary from algorithm execution | required |
| `plot_type` | `str` | Type of plot (`'optimization'`, `'classification'`, `'regression'`) | `'optimization'` |
| `backend` | `str` | Plotting backend (`'matplotlib'` or `'plotly'`) | `'matplotlib'` |
| `save_path` | `str \| None` | Path to save the plot | `None` |
| `**kwargs` | | Additional plotting arguments | `{}` |

Source code in src/superquantx/utils/visualization.py
def visualize_results(
    results: dict[str, Any],
    plot_type: str = 'optimization',
    backend: str = 'matplotlib',
    save_path: str | None = None,
    **kwargs
) -> None:
    """Visualize quantum machine learning results.

    Args:
        results: Results dictionary from algorithm execution
        plot_type: Type of plot ('optimization', 'classification', 'regression')
        backend: Plotting backend ('matplotlib' or 'plotly')
        save_path: Path to save the plot
        **kwargs: Additional plotting arguments

    """
    if backend == 'matplotlib' and not HAS_MATPLOTLIB:
        raise ImportError("matplotlib is required for matplotlib backend")
    if backend == 'plotly' and not HAS_PLOTLY:
        raise ImportError("plotly is required for plotly backend")

    if plot_type == 'optimization':
        plot_optimization_history(results, backend=backend, save_path=save_path, **kwargs)
    elif plot_type == 'classification':
        plot_classification_results(results, backend=backend, save_path=save_path, **kwargs)
    elif plot_type == 'regression':
        plot_regression_results(results, backend=backend, save_path=save_path, **kwargs)
    else:
        raise ValueError(f"Unknown plot type: {plot_type}")

superquantx.utils.visualization.plot_optimization_history

plot_optimization_history(results: dict[str, Any], backend: str = 'matplotlib', save_path: str | None = None, **kwargs) -> None

Plot optimization history from algorithm results.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `results` | `dict[str, Any]` | Results containing `'cost_history'` or similar | required |
| `backend` | `str` | Plotting backend | `'matplotlib'` |
| `save_path` | `str \| None` | Path to save the plot | `None` |
| `**kwargs` | | Additional plotting arguments | `{}` |

Source code in src/superquantx/utils/visualization.py
def plot_optimization_history(
    results: dict[str, Any],
    backend: str = 'matplotlib',
    save_path: str | None = None,
    **kwargs
) -> None:
    """Plot optimization history from algorithm results.

    Args:
        results: Results containing 'cost_history' or similar
        backend: Plotting backend
        save_path: Path to save the plot
        **kwargs: Additional plotting arguments

    """
    # Extract cost history
    cost_history = None
    if 'cost_history' in results:
        cost_history = results['cost_history']
    elif 'loss_history' in results:
        cost_history = results['loss_history']
    elif 'objective_history' in results:
        cost_history = results['objective_history']

    if cost_history is None:
        raise ValueError("No optimization history found in results")

    if backend == 'matplotlib':
        _plot_optimization_matplotlib(cost_history, save_path, **kwargs)
    elif backend == 'plotly':
        _plot_optimization_plotly(cost_history, save_path, **kwargs)
    else:
        raise ValueError(f"Unknown backend: {backend}")
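The key-fallback logic above generalizes to a one-pass lookup; a small sketch of the same extraction order, independent of the library:

```python
def extract_history(results):
    """Return the first history-like entry found, using the fallback order above."""
    for key in ("cost_history", "loss_history", "objective_history"):
        if key in results:
            return results[key]
    return None
```

`extract_history({})` returns `None`, which corresponds to the `ValueError` branch in `plot_optimization_history`.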

Quantum State Visualization

superquantx.utils.visualization.plot_circuit

plot_circuit(circuit_data: dict[str, Any], backend: str = 'matplotlib', save_path: str | None = None, **kwargs) -> None

Plot quantum circuit diagram.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `circuit_data` | `dict[str, Any]` | Circuit information dictionary | required |
| `backend` | `str` | Plotting backend | `'matplotlib'` |
| `save_path` | `str \| None` | Path to save the plot | `None` |
| `**kwargs` | | Additional plotting arguments | `{}` |

Source code in src/superquantx/utils/visualization.py
def plot_circuit(
    circuit_data: dict[str, Any],
    backend: str = 'matplotlib',
    save_path: str | None = None,
    **kwargs
) -> None:
    """Plot quantum circuit diagram.

    Args:
        circuit_data: Circuit information dictionary
        backend: Plotting backend
        save_path: Path to save the plot
        **kwargs: Additional plotting arguments

    """
    if backend == 'matplotlib':
        _plot_circuit_matplotlib(circuit_data, save_path, **kwargs)
    elif backend == 'plotly':
        _plot_circuit_plotly(circuit_data, save_path, **kwargs)
    else:
        raise ValueError(f"Unknown backend: {backend}")

superquantx.utils.visualization.plot_quantum_state

plot_quantum_state(state_vector: ndarray, backend: str = 'matplotlib', representation: str = 'bar', save_path: str | None = None, **kwargs) -> None

Plot quantum state vector.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `state_vector` | `ndarray` | Complex quantum state vector | required |
| `backend` | `str` | Plotting backend | `'matplotlib'` |
| `representation` | `str` | How to represent the state (`'bar'`, `'phase'`, `'bloch'`) | `'bar'` |
| `save_path` | `str \| None` | Path to save the plot | `None` |
| `**kwargs` | | Additional plotting arguments | `{}` |

Source code in src/superquantx/utils/visualization.py
def plot_quantum_state(
    state_vector: np.ndarray,
    backend: str = 'matplotlib',
    representation: str = 'bar',
    save_path: str | None = None,
    **kwargs
) -> None:
    """Plot quantum state vector.

    Args:
        state_vector: Complex quantum state vector
        backend: Plotting backend
        representation: How to represent state ('bar', 'phase', 'bloch')
        save_path: Path to save the plot
        **kwargs: Additional plotting arguments

    """
    if backend == 'matplotlib':
        _plot_state_matplotlib(state_vector, representation, save_path, **kwargs)
    elif backend == 'plotly':
        _plot_state_plotly(state_vector, representation, save_path, **kwargs)
    else:
        raise ValueError(f"Unknown backend: {backend}")

superquantx.utils.visualization.plot_bloch_sphere

plot_bloch_sphere(state_vector: ndarray, backend: str = 'matplotlib', save_path: str | None = None, **kwargs) -> None

Plot quantum state on Bloch sphere (for single qubit states).

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `state_vector` | `ndarray` | Single qubit state vector [α, β] | required |
| `backend` | `str` | Plotting backend | `'matplotlib'` |
| `save_path` | `str \| None` | Path to save the plot | `None` |
| `**kwargs` | | Additional plotting arguments | `{}` |

Source code in src/superquantx/utils/visualization.py
def plot_bloch_sphere(
    state_vector: np.ndarray,
    backend: str = 'matplotlib',
    save_path: str | None = None,
    **kwargs
) -> None:
    """Plot quantum state on Bloch sphere (for single qubit states).

    Args:
        state_vector: Single qubit state vector [α, β]
        backend: Plotting backend
        save_path: Path to save the plot
        **kwargs: Additional plotting arguments

    """
    if len(state_vector) != 2:
        raise ValueError("Bloch sphere visualization only supports single qubit states")

    # Convert to Bloch vector
    bloch_vector = _state_to_bloch_vector(state_vector)

    if backend == 'matplotlib':
        _plot_bloch_matplotlib(bloch_vector, save_path, **kwargs)
    elif backend == 'plotly':
        _plot_bloch_plotly(bloch_vector, save_path, **kwargs)
    else:
        raise ValueError(f"Unknown backend: {backend}")
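The `_state_to_bloch_vector` conversion is private and not shown above; the standard mapping from a normalized single-qubit state [α, β] to Bloch coordinates can be sketched as follows (an assumption about the helper's behavior, not the library's code):

```python
import numpy as np

def state_to_bloch_vector(state):
    """Map a normalized state [alpha, beta] to (x, y, z) on the Bloch sphere."""
    alpha, beta = state
    x = 2 * np.real(np.conj(alpha) * beta)   # <sigma_x>
    y = 2 * np.imag(np.conj(alpha) * beta)   # <sigma_y>
    z = np.abs(alpha) ** 2 - np.abs(beta) ** 2  # <sigma_z>
    return np.array([x, y, z])
```

For example, |0⟩ maps to the north pole (0, 0, 1) and |+⟩ maps to (1, 0, 0).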

Benchmarking Utilities

Algorithm Benchmarking

superquantx.utils.benchmarking.benchmark_algorithm

benchmark_algorithm(algorithm: Any, datasets: list[tuple[str, Any]], metrics: list[str] | None = None, n_runs: int = 1, verbose: bool = True) -> list[BenchmarkResult]

Benchmark quantum algorithm performance across multiple datasets.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `algorithm` | `Any` | Quantum algorithm instance | required |
| `datasets` | `list[tuple[str, Any]]` | List of (name, dataset) tuples | required |
| `metrics` | `list[str] \| None` | List of metrics to compute | `None` |
| `n_runs` | `int` | Number of runs for averaging | `1` |
| `verbose` | `bool` | Whether to print progress | `True` |

Returns:

| Type | Description |
|------|-------------|
| `list[BenchmarkResult]` | List of benchmark results |

Source code in src/superquantx/utils/benchmarking.py
def benchmark_algorithm(
    algorithm: Any,
    datasets: list[tuple[str, Any]],
    metrics: list[str] | None = None,
    n_runs: int = 1,
    verbose: bool = True
) -> list[BenchmarkResult]:
    """Benchmark quantum algorithm performance across multiple datasets.

    Args:
        algorithm: Quantum algorithm instance
        datasets: List of (name, dataset) tuples
        metrics: List of metrics to compute
        n_runs: Number of runs for averaging
        verbose: Whether to print progress

    Returns:
        List of benchmark results

    """
    if metrics is None:
        metrics = ['accuracy', 'execution_time', 'memory_usage']

    results = []

    for dataset_name, dataset in datasets:
        if verbose:
            print(f"Benchmarking {algorithm.__class__.__name__} on {dataset_name}...")

        dataset_results = []

        for run in range(n_runs):
            if verbose and n_runs > 1:
                print(f"  Run {run + 1}/{n_runs}")

            result = _run_single_benchmark(
                algorithm, dataset_name, dataset, metrics
            )
            dataset_results.append(result)

        # Average results if multiple runs
        if n_runs > 1:
            averaged_result = _average_benchmark_results(dataset_results)
            results.append(averaged_result)
        else:
            results.extend(dataset_results)

    return results
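When `n_runs > 1`, per-run results are averaged. The private `_average_benchmark_results` helper is not shown above; a minimal sketch of metric averaging over plain dictionaries (a hypothetical stand-in, not the library's code) might look like:

```python
import statistics

def average_runs(run_results):
    """Average numeric metrics across runs, skipping values that are None."""
    averaged = {}
    for key in run_results[0]:
        values = [r[key] for r in run_results if r[key] is not None]
        averaged[key] = statistics.mean(values) if values else None
    return averaged

runs = [{"accuracy": 0.8, "execution_time": 1.0},
        {"accuracy": 0.9, "execution_time": 3.0}]
```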

superquantx.utils.benchmarking.benchmark_backend

benchmark_backend(backends: list[Any], test_circuit: Callable, n_qubits_range: list[int] | None = None, n_shots: int = 1024, verbose: bool = True) -> dict[str, list[BenchmarkResult]]

Benchmark different quantum backends.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `backends` | `list[Any]` | List of backend instances | required |
| `test_circuit` | `Callable` | Function that creates the test circuit | required |
| `n_qubits_range` | `list[int] \| None` | Range of qubit numbers to test (defaults to `[2, 4, 6, 8]`) | `None` |
| `n_shots` | `int` | Number of shots for each measurement | `1024` |
| `verbose` | `bool` | Whether to print progress | `True` |

Returns:

| Type | Description |
|------|-------------|
| `dict[str, list[BenchmarkResult]]` | Dictionary mapping backend names to benchmark results |

Source code in src/superquantx/utils/benchmarking.py
def benchmark_backend(
    backends: list[Any],
    test_circuit: Callable,
    n_qubits_range: list[int] | None = None,
    n_shots: int = 1024,
    verbose: bool = True
) -> dict[str, list[BenchmarkResult]]:
    """Benchmark different quantum backends.

    Args:
        backends: List of backend instances
        test_circuit: Function that creates test circuit
        n_qubits_range: Range of qubit numbers to test
        n_shots: Number of shots for each measurement
        verbose: Whether to print progress

    Returns:
        Dictionary mapping backend names to benchmark results

    """
    if n_qubits_range is None:
        n_qubits_range = [2, 4, 6, 8]
    results = {}

    for backend in backends:
        backend_name = getattr(backend, 'name', backend.__class__.__name__)
        if verbose:
            print(f"Benchmarking backend: {backend_name}")

        backend_results = []

        for n_qubits in n_qubits_range:
            if verbose:
                print(f"  Testing {n_qubits} qubits...")

            try:
                start_time = time.time()
                start_memory = _get_memory_usage()

                # Create and run circuit
                circuit = test_circuit(n_qubits)
                result = backend.run(circuit, shots=n_shots)

                execution_time = time.time() - start_time
                memory_usage = _get_memory_usage() - start_memory if start_memory else None

                benchmark_result = BenchmarkResult(
                    algorithm_name="test_circuit",
                    backend_name=backend_name,
                    dataset_name=f"{n_qubits}_qubits",
                    execution_time=execution_time,
                    memory_usage=memory_usage,
                    accuracy=None,
                    loss=None,
                    n_parameters=None,
                    n_qubits=n_qubits,
                    n_iterations=None,
                    success=True,
                    error_message=None,
                    metadata={
                        'n_shots': n_shots,
                        'result_counts': getattr(result, 'counts', None)
                    }
                )

            except Exception as e:
                benchmark_result = BenchmarkResult(
                    algorithm_name="test_circuit",
                    backend_name=backend_name,
                    dataset_name=f"{n_qubits}_qubits",
                    execution_time=0,
                    memory_usage=None,
                    accuracy=None,
                    loss=None,
                    n_parameters=None,
                    n_qubits=n_qubits,
                    n_iterations=None,
                    success=False,
                    error_message=str(e),
                    metadata={'n_shots': n_shots}
                )

            backend_results.append(benchmark_result)

        results[backend_name] = backend_results

    return results

Performance Analysis

superquantx.utils.benchmarking.performance_metrics

performance_metrics(y_true: ndarray, y_pred: ndarray, task_type: str = 'classification') -> dict[str, float]

Compute performance metrics for predictions.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `y_true` | `ndarray` | True labels/values | required |
| `y_pred` | `ndarray` | Predicted labels/values | required |
| `task_type` | `str` | Type of task (`'classification'` or `'regression'`) | `'classification'` |

Returns:

| Type | Description |
|------|-------------|
| `dict[str, float]` | Dictionary of computed metrics |

Source code in src/superquantx/utils/benchmarking.py
def performance_metrics(
    y_true: np.ndarray,
    y_pred: np.ndarray,
    task_type: str = 'classification'
) -> dict[str, float]:
    """Compute performance metrics for predictions.

    Args:
        y_true: True labels/values
        y_pred: Predicted labels/values
        task_type: Type of task ('classification' or 'regression')

    Returns:
        Dictionary of computed metrics

    """
    metrics = {}

    if task_type == 'classification':
        # Accuracy
        metrics['accuracy'] = np.mean(y_true == y_pred)

        # Precision, Recall, F1 for binary classification
        if len(np.unique(y_true)) == 2:
            tp = np.sum((y_true == 1) & (y_pred == 1))
            fp = np.sum((y_true == 0) & (y_pred == 1))
            fn = np.sum((y_true == 1) & (y_pred == 0))

            precision = tp / (tp + fp) if (tp + fp) > 0 else 0
            recall = tp / (tp + fn) if (tp + fn) > 0 else 0
            f1 = 2 * precision * recall / (precision + recall) if (precision + recall) > 0 else 0

            metrics['precision'] = precision
            metrics['recall'] = recall
            metrics['f1_score'] = f1

        # Confusion matrix elements
        unique_labels = np.unique(y_true)
        confusion_matrix = np.zeros((len(unique_labels), len(unique_labels)))

        for i, true_label in enumerate(unique_labels):
            for j, pred_label in enumerate(unique_labels):
                confusion_matrix[i, j] = np.sum((y_true == true_label) & (y_pred == pred_label))

        metrics['confusion_matrix'] = confusion_matrix.tolist()

    elif task_type == 'regression':
        # Mean Squared Error
        mse = np.mean((y_true - y_pred) ** 2)
        metrics['mse'] = mse
        metrics['rmse'] = np.sqrt(mse)

        # Mean Absolute Error
        metrics['mae'] = np.mean(np.abs(y_true - y_pred))

        # R-squared
        ss_res = np.sum((y_true - y_pred) ** 2)
        ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
        r_squared = 1 - (ss_res / ss_tot) if ss_tot > 0 else 0
        metrics['r_squared'] = r_squared

        # Explained variance
        metrics['explained_variance'] = 1 - np.var(y_true - y_pred) / np.var(y_true)

    return metrics
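The binary-classification branch above reduces to a few counts; a self-contained sketch of the same precision/recall/F1 arithmetic on raw 0/1 label arrays:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 from 0/1 label arrays."""
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) > 0 else 0.0)
    return {"accuracy": float(np.mean(y_true == y_pred)),
            "precision": float(precision), "recall": float(recall),
            "f1_score": float(f1)}

m = binary_metrics(np.array([1, 1, 0, 0, 1]), np.array([1, 0, 0, 1, 1]))
```

Here tp = 2, fp = 1, fn = 1, so precision, recall, and F1 all equal 2/3 while accuracy is 3/5.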

superquantx.utils.benchmarking.compare_algorithms

compare_algorithms(algorithms: list[Any], dataset: Any, metrics: list[str] | None = None, n_runs: int = 3, verbose: bool = True) -> dict[str, Any]

Compare multiple algorithms on the same dataset.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `algorithms` | `list[Any]` | List of algorithm instances | required |
| `dataset` | `Any` | Dataset to use for comparison | required |
| `metrics` | `list[str] \| None` | Metrics to compare (defaults to `['accuracy', 'execution_time']`) | `None` |
| `n_runs` | `int` | Number of runs for averaging | `3` |
| `verbose` | `bool` | Whether to print progress | `True` |

Returns:

| Type | Description |
|------|-------------|
| `dict[str, Any]` | Comparison results dictionary |

Source code in src/superquantx/utils/benchmarking.py
def compare_algorithms(
    algorithms: list[Any],
    dataset: Any,
    metrics: list[str] | None = None,
    n_runs: int = 3,
    verbose: bool = True
) -> dict[str, Any]:
    """Compare multiple algorithms on the same dataset.

    Args:
        algorithms: List of algorithm instances
        dataset: Dataset to use for comparison
        metrics: Metrics to compare
        n_runs: Number of runs for averaging
        verbose: Whether to print progress

    Returns:
        Comparison results dictionary

    """
    if metrics is None:
        metrics = ['accuracy', 'execution_time']
    comparison_results = {
        'algorithms': [],
        'metrics': metrics,
        'n_runs': n_runs,
        'results': {}
    }

    for algorithm in algorithms:
        algorithm_name = algorithm.__class__.__name__
        comparison_results['algorithms'].append(algorithm_name)

        if verbose:
            print(f"Running {algorithm_name}...")

        # Run benchmark
        benchmark_results = benchmark_algorithm(
            algorithm,
            [('comparison_dataset', dataset)],
            metrics=metrics,
            n_runs=n_runs,
            verbose=False
        )

        # Extract averaged metrics
        result = benchmark_results[0]
        comparison_results['results'][algorithm_name] = {
            'execution_time': result.execution_time,
            'accuracy': result.accuracy,
            'memory_usage': result.memory_usage,
            'success': result.success,
            'error_message': result.error_message
        }

    # Find best performing algorithm for each metric
    comparison_results['best_algorithm'] = {}
    for metric in metrics:
        if metric == 'execution_time' or metric == 'memory_usage':
            # Lower is better
            best_value = float('inf')
            best_algorithm = None
            for alg_name, results in comparison_results['results'].items():
                if results.get(metric) is not None and results[metric] < best_value:
                    best_value = results[metric]
                    best_algorithm = alg_name
        else:
            # Higher is better
            best_value = float('-inf')
            best_algorithm = None
            for alg_name, results in comparison_results['results'].items():
                if results.get(metric) is not None and results[metric] > best_value:
                    best_value = results[metric]
                    best_algorithm = alg_name

        comparison_results['best_algorithm'][metric] = best_algorithm

    return comparison_results

Feature Mapping Utilities

Quantum Feature Maps

superquantx.utils.feature_mapping.QuantumFeatureMap

QuantumFeatureMap(n_features: int, reps: int = 1, entanglement: str = 'full', parameter_prefix: str = 'x')

Bases: ABC

Abstract base class for quantum feature maps.

Feature maps encode classical data into quantum states by applying parameterized quantum gates based on the input features.

Source code in src/superquantx/utils/feature_mapping.py
def __init__(
    self,
    n_features: int,
    reps: int = 1,
    entanglement: str = 'full',
    parameter_prefix: str = 'x'
):
    self.n_features = n_features
    self.reps = reps
    self.entanglement = entanglement
    self.parameter_prefix = parameter_prefix
    self.n_qubits = n_features
    self._parameters = []

Attributes

num_parameters property

num_parameters: int

Number of parameters in the feature map.

Functions

map_data_point

map_data_point(x: ndarray) -> dict[str, Any]

Map a single data point to quantum circuit parameters.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `x` | `ndarray` | Input data point of length `n_features` | required |

Returns:

| Type | Description |
|------|-------------|
| `dict[str, Any]` | Circuit representation with parameters |

Source code in src/superquantx/utils/feature_mapping.py
def map_data_point(self, x: np.ndarray) -> dict[str, Any]:
    """Map a single data point to quantum circuit parameters.

    Args:
        x: Input data point of length n_features

    Returns:
        Circuit representation with parameters

    """
    if len(x) != self.n_features:
        raise ValueError(f"Expected {self.n_features} features, got {len(x)}")

    # Repeat the data point for each repetition
    parameters = np.tile(x, self.reps)

    return self._build_circuit(parameters)
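The validation-and-repetition step above is just `np.tile` behind a length check; a minimal standalone sketch (hypothetical helper name, not the library's API):

```python
import numpy as np

def repeated_parameters(x, n_features, reps):
    """Validate the feature count, then repeat the data point once per rep."""
    if len(x) != n_features:
        raise ValueError(f"Expected {n_features} features, got {len(x)}")
    return np.tile(x, reps)  # e.g. [a, b] with reps=3 -> [a, b, a, b, a, b]

params = repeated_parameters(np.array([0.1, 0.2]), n_features=2, reps=3)
```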

map_data

map_data(X: ndarray) -> list[dict[str, Any]]

Map multiple data points to quantum circuits.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `X` | `ndarray` | Input data of shape (n_samples, n_features) | required |

Returns:

| Type | Description |
|------|-------------|
| `list[dict[str, Any]]` | List of circuit representations |

Source code in src/superquantx/utils/feature_mapping.py
def map_data(self, X: np.ndarray) -> list[dict[str, Any]]:
    """Map multiple data points to quantum circuits.

    Args:
        X: Input data of shape (n_samples, n_features)

    Returns:
        List of circuit representations

    """
    return [self.map_data_point(x) for x in X]

superquantx.utils.feature_mapping.create_feature_map

create_feature_map(feature_map_type: str, n_features: int, **kwargs) -> QuantumFeatureMap

Factory function to create quantum feature maps.

Parameters:

Name Type Description Default
feature_map_type str

Type of feature map ('Z', 'ZZ', 'Pauli')

required
n_features int

Number of input features

required
**kwargs

Additional arguments for specific feature maps

{}

Returns:

Type Description
QuantumFeatureMap

QuantumFeatureMap instance

Source code in src/superquantx/utils/feature_mapping.py
def create_feature_map(
    feature_map_type: str,
    n_features: int,
    **kwargs
) -> QuantumFeatureMap:
    """Factory function to create quantum feature maps.

    Args:
        feature_map_type: Type of feature map ('Z', 'ZZ', 'Pauli')
        n_features: Number of input features
        **kwargs: Additional arguments for specific feature maps

    Returns:
        QuantumFeatureMap instance

    """
    feature_map_type = feature_map_type.upper()

    if feature_map_type == 'Z':
        return ZFeatureMap(n_features, **kwargs)
    elif feature_map_type == 'ZZ':
        return ZZFeatureMap(n_features, **kwargs)
    elif feature_map_type == 'PAULI':
        return PauliFeatureMap(n_features, **kwargs)
    else:
        raise ValueError(f"Unknown feature map type: {feature_map_type}")
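The factory above normalizes the type string with upper() before dispatching, so lower-case inputs like 'zz' are accepted. A table-driven sketch of the same case-insensitive dispatch, using placeholder classes (make_feature_map and _FEATURE_MAPS are illustrative names; the real classes live in superquantx.utils.feature_mapping):

```python
# Placeholder stand-ins for the real feature map classes.
class ZFeatureMap: ...
class ZZFeatureMap: ...
class PauliFeatureMap: ...

_FEATURE_MAPS = {'Z': ZFeatureMap, 'ZZ': ZZFeatureMap, 'PAULI': PauliFeatureMap}

def make_feature_map(kind: str, **kwargs):
    """Case-insensitive dispatch, mirroring create_feature_map."""
    try:
        return _FEATURE_MAPS[kind.upper()](**kwargs)
    except KeyError:
        raise ValueError(f"Unknown feature map type: {kind}") from None

fm = make_feature_map('zz')  # lower-case input is accepted
```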

Specific Feature Maps

superquantx.utils.feature_mapping.pauli_feature_map

pauli_feature_map(n_features: int, paulis: list[str] | None = None, reps: int = 1, alpha: float = 2.0, entanglement: str = 'full') -> PauliFeatureMap

Create a Pauli feature map with specified Pauli strings.

Parameters:

Name Type Description Default
n_features int

Number of input features

required
paulis list[str] | None

List of Pauli strings to use

None
reps int

Number of repetitions

1
alpha float

Scaling factor

2.0
entanglement str

Entanglement pattern

'full'

Returns:

Type Description
PauliFeatureMap

PauliFeatureMap instance

Source code in src/superquantx/utils/feature_mapping.py
def pauli_feature_map(
    n_features: int,
    paulis: list[str] | None = None,
    reps: int = 1,
    alpha: float = 2.0,
    entanglement: str = 'full'
) -> PauliFeatureMap:
    """Create a Pauli feature map with specified Pauli strings.

    Args:
        n_features: Number of input features
        paulis: List of Pauli strings to use
        reps: Number of repetitions
        alpha: Scaling factor
        entanglement: Entanglement pattern

    Returns:
        PauliFeatureMap instance

    """
    if paulis is None:
        paulis = ['Z', 'ZZ']
    return PauliFeatureMap(
        n_features=n_features,
        paulis=paulis,
        reps=reps,
        alpha=alpha,
        entanglement=entanglement
    )

superquantx.utils.feature_mapping.zz_feature_map

zz_feature_map(n_features: int, reps: int = 1, entanglement: str = 'linear', alpha: float = 2.0) -> ZZFeatureMap

Create a ZZ feature map with specified parameters.

Parameters:

Name Type Description Default
n_features int

Number of input features

required
reps int

Number of repetitions

1
entanglement str

Entanglement pattern ('linear', 'circular', 'full')

'linear'
alpha float

Scaling factor

2.0

Returns:

Type Description
ZZFeatureMap

ZZFeatureMap instance

Source code in src/superquantx/utils/feature_mapping.py
def zz_feature_map(
    n_features: int,
    reps: int = 1,
    entanglement: str = 'linear',
    alpha: float = 2.0
) -> ZZFeatureMap:
    """Create a ZZ feature map with specified parameters.

    Args:
        n_features: Number of input features
        reps: Number of repetitions
        entanglement: Entanglement pattern ('linear', 'circular', 'full')
        alpha: Scaling factor

    Returns:
        ZZFeatureMap instance

    """
    return ZZFeatureMap(
        n_features=n_features,
        reps=reps,
        entanglement=entanglement,
        alpha=alpha
    )

Quantum Utilities

Quantum Information Measures

superquantx.utils.quantum_utils.fidelity

fidelity(state1: ndarray, state2: ndarray, validate: bool = True) -> float

Calculate quantum fidelity between two quantum states.

For pure states |ψ₁⟩ and |ψ₂⟩: F(ψ₁, ψ₂) = |⟨ψ₁|ψ₂⟩|²

For mixed states ρ₁ and ρ₂: F(ρ₁, ρ₂) = Tr(√(√ρ₁ ρ₂ √ρ₁))²

Parameters:

Name Type Description Default
state1 ndarray

First quantum state (vector or density matrix)

required
state2 ndarray

Second quantum state (vector or density matrix)

required
validate bool

Whether to validate inputs

True

Returns:

Type Description
float

Fidelity value between 0 and 1

Source code in src/superquantx/utils/quantum_utils.py
def fidelity(
    state1: np.ndarray,
    state2: np.ndarray,
    validate: bool = True
) -> float:
    """Calculate quantum fidelity between two quantum states.

    For pure states |ψ₁⟩ and |ψ₂⟩:
    F(ψ₁, ψ₂) = |⟨ψ₁|ψ₂⟩|²

    For mixed states ρ₁ and ρ₂:
    F(ρ₁, ρ₂) = Tr(√(√ρ₁ ρ₂ √ρ₁))²

    Args:
        state1: First quantum state (vector or density matrix)
        state2: Second quantum state (vector or density matrix)
        validate: Whether to validate inputs

    Returns:
        Fidelity value between 0 and 1

    """
    if validate:
        _validate_quantum_state(state1)
        _validate_quantum_state(state2)

    # Check if states are vectors (pure states) or matrices (mixed states)
    is_pure1 = len(state1.shape) == 1
    is_pure2 = len(state2.shape) == 1

    if is_pure1 and is_pure2:
        # Both pure states
        overlap = np.vdot(state1, state2)
        return abs(overlap) ** 2

    elif is_pure1 and not is_pure2:
        # state1 pure, state2 mixed
        rho2 = state2
        psi1 = state1.reshape(-1, 1)
        return np.real(np.conj(psi1).T @ rho2 @ psi1)[0, 0]

    elif not is_pure1 and is_pure2:
        # state1 mixed, state2 pure
        rho1 = state1
        psi2 = state2.reshape(-1, 1)
        return np.real(np.conj(psi2).T @ rho1 @ psi2)[0, 0]

    else:
        # Both mixed states
        rho1, rho2 = state1, state2

        # F = Tr(√(√ρ₁ ρ₂ √ρ₁))²
        sqrt_rho1 = sqrtm(rho1)
        M = sqrt_rho1 @ rho2 @ sqrt_rho1
        sqrt_M = sqrtm(M)

        fid = np.real(np.trace(sqrt_M)) ** 2

        # Ensure fidelity is in [0, 1] (numerical errors can cause small violations)
        return np.clip(fid, 0, 1)
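The pure-state branch of the formula is easy to verify by hand. A NumPy-only sketch (not calling SuperQuantX) computing F = |⟨ψ₁|ψ₂⟩|² for |0⟩ and |+⟩, which overlap with fidelity 1/2:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])                 # |0>
plus = np.array([1.0, 1.0]) / np.sqrt(2)    # |+>

# Pure-state fidelity from the documented formula F = |<psi1|psi2>|^2
f = abs(np.vdot(ket0, plus)) ** 2
# |<0|+>|^2 = 1/2
```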

superquantx.utils.quantum_utils.trace_distance

trace_distance(state1: ndarray, state2: ndarray, validate: bool = True) -> float

Calculate trace distance between two quantum states.

For quantum states ρ₁ and ρ₂: D(ρ₁, ρ₂) = (½) * Tr(|ρ₁ - ρ₂|)

Parameters:

Name Type Description Default
state1 ndarray

First quantum state

required
state2 ndarray

Second quantum state

required
validate bool

Whether to validate inputs

True

Returns:

Type Description
float

Trace distance between 0 and 1

Source code in src/superquantx/utils/quantum_utils.py
def trace_distance(
    state1: np.ndarray,
    state2: np.ndarray,
    validate: bool = True
) -> float:
    """Calculate trace distance between two quantum states.

    For quantum states ρ₁ and ρ₂:
    D(ρ₁, ρ₂) = (1/2) * Tr(|ρ₁ - ρ₂|)

    Args:
        state1: First quantum state
        state2: Second quantum state
        validate: Whether to validate inputs

    Returns:
        Trace distance between 0 and 1

    """
    if validate:
        _validate_quantum_state(state1)
        _validate_quantum_state(state2)

    # Convert to density matrices if needed
    rho1 = _to_density_matrix(state1)
    rho2 = _to_density_matrix(state2)

    # Compute difference
    diff = rho1 - rho2

    # Compute eigenvalues and take absolute values
    eigenvals = np.linalg.eigvals(diff)
    abs_eigenvals = np.abs(eigenvals)

    # Trace distance is half the sum of absolute eigenvalues
    return 0.5 * np.sum(abs_eigenvals)
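The eigenvalue computation at the core of trace_distance can be reproduced directly with NumPy. This sketch uses the density matrices of the orthogonal pure states |0⟩ and |1⟩, which are perfectly distinguishable and so have trace distance 1:

```python
import numpy as np

# Density matrices of the orthogonal pure states |0> and |1>
rho0 = np.array([[1.0, 0.0], [0.0, 0.0]])
rho1 = np.array([[0.0, 0.0], [0.0, 1.0]])

# D = (1/2) * sum of |eigenvalues| of (rho0 - rho1)
eigs = np.linalg.eigvalsh(rho0 - rho1)
d = 0.5 * np.sum(np.abs(eigs))
```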

superquantx.utils.quantum_utils.quantum_mutual_information

quantum_mutual_information(joint_state: ndarray, subsystem_dims: tuple[int, int], validate: bool = True) -> float

Calculate quantum mutual information between two subsystems.

I(A:B) = S(ρₐ) + S(ρᵦ) - S(ρₐᵦ)

where S(ρ) is the von Neumann entropy.

Parameters:

Name Type Description Default
joint_state ndarray

Joint quantum state of both subsystems

required
subsystem_dims tuple[int, int]

Dimensions of subsystems (dim_A, dim_B)

required
validate bool

Whether to validate inputs

True

Returns:

Type Description
float

Quantum mutual information

Source code in src/superquantx/utils/quantum_utils.py
def quantum_mutual_information(
    joint_state: np.ndarray,
    subsystem_dims: tuple[int, int],
    validate: bool = True
) -> float:
    """Calculate quantum mutual information between two subsystems.

    I(A:B) = S(ρₐ) + S(ρᵦ) - S(ρₐᵦ)

    where S(ρ) is the von Neumann entropy.

    Args:
        joint_state: Joint quantum state of both subsystems
        subsystem_dims: Dimensions of subsystems (dim_A, dim_B)
        validate: Whether to validate inputs

    Returns:
        Quantum mutual information

    """
    if validate:
        _validate_quantum_state(joint_state)

    rho_AB = _to_density_matrix(joint_state)
    dim_A, dim_B = subsystem_dims

    if rho_AB.shape[0] != dim_A * dim_B:
        raise ValueError(f"State dimension {rho_AB.shape[0]} doesn't match subsystem dims {dim_A * dim_B}")

    # Partial traces
    rho_A = partial_trace(rho_AB, (dim_A, dim_B), trace_out=1)
    rho_B = partial_trace(rho_AB, (dim_A, dim_B), trace_out=0)

    # Von Neumann entropies
    S_A = von_neumann_entropy(rho_A)
    S_B = von_neumann_entropy(rho_B)
    S_AB = von_neumann_entropy(rho_AB)

    return S_A + S_B - S_AB
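As a sanity check, the Bell state |Φ⁺⟩ has mutual information 2 bits between its qubits. The NumPy sketch below implements the partial traces and entropies directly; note it uses base-2 logarithms (bits), which is an assumption here, since the entropy base used by von_neumann_entropy is not stated in this reference.

```python
import numpy as np

def s_vn(rho: np.ndarray) -> float:
    """Von Neumann entropy in bits (base-2 logarithm assumed)."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerically zero eigenvalues
    return float(-np.sum(evals * np.log2(evals)))

# Bell state |Phi+> = (|00> + |11>)/sqrt(2)
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_AB = np.outer(psi, psi.conj())

# Partial traces for a 2x2 bipartition: reshape to (a, b, a', b')
r = rho_AB.reshape(2, 2, 2, 2)
rho_A = np.trace(r, axis1=1, axis2=3)     # trace out B
rho_B = np.trace(r, axis1=0, axis2=2)     # trace out A

mi = s_vn(rho_A) + s_vn(rho_B) - s_vn(rho_AB)
# I(A:B) = 1 + 1 - 0 = 2 bits for a maximally entangled pair
```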

superquantx.utils.quantum_utils.entanglement_measure

entanglement_measure(state: ndarray, subsystem_dims: tuple[int, int], measure: str = 'negativity', validate: bool = True) -> float

Calculate entanglement measure for a bipartite quantum state.

Parameters:

Name Type Description Default
state ndarray

Quantum state (pure or mixed)

required
subsystem_dims tuple[int, int]

Dimensions of subsystems (dim_A, dim_B)

required
measure str

Type of measure ('negativity', 'concurrence', 'entropy')

'negativity'
validate bool

Whether to validate inputs

True

Returns:

Type Description
float

Entanglement measure value

Source code in src/superquantx/utils/quantum_utils.py
def entanglement_measure(
    state: np.ndarray,
    subsystem_dims: tuple[int, int],
    measure: str = 'negativity',
    validate: bool = True
) -> float:
    """Calculate entanglement measure for a bipartite quantum state.

    Args:
        state: Quantum state (pure or mixed)
        subsystem_dims: Dimensions of subsystems (dim_A, dim_B)
        measure: Type of measure ('negativity', 'concurrence', 'entropy')
        validate: Whether to validate inputs

    Returns:
        Entanglement measure value

    """
    if validate:
        _validate_quantum_state(state)

    if measure == 'negativity':
        return negativity(state, subsystem_dims)
    elif measure == 'concurrence':
        return concurrence(state, subsystem_dims)
    elif measure == 'entropy':
        return entanglement_entropy(state, subsystem_dims)
    else:
        raise ValueError(f"Unknown entanglement measure: {measure}")
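For the 'negativity' measure, the standard definition is the absolute sum of the negative eigenvalues of the partial transpose; a maximally entangled two-qubit state gives 1/2. A self-contained NumPy sketch of that computation (illustrative, not the SuperQuantX implementation):

```python
import numpy as np

# Negativity of the Bell state |Phi+> across the 2x2 bipartition.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Partial transpose on subsystem B: swap the b and b' indices
rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

# Negativity = sum of |negative eigenvalues| of the partial transpose
evals = np.linalg.eigvalsh(rho_pt)
neg = float(-np.sum(evals[evals < 0]))
# A maximally entangled two-qubit state has negativity 1/2
```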

Classical Utilities

Machine Learning Utilities

superquantx.utils.classical_utils.cross_validation

cross_validation(algorithm: Any, X: ndarray, y: ndarray, cv_folds: int = 5, scoring: str = 'accuracy', stratify: bool = True, random_state: int | None = 42, verbose: bool = False) -> CrossValidationResult

Perform k-fold cross-validation on a quantum algorithm.

Parameters:

Name Type Description Default
algorithm Any

Quantum algorithm instance

required
X ndarray

Feature matrix

required
y ndarray

Target vector

required
cv_folds int

Number of CV folds

5
scoring str

Scoring metric ('accuracy', 'mse', 'mae')

'accuracy'
stratify bool

Whether to use stratified CV for classification

True
random_state int | None

Random seed

42
verbose bool

Whether to print progress

False

Returns:

Type Description
CrossValidationResult

CrossValidationResult with scores and timing info

Source code in src/superquantx/utils/classical_utils.py
def cross_validation(
    algorithm: Any,
    X: np.ndarray,
    y: np.ndarray,
    cv_folds: int = 5,
    scoring: str = 'accuracy',
    stratify: bool = True,
    random_state: int | None = 42,
    verbose: bool = False
) -> CrossValidationResult:
    """Perform k-fold cross-validation on quantum algorithm.

    Args:
        algorithm: Quantum algorithm instance
        X: Feature matrix
        y: Target vector
        cv_folds: Number of CV folds
        scoring: Scoring metric ('accuracy', 'mse', 'mae')
        stratify: Whether to use stratified CV for classification
        random_state: Random seed
        verbose: Whether to print progress

    Returns:
        CrossValidationResult with scores and timing info

    """

    # Choose cross-validation strategy
    if stratify and _is_classification_task(y):
        cv = StratifiedKFold(n_splits=cv_folds, shuffle=True, random_state=random_state)
    else:
        cv = KFold(n_splits=cv_folds, shuffle=True, random_state=random_state)

    scores = []
    fold_times = []

    for fold_idx, (train_idx, val_idx) in enumerate(cv.split(X, y)):
        if verbose:
            print(f"Fold {fold_idx + 1}/{cv_folds}")

        X_train, X_val = X[train_idx], X[val_idx]
        y_train, y_val = y[train_idx], y[val_idx]

        start_time = time.time()

        # Train algorithm
        algorithm.fit(X_train, y_train)

        # Make predictions
        y_pred = algorithm.predict(X_val)

        fold_time = time.time() - start_time
        fold_times.append(fold_time)

        # Calculate score
        if scoring == 'accuracy':
            score = accuracy_score(y_val, y_pred)
        elif scoring == 'mse':
            score = -mean_squared_error(y_val, y_pred)  # Negative for "higher is better"
        elif scoring == 'mae':
            score = -np.mean(np.abs(y_val - y_pred))
        else:
            raise ValueError(f"Unknown scoring metric: {scoring}")

        scores.append(score)

        if verbose:
            print(f"  Score: {score:.4f}, Time: {fold_time:.2f}s")

    return CrossValidationResult(
        scores=scores,
        mean_score=np.mean(scores),
        std_score=np.std(scores),
        fold_times=fold_times,
        mean_time=np.mean(fold_times)
    )
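The fold loop above is standard k-fold CV around any fit/predict object. A NumPy-only sketch of the same loop, using a nearest-centroid classifier as a stand-in for a quantum algorithm (the classifier and synthetic data are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated synthetic blobs
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)),
               rng.normal(3.0, 0.3, (30, 2))])
y = np.array([0] * 30 + [1] * 30)

def nearest_centroid_accuracy(X_tr, y_tr, X_va, y_va):
    """Stand-in model: classify by the nearest class centroid."""
    centroids = np.stack([X_tr[y_tr == c].mean(axis=0) for c in (0, 1)])
    dists = np.linalg.norm(X_va[:, None, :] - centroids[None, :, :], axis=2)
    return float(np.mean(dists.argmin(axis=1) == y_va))

# Manual 5-fold CV: shuffle indices, split into folds, hold each out once
idx = rng.permutation(len(X))
folds = np.array_split(idx, 5)
scores = []
for k in range(5):
    va = folds[k]
    tr = np.concatenate([folds[j] for j in range(5) if j != k])
    scores.append(nearest_centroid_accuracy(X[tr], y[tr], X[va], y[va]))

mean_score = float(np.mean(scores))
```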

superquantx.utils.classical_utils.hyperparameter_search

hyperparameter_search(algorithm_class: type, param_grid: dict[str, list[Any]], X: ndarray, y: ndarray, cv_folds: int = 3, scoring: str = 'accuracy', n_jobs: int = 1, random_state: int | None = 42, verbose: bool = False) -> dict[str, Any]

Perform grid search for hyperparameter optimization.

Parameters:

Name Type Description Default
algorithm_class type

Quantum algorithm class

required
param_grid dict[str, list[Any]]

Dictionary of parameter names and values to try

required
X ndarray

Feature matrix

required
y ndarray

Target vector

required
cv_folds int

Number of CV folds

3
scoring str

Scoring metric

'accuracy'
n_jobs int

Number of parallel jobs (not implemented)

1
random_state int | None

Random seed

42
verbose bool

Whether to print progress

False

Returns:

Type Description
dict[str, Any]

Dictionary with best parameters and results

Source code in src/superquantx/utils/classical_utils.py
def hyperparameter_search(
    algorithm_class: type,
    param_grid: dict[str, list[Any]],
    X: np.ndarray,
    y: np.ndarray,
    cv_folds: int = 3,
    scoring: str = 'accuracy',
    n_jobs: int = 1,
    random_state: int | None = 42,
    verbose: bool = False
) -> dict[str, Any]:
    """Perform grid search for hyperparameter optimization.

    Args:
        algorithm_class: Quantum algorithm class
        param_grid: Dictionary of parameter names and values to try
        X: Feature matrix
        y: Target vector
        cv_folds: Number of CV folds
        scoring: Scoring metric
        n_jobs: Number of parallel jobs (not implemented)
        random_state: Random seed
        verbose: Whether to print progress

    Returns:
        Dictionary with best parameters and results

    """
    # Generate all parameter combinations
    param_names = list(param_grid.keys())
    param_values = list(param_grid.values())
    param_combinations = list(itertools.product(*param_values))

    best_score = float('-inf')
    best_params = None
    all_results = []

    if verbose:
        print(f"Testing {len(param_combinations)} parameter combinations...")

    for i, combo in enumerate(param_combinations):
        # Create parameter dictionary
        params = dict(zip(param_names, combo, strict=False))

        if verbose:
            print(f"Combination {i+1}/{len(param_combinations)}: {params}")

        try:
            # Create algorithm instance with these parameters
            algorithm = algorithm_class(**params)

            # Perform cross-validation
            cv_result = cross_validation(
                algorithm, X, y, cv_folds=cv_folds,
                scoring=scoring, random_state=random_state,
                verbose=False
            )

            mean_score = cv_result.mean_score

            # Track results
            result = {
                'params': params,
                'mean_score': mean_score,
                'std_score': cv_result.std_score,
                'mean_time': cv_result.mean_time
            }
            all_results.append(result)

            # Update best score
            if mean_score > best_score:
                best_score = mean_score
                best_params = params

            if verbose:
                print(f"  Score: {mean_score:.4f} ± {cv_result.std_score:.4f}")

        except Exception as e:
            if verbose:
                print(f"  Failed: {str(e)}")

            result = {
                'params': params,
                'mean_score': None,
                'std_score': None,
                'mean_time': None,
                'error': str(e)
            }
            all_results.append(result)

    return {
        'best_params': best_params,
        'best_score': best_score,
        'all_results': all_results,
        'n_combinations': len(param_combinations)
    }
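The grid expansion is just itertools.product over the parameter lists, yielding one keyword dictionary per combination. A stdlib-only sketch (the parameter names 'shots', 'learning_rate', and 'reps' are hypothetical examples, not documented SuperQuantX parameters):

```python
import itertools

param_grid = {
    'shots': [512, 1024],
    'learning_rate': [0.01, 0.1],
    'reps': [1, 2, 3],
}

names = list(param_grid)
combos = [dict(zip(names, values))
          for values in itertools.product(*param_grid.values())]
# 2 * 2 * 3 = 12 candidate settings, each ready to pass as **kwargs
```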

superquantx.utils.classical_utils.model_selection

model_selection(algorithms: list[tuple[str, type, dict[str, Any]]], X: ndarray, y: ndarray, cv_folds: int = 5, scoring: str = 'accuracy', test_size: float = 0.2, random_state: int | None = 42, verbose: bool = False) -> dict[str, Any]

Compare multiple algorithms and select the best one.

Parameters:

Name Type Description Default
algorithms list[tuple[str, type, dict[str, Any]]]

List of (name, class, params) tuples

required
X ndarray

Feature matrix

required
y ndarray

Target vector

required
cv_folds int

Number of CV folds

5
scoring str

Scoring metric

'accuracy'
test_size float

Proportion for test set

0.2
random_state int | None

Random seed

42
verbose bool

Whether to print progress

False

Returns:

Type Description
dict[str, Any]

Dictionary with model selection results

Source code in src/superquantx/utils/classical_utils.py
def model_selection(
    algorithms: list[tuple[str, type, dict[str, Any]]],
    X: np.ndarray,
    y: np.ndarray,
    cv_folds: int = 5,
    scoring: str = 'accuracy',
    test_size: float = 0.2,
    random_state: int | None = 42,
    verbose: bool = False
) -> dict[str, Any]:
    """Compare multiple algorithms and select the best one.

    Args:
        algorithms: List of (name, class, params) tuples
        X: Feature matrix
        y: Target vector
        cv_folds: Number of CV folds
        scoring: Scoring metric
        test_size: Proportion for test set
        random_state: Random seed
        verbose: Whether to print progress

    Returns:
        Dictionary with model selection results

    """
    # Split data
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=test_size, random_state=random_state,
        stratify=y if _is_classification_task(y) else None
    )

    results = {}
    best_algorithm = None
    best_score = float('-inf')

    for name, algorithm_class, params in algorithms:
        if verbose:
            print(f"Evaluating {name}...")

        try:
            # Create algorithm instance
            algorithm = algorithm_class(**params)

            # Cross-validation on training set
            cv_result = cross_validation(
                algorithm, X_train, y_train, cv_folds=cv_folds,
                scoring=scoring, random_state=random_state,
                verbose=False
            )

            # Final evaluation on test set
            algorithm.fit(X_train, y_train)
            y_pred = algorithm.predict(X_test)

            if scoring == 'accuracy':
                test_score = accuracy_score(y_test, y_pred)
            elif scoring == 'mse':
                test_score = -mean_squared_error(y_test, y_pred)
            elif scoring == 'mae':
                test_score = -np.mean(np.abs(y_test - y_pred))
            else:
                test_score = 0  # Fallback

            results[name] = {
                'cv_mean': cv_result.mean_score,
                'cv_std': cv_result.std_score,
                'test_score': test_score,
                'mean_time': cv_result.mean_time,
                'params': params
            }

            # Track best algorithm
            if cv_result.mean_score > best_score:
                best_score = cv_result.mean_score
                best_algorithm = name

            if verbose:
                print(f"  CV: {cv_result.mean_score:.4f} ± {cv_result.std_score:.4f}")
                print(f"  Test: {test_score:.4f}")

        except Exception as e:
            if verbose:
                print(f"  Failed: {str(e)}")

            results[name] = {
                'cv_mean': None,
                'cv_std': None,
                'test_score': None,
                'mean_time': None,
                'params': params,
                'error': str(e)
            }

    return {
        'results': results,
        'best_algorithm': best_algorithm,
        'best_score': best_score
    }

superquantx.utils.classical_utils.data_splitting

data_splitting(X: ndarray, y: ndarray, train_size: float = 0.7, val_size: float = 0.15, test_size: float = 0.15, stratify: bool = True, random_state: int | None = 42) -> tuple[np.ndarray, ...]

Split data into train, validation, and test sets.

Parameters:

Name Type Description Default
X ndarray

Feature matrix

required
y ndarray

Target vector

required
train_size float

Proportion for training

0.7
val_size float

Proportion for validation

0.15
test_size float

Proportion for testing

0.15
stratify bool

Whether to stratify splits for classification

True
random_state int | None

Random seed

42

Returns:

Type Description
tuple[ndarray, ...]

Tuple of (X_train, X_val, X_test, y_train, y_val, y_test)

Source code in src/superquantx/utils/classical_utils.py
def data_splitting(
    X: np.ndarray,
    y: np.ndarray,
    train_size: float = 0.7,
    val_size: float = 0.15,
    test_size: float = 0.15,
    stratify: bool = True,
    random_state: int | None = 42
) -> tuple[np.ndarray, ...]:
    """Split data into train, validation, and test sets.

    Args:
        X: Feature matrix
        y: Target vector
        train_size: Proportion for training
        val_size: Proportion for validation
        test_size: Proportion for testing
        stratify: Whether to stratify splits for classification
        random_state: Random seed

    Returns:
        Tuple of (X_train, X_val, X_test, y_train, y_val, y_test)

    """
    if not np.isclose(train_size + val_size + test_size, 1.0):
        raise ValueError("Split sizes must sum to 1.0")

    stratify_target = y if (stratify and _is_classification_task(y)) else None

    # First split: separate test set
    X_temp, X_test, y_temp, y_test = train_test_split(
        X, y, test_size=test_size, random_state=random_state,
        stratify=stratify_target
    )

    # Second split: separate train and validation
    relative_val_size = val_size / (train_size + val_size)
    stratify_temp = y_temp if (stratify and _is_classification_task(y)) else None

    X_train, X_val, y_train, y_val = train_test_split(
        X_temp, y_temp, test_size=relative_val_size,
        random_state=random_state, stratify=stratify_temp
    )

    return X_train, X_val, X_test, y_train, y_val, y_test
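The key arithmetic above is the relative validation fraction: after the test set is removed, val_size must be rescaled against what remains. A NumPy sketch of the same two-stage split (unstratified, for illustration only):

```python
import numpy as np

train_size, val_size, test_size = 0.7, 0.15, 0.15

# Stage 1 removes the test set; stage 2 must therefore use a
# *relative* validation fraction of what remains:
relative_val_size = val_size / (train_size + val_size)  # 0.15 / 0.85

n = 200
idx = np.random.default_rng(42).permutation(n)
n_test = round(n * test_size)
n_val = round((n - n_test) * relative_val_size)

test_idx = idx[:n_test]
val_idx = idx[n_test:n_test + n_val]
train_idx = idx[n_test + n_val:]
# 140 / 30 / 30 split
```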

Datasets

Quantum-Adapted Classical Datasets

superquantx.datasets.load_iris_quantum

load_iris_quantum(n_features: Optional[int] = None, encoding: str = 'amplitude', normalize: bool = True, test_size: float = 0.2, random_state: Optional[int] = 42) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray, Dict[str, Any]]

Load and preprocess the Iris dataset for quantum machine learning.

Parameters:

Name Type Description Default
n_features Optional[int]

Number of features to keep (default: all 4)

None
encoding str

Type of quantum encoding ('amplitude', 'angle', 'basis')

'amplitude'
normalize bool

Whether to normalize features

True
test_size float

Proportion of dataset for testing

0.2
random_state Optional[int]

Random seed for reproducibility

42

Returns:

Type Description
Tuple[ndarray, ndarray, ndarray, ndarray, Dict[str, Any]]

Tuple of (X_train, X_test, y_train, y_test, metadata)

Source code in src/superquantx/datasets/quantum_datasets.py
def load_iris_quantum(
    n_features: Optional[int] = None,
    encoding: str = 'amplitude',
    normalize: bool = True,
    test_size: float = 0.2,
    random_state: Optional[int] = 42
) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray, Dict[str, Any]]:
    """Load and preprocess the Iris dataset for quantum machine learning.

    Args:
        n_features: Number of features to keep (default: all 4)
        encoding: Type of quantum encoding ('amplitude', 'angle', 'basis')
        normalize: Whether to normalize features
        test_size: Proportion of dataset for testing
        random_state: Random seed for reproducibility

    Returns:
        Tuple of (X_train, X_test, y_train, y_test, metadata)

    """
    # Load dataset
    iris = datasets.load_iris()
    X, y = iris.data, iris.target

    # Feature selection
    if n_features is not None and n_features < X.shape[1]:
        # Select features with highest variance
        feature_vars = np.var(X, axis=0)
        selected_features = np.argsort(feature_vars)[-n_features:]
        X = X[:, selected_features]
        feature_names = [iris.feature_names[i] for i in selected_features]
    else:
        feature_names = iris.feature_names
        n_features = X.shape[1]

    # Normalization
    if normalize:
        if encoding == 'amplitude':
            X = normalize_quantum_data(X, method='l2')
        elif encoding == 'angle':
            scaler = MinMaxScaler(feature_range=(0, 2*np.pi))
            X = scaler.fit_transform(X)
        else:
            scaler = StandardScaler()
            X = scaler.fit_transform(X)

    # Train-test split
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=test_size, random_state=random_state, stratify=y
    )

    # Metadata
    metadata = {
        'dataset_name': 'iris',
        'n_samples': len(X),
        'n_features': n_features,
        'n_classes': len(np.unique(y)),
        'class_names': iris.target_names.tolist(),
        'feature_names': feature_names,
        'encoding': encoding,
        'normalized': normalize
    }

    return X_train, X_test, y_train, y_test, metadata
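For the 'amplitude' encoding branch, each sample must be a unit vector, since its entries become state amplitudes. A minimal L2 row normalization in plain NumPy (a stand-in for normalize_quantum_data with method='l2'):

```python
import numpy as np

X = np.array([[3.0, 4.0],
              [1.0, 1.0]])

# Divide each row by its L2 norm so it is a valid amplitude vector
norms = np.linalg.norm(X, axis=1, keepdims=True)
X_amp = X / norms
# Every row now has unit L2 norm
```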

superquantx.datasets.load_wine_quantum

load_wine_quantum(n_features: Optional[int] = 8, encoding: str = 'amplitude', normalize: bool = True, test_size: float = 0.2, random_state: Optional[int] = 42) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray, Dict[str, Any]]

Load and preprocess the Wine dataset for quantum machine learning.

Parameters:

Name Type Description Default
n_features Optional[int]

Number of top features to keep (default: 8)

8
encoding str

Type of quantum encoding

'amplitude'
normalize bool

Whether to normalize features

True
test_size float

Proportion of dataset for testing

0.2
random_state Optional[int]

Random seed for reproducibility

42

Returns:

Type Description
Tuple[ndarray, ndarray, ndarray, ndarray, Dict[str, Any]]

Tuple of (X_train, X_test, y_train, y_test, metadata)

Source code in src/superquantx/datasets/quantum_datasets.py
def load_wine_quantum(
    n_features: Optional[int] = 8,
    encoding: str = 'amplitude',
    normalize: bool = True,
    test_size: float = 0.2,
    random_state: Optional[int] = 42
) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray, Dict[str, Any]]:
    """Load and preprocess the Wine dataset for quantum machine learning.

    Args:
        n_features: Number of top features to keep (default: 8)
        encoding: Type of quantum encoding
        normalize: Whether to normalize features
        test_size: Proportion of dataset for testing
        random_state: Random seed for reproducibility

    Returns:
        Tuple of (X_train, X_test, y_train, y_test, metadata)

    """
    # Load dataset
    wine = datasets.load_wine()
    X, y = wine.data, wine.target

    # Feature selection based on variance
    if n_features is not None and n_features < X.shape[1]:
        feature_vars = np.var(X, axis=0)
        selected_features = np.argsort(feature_vars)[-n_features:]
        X = X[:, selected_features]
        feature_names = [wine.feature_names[i] for i in selected_features]
    else:
        feature_names = wine.feature_names
        n_features = X.shape[1]

    # Normalization
    if normalize:
        if encoding == 'amplitude':
            X = normalize_quantum_data(X, method='l2')
        elif encoding == 'angle':
            scaler = MinMaxScaler(feature_range=(0, 2*np.pi))
            X = scaler.fit_transform(X)
        else:
            scaler = StandardScaler()
            X = scaler.fit_transform(X)

    # Train-test split
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=test_size, random_state=random_state, stratify=y
    )

    metadata = {
        'dataset_name': 'wine',
        'n_samples': len(X),
        'n_features': n_features,
        'n_classes': len(np.unique(y)),
        'class_names': wine.target_names.tolist(),
        'feature_names': feature_names,
        'encoding': encoding,
        'normalized': normalize
    }

    return X_train, X_test, y_train, y_test, metadata

superquantx.datasets.load_digits_quantum

load_digits_quantum(n_classes: int = 10, n_pixels: Optional[int] = 32, encoding: str = 'amplitude', normalize: bool = True, test_size: float = 0.2, random_state: Optional[int] = 42) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray, Dict[str, Any]]

Load and preprocess the Digits dataset for quantum machine learning.

Parameters:

Name Type Description Default
n_classes int

Number of digit classes to include (2-10)

10
n_pixels Optional[int]

Number of pixels to keep (reduces from 64)

32
encoding str

Type of quantum encoding

'amplitude'
normalize bool

Whether to normalize features

True
test_size float

Proportion of dataset for testing

0.2
random_state Optional[int]

Random seed for reproducibility

42

Returns:

Type Description
Tuple[ndarray, ndarray, ndarray, ndarray, Dict[str, Any]]

Tuple of (X_train, X_test, y_train, y_test, metadata)

Source code in src/superquantx/datasets/quantum_datasets.py
def load_digits_quantum(
    n_classes: int = 10,
    n_pixels: Optional[int] = 32,
    encoding: str = 'amplitude',
    normalize: bool = True,
    test_size: float = 0.2,
    random_state: Optional[int] = 42
) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray, Dict[str, Any]]:
    """Load and preprocess the Digits dataset for quantum machine learning.

    Args:
        n_classes: Number of digit classes to include (2-10)
        n_pixels: Number of pixels to keep (reduces from 64)
        encoding: Type of quantum encoding
        normalize: Whether to normalize features
        test_size: Proportion of dataset for testing
        random_state: Random seed for reproducibility

    Returns:
        Tuple of (X_train, X_test, y_train, y_test, metadata)

    """
    # Load dataset
    digits = datasets.load_digits()
    X, y = digits.data, digits.target

    # Class filtering
    if n_classes < 10:
        mask = y < n_classes
        X, y = X[mask], y[mask]

    # Feature selection (pixel reduction)
    if n_pixels is not None and n_pixels < X.shape[1]:
        # Select pixels with highest variance
        pixel_vars = np.var(X, axis=0)
        selected_pixels = np.argsort(pixel_vars)[-n_pixels:]
        X = X[:, selected_pixels]
    else:
        n_pixels = X.shape[1]

    # Normalization
    if normalize:
        if encoding == 'amplitude':
            X = normalize_quantum_data(X, method='l2')
        elif encoding == 'angle':
            scaler = MinMaxScaler(feature_range=(0, 2*np.pi))
            X = scaler.fit_transform(X)
        else:
            scaler = StandardScaler()
            X = scaler.fit_transform(X)

    # Train-test split
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=test_size, random_state=random_state, stratify=y
    )

    metadata = {
        'dataset_name': 'digits',
        'n_samples': len(X),
        'n_features': n_pixels,
        'n_classes': n_classes,
        'original_shape': (8, 8),
        'encoding': encoding,
        'normalized': normalize
    }

    return X_train, X_test, y_train, y_test, metadata
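The pixel-reduction step above keeps the pixels with the highest variance across samples. A minimal numpy sketch of that selection on toy data (hypothetical array, not the real digits set):

```python
import numpy as np

# Toy "image" data: 6 samples x 8 pixels, with pixel 3 given a large spread
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 8))
X[:, 3] *= 100.0  # pixel 3 now has far higher variance than the rest

n_pixels = 4
pixel_vars = np.var(X, axis=0)
selected_pixels = np.argsort(pixel_vars)[-n_pixels:]  # indices of the top-variance pixels
X_reduced = X[:, selected_pixels]

print(X_reduced.shape)       # (6, 4)
print(3 in selected_pixels)  # True: the high-variance pixel survives the cut
```

Note that `argsort` returns the selected indices in ascending order of variance, not in their original pixel order, so the column order of `X_reduced` differs from the input.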

superquantx.datasets.load_breast_cancer_quantum

load_breast_cancer_quantum(n_features: Optional[int] = 16, encoding: str = 'amplitude', normalize: bool = True, test_size: float = 0.2, random_state: Optional[int] = 42) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray, Dict[str, Any]]

Load and preprocess the Breast Cancer dataset for quantum machine learning.

Parameters:

Name Type Description Default
n_features Optional[int]

Number of top features to keep

16
encoding str

Type of quantum encoding

'amplitude'
normalize bool

Whether to normalize features

True
test_size float

Proportion of dataset for testing

0.2
random_state Optional[int]

Random seed for reproducibility

42

Returns:

Type Description
Tuple[ndarray, ndarray, ndarray, ndarray, Dict[str, Any]]

Tuple of (X_train, X_test, y_train, y_test, metadata)

Source code in src/superquantx/datasets/quantum_datasets.py
def load_breast_cancer_quantum(
    n_features: Optional[int] = 16,
    encoding: str = 'amplitude',
    normalize: bool = True,
    test_size: float = 0.2,
    random_state: Optional[int] = 42
) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray, Dict[str, Any]]:
    """Load and preprocess the Breast Cancer dataset for quantum machine learning.

    Args:
        n_features: Number of top features to keep
        encoding: Type of quantum encoding
        normalize: Whether to normalize features
        test_size: Proportion of dataset for testing
        random_state: Random seed for reproducibility

    Returns:
        Tuple of (X_train, X_test, y_train, y_test, metadata)

    """
    # Load dataset
    cancer = datasets.load_breast_cancer()
    X, y = cancer.data, cancer.target

    # Feature selection based on correlation with target
    if n_features is not None and n_features < X.shape[1]:
        correlations = np.abs([np.corrcoef(X[:, i], y)[0, 1] for i in range(X.shape[1])])
        selected_features = np.argsort(correlations)[-n_features:]
        X = X[:, selected_features]
        feature_names = [cancer.feature_names[i] for i in selected_features]
    else:
        feature_names = cancer.feature_names
        n_features = X.shape[1]

    # Normalization
    if normalize:
        if encoding == 'amplitude':
            X = normalize_quantum_data(X, method='l2')
        elif encoding == 'angle':
            scaler = MinMaxScaler(feature_range=(0, 2*np.pi))
            X = scaler.fit_transform(X)
        else:
            scaler = StandardScaler()
            X = scaler.fit_transform(X)

    # Train-test split
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=test_size, random_state=random_state, stratify=y
    )

    metadata = {
        'dataset_name': 'breast_cancer',
        'n_samples': len(X),
        'n_features': n_features,
        'n_classes': 2,
        'class_names': cancer.target_names.tolist(),
        'feature_names': feature_names,
        'encoding': encoding,
        'normalized': normalize
    }

    return X_train, X_test, y_train, y_test, metadata
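The feature selection above ranks features by the absolute Pearson correlation between each column and the target. A self-contained numpy sketch of that ranking (synthetic data for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=100).astype(float)
X = rng.normal(size=(100, 5))
X[:, 2] = y + 0.1 * rng.normal(size=100)  # feature 2 tracks the target closely

n_features = 2
correlations = np.abs([np.corrcoef(X[:, i], y)[0, 1] for i in range(X.shape[1])])
selected = np.argsort(correlations)[-n_features:]  # keep the two most correlated features

print(2 in selected)  # True
```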

Synthetic Data Generators

superquantx.datasets.generate_classification_data

generate_classification_data(n_samples: int = 200, n_features: int = 4, n_classes: int = 2, n_redundant: int = 0, n_informative: Optional[int] = None, class_sep: float = 1.0, encoding: str = 'amplitude', normalize: bool = True, test_size: float = 0.2, random_state: Optional[int] = 42) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray, Dict[str, Any]]

Generate synthetic classification data for quantum machine learning.

Parameters:

Name Type Description Default
n_samples int

Number of samples to generate

200
n_features int

Number of features (should be power of 2 for quantum efficiency)

4
n_classes int

Number of classes

2
n_redundant int

Number of redundant features

0
n_informative Optional[int]

Number of informative features (default: n_features)

None
class_sep float

Class separation factor

1.0
encoding str

Type of quantum encoding

'amplitude'
normalize bool

Whether to normalize features

True
test_size float

Proportion for test split

0.2
random_state Optional[int]

Random seed

42

Returns:

Type Description
Tuple[ndarray, ndarray, ndarray, ndarray, Dict[str, Any]]

Tuple of (X_train, X_test, y_train, y_test, metadata)

Source code in src/superquantx/datasets/synthetic.py
def generate_classification_data(
    n_samples: int = 200,
    n_features: int = 4,
    n_classes: int = 2,
    n_redundant: int = 0,
    n_informative: Optional[int] = None,
    class_sep: float = 1.0,
    encoding: str = 'amplitude',
    normalize: bool = True,
    test_size: float = 0.2,
    random_state: Optional[int] = 42
) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray, Dict[str, Any]]:
    """Generate synthetic classification data for quantum machine learning.

    Args:
        n_samples: Number of samples to generate
        n_features: Number of features (should be power of 2 for quantum efficiency)
        n_classes: Number of classes
        n_redundant: Number of redundant features
        n_informative: Number of informative features (default: n_features)
        class_sep: Class separation factor
        encoding: Type of quantum encoding
        normalize: Whether to normalize features
        test_size: Proportion for test split
        random_state: Random seed

    Returns:
        Tuple of (X_train, X_test, y_train, y_test, metadata)

    """
    if n_informative is None:
        n_informative = n_features

    # Generate synthetic data
    X, y = make_classification(
        n_samples=n_samples,
        n_features=n_features,
        n_informative=n_informative,
        n_redundant=n_redundant,
        n_classes=n_classes,
        class_sep=class_sep,
        random_state=random_state
    )

    # Normalization for quantum encoding
    if normalize:
        if encoding == 'amplitude':
            X = normalize_quantum_data(X, method='l2')
        elif encoding == 'angle':
            scaler = MinMaxScaler(feature_range=(0, 2*np.pi))
            X = scaler.fit_transform(X)
        else:
            scaler = StandardScaler()
            X = scaler.fit_transform(X)

    # Train-test split
    n_train = int(n_samples * (1 - test_size))
    indices = np.random.RandomState(random_state).permutation(n_samples)

    train_idx = indices[:n_train]
    test_idx = indices[n_train:]

    X_train, X_test = X[train_idx], X[test_idx]
    y_train, y_test = y[train_idx], y[test_idx]

    metadata = {
        'dataset_type': 'synthetic_classification',
        'n_samples': n_samples,
        'n_features': n_features,
        'n_classes': n_classes,
        'n_informative': n_informative,
        'n_redundant': n_redundant,
        'class_sep': class_sep,
        'encoding': encoding,
        'normalized': normalize
    }

    return X_train, X_test, y_train, y_test, metadata
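Unlike the dataset loaders above, which use a stratified `train_test_split`, this generator splits via a seeded permutation, so class proportions are not preserved across the split. The split logic in isolation:

```python
import numpy as np

n_samples, test_size, random_state = 10, 0.2, 42
X = np.arange(n_samples * 2).reshape(n_samples, 2)
y = np.arange(n_samples) % 2

# Seeded permutation split, as in generate_classification_data (not stratified)
n_train = int(n_samples * (1 - test_size))
indices = np.random.RandomState(random_state).permutation(n_samples)
train_idx, test_idx = indices[:n_train], indices[n_train:]

X_train, X_test = X[train_idx], X[test_idx]
y_train, y_test = y[train_idx], y[test_idx]
print(len(X_train), len(X_test))  # 8 2
```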

superquantx.datasets.generate_regression_data

generate_regression_data(n_samples: int = 200, n_features: int = 4, n_informative: Optional[int] = None, noise: float = 0.1, encoding: str = 'amplitude', normalize: bool = True, test_size: float = 0.2, random_state: Optional[int] = 42) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray, Dict[str, Any]]

Generate synthetic regression data for quantum machine learning.

Parameters:

Name Type Description Default
n_samples int

Number of samples to generate

200
n_features int

Number of features

4
n_informative Optional[int]

Number of informative features

None
noise float

Noise level in target

0.1
encoding str

Type of quantum encoding

'amplitude'
normalize bool

Whether to normalize features

True
test_size float

Proportion for test split

0.2
random_state Optional[int]

Random seed

42

Returns:

Type Description
Tuple[ndarray, ndarray, ndarray, ndarray, Dict[str, Any]]

Tuple of (X_train, X_test, y_train, y_test, metadata)

Source code in src/superquantx/datasets/synthetic.py
def generate_regression_data(
    n_samples: int = 200,
    n_features: int = 4,
    n_informative: Optional[int] = None,
    noise: float = 0.1,
    encoding: str = 'amplitude',
    normalize: bool = True,
    test_size: float = 0.2,
    random_state: Optional[int] = 42
) -> Tuple[np.ndarray, np.ndarray, np.ndarray, np.ndarray, Dict[str, Any]]:
    """Generate synthetic regression data for quantum machine learning.

    Args:
        n_samples: Number of samples to generate
        n_features: Number of features
        n_informative: Number of informative features
        noise: Noise level in target
        encoding: Type of quantum encoding
        normalize: Whether to normalize features
        test_size: Proportion for test split
        random_state: Random seed

    Returns:
        Tuple of (X_train, X_test, y_train, y_test, metadata)

    """
    if n_informative is None:
        n_informative = n_features

    # Generate synthetic regression data
    X, y = make_regression(
        n_samples=n_samples,
        n_features=n_features,
        n_informative=n_informative,
        noise=noise,
        random_state=random_state
    )

    # Normalization
    if normalize:
        if encoding == 'amplitude':
            X = normalize_quantum_data(X, method='l2')
        elif encoding == 'angle':
            scaler = MinMaxScaler(feature_range=(0, 2*np.pi))
            X = scaler.fit_transform(X)
        else:
            scaler = StandardScaler()
            X = scaler.fit_transform(X)

        # Normalize targets
        y = (y - np.mean(y)) / np.std(y)

    # Train-test split
    n_train = int(n_samples * (1 - test_size))
    indices = np.random.RandomState(random_state).permutation(n_samples)

    train_idx = indices[:n_train]
    test_idx = indices[n_train:]

    X_train, X_test = X[train_idx], X[test_idx]
    y_train, y_test = y[train_idx], y[test_idx]

    metadata = {
        'dataset_type': 'synthetic_regression',
        'n_samples': n_samples,
        'n_features': n_features,
        'n_informative': n_informative,
        'noise': noise,
        'encoding': encoding,
        'normalized': normalize
    }

    return X_train, X_test, y_train, y_test, metadata
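Note that the target standardization `y = (y - mean) / std` is nested inside the `if normalize:` branch, so with `normalize=False` the targets keep their raw scale. The standardization itself, in isolation:

```python
import numpy as np

y = np.array([10.0, 12.0, 9.0, 15.0, 14.0])
y_std = (y - np.mean(y)) / np.std(y)

# After standardization the targets have mean ~0 and standard deviation ~1
print(np.mean(y_std), np.std(y_std))
```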

superquantx.datasets.generate_clustering_data

generate_clustering_data(n_samples: int = 200, n_features: int = 4, n_clusters: int = 3, cluster_std: float = 1.0, center_box: Tuple[float, float] = (-10.0, 10.0), encoding: str = 'amplitude', normalize: bool = True, random_state: Optional[int] = 42) -> Tuple[np.ndarray, np.ndarray, Dict[str, Any]]

Generate synthetic clustering data for quantum machine learning.

Parameters:

Name Type Description Default
n_samples int

Number of samples to generate

200
n_features int

Number of features

4
n_clusters int

Number of clusters

3
cluster_std float

Standard deviation of clusters

1.0
center_box Tuple[float, float]

Bounding box for cluster centers

(-10.0, 10.0)
encoding str

Type of quantum encoding

'amplitude'
normalize bool

Whether to normalize features

True
random_state Optional[int]

Random seed

42

Returns:

Type Description
Tuple[ndarray, ndarray, Dict[str, Any]]

Tuple of (X, y_true, metadata)

Source code in src/superquantx/datasets/synthetic.py
def generate_clustering_data(
    n_samples: int = 200,
    n_features: int = 4,
    n_clusters: int = 3,
    cluster_std: float = 1.0,
    center_box: Tuple[float, float] = (-10., 10.),
    encoding: str = 'amplitude',
    normalize: bool = True,
    random_state: Optional[int] = 42
) -> Tuple[np.ndarray, np.ndarray, Dict[str, Any]]:
    """Generate synthetic clustering data for quantum machine learning.

    Args:
        n_samples: Number of samples to generate
        n_features: Number of features
        n_clusters: Number of clusters
        cluster_std: Standard deviation of clusters
        center_box: Bounding box for cluster centers
        encoding: Type of quantum encoding
        normalize: Whether to normalize features
        random_state: Random seed

    Returns:
        Tuple of (X, y_true, metadata)

    """
    # Generate clustering data
    X, y_true = make_blobs(
        n_samples=n_samples,
        n_features=n_features,
        centers=n_clusters,
        cluster_std=cluster_std,
        center_box=center_box,
        random_state=random_state
    )

    # Normalization
    if normalize:
        if encoding == 'amplitude':
            X = normalize_quantum_data(X, method='l2')
        elif encoding == 'angle':
            scaler = MinMaxScaler(feature_range=(0, 2*np.pi))
            X = scaler.fit_transform(X)
        else:
            scaler = StandardScaler()
            X = scaler.fit_transform(X)

    metadata = {
        'dataset_type': 'synthetic_clustering',
        'n_samples': n_samples,
        'n_features': n_features,
        'n_clusters': n_clusters,
        'cluster_std': cluster_std,
        'encoding': encoding,
        'normalized': normalize
    }

    return X, y_true, metadata

superquantx.datasets.generate_portfolio_data

generate_portfolio_data(n_assets: int = 8, n_scenarios: int = 100, risk_level: float = 0.2, correlation: float = 0.3, normalize: bool = True, random_state: Optional[int] = 42) -> Tuple[np.ndarray, np.ndarray, np.ndarray, Dict[str, Any]]

Generate synthetic portfolio optimization data for quantum finance algorithms.

Parameters:

Name Type Description Default
n_assets int

Number of assets in portfolio

8
n_scenarios int

Number of return scenarios

100
risk_level float

Overall risk level (volatility)

0.2
correlation float

Average correlation between assets

0.3
normalize bool

Whether to normalize returns

True
random_state Optional[int]

Random seed

42

Returns:

Type Description
Tuple[ndarray, ndarray, ndarray, Dict[str, Any]]

Tuple of (returns, covariance_matrix, expected_returns, metadata)

Source code in src/superquantx/datasets/synthetic.py
def generate_portfolio_data(
    n_assets: int = 8,
    n_scenarios: int = 100,
    risk_level: float = 0.2,
    correlation: float = 0.3,
    normalize: bool = True,
    random_state: Optional[int] = 42
) -> Tuple[np.ndarray, np.ndarray, np.ndarray, Dict[str, Any]]:
    """Generate synthetic portfolio optimization data for quantum finance algorithms.

    Args:
        n_assets: Number of assets in portfolio
        n_scenarios: Number of return scenarios
        risk_level: Overall risk level (volatility)
        correlation: Average correlation between assets
        normalize: Whether to normalize returns
        random_state: Random seed

    Returns:
        Tuple of (returns, covariance_matrix, expected_returns, metadata)

    """
    np.random.seed(random_state)

    # Generate expected returns
    expected_returns = np.random.uniform(0.05, 0.20, n_assets)

    # Generate correlation matrix
    correlations = np.full((n_assets, n_assets), correlation)
    np.fill_diagonal(correlations, 1.0)

    # Add some randomness to correlations
    noise = np.random.uniform(-0.1, 0.1, (n_assets, n_assets))
    correlations += (noise + noise.T) / 2
    np.fill_diagonal(correlations, 1.0)

    # Clip correlations to a valid range (note: clipping alone does not
    # guarantee the matrix is positive definite)
    correlations = np.maximum(correlations, -0.99)
    correlations = np.minimum(correlations, 0.99)

    # Generate volatilities
    volatilities = np.random.uniform(
        risk_level * 0.5,
        risk_level * 1.5,
        n_assets
    )

    # Create covariance matrix
    covariance_matrix = np.outer(volatilities, volatilities) * correlations

    # Generate return scenarios
    returns = np.random.multivariate_normal(
        expected_returns,
        covariance_matrix,
        n_scenarios
    )

    if normalize:
        returns = normalize_quantum_data(returns, method='l2')

    metadata = {
        'dataset_type': 'portfolio_optimization',
        'n_assets': n_assets,
        'n_scenarios': n_scenarios,
        'risk_level': risk_level,
        'avg_correlation': correlation,
        'normalized': normalize
    }

    return returns, covariance_matrix, expected_returns, metadata
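The covariance construction above follows Σᵢⱼ = σᵢσⱼρᵢⱼ via `np.outer`. A small sketch of that construction, plus the classic portfolio variance wᵀΣw that downstream optimizers typically minimize (illustrative 3-asset numbers, equal weights assumed):

```python
import numpy as np

volatilities = np.array([0.10, 0.20, 0.30])
correlations = np.array([
    [1.0, 0.3, 0.3],
    [0.3, 1.0, 0.3],
    [0.3, 0.3, 1.0],
])

# Sigma_ij = sigma_i * sigma_j * rho_ij, as in the generator above
covariance = np.outer(volatilities, volatilities) * correlations

# Variance of an equal-weight portfolio: w^T Sigma w
w = np.full(3, 1.0 / 3.0)
port_var = w @ covariance @ w
print(port_var)  # positive scalar; diagonal of covariance holds sigma_i^2
```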

Molecular Datasets

superquantx.datasets.load_molecule

load_molecule(name: str, bond_length: Optional[float] = None, basis: str = 'sto-3g') -> Tuple[Molecule, Dict[str, Any]]

Load a predefined molecule for quantum simulation.

Parameters:

Name Type Description Default
name str

Molecule name ('H2', 'LiH', 'BeH2', 'H2O', 'NH3', 'CH4')

required
bond_length Optional[float]

Custom bond length (if applicable)

None
basis str

Basis set for quantum chemistry calculations

'sto-3g'

Returns:

Type Description
Tuple[Molecule, Dict[str, Any]]

Tuple of (molecule, metadata)

Source code in src/superquantx/datasets/molecular.py
def load_molecule(
    name: str,
    bond_length: Optional[float] = None,
    basis: str = 'sto-3g'
) -> Tuple[Molecule, Dict[str, Any]]:
    """Load a predefined molecule for quantum simulation.

    Args:
        name: Molecule name ('H2', 'LiH', 'BeH2', 'H2O', 'NH3', 'CH4')
        bond_length: Custom bond length (if applicable)
        basis: Basis set for quantum chemistry calculations

    Returns:
        Tuple of (molecule, metadata)

    """
    molecules = {
        'H2': _create_h2_molecule,
        'LiH': _create_lih_molecule,
        'BeH2': _create_beh2_molecule,
        'H2O': _create_h2o_molecule,
        'NH3': _create_nh3_molecule,
        'CH4': _create_ch4_molecule
    }

    if name not in molecules:
        raise ValueError(f"Unknown molecule: {name}. Available: {list(molecules.keys())}")

    molecule, metadata = molecules[name](bond_length, basis)
    return molecule, metadata

superquantx.datasets.load_h2_molecule

load_h2_molecule(bond_length: float = 0.735, basis: str = 'sto-3g') -> Tuple[Molecule, Dict[str, Any]]

Load H2 molecule with specified bond length.

Parameters:

Name Type Description Default
bond_length float

H-H bond length in Angstroms

0.735
basis str

Basis set

'sto-3g'

Returns:

Type Description
Tuple[Molecule, Dict[str, Any]]

Tuple of (molecule, metadata)

Source code in src/superquantx/datasets/molecular.py
def load_h2_molecule(
    bond_length: float = 0.735,
    basis: str = 'sto-3g'
) -> Tuple[Molecule, Dict[str, Any]]:
    """Load H2 molecule with specified bond length.

    Args:
        bond_length: H-H bond length in Angstroms
        basis: Basis set

    Returns:
        Tuple of (molecule, metadata)

    """
    return _create_h2_molecule(bond_length, basis)

superquantx.datasets.load_lih_molecule

load_lih_molecule(bond_length: float = 1.595, basis: str = 'sto-3g') -> Tuple[Molecule, Dict[str, Any]]

Load LiH molecule with specified bond length.

Parameters:

Name Type Description Default
bond_length float

Li-H bond length in Angstroms

1.595
basis str

Basis set

'sto-3g'

Returns:

Type Description
Tuple[Molecule, Dict[str, Any]]

Tuple of (molecule, metadata)

Source code in src/superquantx/datasets/molecular.py
def load_lih_molecule(
    bond_length: float = 1.595,
    basis: str = 'sto-3g'
) -> Tuple[Molecule, Dict[str, Any]]:
    """Load LiH molecule with specified bond length.

    Args:
        bond_length: Li-H bond length in Angstroms
        basis: Basis set

    Returns:
        Tuple of (molecule, metadata)

    """
    return _create_lih_molecule(bond_length, basis)

superquantx.datasets.load_beh2_molecule

load_beh2_molecule(bond_length: float = 1.326, basis: str = 'sto-3g') -> Tuple[Molecule, Dict[str, Any]]

Load BeH2 molecule with specified bond length.

Parameters:

Name Type Description Default
bond_length float

Be-H bond length in Angstroms

1.326
basis str

Basis set

'sto-3g'

Returns:

Type Description
Tuple[Molecule, Dict[str, Any]]

Tuple of (molecule, metadata)

Source code in src/superquantx/datasets/molecular.py
def load_beh2_molecule(
    bond_length: float = 1.326,
    basis: str = 'sto-3g'
) -> Tuple[Molecule, Dict[str, Any]]:
    """Load BeH2 molecule with specified bond length.

    Args:
        bond_length: Be-H bond length in Angstroms
        basis: Basis set

    Returns:
        Tuple of (molecule, metadata)

    """
    return _create_beh2_molecule(bond_length, basis)

Data Preprocessing

superquantx.datasets.QuantumFeatureEncoder

QuantumFeatureEncoder(encoding_type: str = 'amplitude')

Base class for quantum feature encoding strategies.

Quantum feature encoding maps classical data to quantum states, which is crucial for quantum machine learning algorithms.

Source code in src/superquantx/datasets/preprocessing.py
def __init__(self, encoding_type: str = 'amplitude'):
    self.encoding_type = encoding_type
    self.is_fitted = False

Functions

fit

fit(X: ndarray) -> QuantumFeatureEncoder

Fit the encoder to training data.

Source code in src/superquantx/datasets/preprocessing.py
def fit(self, X: np.ndarray) -> 'QuantumFeatureEncoder':
    """Fit the encoder to training data."""
    raise NotImplementedError

transform

transform(X: ndarray) -> np.ndarray

Transform data using the fitted encoder.

Source code in src/superquantx/datasets/preprocessing.py
def transform(self, X: np.ndarray) -> np.ndarray:
    """Transform data using the fitted encoder."""
    raise NotImplementedError

fit_transform

fit_transform(X: ndarray) -> np.ndarray

Fit encoder and transform data in one step.

Source code in src/superquantx/datasets/preprocessing.py
def fit_transform(self, X: np.ndarray) -> np.ndarray:
    """Fit encoder and transform data in one step."""
    return self.fit(X).transform(X)

superquantx.datasets.AmplitudeEncoder

AmplitudeEncoder(normalize_samples: bool = True, normalize_features: bool = False)

Bases: QuantumFeatureEncoder

Amplitude encoding for quantum machine learning.

Encodes classical data as amplitudes of quantum states. Each data point becomes a quantum state |ψ⟩ = Σᵢ xᵢ|i⟩.

The data is normalized so that ||x||₂ = 1 for proper quantum state encoding.

Source code in src/superquantx/datasets/preprocessing.py
def __init__(self, normalize_samples: bool = True, normalize_features: bool = False):
    super().__init__('amplitude')
    self.normalize_samples = normalize_samples
    self.normalize_features = normalize_features
    self.feature_scaler = None

Functions

fit

fit(X: ndarray) -> AmplitudeEncoder

Fit the amplitude encoder.

Parameters:

Name Type Description Default
X ndarray

Training data of shape (n_samples, n_features)

required

Returns:

Type Description
AmplitudeEncoder

Self for method chaining

Source code in src/superquantx/datasets/preprocessing.py
def fit(self, X: np.ndarray) -> 'AmplitudeEncoder':
    """Fit the amplitude encoder.

    Args:
        X: Training data of shape (n_samples, n_features)

    Returns:
        Self for method chaining

    """
    if self.normalize_features:
        self.feature_scaler = StandardScaler()
        self.feature_scaler.fit(X)

    self.is_fitted = True
    return self

transform

transform(X: ndarray) -> np.ndarray

Transform data using amplitude encoding.

Parameters:

Name Type Description Default
X ndarray

Data to transform of shape (n_samples, n_features)

required

Returns:

Type Description
ndarray

Transformed data with proper normalization

Source code in src/superquantx/datasets/preprocessing.py
def transform(self, X: np.ndarray) -> np.ndarray:
    """Transform data using amplitude encoding.

    Args:
        X: Data to transform of shape (n_samples, n_features)

    Returns:
        Transformed data with proper normalization

    """
    if not self.is_fitted:
        raise ValueError("Encoder must be fitted before transform")

    X_encoded = X.copy()

    # Feature normalization
    if self.normalize_features and self.feature_scaler is not None:
        X_encoded = self.feature_scaler.transform(X_encoded)

    # Sample normalization (L2 norm = 1 for each sample)
    if self.normalize_samples:
        norms = np.linalg.norm(X_encoded, axis=1, keepdims=True)
        norms[norms == 0] = 1  # Avoid division by zero
        X_encoded = X_encoded / norms

    return X_encoded
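The sample normalization above is what makes each row a valid amplitude vector (unit L2 norm), and the guard keeps all-zero rows from producing NaNs. The core of `transform` on a toy array:

```python
import numpy as np

X = np.array([
    [3.0, 4.0],   # L2 norm 5 -> encodes to [0.6, 0.8]
    [0.0, 0.0],   # all-zero sample: the guard leaves it unchanged
])

norms = np.linalg.norm(X, axis=1, keepdims=True)
norms[norms == 0] = 1  # avoid division by zero
X_enc = X / norms

# First row is now a unit vector; second row stays all zeros
print(X_enc[0], np.linalg.norm(X_enc[0]))
```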

superquantx.datasets.AngleEncoder

AngleEncoder(angle_range: tuple = (0, 2 * np.pi))

Bases: QuantumFeatureEncoder

Angle encoding for quantum machine learning.

Encodes classical features as rotation angles in quantum circuits. Each feature xᵢ becomes a rotation angle, typically in [0, 2π].

Source code in src/superquantx/datasets/preprocessing.py
def __init__(self, angle_range: tuple = (0, 2 * np.pi)):
    super().__init__('angle')
    self.angle_range = angle_range
    self.scaler = None

Functions

fit

fit(X: ndarray) -> AngleEncoder

Fit the angle encoder.

Parameters:

Name Type Description Default
X ndarray

Training data of shape (n_samples, n_features)

required

Returns:

Type Description
AngleEncoder

Self for method chaining

Source code in src/superquantx/datasets/preprocessing.py
def fit(self, X: np.ndarray) -> 'AngleEncoder':
    """Fit the angle encoder.

    Args:
        X: Training data of shape (n_samples, n_features)

    Returns:
        Self for method chaining

    """
    self.scaler = MinMaxScaler(feature_range=self.angle_range)
    self.scaler.fit(X)
    self.is_fitted = True
    return self

transform

transform(X: ndarray) -> np.ndarray

Transform data using angle encoding.

Parameters:

Name Type Description Default
X ndarray

Data to transform of shape (n_samples, n_features)

required

Returns:

Type Description
ndarray

Data scaled to angle range

Source code in src/superquantx/datasets/preprocessing.py
def transform(self, X: np.ndarray) -> np.ndarray:
    """Transform data using angle encoding.

    Args:
        X: Data to transform of shape (n_samples, n_features)

    Returns:
        Data scaled to angle range

    """
    if not self.is_fitted:
        raise ValueError("Encoder must be fitted before transform")

    return self.scaler.transform(X)
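Under the default `angle_range`, the fitted scaler maps each feature's training minimum to 0 and maximum to 2π. A numpy sketch of the equivalent min-max mapping (mirroring `MinMaxScaler(feature_range=(0, 2*np.pi))` without sklearn):

```python
import numpy as np

X_train = np.array([[0.0, 10.0],
                    [5.0, 20.0],
                    [10.0, 30.0]])

# Per-feature min-max scale into [0, 2*pi]
lo, hi = X_train.min(axis=0), X_train.max(axis=0)
angles = (X_train - lo) / (hi - lo) * (2 * np.pi)

print(angles[0])   # smallest values map to 0
print(angles[-1])  # largest values map to 2*pi
```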

superquantx.datasets.normalize_quantum_data

normalize_quantum_data(X: ndarray, method: Literal['l1', 'l2', 'max'] = 'l2', axis: int = 1) -> np.ndarray

Normalize data for quantum machine learning.

Parameters:

Name Type Description Default
X ndarray

Data to normalize of shape (n_samples, n_features)

required
method Literal['l1', 'l2', 'max']

Normalization method ('l1', 'l2', or 'max')

'l2'
axis int

Axis along which to normalize (1 for samples, 0 for features)

1

Returns:

Type Description
ndarray

Normalized data

Source code in src/superquantx/datasets/preprocessing.py
def normalize_quantum_data(
    X: np.ndarray,
    method: Literal['l1', 'l2', 'max'] = 'l2',
    axis: int = 1
) -> np.ndarray:
    """Normalize data for quantum machine learning.

    Args:
        X: Data to normalize of shape (n_samples, n_features)
        method: Normalization method ('l1', 'l2', or 'max')
        axis: Axis along which to normalize (1 for samples, 0 for features)

    Returns:
        Normalized data

    """
    if method == 'l1':
        norms = np.sum(np.abs(X), axis=axis, keepdims=True)
    elif method == 'l2':
        norms = np.sqrt(np.sum(X ** 2, axis=axis, keepdims=True))
    elif method == 'max':
        norms = np.max(np.abs(X), axis=axis, keepdims=True)
    else:
        raise ValueError(f"Unknown normalization method: {method}")

    # Avoid division by zero
    norms[norms == 0] = 1

    return X / norms
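The three methods divide each sample by a different norm. A quick self-contained check (a reimplementation for illustration, not the library function itself):

```python
import numpy as np

def normalize_rows(X, method='l2'):
    # Mirrors normalize_quantum_data with axis=1 (per-sample normalization)
    if method == 'l1':
        norms = np.sum(np.abs(X), axis=1, keepdims=True)
    elif method == 'l2':
        norms = np.sqrt(np.sum(X ** 2, axis=1, keepdims=True))
    elif method == 'max':
        norms = np.max(np.abs(X), axis=1, keepdims=True)
    else:
        raise ValueError(f"Unknown normalization method: {method}")
    norms[norms == 0] = 1
    return X / norms

X = np.array([[1.0, -2.0, 2.0]])
print(normalize_rows(X, 'l1'))   # |x| sums to 1 per row
print(normalize_rows(X, 'l2'))   # L2 norm is 1 per row
print(normalize_rows(X, 'max'))  # max |x| is 1 per row
```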

Command Line Interface

Main CLI Application

superquantx.cli.main

Main CLI entry point for SuperQuantX.

This module provides the main CLI application using Click framework, with subcommands for various quantum machine learning operations.

Functions

cli

cli(ctx: Context, config: str | None, verbose: bool)

SuperQuantX: Building the Foundation for Quantum Agentic AI

Deploy quantum-enhanced autonomous agents and AI systems in minutes. From quantum circuits to intelligent agents across all quantum platforms.

Examples:

sqx create-agent trading               # Deploy quantum trading agent
sqx run automl --data portfolio        # Quantum AutoML optimization
sqx run qsvm --data iris               # Traditional quantum algorithm
sqx benchmark quantum-vs-classical     # Performance comparison
sqx benchmark --backend all            # Benchmark all backends

Source code in src/superquantx/cli/main.py
@click.group()
@click.version_option(version=__version__, prog_name='SuperQuantX')
@click.option(
    '--config', '-c',
    type=click.Path(exists=True),
    help='Path to configuration file'
)
@click.option(
    '--verbose', '-v',
    is_flag=True,
    help='Enable verbose output'
)
@click.pass_context
def cli(ctx: click.Context, config: str | None, verbose: bool):
    """SuperQuantX: Building the Foundation for Quantum Agentic AI

    Deploy quantum-enhanced autonomous agents and AI systems in minutes.
    From quantum circuits to intelligent agents across all quantum platforms.

    Examples:
        sqx create-agent trading               # Deploy quantum trading agent
        sqx run automl --data portfolio        # Quantum AutoML optimization
        sqx run qsvm --data iris               # Traditional quantum algorithm
        sqx benchmark quantum-vs-classical     # Performance comparison
        sqx benchmark --backend all            # Benchmark all backends

    """
    # Ensure context object exists
    ctx.ensure_object(dict)

    # Store global options
    ctx.obj['config'] = config
    ctx.obj['verbose'] = verbose

    # Load configuration if specified
    if config:
        try:
            sqx.configure(config_file=config)
            if verbose:
                click.echo(f"Loaded configuration from {config}")
        except Exception as e:
            click.echo(f"Error loading configuration: {e}", err=True)
            sys.exit(1)

version

version(ctx: Context)

Show detailed version information.

Source code in src/superquantx/cli/main.py
@cli.command()
@click.pass_context
def version(ctx: click.Context):
    """Show detailed version information."""
    click.echo(f"SuperQuantX version: {sqx.__version__}")
    click.echo(f"Python version: {sys.version}")

    if ctx.obj.get('verbose'):
        click.echo("\nBackend versions:")
        backend_info = sqx.get_backend_info()
        for backend, version in backend_info.items():
            status = version if version else "Not installed"
            click.echo(f"  {backend}: {status}")

shell

shell()

Start interactive SuperQuantX shell.

Source code in src/superquantx/cli/main.py
@cli.command()
def shell():
    """Start interactive SuperQuantX shell."""
    try:
        import matplotlib.pyplot as plt

        # Import common modules for convenience
        import numpy as np
        from IPython import embed

        banner = """
        SuperQuantX Interactive Shell
        =============================

        Available imports:
        - superquantx as sqx
        - numpy as np
        - matplotlib.pyplot as plt

        Try: sqx.algorithms.QuantumSVM()
        """

        embed(banner1=banner, exit_msg="Goodbye!")

    except ImportError:
        click.echo("IPython not available. Install with: pip install ipython")
        sys.exit(1)

examples

examples(output: str)

Generate example scripts and notebooks.

Source code in src/superquantx/cli/main.py
@cli.command()
@click.option(
    '--output', '-o',
    type=click.Path(),
    default='superquantx_examples',
    help='Output directory for examples'
)
def examples(output: str):
    """Generate example scripts and notebooks."""
    output_path = Path(output)
    output_path.mkdir(exist_ok=True)

    # Basic example
    basic_example = '''#!/usr/bin/env python3
"""
Basic SuperQuantX Example: Quantum SVM Classification
"""

import numpy as np
import superquantx as sqx

# Load quantum-adapted Iris dataset
X_train, X_test, y_train, y_test, metadata = sqx.datasets.load_iris_quantum(
    n_features=4, encoding='amplitude'
)

print(f"Dataset: {metadata['dataset_name']}")
print(f"Training samples: {len(X_train)}")
print(f"Test samples: {len(X_test)}")
print(f"Features: {metadata['n_features']}")
print(f"Classes: {metadata['n_classes']}")

# Create quantum SVM with automatic backend selection
qsvm = sqx.QuantumSVM(
    backend='auto',
    feature_map='ZZFeatureMap',
    quantum_kernel=True
)

# Train the model
print("\\nTraining Quantum SVM...")
qsvm.fit(X_train, y_train)

# Make predictions
y_pred = qsvm.predict(X_test)

# Calculate accuracy
accuracy = np.mean(y_pred == y_test)
print(f"Test Accuracy: {accuracy:.4f}")

# Visualize results
sqx.visualize_results({
    'y_true': y_test,
    'y_pred': y_pred,
    'algorithm': 'QuantumSVM'
}, plot_type='classification')
'''

    # VQE example
    vqe_example = '''#!/usr/bin/env python3
"""
SuperQuantX VQE Example: H2 Molecule Ground State
"""

import superquantx as sqx

# Load H2 molecule
molecule, metadata = sqx.datasets.load_h2_molecule(bond_length=0.735)

print(f"Molecule: {molecule.name}")
print(f"Bond length: {metadata['bond_length']} Å")
print(f"Expected ground state energy: {metadata['ground_state_energy']} Ha")

# Create VQE algorithm
vqe = sqx.VQE(
    backend='auto',
    ansatz='UCCSD',
    optimizer='Adam'
)

# Run VQE optimization
print("\\nRunning VQE optimization...")
result = vqe.compute_minimum_eigenvalue(molecule)

print(f"VQE ground state energy: {result['eigenvalue']:.6f} Ha")
print(f"Number of iterations: {result['n_iterations']}")
print(f"Optimization time: {result['optimization_time']:.2f} s")

# Plot optimization history
sqx.plot_optimization_history(result)
'''

    # Write examples
    examples_to_create = [
        ('basic_qsvm.py', basic_example),
        ('vqe_h2.py', vqe_example)
    ]

    for filename, content in examples_to_create:
        example_path = output_path / filename
        with open(example_path, 'w') as f:
            f.write(content)
        click.echo(f"Created: {example_path}")

    click.echo(f"\nExamples created in: {output_path}")
    click.echo("Run with: python basic_qsvm.py")

create_app

create_app()

Create and return the CLI application.

Source code in src/superquantx/cli/main.py
def create_app():
    """Create and return the CLI application."""
    return cli

main

main()

Main entry point for the CLI.

Source code in src/superquantx/cli/main.py
def main():
    """Main entry point for the CLI."""
    try:
        cli()
    except KeyboardInterrupt:
        click.echo("\nInterrupted by user")
        sys.exit(130)
    except Exception as e:
        click.echo(f"Error: {e}", err=True)
        sys.exit(1)

superquantx.cli.create_app

create_app()

Create and return the CLI application.

Source code in src/superquantx/cli/main.py
def create_app():
    """Create and return the CLI application."""
    return cli

CLI Commands

superquantx.cli.run_algorithm

run_algorithm(algorithm: str, data: str, backend: str, output: str | None, config_file: str | None, verbose: bool)

Run a quantum algorithm on specified dataset.

Source code in src/superquantx/cli/commands.py
@click.command('run')
@click.argument('algorithm')
@click.option(
    '--data', '-d',
    default='iris',
    help='Dataset to use (iris, wine, digits, synthetic)'
)
@click.option(
    '--backend', '-b',
    default='auto',
    help='Quantum backend to use'
)
@click.option(
    '--output', '-o',
    type=click.Path(),
    help='Output file for results'
)
@click.option(
    '--config-file', '-c',
    type=click.Path(exists=True),
    help='Algorithm configuration file'
)
@click.option(
    '--verbose', '-v',
    is_flag=True,
    help='Enable verbose output'
)
def run_algorithm(
    algorithm: str,
    data: str,
    backend: str,
    output: str | None,
    config_file: str | None,
    verbose: bool
):
    """Run a quantum algorithm on specified dataset."""
    if verbose:
        click.echo(f"Running {algorithm} on {data} dataset using {backend} backend")

    try:
        # Load dataset
        if data == 'iris':
            X_train, X_test, y_train, y_test, metadata = sqx.datasets.load_iris_quantum()
        elif data == 'wine':
            X_train, X_test, y_train, y_test, metadata = sqx.datasets.load_wine_quantum()
        elif data == 'synthetic':
            X_train, X_test, y_train, y_test, metadata = sqx.datasets.generate_classification_data()
        else:
            click.echo(f"Unknown dataset: {data}", err=True)
            sys.exit(1)

        if verbose:
            click.echo(f"Loaded {metadata['dataset_name']} dataset")
            click.echo(f"Training samples: {len(X_train)}")
            click.echo(f"Features: {metadata['n_features']}")

        # Load configuration
        config = {}
        if config_file:
            with open(config_file) as f:
                config = json.load(f)

        # Create algorithm
        algorithm_map = {
            'qsvm': sqx.QuantumSVM,
            'qaoa': sqx.QAOA,
            'vqe': sqx.VQE,
            'qnn': sqx.QuantumNN,
            'qkmeans': sqx.QuantumKMeans,
            'qpca': sqx.QuantumPCA
        }

        if algorithm.lower() not in algorithm_map:
            click.echo(f"Unknown algorithm: {algorithm}", err=True)
            click.echo(f"Available: {list(algorithm_map.keys())}")
            sys.exit(1)

        AlgorithmClass = algorithm_map[algorithm.lower()]

        # Set backend
        config['backend'] = backend

        alg = AlgorithmClass(**config)

        if verbose:
            click.echo(f"Created {AlgorithmClass.__name__} with config: {config}")

        # Train
        click.echo("Training...")
        start_time = time.time()

        alg.fit(X_train, y_train)

        training_time = time.time() - start_time

        # Predict
        click.echo("Evaluating...")
        y_pred = alg.predict(X_test)

        # Calculate metrics
        accuracy = np.mean(y_pred == y_test)

        # Results
        results = {
            'algorithm': AlgorithmClass.__name__,
            'dataset': metadata['dataset_name'],
            'backend': backend,
            'accuracy': float(accuracy),
            'training_time': training_time,
            'n_train_samples': len(X_train),
            'n_test_samples': len(X_test),
            'n_features': metadata['n_features']
        }

        # Output results
        click.echo("\nResults:")
        click.echo(f"Accuracy: {accuracy:.4f}")
        click.echo(f"Training time: {training_time:.2f}s")

        if output:
            with open(output, 'w') as f:
                json.dump(results, f, indent=2)
            click.echo(f"Results saved to {output}")

        if verbose:
            click.echo(f"Full results: {results}")

    except Exception as e:
        click.echo(f"Error running algorithm: {e}", err=True)
        if verbose:
            import traceback
            traceback.print_exc()
        sys.exit(1)

superquantx.cli.list_algorithms

list_algorithms(category: str, verbose: bool)

List available quantum algorithms.

Source code in src/superquantx/cli/commands.py
@click.command('list-algorithms')
@click.option(
    '--category', '-c',
    type=click.Choice(['all', 'classification', 'regression', 'clustering', 'optimization']),
    default='all',
    help='Algorithm category to list'
)
@click.option(
    '--verbose', '-v',
    is_flag=True,
    help='Show detailed algorithm information'
)
def list_algorithms(category: str, verbose: bool):
    """List available quantum algorithms."""
    algorithms = {
        'classification': [
            ('QuantumSVM', 'Quantum Support Vector Machine'),
            ('QuantumNN', 'Quantum Neural Network'),
            ('HybridClassifier', 'Hybrid Classical-Quantum Classifier')
        ],
        'regression': [
            ('QuantumNN', 'Quantum Neural Network (regression mode)')
        ],
        'clustering': [
            ('QuantumKMeans', 'Quantum K-Means Clustering'),
            ('QuantumPCA', 'Quantum Principal Component Analysis')
        ],
        'optimization': [
            ('QAOA', 'Quantum Approximate Optimization Algorithm'),
            ('VQE', 'Variational Quantum Eigensolver')
        ]
    }

    click.echo("Available Quantum Algorithms")
    click.echo("=" * 40)

    categories_to_show = [category] if category != 'all' else list(algorithms.keys())

    for cat in categories_to_show:
        if cat in algorithms:
            click.echo(f"\n{cat.title()}:")

            for alg_name, description in algorithms[cat]:
                if verbose:
                    click.echo(f"  • {alg_name}")
                    click.echo(f"    {description}")

                    # Try to get algorithm class and show parameters
                    try:
                        getattr(sqx.algorithms, alg_name)
                        # This is a simplified approach - real implementation would
                        # need to inspect the class properly
                        click.echo(f"    Module: superquantx.algorithms.{alg_name}")
                    except AttributeError:
                        pass
                    click.echo()
                else:
                    click.echo(f"  • {alg_name}: {description}")

superquantx.cli.list_backends

list_backends(available_only: bool)

List quantum computing backends.

Source code in src/superquantx/cli/commands.py
@click.command('list-backends')
@click.option(
    '--available-only', '-a',
    is_flag=True,
    help='Show only installed backends'
)
def list_backends(available_only: bool):
    """List quantum computing backends."""
    backends = {
        'Local Simulators': [
            ('simulator', 'SuperQuantX built-in simulator'),
            ('pennylane_local', 'PennyLane local simulators'),
            ('qiskit_aer', 'Qiskit Aer simulator')
        ],
        'Cloud Simulators': [
            ('pennylane_cloud', 'PennyLane cloud devices'),
            ('qiskit_ibm', 'IBM Quantum simulators'),
            ('braket_sv1', 'AWS Braket SV1 simulator'),
            ('cirq_cloud', 'Google Cirq cloud simulators')
        ],
        'Quantum Hardware': [
            ('ibm_quantum', 'IBM Quantum hardware'),
            ('braket_hardware', 'AWS Braket hardware access'),
            ('azure_quantum', 'Azure Quantum hardware'),
            ('rigetti_qcs', 'Rigetti Quantum Cloud Services')
        ]
    }

    backend_info = sqx.get_backend_info()

    click.echo("Quantum Computing Backends")
    click.echo("=" * 40)

    for category, backend_list in backends.items():
        click.echo(f"\n{category}:")

        for backend_name, description in backend_list:
            # Check whether this specific backend is installed
            is_available = bool(backend_info.get(backend_name))

            if available_only and not is_available:
                continue

            status = "✓" if is_available else "✗"
            click.echo(f"  {status} {backend_name}: {description}")

superquantx.cli.benchmark

benchmark(algorithms: str, datasets: str, backends: str, output: str, runs: int)

Benchmark algorithms across datasets and backends.

Source code in src/superquantx/cli/commands.py
@click.command()
@click.option(
    '--algorithms', '-a',
    default='qsvm,qnn',
    help='Comma-separated list of algorithms to benchmark'
)
@click.option(
    '--datasets', '-d',
    default='iris,wine',
    help='Comma-separated list of datasets'
)
@click.option(
    '--backends', '-b',
    default='simulator',
    help='Comma-separated list of backends'
)
@click.option(
    '--output', '-o',
    type=click.Path(),
    default='benchmark_results.json',
    help='Output file for benchmark results'
)
@click.option(
    '--runs', '-r',
    default=3,
    help='Number of runs for averaging'
)
def benchmark(
    algorithms: str,
    datasets: str,
    backends: str,
    output: str,
    runs: int
):
    """Benchmark algorithms across datasets and backends."""
    alg_list = algorithms.split(',')
    dataset_list = datasets.split(',')
    backend_list = backends.split(',')

    click.echo("SuperQuantX Benchmark")
    click.echo("=" * 30)
    click.echo(f"Algorithms: {alg_list}")
    click.echo(f"Datasets: {dataset_list}")
    click.echo(f"Backends: {backend_list}")
    click.echo(f"Runs per combination: {runs}")
    click.echo()

    results = []
    total_combinations = len(alg_list) * len(dataset_list) * len(backend_list)
    current = 0

    for algorithm in alg_list:
        for dataset in dataset_list:
            for backend in backend_list:
                current += 1
                click.echo(f"[{current}/{total_combinations}] {algorithm} on {dataset} with {backend}")

                try:
                    # This would call the actual benchmark function
                    # For now, simulate results
                    result = {
                        'algorithm': algorithm,
                        'dataset': dataset,
                        'backend': backend,
                        'accuracy': np.random.uniform(0.7, 0.95),
                        'execution_time': np.random.uniform(1.0, 10.0),
                        'success': True
                    }
                    results.append(result)

                    click.echo(f"  Accuracy: {result['accuracy']:.4f}")
                    click.echo(f"  Time: {result['execution_time']:.2f}s")

                except Exception as e:
                    click.echo(f"  Failed: {e}")
                    results.append({
                        'algorithm': algorithm,
                        'dataset': dataset,
                        'backend': backend,
                        'success': False,
                        'error': str(e)
                    })

    # Save results
    with open(output, 'w') as f:
        json.dump(results, f, indent=2)

    click.echo(f"\nBenchmark complete. Results saved to {output}")

superquantx.cli.configure

configure(backend: str | None, shots: int | None, seed: int | None, show: bool)

Configure SuperQuantX settings.

Source code in src/superquantx/cli/commands.py
@click.command()
@click.option(
    '--backend', '-b',
    help='Set default backend'
)
@click.option(
    '--shots', '-s',
    type=int,
    help='Set default number of shots'
)
@click.option(
    '--seed',
    type=int,
    help='Set random seed'
)
@click.option(
    '--show',
    is_flag=True,
    help='Show current configuration'
)
def configure(backend: str | None, shots: int | None, seed: int | None, show: bool):
    """Configure SuperQuantX settings."""
    if show:
        click.echo("Current SuperQuantX Configuration:")
        click.echo("=" * 40)
        config = sqx.config
        for key, value in config.items():
            click.echo(f"{key}: {value}")
        return

    # Update configuration
    config_updates = {}
    if backend:
        config_updates['default_backend'] = backend
        click.echo(f"Set default backend to: {backend}")

    if shots is not None:
        config_updates['shots'] = shots
        click.echo(f"Set default shots to: {shots}")

    if seed is not None:
        config_updates['random_seed'] = seed
        click.echo(f"Set random seed to: {seed}")

    if config_updates:
        sqx.configure(**config_updates)
        click.echo("Configuration updated successfully")
    else:
        click.echo("No configuration changes specified")

superquantx.cli.info

info()

Show system and backend information.

Source code in src/superquantx/cli/commands.py
@click.command()
def info():
    """Show system and backend information."""
    click.echo("SuperQuantX System Information")
    click.echo("=" * 40)

    # Basic info
    click.echo(f"Version: {sqx.__version__}")
    click.echo(f"Installation path: {sqx.__file__}")

    # Backend information
    click.echo("\nAvailable Backends:")
    backend_info = sqx.get_backend_info()

    for backend_name, version in backend_info.items():
        status = "✓" if version else "✗"
        version_str = version if version else "Not installed"
        click.echo(f"  {status} {backend_name}: {version_str}")

    # Configuration
    config = sqx.config
    click.echo("\nCurrent Configuration:")
    click.echo(f"  Default backend: {config.get('default_backend', 'auto')}")
    click.echo(f"  Random seed: {config.get('random_seed', 42)}")
    click.echo(f"  Shots: {config.get('shots', 1024)}")

    # Hardware info
    try:
        import psutil
        click.echo("\nSystem Resources:")
        click.echo(f"  CPU cores: {psutil.cpu_count()}")
        click.echo(f"  RAM: {psutil.virtual_memory().total / (1024**3):.1f} GB")
    except ImportError:
        pass

Usage Examples

Optimization Workflow

import superquantx as sqx
import numpy as np

# Create parameterized circuit
backend = sqx.get_backend('simulator')
circuit = backend.create_circuit(4)

# Add parameterized gates
params = sqx.Parameter('theta', shape=(8,))
for i in range(4):
    circuit = backend.add_gate(circuit, 'ry', i, [params[i]])

for i in range(3):
    circuit = backend.add_gate(circuit, 'cx', [i, i+1])

for i in range(4):
    circuit = backend.add_gate(circuit, 'ry', i, [params[i+4]])

# Define cost function
def cost_function(parameters):
    bound_circuit = circuit.bind_parameters({params: parameters})
    result = backend.execute_circuit(bound_circuit)
    # Calculate some cost based on measurement results
    counts = result['counts']
    return -sum(int(bitstring, 2) * count for bitstring, count in counts.items())

# Optimize parameters
from superquantx.utils import optimize_parameters, adam_optimizer

optimal_params = optimize_parameters(
    cost_function=cost_function,
    initial_params=np.random.random(8) * 2 * np.pi,
    optimizer=adam_optimizer(learning_rate=0.01),
    max_iterations=100
)

print(f"Optimal parameters: {optimal_params}")

Visualization Example

from superquantx.utils import visualize_results, plot_optimization_history

# Execute circuit with optimal parameters
final_circuit = circuit.bind_parameters({params: optimal_params})
result = backend.execute_circuit(final_circuit, shots=1000)

# Visualize measurement results
fig = visualize_results(
    result['counts'],
    title='Optimized Circuit Results',
    plot_type='histogram'
)
fig.show()

# Plot optimization history (assumes the optimizer also recorded the
# per-iteration parameters and gradients in optimization_history and
# gradient_history)
history = {
    'iteration': list(range(100)),
    'cost': [cost_function(p) for p in optimization_history],
    'gradient_norm': [np.linalg.norm(g) for g in gradient_history]
}

plot_optimization_history(history, metrics=['cost', 'gradient_norm'])

Benchmarking Example

from superquantx.utils import benchmark_algorithm, compare_algorithms

# Define test problem
def test_classification_problem():
    X_train, X_test, y_train, y_test, metadata = sqx.datasets.load_iris_quantum()
    return X_train[:100], y_train[:100]  # Use a subset for faster benchmarking

# Benchmark single algorithm
qsvm_metrics = benchmark_algorithm(
    algorithm_class=sqx.QuantumSVM,
    problem_generator=test_classification_problem,
    backend='simulator',
    n_trials=5,
    metrics=['accuracy', 'training_time', 'inference_time']
)

print("QSVM Benchmarks:")
for metric in ['accuracy', 'training_time', 'inference_time']:
    print(f"  {metric}: {qsvm_metrics[metric]:.4f} ± {qsvm_metrics[f'{metric}_std']:.4f}")

# Compare multiple algorithms
algorithms = {
    'QSVM': sqx.QuantumSVM,
    'QNN': sqx.QuantumNN,
    'Hybrid': sqx.HybridClassifier
}

comparison = compare_algorithms(
    algorithms=algorithms,
    problem_generator=test_classification_problem,
    backend='simulator',
    n_trials=3
)

# Print comparison table
print("\nAlgorithm Comparison:")
print(f"{'Algorithm':<9} | {'Accuracy':<8} | {'Train (s)':<9} | {'Infer (s)':<9}")
print("-" * 50)
for algo_name, metrics in comparison.items():
    print(f"{algo_name:<9} | {metrics['accuracy']:<8.3f} | "
          f"{metrics['training_time']:<9.3f} | {metrics['inference_time']:<9.3f}")

Feature Mapping Example

from superquantx.utils import QuantumFeatureMap, zz_feature_map

# Create custom feature map
feature_map = QuantumFeatureMap(
    feature_dimension=4,
    reps=2,
    entanglement='linear',
    rotation_gates=['ry', 'rz']
)

# Sample data
X = np.random.random((10, 4))

# Encode data into quantum circuits
encoded_circuits = []
for x in X:
    circuit = feature_map.encode(x, backend=backend)
    encoded_circuits.append(circuit)

# Pre-built ZZ feature map
zz_map = zz_feature_map(feature_dimension=4, reps=2)
zz_circuit = zz_map.encode(X[0], backend=backend)

# Calculate quantum kernel matrix
def quantum_kernel(x1, x2, feature_map, backend):
    """Calculate quantum kernel between two data points."""
    circuit1 = feature_map.encode(x1, backend)
    circuit2 = feature_map.encode(x2, backend)

    # Create kernel circuit: |0⟩ -> U†(x2) U(x1) |0⟩
    kernel_circuit = circuit1.compose(circuit2.inverse())

    # Measure overlap
    result = backend.execute_circuit(kernel_circuit)
    prob_zero = result['counts'].get('0' * kernel_circuit.n_qubits, 0) / sum(result['counts'].values())

    return prob_zero

# Compute kernel matrix
kernel_matrix = np.zeros((len(X), len(X)))
for i in range(len(X)):
    for j in range(i, len(X)):
        kernel_val = quantum_kernel(X[i], X[j], feature_map, backend)
        kernel_matrix[i, j] = kernel_val
        kernel_matrix[j, i] = kernel_val

print(f"Quantum kernel matrix shape: {kernel_matrix.shape}")
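For amplitude-encoded pure states, the overlap that the kernel circuit estimates can be sanity-checked classically. A minimal NumPy sketch of that check (a stand-in for the circuit-based estimate above, not part of the SuperQuantX API):

```python
import numpy as np

def amplitude_encode(x):
    """Normalize a feature vector so it can serve as a quantum state."""
    x = np.asarray(x, dtype=float)
    return x / np.linalg.norm(x)

def classical_kernel(x1, x2):
    """|<psi(x1)|psi(x2)>|^2 -- the quantity the overlap circuit estimates."""
    return float(np.abs(np.dot(amplitude_encode(x1), amplitude_encode(x2))) ** 2)

# Identical inputs give overlap 1; orthogonal inputs give 0
print(classical_kernel([1, 0, 0, 0], [1, 0, 0, 0]))  # -> 1.0
print(classical_kernel([1, 0, 0, 0], [0, 1, 0, 0]))  # -> 0.0
```

Comparing this classical value against the circuit's `prob_zero` estimate on small inputs is a quick way to validate a kernel implementation before scaling up.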

Dataset Usage Example

# Load quantum-adapted datasets (each loader returns train/test splits plus metadata)
X_iris, _, y_iris, _, iris_meta = sqx.datasets.load_iris_quantum()
X_wine, _, y_wine, _, wine_meta = sqx.datasets.load_wine_quantum()

print(f"Iris dataset: {X_iris.shape} features, {len(set(y_iris))} classes")
print(f"Wine dataset: {X_wine.shape} features, {len(set(y_wine))} classes")

# Generate synthetic data
X_synthetic, y_synthetic = sqx.datasets.generate_classification_data(
    n_samples=200,
    n_features=4,
    n_classes=3,
    n_informative=3,
    n_clusters_per_class=1,
    random_state=42
)

# Portfolio data for financial applications
portfolio_data = sqx.datasets.generate_portfolio_data(
    n_assets=10,
    n_time_periods=100,
    correlation_structure='block',
    volatility_regime='changing'
)

print(f"Portfolio returns shape: {portfolio_data['returns'].shape}")
print(f"Risk factors: {list(portfolio_data['risk_factors'].keys())}")

# Molecular datasets for quantum chemistry
h2_molecule, h2_meta = sqx.datasets.load_h2_molecule(bond_length=0.735)
print(f"H2 molecule: {h2_meta['n_qubits']} qubits, {h2_meta['n_orbitals']} orbitals")
print(f"Ground state energy: {h2_meta['ground_state_energy']:.6f} Ha")

Data Preprocessing Example

from superquantx.datasets import QuantumFeatureEncoder, AmplitudeEncoder, AngleEncoder

# Amplitude encoding
amplitude_encoder = AmplitudeEncoder()
X_normalized = amplitude_encoder.fit_transform(X_iris)

print(f"Original range: [{X_iris.min():.3f}, {X_iris.max():.3f}]")
print(f"Encoded range: [{X_normalized.min():.3f}, {X_normalized.max():.3f}]")

# Angle encoding
angle_encoder = AngleEncoder(encoding_type='linear')
X_angles = angle_encoder.fit_transform(X_iris)

print(f"Angle encoding shape: {X_angles.shape}")
print(f"Angle range: [{X_angles.min():.3f}, {X_angles.max():.3f}]")

# Quantum feature encoding with dimensionality reduction
quantum_encoder = QuantumFeatureEncoder(
    target_dimension=8,  # Reduce to 8 features for quantum circuit
    encoding_method='pca',
    normalization='standard'
)

X_quantum = quantum_encoder.fit_transform(X_iris)
print(f"Quantum encoding: {X_iris.shape} -> {X_quantum.shape}")
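The idea behind angle encoding can be reproduced in plain NumPy by min-max rescaling each feature to [0, π] before using it as a rotation angle. A hedged sketch of that rescaling step (not the `AngleEncoder` implementation itself):

```python
import numpy as np

def angle_encode(X, low=0.0, high=np.pi):
    """Min-max rescale each feature column to [low, high] for rotation angles."""
    X = np.asarray(X, dtype=float)
    X_min, X_max = X.min(axis=0), X.max(axis=0)
    span = np.where(X_max > X_min, X_max - X_min, 1.0)  # guard constant columns
    return low + (X - X_min) / span * (high - low)

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
angles = angle_encode(X)
print(angles.min(), angles.max())  # min maps to 0.0, max maps to pi
```

Restricting angles to [0, π] keeps the encoding injective for single-qubit `ry` rotations; a wider range would let distinct feature values map to the same state.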

Command Line Usage

# List available algorithms
superquantx list-algorithms --verbose

# List available backends
superquantx list-backends --available-only

# Run an algorithm from the command line
superquantx run qsvm \
    --data iris \
    --backend simulator \
    --output results.json

# Benchmark algorithms
superquantx benchmark \
    --algorithms qsvm,qnn \
    --datasets iris,wine \
    --backends simulator \
    --runs 5 \
    --output benchmark_results.json

# Configure SuperQuantX defaults
superquantx configure \
    --backend simulator \
    --shots 1024 \
    --seed 42

# Show the current configuration
superquantx configure --show

# Get system and backend information
superquantx info

Quantum Information Analysis

from superquantx.utils import fidelity, trace_distance, entanglement_measure

# Create two quantum states
backend = sqx.get_backend('simulator')

# Bell state
bell_circuit = backend.create_circuit(2)
bell_circuit = backend.add_gate(bell_circuit, 'h', 0)
bell_circuit = backend.add_gate(bell_circuit, 'cx', [0, 1])
bell_state = backend.get_statevector(bell_circuit)

# Random state  
random_circuit = backend.create_circuit(2)
random_circuit = backend.add_gate(random_circuit, 'ry', 0, [np.pi/3])
random_circuit = backend.add_gate(random_circuit, 'rz', 1, [np.pi/4])
random_state = backend.get_statevector(random_circuit)

# Calculate quantum information measures
state_fidelity = fidelity(bell_state, random_state)
trace_dist = trace_distance(bell_state, random_state)
entanglement = entanglement_measure(bell_state)

print(f"Fidelity: {state_fidelity:.4f}")
print(f"Trace distance: {trace_dist:.4f}")
print(f"Entanglement (Bell state): {entanglement:.4f}")
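For pure states, these measures reduce to closed-form expressions that are easy to verify in NumPy. A sketch of the underlying math (assuming statevectors rather than density matrices; not the SuperQuantX implementation):

```python
import numpy as np

def pure_fidelity(psi, phi):
    """F = |<psi|phi>|^2 for normalized pure states."""
    return float(np.abs(np.vdot(psi, phi)) ** 2)

def pure_trace_distance(psi, phi):
    """For pure states, the trace distance is sqrt(1 - F)."""
    return float(np.sqrt(1.0 - pure_fidelity(psi, phi)))

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)     # (|00> + |11>)/sqrt(2)
product = np.array([1, 0, 0, 0], dtype=float)  # |00>
print(pure_fidelity(bell, product))        # -> 0.5
print(pure_trace_distance(bell, product))  # -> sqrt(0.5) ~ 0.7071
```

Mixed states require the general density-matrix formulas, which is where library routines like `fidelity` and `trace_distance` earn their keep.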

Best Practices

Optimization Guidelines

  1. Start Simple: Begin with basic optimizers before advanced methods
  2. Monitor Convergence: Track optimization metrics throughout training
  3. Parameter Initialization: Use informed initial parameter guesses
  4. Early Stopping: Implement convergence criteria to avoid overfitting
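Points 2 and 4 combine naturally into a convergence check on the tracked cost. A generic sketch under simple assumptions (plain gradient descent, not the `optimize_circuit` internals):

```python
import numpy as np

def minimize_with_early_stop(cost_fn, grad_fn, params, lr=0.1,
                             max_iter=500, tol=1e-6, patience=5):
    """Gradient descent that stops once the cost plateaus for `patience` steps."""
    history, stall = [], 0
    for _ in range(max_iter):
        cost = cost_fn(params)
        history.append(cost)
        if len(history) > 1 and abs(history[-2] - cost) < tol:
            stall += 1
            if stall >= patience:  # cost effectively unchanged -> converged
                break
        else:
            stall = 0
        params = params - lr * grad_fn(params)
    return params, history

# Quadratic bowl with minimum at x = 3; converges well before max_iter
params, history = minimize_with_early_stop(
    lambda x: float((x - 3.0) ** 2), lambda x: 2.0 * (x - 3.0), np.array(0.0))
print(len(history), float(params))
```

Keeping the full `history` list also gives you the data needed for `plot_optimization_history`-style convergence plots.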

Visualization Standards

  1. Consistent Styling: Use consistent color schemes and layouts
  2. Clear Labels: Always include axis labels and titles
  3. Error Bars: Show confidence intervals when appropriate
  4. Interactive Plots: Use interactive visualizations for complex data

Benchmarking Protocol

  1. Multiple Trials: Run multiple independent trials for statistical significance
  2. Controlled Environment: Fix random seeds for reproducible results
  3. Baseline Comparison: Always compare against classical baselines
  4. Resource Tracking: Monitor computational resources (time, memory, shots)
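Points 1 and 2 can be followed with a small seeded-trials pattern; `run_trial` below is a hypothetical stand-in for a real train-and-score run:

```python
import numpy as np

def run_trial(seed):
    """Hypothetical stand-in: a real trial would train and score an algorithm."""
    rng = np.random.default_rng(seed)
    return 0.85 + 0.05 * rng.standard_normal()  # simulated accuracy

def benchmark(n_trials=5, base_seed=42):
    """Independent seeded trials -> reproducible mean and std."""
    scores = np.array([run_trial(base_seed + i) for i in range(n_trials)])
    return scores.mean(), scores.std()

mean1, std1 = benchmark()
mean2, std2 = benchmark()
print(mean1 == mean2)  # fixed per-trial seeds make repeated runs identical -> True
```

Deriving each trial's seed from a single `base_seed` keeps trials independent while making the whole benchmark reproducible from one number.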

Data Handling

  1. Preprocessing: Always preprocess data appropriately for quantum algorithms
  2. Validation: Use proper train/validation/test splits
  3. Feature Scaling: Normalize features to appropriate ranges
  4. Dimensionality: Consider quantum-appropriate feature dimensions
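Point 2 takes only a few lines of NumPy; scikit-learn's `train_test_split` does the same with more options, but a seeded sketch makes the mechanics explicit:

```python
import numpy as np

def train_val_test_split(X, y, val_frac=0.2, test_frac=0.2, seed=42):
    """Shuffle once with a fixed seed, then cut into three disjoint index slices."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(len(X) * test_frac)
    n_val = int(len(X) * val_frac)
    test_i, val_i, train_i = idx[:n_test], idx[n_test:n_test + n_val], idx[n_test + n_val:]
    return (X[train_i], y[train_i]), (X[val_i], y[val_i]), (X[test_i], y[test_i])

X = np.arange(100).reshape(50, 2)
y = np.arange(50)
train, val, test = train_val_test_split(X, y)
print(len(train[0]), len(val[0]), len(test[0]))  # -> 30 10 10
```

Fixing the shuffle seed ties in with the benchmarking protocol above: the same split is reproduced on every run.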

For additional examples and advanced usage patterns, see: - Optimization Tutorial - Visualization Guide - Benchmarking Best Practices - Data Preprocessing Guide