Algorithms API Reference

SuperQuantX provides a comprehensive collection of quantum algorithms for machine learning, optimization, cryptography, and simulation. All algorithms support multiple quantum backends and share a unified interface, which makes them straightforward to integrate.
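For orientation, the shared workflow is: construct an algorithm with a backend, then fit, predict, and score. The sketch below assumes the PennyLane backend is available and uses synthetic NumPy data purely for illustration.

import numpy as np
from superquantx.algorithms import QuantumSVM

# Synthetic binary classification data (illustrative only)
rng = np.random.default_rng(seed=42)
X_train = rng.normal(size=(40, 4))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
X_test = rng.normal(size=(10, 4))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)

# Every algorithm exposes the same fit / predict / score interface
model = QuantumSVM(backend='pennylane', shots=1024)
model.fit(X_train, y_train)
predictions = model.predict(X_test)
accuracy = model.score(X_test, y_test)
print(f"Test accuracy: {accuracy:.3f}")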

Base Classes

Base Algorithm

superquantx.algorithms.BaseQuantumAlgorithm

BaseQuantumAlgorithm(backend: str | Any, shots: int = 1024, seed: int | None = None, optimization_level: int = 1, **kwargs)

Bases: ABC

Abstract base class for all quantum machine learning algorithms.

This class defines the common interface that all quantum algorithms must implement, providing consistency across different algorithm types and backends.

Parameters:

Name Type Description Default
backend str | Any

Quantum backend to use for computation

required
shots int

Number of measurement shots (default: 1024)

1024
seed int | None

Random seed for reproducibility

None
optimization_level int

Circuit optimization level (0-3)

1
**kwargs

Additional algorithm-specific parameters

{}

Initialize the quantum algorithm.

Source code in src/superquantx/algorithms/base_algorithm.py
def __init__(
    self,
    backend: str | Any,
    shots: int = 1024,
    seed: int | None = None,
    optimization_level: int = 1,
    **kwargs
) -> None:
    """Initialize the quantum algorithm."""
    self.backend = self._initialize_backend(backend)
    self.shots = shots
    self.seed = seed
    self.optimization_level = optimization_level

    # Algorithm state
    self.is_fitted = False
    self.training_history = []
    self.best_params = None
    self.best_score = None

    # Store additional parameters
    self.algorithm_params = kwargs

    # Performance tracking
    self.execution_times = []
    self.backend_stats = {}

    logger.info(f"Initialized {self.__class__.__name__} with backend {type(self.backend).__name__}")

Functions

fit abstractmethod

fit(X: ndarray, y: ndarray | None = None, **kwargs) -> BaseQuantumAlgorithm

Train the quantum algorithm.

Parameters:

Name Type Description Default
X ndarray

Training data features

required
y ndarray | None

Training data labels (for supervised learning)

None
**kwargs

Additional training parameters

{}

Returns:

Type Description
BaseQuantumAlgorithm

Self for method chaining

Source code in src/superquantx/algorithms/base_algorithm.py
@abstractmethod
def fit(self, X: np.ndarray, y: np.ndarray | None = None, **kwargs) -> 'BaseQuantumAlgorithm':
    """Train the quantum algorithm.

    Args:
        X: Training data features
        y: Training data labels (for supervised learning)
        **kwargs: Additional training parameters

    Returns:
        Self for method chaining

    """
    pass

predict abstractmethod

predict(X: ndarray, **kwargs) -> np.ndarray

Make predictions using the trained algorithm.

Parameters:

Name Type Description Default
X ndarray

Input data for prediction

required
**kwargs

Additional prediction parameters

{}

Returns:

Type Description
ndarray

Predictions array

Source code in src/superquantx/algorithms/base_algorithm.py
@abstractmethod
def predict(self, X: np.ndarray, **kwargs) -> np.ndarray:
    """Make predictions using the trained algorithm.

    Args:
        X: Input data for prediction
        **kwargs: Additional prediction parameters

    Returns:
        Predictions array

    """
    pass
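To add a new algorithm, subclass BaseQuantumAlgorithm and implement the two abstract methods. The sketch below uses a purely classical nearest-centroid rule as a stand-in, only to show the required interface; it is not a SuperQuantX algorithm.

import numpy as np
from superquantx.algorithms import BaseQuantumAlgorithm

class CentroidClassifier(BaseQuantumAlgorithm):
    """Hypothetical subclass illustrating the required interface."""

    def fit(self, X: np.ndarray, y: np.ndarray | None = None, **kwargs) -> 'CentroidClassifier':
        # Store one centroid per class label
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        self.is_fitted = True
        return self  # enables method chaining

    def predict(self, X: np.ndarray, **kwargs) -> np.ndarray:
        # Assign each sample to its nearest class centroid
        distances = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=-1)
        return self.classes_[np.argmin(distances, axis=1)]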

score

score(X: ndarray, y: ndarray, **kwargs) -> float

Compute the algorithm's score on the given test data.

Parameters:

Name Type Description Default
X ndarray

Test data features

required
y ndarray

True test data labels

required
**kwargs

Additional scoring parameters

{}

Returns:

Type Description
float

Algorithm score (higher is better)

Source code in src/superquantx/algorithms/base_algorithm.py
def score(self, X: np.ndarray, y: np.ndarray, **kwargs) -> float:
    """Compute the algorithm's score on the given test data.

    Args:
        X: Test data features
        y: True test data labels
        **kwargs: Additional scoring parameters

    Returns:
        Algorithm score (higher is better)

    """
    predictions = self.predict(X, **kwargs)
    return self._compute_score(predictions, y)

get_params

get_params(deep: bool = True) -> dict[str, Any]

Get algorithm parameters.

Parameters:

Name Type Description Default
deep bool

Whether to return deep copy of parameters

True

Returns:

Type Description
dict[str, Any]

Parameter dictionary

Source code in src/superquantx/algorithms/base_algorithm.py
def get_params(self, deep: bool = True) -> dict[str, Any]:
    """Get algorithm parameters.

    Args:
        deep: Whether to return deep copy of parameters

    Returns:
        Parameter dictionary

    """
    params = {
        'backend': self.backend,
        'shots': self.shots,
        'seed': self.seed,
        'optimization_level': self.optimization_level,
    }
    params.update(self.algorithm_params)
    return params

set_params

set_params(**params) -> BaseQuantumAlgorithm

Set algorithm parameters.

Parameters:

Name Type Description Default
**params

Parameters to set

{}

Returns:

Type Description
BaseQuantumAlgorithm

Self for method chaining

Source code in src/superquantx/algorithms/base_algorithm.py
def set_params(self, **params) -> 'BaseQuantumAlgorithm':
    """Set algorithm parameters.

    Args:
        **params: Parameters to set

    Returns:
        Self for method chaining

    """
    for key, value in params.items():
        if hasattr(self, key):
            setattr(self, key, value)
        else:
            self.algorithm_params[key] = value
    return self
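A short usage note: keys that match an existing attribute are set directly, while unknown keys are routed into algorithm_params. The snippet below assumes model is any algorithm instance (for example the QuantumSVM constructed earlier).

# 'model' is any BaseQuantumAlgorithm instance (illustrative)
params = model.get_params()
print(params['shots'], params['optimization_level'])

# 'shots' is a known attribute; 'my_custom_option' falls through to algorithm_params
model.set_params(shots=4096, my_custom_option=True)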

save_model

save_model(filepath: str) -> None

Save the trained model to disk.

Parameters:

Name Type Description Default
filepath str

Path where to save the model

required
Source code in src/superquantx/algorithms/base_algorithm.py
def save_model(self, filepath: str) -> None:
    """Save the trained model to disk.

    Args:
        filepath: Path where to save the model

    """
    import pickle

    if not self.is_fitted:
        logger.warning("Model is not fitted yet. Saving unfitted model.")

    model_data = {
        'class': self.__class__.__name__,
        'params': self.get_params(),
        'is_fitted': self.is_fitted,
        'training_history': self.training_history,
        'best_params': self.best_params,
        'best_score': self.best_score,
    }

    with open(filepath, 'wb') as f:
        pickle.dump(model_data, f)

    logger.info(f"Model saved to {filepath}")

load_model classmethod

load_model(filepath: str) -> BaseQuantumAlgorithm

Load a trained model from disk.

Parameters:

Name Type Description Default
filepath str

Path to the saved model

required

Returns:

Type Description
BaseQuantumAlgorithm

Loaded algorithm instance

Source code in src/superquantx/algorithms/base_algorithm.py
@classmethod
def load_model(cls, filepath: str) -> 'BaseQuantumAlgorithm':
    """Load a trained model from disk.

    Args:
        filepath: Path to the saved model

    Returns:
        Loaded algorithm instance

    """
    import pickle

    with open(filepath, 'rb') as f:
        model_data = pickle.load(f)

    # Create new instance with saved parameters
    instance = cls(**model_data['params'])
    instance.is_fitted = model_data['is_fitted']
    instance.training_history = model_data['training_history']
    instance.best_params = model_data['best_params']
    instance.best_score = model_data['best_score']

    logger.info(f"Model loaded from {filepath}")
    return instance
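A save/load round trip might look as follows. Note that, as the source above shows, only constructor parameters and training metadata are persisted; learned internals (for example a fitted kernel matrix) are not restored, so a loaded model may need refitting before prediction.

# Illustrative file path; any writable location works
model.save_model('qsvm_model.pkl')

restored = QuantumSVM.load_model('qsvm_model.pkl')
print(restored.is_fitted, restored.get_params()['shots'])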

benchmark

benchmark(X: ndarray, y: ndarray | None = None, runs: int = 5) -> dict[str, Any]

Benchmark algorithm performance.

Parameters:

Name Type Description Default
X ndarray

Test data

required
y ndarray | None

Test labels (optional)

None
runs int

Number of benchmark runs

5

Returns:

Type Description
dict[str, Any]

Benchmark results dictionary

Source code in src/superquantx/algorithms/base_algorithm.py
def benchmark(self, X: np.ndarray, y: np.ndarray | None = None, runs: int = 5) -> dict[str, Any]:
    """Benchmark algorithm performance.

    Args:
        X: Test data
        y: Test labels (optional)
        runs: Number of benchmark runs

    Returns:
        Benchmark results dictionary

    """
    execution_times = []
    scores = []

    for i in range(runs):
        start_time = time.time()

        if y is not None:
            score = self.score(X, y)
            scores.append(score)
        else:
            self.predict(X)

        execution_time = time.time() - start_time
        execution_times.append(execution_time)

    results = {
        'execution_times': execution_times,
        'mean_execution_time': np.mean(execution_times),
        'std_execution_time': np.std(execution_times),
        'min_execution_time': np.min(execution_times),
        'max_execution_time': np.max(execution_times),
    }

    if scores:
        results.update({
            'scores': scores,
            'mean_score': np.mean(scores),
            'std_score': np.std(scores),
            'min_score': np.min(scores),
            'max_score': np.max(scores),
        })

    return results
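For example, assuming a fitted model and held-out test data:

# X_test / y_test assumed available from earlier
results = model.benchmark(X_test, y_test, runs=3)
print(f"mean time:  {results['mean_execution_time']:.3f}s ± {results['std_execution_time']:.3f}s")
print(f"mean score: {results['mean_score']:.3f}")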

get_circuit_info

get_circuit_info() -> dict[str, Any]

Get information about the quantum circuit.

Returns:

Type Description
dict[str, Any]

Circuit information dictionary

Source code in src/superquantx/algorithms/base_algorithm.py
def get_circuit_info(self) -> dict[str, Any]:
    """Get information about the quantum circuit.

    Returns:
        Circuit information dictionary

    """
    return {
        'backend': type(self.backend).__name__,
        'shots': self.shots,
        'optimization_level': self.optimization_level,
    }

reset

reset() -> None

Reset algorithm to untrained state.

Source code in src/superquantx/algorithms/base_algorithm.py
def reset(self) -> None:
    """Reset algorithm to untrained state."""
    self.is_fitted = False
    self.training_history = []
    self.best_params = None
    self.best_score = None
    self.execution_times = []
    self.backend_stats = {}

    logger.info(f"Reset {self.__class__.__name__} to untrained state")

Quantum Result

superquantx.algorithms.QuantumResult dataclass

QuantumResult(result: Any, metadata: dict[str, Any], execution_time: float, backend_info: dict[str, Any], error: str | None = None, intermediate_results: dict[str, Any] | None = None)

Container for quantum algorithm results.

This class provides a standardized way to return results from quantum algorithms, including the main result, metadata, and performance metrics.

Attributes:

Name Type Description
result Any

The main algorithm result

metadata dict[str, Any]

Additional information about the computation

execution_time float

Time taken to execute the algorithm (seconds)

backend_info dict[str, Any]

Information about the backend used

error str | None

Error information if computation failed

intermediate_results dict[str, Any] | None

Optional intermediate results for debugging
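A minimal construction example, with illustrative values:

from superquantx.algorithms import QuantumResult

# Field values below are illustrative only
result = QuantumResult(
    result={'counts': {'00': 510, '11': 514}},
    metadata={'algorithm': 'bell_state_demo'},
    execution_time=0.42,
    backend_info={'backend': 'simulator', 'shots': 1024},
)

if result.error is None:
    print(result.result, f"took {result.execution_time:.2f}s")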

Machine Learning Algorithms

Quantum Support Vector Machine

superquantx.algorithms.QuantumSVM

QuantumSVM(backend: str | Any, feature_map: str = 'ZZFeatureMap', feature_map_reps: int = 2, C: float = 1.0, gamma: float | None = None, quantum_kernel: Callable | None = None, shots: int = 1024, normalize_data: bool = True, **kwargs)

Bases: SupervisedQuantumAlgorithm

Quantum Support Vector Machine for classification.

This implementation uses quantum feature maps to transform data into a high-dimensional Hilbert space where linear separation may become possible. The quantum kernel is computed using quantum circuits.

The algorithm works by:

1. Encoding classical data into quantum states using feature maps
2. Computing quantum kernels between data points
3. Training a classical SVM using the quantum kernel matrix
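Conceptually, each kernel entry is the fidelity between two feature-mapped states, K[i, j] = |⟨φ(x_i)|φ(x_j)⟩|². The NumPy sketch below illustrates that quantity for a toy single-qubit angle encoding; it is not SuperQuantX's feature-map implementation, which evaluates the overlaps on the configured quantum backend.

import numpy as np

def feature_state(x: float) -> np.ndarray:
    # Toy single-qubit angle encoding: |phi(x)> = RY(x)|0>
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def fidelity_kernel(X: np.ndarray) -> np.ndarray:
    # K[i, j] = |<phi(x_i)|phi(x_j)>|^2 -- the quantity a quantum
    # kernel circuit estimates from measurement statistics
    states = np.array([feature_state(x) for x in X])
    overlaps = states @ states.T
    return np.abs(overlaps) ** 2

K = fidelity_kernel(np.array([0.1, 0.8, 2.0]))
print(K.round(3))  # symmetric, with ones on the diagonal

The resulting Gram matrix plays the role of the precomputed kernel that fit hands to the classical SVC, as shown in the source below.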

Parameters:

Name Type Description Default
backend str | Any

Quantum backend for circuit execution

required
feature_map str

Type of quantum feature map ('ZZFeatureMap', 'PauliFeatureMap', etc.)

'ZZFeatureMap'
feature_map_reps int

Number of repetitions in the feature map

2
C float

Regularization parameter for SVM

1.0
gamma float | None

Kernel coefficient (for RBF-like quantum kernels)

None
quantum_kernel Callable | None

Custom quantum kernel function

None
shots int

Number of measurement shots

1024
**kwargs

Additional parameters

{}
Example

qsvm = QuantumSVM(backend='pennylane', feature_map='ZZFeatureMap')
qsvm.fit(X_train, y_train)
predictions = qsvm.predict(X_test)
accuracy = qsvm.score(X_test, y_test)

Source code in src/superquantx/algorithms/quantum_svm.py
def __init__(
    self,
    backend: str | Any,
    feature_map: str = 'ZZFeatureMap',
    feature_map_reps: int = 2,
    C: float = 1.0,
    gamma: float | None = None,
    quantum_kernel: Callable | None = None,
    shots: int = 1024,
    normalize_data: bool = True,
    **kwargs
) -> None:
    super().__init__(backend=backend, shots=shots, **kwargs)

    self.feature_map = feature_map
    self.feature_map_reps = feature_map_reps
    self.C = C
    self.gamma = gamma
    self.quantum_kernel = quantum_kernel
    self.normalize_data = normalize_data

    # Classical components
    self.svm = None
    self.scaler = StandardScaler() if normalize_data else None

    # Quantum components
    self.kernel_matrix_ = None
    self.feature_map_circuit_ = None

    # Training data storage (needed for kernel computation)
    self.X_train_ = None

    logger.info(f"Initialized QuantumSVM with feature_map={feature_map}, reps={feature_map_reps}")

Functions

fit

fit(X: ndarray, y: ndarray, **kwargs) -> QuantumSVM

Train the quantum SVM.

Parameters:

Name Type Description Default
X ndarray

Training data features

required
y ndarray

Training data labels

required
**kwargs

Additional training parameters

{}

Returns:

Type Description
QuantumSVM

Self for method chaining

Source code in src/superquantx/algorithms/quantum_svm.py
def fit(self, X: np.ndarray, y: np.ndarray, **kwargs) -> 'QuantumSVM':
    """Train the quantum SVM.

    Args:
        X: Training data features
        y: Training data labels
        **kwargs: Additional training parameters

    Returns:
        Self for method chaining

    """
    logger.info(f"Training QuantumSVM on {X.shape[0]} samples with {X.shape[1]} features")

    # Validate and preprocess data
    super().fit(X, y, **kwargs)

    if self.normalize_data:
        X = self.scaler.fit_transform(X)

    self.X_train_ = X.copy()

    # Create quantum feature map
    self.feature_map_circuit_ = self._create_feature_map(X.shape[1])

    # Compute quantum kernel matrix
    logger.info("Computing quantum kernel matrix...")
    self.kernel_matrix_ = self._compute_quantum_kernel(X)

    # Train classical SVM with quantum kernel
    self.svm = SVC(
        kernel='precomputed',
        C=self.C,
    )

    self.svm.fit(self.kernel_matrix_, y)
    self.is_fitted = True

    # Compute training accuracy
    train_predictions = self.predict(X)
    train_accuracy = accuracy_score(y, train_predictions)

    self.training_history.append({
        'train_accuracy': train_accuracy,
        'n_support_vectors': self.svm.n_support_,
        'kernel_matrix_shape': self.kernel_matrix_.shape,
    })

    logger.info(f"Training completed. Accuracy: {train_accuracy:.3f}, "
               f"Support vectors: {sum(self.svm.n_support_)}")

    return self

predict

predict(X: ndarray, **kwargs) -> np.ndarray

Make predictions using the trained quantum SVM.

Parameters:

Name Type Description Default
X ndarray

Input data for prediction

required
**kwargs

Additional prediction parameters

{}

Returns:

Type Description
ndarray

Predicted labels

Source code in src/superquantx/algorithms/quantum_svm.py
def predict(self, X: np.ndarray, **kwargs) -> np.ndarray:
    """Make predictions using the trained quantum SVM.

    Args:
        X: Input data for prediction
        **kwargs: Additional prediction parameters

    Returns:
        Predicted labels

    """
    if not self.is_fitted:
        raise ValueError("Model must be fitted before making predictions")

    if self.normalize_data:
        X = self.scaler.transform(X)

    # Compute kernel matrix between test data and training data
    test_kernel = self._compute_quantum_kernel(X, self.X_train_)

    # Make predictions using the trained SVM
    predictions = self.svm.predict(test_kernel)

    return predictions

predict_proba

predict_proba(X: ndarray, **kwargs) -> np.ndarray

Predict class probabilities.

Parameters:

Name Type Description Default
X ndarray

Input data for prediction

required
**kwargs

Additional parameters

{}

Returns:

Type Description
ndarray

Predicted class probabilities

Source code in src/superquantx/algorithms/quantum_svm.py
def predict_proba(self, X: np.ndarray, **kwargs) -> np.ndarray:
    """Predict class probabilities.

    Args:
        X: Input data for prediction
        **kwargs: Additional parameters

    Returns:
        Predicted class probabilities

    """
    if not self.is_fitted:
        raise ValueError("Model must be fitted before making predictions")

    if self.normalize_data:
        X = self.scaler.transform(X)

    test_kernel = self._compute_quantum_kernel(X, self.X_train_)

    # Need to recreate SVM with probability=True for probabilities
    if not hasattr(self.svm, 'predict_proba'):
        logger.warning("Probability prediction not available, returning decision scores")
        return self.decision_function(X)

    return self.svm.predict_proba(test_kernel)

decision_function

decision_function(X: ndarray) -> np.ndarray

Compute decision function values.

Parameters:

Name Type Description Default
X ndarray

Input data

required

Returns:

Type Description
ndarray

Decision function values

Source code in src/superquantx/algorithms/quantum_svm.py
def decision_function(self, X: np.ndarray) -> np.ndarray:
    """Compute decision function values.

    Args:
        X: Input data

    Returns:
        Decision function values

    """
    if not self.is_fitted:
        raise ValueError("Model must be fitted before computing decision function")

    if self.normalize_data:
        X = self.scaler.transform(X)

    test_kernel = self._compute_quantum_kernel(X, self.X_train_)
    return self.svm.decision_function(test_kernel)

get_support_vectors

get_support_vectors() -> np.ndarray

Get support vectors from the trained model.

Source code in src/superquantx/algorithms/quantum_svm.py
def get_support_vectors(self) -> np.ndarray:
    """Get support vectors from the trained model."""
    if not self.is_fitted:
        raise ValueError("Model must be fitted to get support vectors")

    return self.X_train_[self.svm.support_]

get_quantum_kernel_matrix

get_quantum_kernel_matrix(X: ndarray | None = None) -> np.ndarray

Get the quantum kernel matrix.

Parameters:

Name Type Description Default
X ndarray | None

Data to compute kernel matrix for (default: training data)

None

Returns:

Type Description
ndarray

Quantum kernel matrix

Source code in src/superquantx/algorithms/quantum_svm.py
def get_quantum_kernel_matrix(self, X: np.ndarray | None = None) -> np.ndarray:
    """Get the quantum kernel matrix.

    Args:
        X: Data to compute kernel matrix for (default: training data)

    Returns:
        Quantum kernel matrix

    """
    if X is None:
        if self.kernel_matrix_ is None:
            raise ValueError("No kernel matrix available")
        return self.kernel_matrix_
    else:
        if self.normalize_data:
            X = self.scaler.transform(X)
        return self._compute_quantum_kernel(X)

analyze_kernel

analyze_kernel() -> dict[str, Any]

Analyze properties of the quantum kernel.

Returns:

Type Description
dict[str, Any]

Dictionary with kernel analysis results

Source code in src/superquantx/algorithms/quantum_svm.py
def analyze_kernel(self) -> dict[str, Any]:
    """Analyze properties of the quantum kernel.

    Returns:
        Dictionary with kernel analysis results

    """
    if self.kernel_matrix_ is None:
        raise ValueError("Model must be fitted to analyze kernel")

    K = self.kernel_matrix_

    # Compute kernel properties
    eigenvalues = np.linalg.eigvals(K)

    analysis = {
        'kernel_shape': K.shape,
        'kernel_rank': np.linalg.matrix_rank(K),
        'condition_number': np.linalg.cond(K),
        'trace': np.trace(K),
        'frobenius_norm': np.linalg.norm(K, 'fro'),
        'eigenvalue_stats': {
            'mean': np.mean(eigenvalues),
            'std': np.std(eigenvalues),
            'min': np.min(eigenvalues),
            'max': np.max(eigenvalues),
        },
        'is_positive_definite': np.all(eigenvalues > 0),
    }

    return analysis
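For instance, a quick check of kernel conditioning after fitting (training data assumed available):

# X_train / y_train assumed available
qsvm = QuantumSVM(backend='pennylane')
qsvm.fit(X_train, y_train)

report = qsvm.analyze_kernel()
print('rank:', report['kernel_rank'], 'condition number:', report['condition_number'])
print('positive definite:', report['is_positive_definite'])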

get_params

get_params(deep: bool = True) -> dict[str, Any]

Get algorithm parameters.

Source code in src/superquantx/algorithms/quantum_svm.py
def get_params(self, deep: bool = True) -> dict[str, Any]:
    """Get algorithm parameters."""
    params = super().get_params(deep)
    params.update({
        'feature_map': self.feature_map,
        'feature_map_reps': self.feature_map_reps,
        'C': self.C,
        'gamma': self.gamma,
        'normalize_data': self.normalize_data,
    })
    return params

set_params

set_params(**params) -> QuantumSVM

Set algorithm parameters.

Source code in src/superquantx/algorithms/quantum_svm.py
def set_params(self, **params) -> 'QuantumSVM':
    """Set algorithm parameters."""
    if self.is_fitted and any(key in params for key in
                             ['feature_map', 'feature_map_reps', 'C', 'gamma']):
        logger.warning("Changing core parameters requires refitting the model")
        self.is_fitted = False

    return super().set_params(**params)

Quantum Neural Network

superquantx.algorithms.QuantumNN

QuantumNN(backend: str | Any, n_layers: int = 3, architecture: str = 'hybrid', encoding: str = 'angle', entanglement: str = 'linear', measurement: str = 'expectation', optimizer: str = 'adam', learning_rate: float = 0.01, batch_size: int = 32, max_epochs: int = 100, shots: int = 1024, task_type: str = 'classification', **kwargs)

Bases: SupervisedQuantumAlgorithm

Quantum Neural Network for classification and regression.

This implementation uses parameterized quantum circuits as neural network layers, with classical optimization to train the quantum parameters.

The network can be configured with different architectures:

- Pure quantum: Only quantum layers
- Hybrid: Combination of quantum and classical layers
- Variational: Variational quantum circuits with measurement
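As an illustration of what a single quantum layer can look like with angle encoding, linear entanglement, and expectation-value measurement, here is a generic PennyLane sketch. It is not the circuit SuperQuantX builds internally and assumes PennyLane is installed.

import numpy as np
import pennylane as qml

n_qubits = 3
dev = qml.device('default.qubit', wires=n_qubits)

@qml.qnode(dev)
def layer(x, weights):
    # Generic illustration -- not SuperQuantX's internal circuit
    # Angle encoding: one feature rotated onto each qubit
    for i in range(n_qubits):
        qml.RY(x[i], wires=i)
    # Trainable rotations followed by linear entanglement
    for i in range(n_qubits):
        qml.RY(weights[i], wires=i)
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    # Expectation-value measurement on each qubit
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

x = np.array([0.1, 0.5, 0.9])
weights = np.random.uniform(0, np.pi, size=n_qubits)
print(layer(x, weights))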

Parameters:

Name Type Description Default
backend str | Any

Quantum backend for circuit execution

required
n_layers int

Number of quantum layers

3
architecture str

Network architecture ('pure', 'hybrid', 'variational')

'hybrid'
encoding str

Data encoding method ('amplitude', 'angle', 'basis')

'angle'
entanglement str

Entanglement pattern ('linear', 'circular', 'full')

'linear'
measurement str

Measurement strategy ('expectation', 'sampling', 'statevector')

'expectation'
optimizer str

Classical optimizer for training

'adam'
learning_rate float

Learning rate for training

0.01
batch_size int

Training batch size

32
shots int

Number of measurement shots

1024
**kwargs

Additional parameters

{}
Example

qnn = QuantumNN(backend='pennylane', n_layers=3, architecture='hybrid')
qnn.fit(X_train, y_train)
predictions = qnn.predict(X_test)
accuracy = qnn.score(X_test, y_test)

Source code in src/superquantx/algorithms/quantum_nn.py
def __init__(
    self,
    backend: str | Any,
    n_layers: int = 3,
    architecture: str = 'hybrid',
    encoding: str = 'angle',
    entanglement: str = 'linear',
    measurement: str = 'expectation',
    optimizer: str = 'adam',
    learning_rate: float = 0.01,
    batch_size: int = 32,
    max_epochs: int = 100,
    shots: int = 1024,
    task_type: str = 'classification',
    **kwargs
) -> None:
    super().__init__(backend=backend, shots=shots, **kwargs)

    self.n_layers = n_layers
    self.architecture = architecture
    self.encoding = encoding
    self.entanglement = entanglement
    self.measurement = measurement
    self.optimizer_name = optimizer
    self.learning_rate = learning_rate
    self.batch_size = batch_size
    self.max_epochs = max_epochs
    self.task_type = task_type

    # Network components
    self.quantum_layers = []
    self.classical_layers = []
    self.n_qubits = None
    self.n_params = None

    # Training components
    self.weights = None
    self.encoder = LabelEncoder() if task_type == 'classification' else None
    self.scaler = StandardScaler()
    self.optimizer = None

    # Training history
    self.loss_history = []
    self.accuracy_history = []

    logger.info(f"Initialized QuantumNN with {n_layers} layers, architecture={architecture}")

Functions

fit

fit(X: ndarray, y: ndarray, **kwargs) -> QuantumNN

Train the quantum neural network.

Parameters:

Name Type Description Default
X ndarray

Training data features

required
y ndarray

Training data labels

required
**kwargs

Additional training parameters

{}

Returns:

Type Description
QuantumNN

Self for method chaining

Source code in src/superquantx/algorithms/quantum_nn.py
def fit(self, X: np.ndarray, y: np.ndarray, **kwargs) -> 'QuantumNN':
    """Train the quantum neural network.

    Args:
        X: Training data features
        y: Training data labels
        **kwargs: Additional training parameters

    Returns:
        Self for method chaining

    """
    logger.info(f"Training QuantumNN on {X.shape[0]} samples with {X.shape[1]} features")

    # Validate and preprocess data
    super().fit(X, y, **kwargs)

    # Set number of classes for classification BEFORE building network
    if self.task_type == 'classification':
        unique_classes = np.unique(y)
        self.n_classes_ = len(unique_classes)
        logger.info(f"Detected {self.n_classes_} classes for classification: {unique_classes}")

    # Scale features
    X = self.scaler.fit_transform(X)

    # Encode labels for classification after setting n_classes_
    if self.task_type == 'classification' and self.encoder:
        y = self.encoder.fit_transform(y)

    # Determine network architecture
    self.n_qubits = self._determine_qubits(X.shape[1])
    self._build_network()

    # Reset training history
    self.loss_history = []
    self.accuracy_history = []

    logger.info(f"Training network with {self.n_qubits} qubits and {self.n_params} parameters")

    # Training loop
    for epoch in range(self.max_epochs):
        epoch_losses = []
        epoch_accuracies = []

        # Mini-batch training
        for i in range(0, len(X), self.batch_size):
            X_batch = X[i:i + self.batch_size]
            y_batch = y[i:i + self.batch_size]

            # Forward pass
            y_pred = self._forward_pass(X_batch, self.weights)

            # Compute loss
            loss = self._compute_loss(y_batch, y_pred)
            epoch_losses.append(loss)

            # Compute accuracy for classification
            if self.task_type == 'classification':
                y_pred_labels = np.argmax(y_pred, axis=1)
                accuracy = accuracy_score(y_batch, y_pred_labels)
                epoch_accuracies.append(accuracy)

            # Compute gradients and update weights
            gradients = self._compute_gradients(X_batch, y_batch, self.weights)
            self._update_weights(gradients)

        # Record epoch statistics
        epoch_loss = np.mean(epoch_losses)
        self.loss_history.append(epoch_loss)

        if self.task_type == 'classification' and epoch_accuracies:
            epoch_accuracy = np.mean(epoch_accuracies)
            self.accuracy_history.append(epoch_accuracy)

            if epoch % 10 == 0:
                logger.info(f"Epoch {epoch}: Loss = {epoch_loss:.4f}, Accuracy = {epoch_accuracy:.4f}")
        else:
            if epoch % 10 == 0:
                logger.info(f"Epoch {epoch}: Loss = {epoch_loss:.4f}")

        # Early stopping check
        if len(self.loss_history) > 10 and self._check_early_stopping():
            logger.info(f"Early stopping at epoch {epoch}")
            break

    self.is_fitted = True

    # Final training statistics
    final_loss = self.loss_history[-1]
    logger.info(f"Training completed. Final loss: {final_loss:.4f}")

    return self

predict

predict(X: ndarray, **kwargs) -> np.ndarray

Make predictions using the trained quantum neural network.

Parameters:

Name Type Description Default
X ndarray

Input data for prediction

required
**kwargs

Additional prediction parameters

{}

Returns:

Type Description
ndarray

Predicted labels or values

Source code in src/superquantx/algorithms/quantum_nn.py
def predict(self, X: np.ndarray, **kwargs) -> np.ndarray:
    """Make predictions using the trained quantum neural network.

    Args:
        X: Input data for prediction
        **kwargs: Additional prediction parameters

    Returns:
        Predicted labels or values

    """
    if not self.is_fitted:
        raise ValueError("Model must be fitted before making predictions")

    # Scale features
    X = self.scaler.transform(X)

    # Forward pass
    y_pred = self._forward_pass(X, self.weights)

    if self.task_type == 'classification':
        # Return class labels
        predictions = np.argmax(y_pred, axis=1)
        if self.encoder:
            predictions = self.encoder.inverse_transform(predictions)
        return predictions
    else:
        # Return continuous values for regression
        return y_pred.flatten()

predict_proba

predict_proba(X: ndarray, **kwargs) -> np.ndarray

Predict class probabilities.

Parameters:

Name Type Description Default
X ndarray

Input data for prediction

required
**kwargs

Additional parameters

{}

Returns:

Type Description
ndarray

Predicted class probabilities

Source code in src/superquantx/algorithms/quantum_nn.py
def predict_proba(self, X: np.ndarray, **kwargs) -> np.ndarray:
    """Predict class probabilities.

    Args:
        X: Input data for prediction
        **kwargs: Additional parameters

    Returns:
        Predicted class probabilities

    """
    if self.task_type != 'classification':
        raise ValueError("predict_proba only available for classification tasks")

    if not self.is_fitted:
        raise ValueError("Model must be fitted before making predictions")

    # Scale features
    X = self.scaler.transform(X)

    # Forward pass returns probabilities for classification
    return self._forward_pass(X, self.weights)

get_circuit_depth

get_circuit_depth() -> int

Get the depth of the quantum circuit.

Source code in src/superquantx/algorithms/quantum_nn.py
def get_circuit_depth(self) -> int:
    """Get the depth of the quantum circuit."""
    if hasattr(self.backend, 'get_circuit_depth'):
        return self.backend.get_circuit_depth(self.quantum_layers)
    else:
        return self.n_layers * 2  # Estimate

get_training_history

get_training_history() -> dict[str, list[float]]

Get training history.

Source code in src/superquantx/algorithms/quantum_nn.py
def get_training_history(self) -> dict[str, list[float]]:
    """Get training history."""
    history = {'loss': self.loss_history}
    if self.accuracy_history:
        history['accuracy'] = self.accuracy_history
    return history
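For example, to inspect convergence after training (training data assumed available):

# X_train / y_train assumed available
qnn = QuantumNN(backend='pennylane', n_layers=2, max_epochs=20)
qnn.fit(X_train, y_train)

history = qnn.get_training_history()
print(f"final loss: {history['loss'][-1]:.4f}")
if 'accuracy' in history:
    print(f"final accuracy: {history['accuracy'][-1]:.4f}")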

analyze_expressivity

analyze_expressivity() -> dict[str, Any]

Analyze the expressivity of the quantum neural network.

Source code in src/superquantx/algorithms/quantum_nn.py
def analyze_expressivity(self) -> dict[str, Any]:
    """Analyze the expressivity of the quantum neural network."""
    analysis = {
        'n_qubits': self.n_qubits,
        'n_layers': self.n_layers,
        'n_parameters': self.n_params,
        'circuit_depth': self.get_circuit_depth(),
        'entanglement_pattern': self.entanglement,
        'encoding_method': self.encoding,
    }

    # Estimate expressivity metrics
    analysis.update({
        'parameter_space_dimension': self.n_params,
        'hilbert_space_dimension': 2**self.n_qubits,
        'expressivity_ratio': self.n_params / (2**self.n_qubits),
    })

    return analysis

get_params

get_params(deep: bool = True) -> dict[str, Any]

Get quantum neural network parameters.

Source code in src/superquantx/algorithms/quantum_nn.py
def get_params(self, deep: bool = True) -> dict[str, Any]:
    """Get quantum neural network parameters."""
    params = super().get_params(deep)
    params.update({
        'n_layers': self.n_layers,
        'architecture': self.architecture,
        'encoding': self.encoding,
        'entanglement': self.entanglement,
        'measurement': self.measurement,
        'optimizer': self.optimizer_name,
        'learning_rate': self.learning_rate,
        'batch_size': self.batch_size,
        'max_epochs': self.max_epochs,
        'task_type': self.task_type,
    })
    return params

set_params

set_params(**params) -> QuantumNN

Set quantum neural network parameters.

Source code in src/superquantx/algorithms/quantum_nn.py
def set_params(self, **params) -> 'QuantumNN':
    """Set quantum neural network parameters."""
    if self.is_fitted and any(key in params for key in
                             ['n_layers', 'architecture', 'encoding', 'entanglement']):
        logger.warning("Changing core parameters requires refitting the model")
        self.is_fitted = False

    return super().set_params(**params)

superquantx.algorithms.QuantumNeuralNetwork module-attribute

QuantumNeuralNetwork = QuantumNN

Hybrid Classifier

superquantx.algorithms.HybridClassifier

HybridClassifier(backend: str | Any, hybrid_mode: str = 'ensemble', quantum_algorithms: list[str] | None = None, classical_algorithms: list[str] | None = None, quantum_weight: float = 0.5, feature_selection: bool = False, meta_learner: str = 'logistic_regression', shots: int = 1024, normalize_data: bool = True, **kwargs)

Bases: SupervisedQuantumAlgorithm

Hybrid Classical-Quantum Classifier.

This classifier combines classical and quantum machine learning algorithms to leverage the strengths of both approaches. It can operate in different modes:

- Ensemble: Combines predictions from multiple quantum and classical models
- Sequential: Uses quantum features as input to classical models
- Voting: Majority voting among quantum and classical predictions
- Stacking: Uses a meta-learner to combine quantum and classical predictions
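The ensemble mode, for instance, takes a weighted average of class probabilities, with quantum_weight controlling the balance. A standalone NumPy sketch of that combination, using made-up probability arrays:

import numpy as np

quantum_weight = 0.5

# Hypothetical class probabilities from one quantum and one classical model
quantum_proba = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
classical_proba = np.array([[0.7, 0.3], [0.5, 0.5], [0.1, 0.9]])

# Weighted average, as in the 'ensemble' branch of predict_proba below
combined = quantum_weight * quantum_proba + (1 - quantum_weight) * classical_proba
predictions = np.argmax(combined, axis=1)
print(combined)
print(predictions)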

Parameters:

Name Type Description Default
backend str | Any

Quantum backend for quantum components

required
hybrid_mode str

Mode of operation ('ensemble', 'sequential', 'voting', 'stacking')

'ensemble'
quantum_algorithms list[str] | None

List of quantum algorithms to include

None
classical_algorithms list[str] | None

List of classical algorithms to include

None
quantum_weight float

Weight for quantum predictions (0-1)

0.5
feature_selection bool

Whether to use quantum feature selection

False
meta_learner str

Meta-learning algorithm for stacking mode

'logistic_regression'
shots int

Number of measurement shots

1024
**kwargs

Additional parameters

{}
Example

hybrid = HybridClassifier(
    backend='pennylane',
    hybrid_mode='ensemble',
    quantum_algorithms=['quantum_svm', 'quantum_nn'],
    classical_algorithms=['random_forest', 'svm']
)
hybrid.fit(X_train, y_train)
predictions = hybrid.predict(X_test)

Source code in src/superquantx/algorithms/hybrid_classifier.py
def __init__(
    self,
    backend: str | Any,
    hybrid_mode: str = 'ensemble',
    quantum_algorithms: list[str] | None = None,
    classical_algorithms: list[str] | None = None,
    quantum_weight: float = 0.5,
    feature_selection: bool = False,
    meta_learner: str = 'logistic_regression',
    shots: int = 1024,
    normalize_data: bool = True,
    **kwargs
) -> None:
    super().__init__(backend=backend, shots=shots, **kwargs)

    self.hybrid_mode = hybrid_mode
    self.quantum_algorithms = quantum_algorithms or ['quantum_svm']
    self.classical_algorithms = classical_algorithms or ['random_forest']
    self.quantum_weight = quantum_weight
    self.feature_selection = feature_selection
    self.meta_learner_name = meta_learner
    self.normalize_data = normalize_data

    # Initialize models
    self.quantum_models = {}
    self.classical_models = {}
    self.meta_learner = None
    self.feature_selector = None

    # Data preprocessing
    self.scaler = StandardScaler() if normalize_data else None
    self.label_encoder = LabelEncoder()

    # Model performance tracking
    self.quantum_scores = {}
    self.classical_scores = {}
    self.hybrid_score = None
    self.feature_importance_ = None

    self._initialize_models()

    logger.info(f"Initialized HybridClassifier with mode={hybrid_mode}")
    logger.info(f"Quantum algorithms: {self.quantum_algorithms}")
    logger.info(f"Classical algorithms: {self.classical_algorithms}")

Functions

fit

fit(X: ndarray, y: ndarray, **kwargs) -> HybridClassifier

Train the hybrid classifier.

Parameters:

Name Type Description Default
X ndarray

Training data features

required
y ndarray

Training data labels

required
**kwargs

Additional training parameters

{}

Returns:

Type Description
HybridClassifier

Self for method chaining

Source code in src/superquantx/algorithms/hybrid_classifier.py
def fit(self, X: np.ndarray, y: np.ndarray, **kwargs) -> 'HybridClassifier':
    """Train the hybrid classifier.

    Args:
        X: Training data features
        y: Training data labels
        **kwargs: Additional training parameters

    Returns:
        Self for method chaining

    """
    logger.info(f"Training HybridClassifier on {X.shape[0]} samples with {X.shape[1]} features")

    # Validate and preprocess data
    super().fit(X, y, **kwargs)

    # Normalize features
    if self.normalize_data:
        X = self.scaler.fit_transform(X)

    # Encode labels
    y_encoded = self.label_encoder.fit_transform(y)

    # Apply feature selection
    X_selected = self._apply_feature_selection(X, y_encoded)

    # Train quantum models
    self.quantum_scores = self._train_quantum_models(X_selected, y_encoded)

    # Train classical models
    self.classical_scores = self._train_classical_models(X_selected, y_encoded)

    # Train meta-learner for stacking mode
    if self.hybrid_mode == 'stacking' and self.meta_learner is not None:
        logger.info("Training meta-learner for stacking")

        # Get base model predictions for meta-training
        quantum_preds, classical_preds = self._get_base_predictions(X_selected)

        meta_features = []
        if len(quantum_preds) > 0:
            for pred_proba in quantum_preds:
                meta_features.append(pred_proba)
        if len(classical_preds) > 0:
            for pred_proba in classical_preds:
                meta_features.append(pred_proba)

        if meta_features:
            meta_X = np.concatenate(meta_features, axis=1)
            self.meta_learner.fit(meta_X, y_encoded)

    # Train sequential model if needed
    if self.hybrid_mode == 'sequential':
        # Retrain classical models with quantum features
        quantum_features = []

        for name, model in self.quantum_models.items():
            try:
                if hasattr(model, 'transform'):
                    features = model.transform(X_selected)
                elif hasattr(model, 'decision_function'):
                    features = model.decision_function(X_selected)
                    if len(features.shape) == 1:
                        features = features.reshape(-1, 1)
                else:
                    features = model.predict_proba(X_selected)

                quantum_features.append(features)

            except Exception as e:
                logger.error(f"Failed to extract features from {name}: {e}")

        if quantum_features:
            quantum_feature_matrix = np.concatenate(quantum_features, axis=1)

            # Retrain classical models with quantum features
            for name, model in self.classical_models.items():
                try:
                    model.fit(quantum_feature_matrix, y_encoded)
                except Exception as e:
                    logger.error(f"Failed to retrain {name} with quantum features: {e}")

    self.is_fitted = True

    # Compute hybrid performance
    predictions = self.predict(X)
    self.hybrid_score = accuracy_score(y, predictions)

    logger.info(f"Hybrid classifier training completed. Accuracy: {self.hybrid_score:.4f}")

    return self

predict

predict(X: ndarray, **kwargs) -> np.ndarray

Make predictions using the hybrid classifier.

Parameters:

Name Type Description Default
X ndarray

Input data for prediction

required
**kwargs

Additional prediction parameters

{}

Returns:

Type Description
ndarray

Predicted labels

Source code in src/superquantx/algorithms/hybrid_classifier.py
def predict(self, X: np.ndarray, **kwargs) -> np.ndarray:
    """Make predictions using the hybrid classifier.

    Args:
        X: Input data for prediction
        **kwargs: Additional prediction parameters

    Returns:
        Predicted labels

    """
    if not self.is_fitted:
        raise ValueError("Model must be fitted before making predictions")

    # Normalize features
    if self.normalize_data:
        X = self.scaler.transform(X)

    # Apply feature selection
    X_selected = self._apply_feature_selection(X)

    # Make predictions based on hybrid mode
    if self.hybrid_mode == 'ensemble':
        predictions = self._ensemble_predict(X_selected)
    elif self.hybrid_mode == 'voting':
        predictions = self._voting_predict(X_selected)
    elif self.hybrid_mode == 'sequential':
        predictions = self._sequential_predict(X_selected)
    elif self.hybrid_mode == 'stacking':
        predictions = self._stacking_predict(X_selected)
    else:
        raise ValueError(f"Unknown hybrid mode: {self.hybrid_mode}")

    # Decode labels
    return self.label_encoder.inverse_transform(predictions)

predict_proba

predict_proba(X: ndarray, **kwargs) -> np.ndarray

Predict class probabilities.

Parameters:

Name Type Description Default
X ndarray

Input data for prediction

required
**kwargs

Additional parameters

{}

Returns:

Type Description
ndarray

Predicted class probabilities

Source code in src/superquantx/algorithms/hybrid_classifier.py
def predict_proba(self, X: np.ndarray, **kwargs) -> np.ndarray:
    """Predict class probabilities.

    Args:
        X: Input data for prediction
        **kwargs: Additional parameters

    Returns:
        Predicted class probabilities

    """
    if not self.is_fitted:
        raise ValueError("Model must be fitted before making predictions")

    # Normalize features
    if self.normalize_data:
        X = self.scaler.transform(X)

    # Apply feature selection
    X_selected = self._apply_feature_selection(X)

    # Get base predictions
    quantum_preds, classical_preds = self._get_base_predictions(X_selected)

    if self.hybrid_mode == 'ensemble':
        # Weighted average of probabilities
        combined_pred = np.zeros((X.shape[0], self.n_classes_))

        if len(quantum_preds) > 0:
            quantum_avg = np.mean(quantum_preds, axis=0)
            combined_pred += self.quantum_weight * quantum_avg

        if len(classical_preds) > 0:
            classical_avg = np.mean(classical_preds, axis=0)
            combined_pred += (1 - self.quantum_weight) * classical_avg

        return combined_pred

    elif self.hybrid_mode == 'stacking' and self.meta_learner is not None:
        # Use meta-learner probabilities
        meta_features = []

        if len(quantum_preds) > 0:
            for pred_proba in quantum_preds:
                meta_features.append(pred_proba)
        if len(classical_preds) > 0:
            for pred_proba in classical_preds:
                meta_features.append(pred_proba)

        if meta_features:
            meta_X = np.concatenate(meta_features, axis=1)
            if hasattr(self.meta_learner, 'predict_proba'):
                return self.meta_learner.predict_proba(meta_X)

    # Fallback: convert predictions to probabilities
    predictions = self.predict(X)
    pred_encoded = self.label_encoder.transform(predictions)
    prob_matrix = np.zeros((len(predictions), self.n_classes_))
    prob_matrix[np.arange(len(predictions)), pred_encoded] = 1.0

    return prob_matrix

get_model_performance

get_model_performance() -> dict[str, Any]

Get detailed performance metrics for all models.

Source code in src/superquantx/algorithms/hybrid_classifier.py
def get_model_performance(self) -> dict[str, Any]:
    """Get detailed performance metrics for all models."""
    performance = {
        'quantum_scores': self.quantum_scores.copy(),
        'classical_scores': self.classical_scores.copy(),
        'hybrid_score': self.hybrid_score,
        'hybrid_mode': self.hybrid_mode,
    }

    # Add quantum advantage metrics
    if self.quantum_scores and self.classical_scores:
        best_quantum = max(self.quantum_scores.values()) if self.quantum_scores else 0
        best_classical = max(self.classical_scores.values()) if self.classical_scores else 0

        performance.update({
            'best_quantum_score': best_quantum,
            'best_classical_score': best_classical,
            'quantum_advantage': best_quantum - best_classical,
            'hybrid_vs_best_quantum': self.hybrid_score - best_quantum if self.hybrid_score else 0,
            'hybrid_vs_best_classical': self.hybrid_score - best_classical if self.hybrid_score else 0,
        })

    return performance

get_feature_importance

get_feature_importance() -> np.ndarray | None

Get feature importance from feature selection.

Source code in src/superquantx/algorithms/hybrid_classifier.py
def get_feature_importance(self) -> np.ndarray | None:
    """Get feature importance from feature selection."""
    return self.feature_importance_

cross_validate

cross_validate(X: ndarray, y: ndarray, cv: int = 5) -> dict[str, Any]

Perform cross-validation on the hybrid classifier.

Source code in src/superquantx/algorithms/hybrid_classifier.py
def cross_validate(self, X: np.ndarray, y: np.ndarray, cv: int = 5) -> dict[str, Any]:
    """Perform cross-validation on the hybrid classifier."""
    if not self.is_fitted:
        raise ValueError("Model must be fitted before cross-validation")

    try:
        scores = cross_val_score(self, X, y, cv=cv, scoring='accuracy')

        return {
            'cv_scores': scores.tolist(),
            'cv_mean': np.mean(scores),
            'cv_std': np.std(scores),
            'cv_min': np.min(scores),
            'cv_max': np.max(scores),
        }

    except Exception as e:
        logger.error(f"Cross-validation failed: {e}")
        return {'error': str(e)}

get_params

get_params(deep: bool = True) -> dict[str, Any]

Get hybrid classifier parameters.

Source code in src/superquantx/algorithms/hybrid_classifier.py
def get_params(self, deep: bool = True) -> dict[str, Any]:
    """Get hybrid classifier parameters."""
    params = super().get_params(deep)
    params.update({
        'hybrid_mode': self.hybrid_mode,
        'quantum_algorithms': self.quantum_algorithms,
        'classical_algorithms': self.classical_algorithms,
        'quantum_weight': self.quantum_weight,
        'feature_selection': self.feature_selection,
        'meta_learner': self.meta_learner_name,
        'normalize_data': self.normalize_data,
    })
    return params

set_params

set_params(**params) -> HybridClassifier

Set hybrid classifier parameters.

Source code in src/superquantx/algorithms/hybrid_classifier.py
def set_params(self, **params) -> 'HybridClassifier':
    """Set hybrid classifier parameters."""
    if self.is_fitted and any(key in params for key in
                             ['hybrid_mode', 'quantum_algorithms', 'classical_algorithms']):
        logger.warning("Changing core parameters requires refitting the model")
        self.is_fitted = False

    return super().set_params(**params)

Quantum Principal Component Analysis

superquantx.algorithms.QuantumPCA

QuantumPCA(backend: str | Any, n_components: int = 2, method: str = 'vqe', encoding: str = 'amplitude', max_iterations: int = 1000, tolerance: float = 1e-06, shots: int = 1024, classical_fallback: bool = True, normalize_data: bool = True, **kwargs)

Bases: UnsupervisedQuantumAlgorithm

Quantum Principal Component Analysis for dimensionality reduction.

This implementation uses quantum algorithms to perform PCA, potentially offering exponential speedup for certain types of data matrices.

The algorithm can use different quantum approaches:

- Quantum Matrix Inversion: For density matrix diagonalization
- Variational Quantum Eigensolver: For finding principal eigenvectors
- Quantum Phase Estimation: For eigenvalue extraction
- Quantum Singular Value Decomposition: Direct SVD approach
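All of these approaches target the same linear-algebra object: the dominant eigenpairs of the unit-trace covariance (density) matrix built from the centered data. The classical reference computation below shows that target quantity; it is what the quantum subroutines estimate, not the quantum circuit itself.

import numpy as np

# Illustrative random data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))

# Center the data and build a unit-trace covariance ("density") matrix
X_centered = X - X.mean(axis=0)
cov = X_centered.T @ X_centered / X.shape[0]
rho = cov / np.trace(cov)

# The quantum methods estimate the dominant eigenpairs of rho; classically:
eigenvalues, eigenvectors = np.linalg.eigh(rho)
order = np.argsort(eigenvalues)[::-1]
components = eigenvectors[:, order[:2]].T           # two principal components
explained_ratio = eigenvalues[order[:2]] / eigenvalues.sum()
print(explained_ratio)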

Parameters:

Name Type Description Default
backend str | Any

Quantum backend for circuit execution

required
n_components int

Number of principal components to extract

2
method str

Quantum method ('vqe', 'phase_estimation', 'matrix_inversion', 'qsvd')

'vqe'
encoding str

Data encoding method ('amplitude', 'dense', 'sparse')

'amplitude'
max_iterations int

Maximum iterations for variational methods

1000
tolerance float

Convergence tolerance

1e-06
shots int

Number of measurement shots

1024
classical_fallback bool

Use classical PCA if quantum fails

True
**kwargs

Additional parameters

{}
Example

qpca = QuantumPCA(backend='pennylane', n_components=3, method='vqe')
qpca.fit(X_train)
X_reduced = qpca.transform(X_test)
X_reconstructed = qpca.inverse_transform(X_reduced)

Source code in src/superquantx/algorithms/quantum_pca.py
def __init__(
    self,
    backend: str | Any,
    n_components: int = 2,
    method: str = 'vqe',
    encoding: str = 'amplitude',
    max_iterations: int = 1000,
    tolerance: float = 1e-6,
    shots: int = 1024,
    classical_fallback: bool = True,
    normalize_data: bool = True,
    **kwargs
) -> None:
    super().__init__(backend=backend, shots=shots, **kwargs)

    self.n_components = n_components
    self.method = method
    self.encoding = encoding
    self.max_iterations = max_iterations
    self.tolerance = tolerance
    self.classical_fallback = classical_fallback
    self.normalize_data = normalize_data

    # PCA components
    self.components_ = None
    self.eigenvalues_ = None
    self.mean_ = None
    self.explained_variance_ = None
    self.explained_variance_ratio_ = None

    # Quantum-specific attributes
    self.density_matrix_ = None
    self.quantum_state_ = None
    self.n_qubits = None

    # Classical components for fallback/comparison
    self.scaler = StandardScaler() if normalize_data else None
    self.classical_pca = PCA(n_components=n_components)

    # Method-specific parameters
    self.vqe_params = None
    self.convergence_history = []

    logger.info(f"Initialized QuantumPCA with method={method}, n_components={n_components}")

Functions

fit

fit(X: ndarray, y: ndarray | None = None, **kwargs) -> QuantumPCA

Fit quantum PCA to the data.

Parameters:

Name Type Description Default
X ndarray

Training data

required
y ndarray | None

Ignored (unsupervised learning)

None
**kwargs

Additional fitting parameters

{}

Returns:

Type Description
QuantumPCA

Self for method chaining

Source code in src/superquantx/algorithms/quantum_pca.py
def fit(self, X: np.ndarray, y: np.ndarray | None = None, **kwargs) -> 'QuantumPCA':
    """Fit quantum PCA to the data.

    Args:
        X: Training data
        y: Ignored (unsupervised learning)
        **kwargs: Additional fitting parameters

    Returns:
        Self for method chaining

    """
    logger.info(f"Fitting QuantumPCA to data of shape {X.shape}")

    # Validate and preprocess data
    super().fit(X, y, **kwargs)

    # Prepare data
    X_processed = self._prepare_data_matrix(X)

    # Determine quantum circuit size
    self.n_qubits = self._determine_qubits(X.shape[1])

    # Choose quantum method
    if self.method == 'vqe':
        density_matrix = self._create_density_matrix(X_processed)
        eigenvalues, eigenvectors = self._quantum_eigensolver_vqe(density_matrix)
    elif self.method == 'phase_estimation':
        density_matrix = self._create_density_matrix(X_processed)
        eigenvalues, eigenvectors = self._quantum_phase_estimation(density_matrix)
    elif self.method == 'matrix_inversion':
        density_matrix = self._create_density_matrix(X_processed)
        eigenvalues, eigenvectors = self._quantum_matrix_inversion(density_matrix)
    elif self.method == 'qsvd':
        eigenvalues, eigenvectors = self._quantum_svd(X_processed)
    else:
        raise ValueError(f"Unknown quantum PCA method: {self.method}")

    # Store results
    self.eigenvalues_ = eigenvalues
    self.components_ = eigenvectors.T  # Store as rows
    self.explained_variance_ = eigenvalues

    # Calculate explained variance ratio
    total_variance = np.sum(eigenvalues) if np.sum(eigenvalues) > 0 else 1.0
    self.explained_variance_ratio_ = eigenvalues / total_variance

    # Fit classical PCA for comparison/fallback
    if self.classical_fallback:
        try:
            self.classical_pca.fit(X_processed)
        except Exception as e:
            logger.warning(f"Classical PCA fitting failed: {e}")

    self.is_fitted = True

    logger.info(f"Quantum PCA completed. Explained variance ratio: {self.explained_variance_ratio_}")

    return self

transform

transform(X: ndarray) -> np.ndarray

Transform data to lower dimensional space.

Parameters:

Name Type Description Default
X ndarray

Data to transform

required

Returns:

Type Description
ndarray

Transformed data

Source code in src/superquantx/algorithms/quantum_pca.py
def transform(self, X: np.ndarray) -> np.ndarray:
    """Transform data to lower dimensional space.

    Args:
        X: Data to transform

    Returns:
        Transformed data

    """
    if not self.is_fitted:
        raise ValueError("QuantumPCA must be fitted before transform")

    # Preprocess data
    if self.normalize_data:
        X = self.scaler.transform(X)

    # Center data
    X_centered = X - self.mean_

    # Project onto principal components
    X_transformed = X_centered @ self.components_.T

    return X_transformed

inverse_transform

inverse_transform(X_transformed: ndarray) -> np.ndarray

Reconstruct data from lower dimensional representation.

Parameters:

Name Type Description Default
X_transformed ndarray

Transformed data

required

Returns:

Type Description
ndarray

Reconstructed data

Source code in src/superquantx/algorithms/quantum_pca.py
def inverse_transform(self, X_transformed: np.ndarray) -> np.ndarray:
    """Reconstruct data from lower dimensional representation.

    Args:
        X_transformed: Transformed data

    Returns:
        Reconstructed data

    """
    if not self.is_fitted:
        raise ValueError("QuantumPCA must be fitted before inverse_transform")

    # Reconstruct in original space
    X_reconstructed = X_transformed @ self.components_

    # Add back the mean
    X_reconstructed += self.mean_

    # Inverse scaling if applied
    if self.normalize_data:
        X_reconstructed = self.scaler.inverse_transform(X_reconstructed)

    return X_reconstructed

fit_transform

fit_transform(X: ndarray, y: ndarray | None = None) -> np.ndarray

Fit PCA and transform data in one step.

Source code in src/superquantx/algorithms/quantum_pca.py
def fit_transform(self, X: np.ndarray, y: np.ndarray | None = None) -> np.ndarray:
    """Fit PCA and transform data in one step."""
    return self.fit(X, y).transform(X)

predict

predict(X: ndarray, **kwargs) -> np.ndarray

Transform data (alias for transform method).

Source code in src/superquantx/algorithms/quantum_pca.py
def predict(self, X: np.ndarray, **kwargs) -> np.ndarray:
    """Transform data (alias for transform method)."""
    return self.transform(X)

get_quantum_advantage_metrics

get_quantum_advantage_metrics() -> dict[str, Any]

Analyze potential quantum advantage.

Source code in src/superquantx/algorithms/quantum_pca.py
def get_quantum_advantage_metrics(self) -> dict[str, Any]:
    """Analyze potential quantum advantage."""
    if not self.is_fitted:
        raise ValueError("Must fit model first")

    n_features = self.components_.shape[1]

    metrics = {
        'data_dimension': n_features,
        'reduced_dimension': self.n_components,
        'compression_ratio': n_features / self.n_components,
        'quantum_circuit_qubits': self.n_qubits,
        'quantum_vs_classical_qubits': self.n_qubits / int(np.ceil(np.log2(n_features))),
    }

    # Potential speedup estimates (theoretical)
    classical_complexity = n_features ** 3  # O(d^3) for eigendecomposition
    quantum_complexity = self.n_qubits ** 2 * np.log(n_features)  # Estimated quantum complexity

    metrics.update({
        'classical_complexity_estimate': classical_complexity,
        'quantum_complexity_estimate': quantum_complexity,
        'theoretical_speedup': classical_complexity / quantum_complexity if quantum_complexity > 0 else 1,
    })

    return metrics
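
As a plain-arithmetic illustration of the estimates above (no library call involved): with d = 64 features amplitude-encoded on 6 qubits,

import numpy as np

n_features = 64   # d
n_qubits = 6      # ceil(log2(d)) for amplitude encoding

classical = n_features ** 3                   # O(d^3) eigendecomposition
quantum = n_qubits ** 2 * np.log(n_features)  # heuristic estimate used above

print(classical, quantum, classical / quantum)
# 262144, ~149.7, ~1751x theoretical speedup (a rough, optimistic figure)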

compare_with_classical

compare_with_classical(X: ndarray) -> dict[str, Any]

Compare quantum PCA results with classical PCA.

Source code in src/superquantx/algorithms/quantum_pca.py
def compare_with_classical(self, X: np.ndarray) -> dict[str, Any]:
    """Compare quantum PCA results with classical PCA."""
    if not self.is_fitted or not hasattr(self.classical_pca, 'components_'):
        raise ValueError("Both quantum and classical PCA must be fitted")

    # Transform data with both methods
    X_quantum = self.transform(X)
    X_classical = self.classical_pca.transform(X - self.mean_)

    # Compute reconstruction errors
    X_quantum_reconstructed = self.inverse_transform(X_quantum)
    X_classical_reconstructed = self.classical_pca.inverse_transform(X_classical) + self.mean_

    quantum_error = np.mean((X - X_quantum_reconstructed) ** 2)
    classical_error = np.mean((X - X_classical_reconstructed) ** 2)

    # Compare explained variance
    quantum_var_ratio = np.sum(self.explained_variance_ratio_)
    classical_var_ratio = np.sum(self.classical_pca.explained_variance_ratio_)

    # Component similarity (using absolute cosine similarity)
    component_similarities = []
    min_components = min(self.n_components, len(self.classical_pca.components_))

    for i in range(min_components):
        # Cosine similarity between components
        cos_sim = np.abs(np.dot(self.components_[i], self.classical_pca.components_[i]))
        cos_sim /= (np.linalg.norm(self.components_[i]) * np.linalg.norm(self.classical_pca.components_[i]))
        component_similarities.append(cos_sim)

    return {
        'quantum_reconstruction_error': quantum_error,
        'classical_reconstruction_error': classical_error,
        'error_ratio': quantum_error / classical_error if classical_error > 0 else float('inf'),
        'quantum_variance_explained': quantum_var_ratio,
        'classical_variance_explained': classical_var_ratio,
        'variance_explained_ratio': quantum_var_ratio / classical_var_ratio if classical_var_ratio > 0 else float('inf'),
        'component_similarities': component_similarities,
        'mean_component_similarity': np.mean(component_similarities) if component_similarities else 0,
    }

analyze_convergence

analyze_convergence() -> dict[str, Any]

Analyze convergence properties of the quantum algorithm.

Source code in src/superquantx/algorithms/quantum_pca.py
def analyze_convergence(self) -> dict[str, Any]:
    """Analyze convergence properties of the quantum algorithm."""
    if not self.convergence_history:
        return {'message': 'No convergence history available'}

    convergence_data = np.array(self.convergence_history)

    return {
        'total_iterations': len(convergence_data),
        'final_cost': convergence_data[-1],
        'initial_cost': convergence_data[0],
        'cost_reduction': convergence_data[0] - convergence_data[-1],
        'converged': abs(convergence_data[-1] - convergence_data[-2]) < self.tolerance if len(convergence_data) > 1 else False,
        'convergence_rate': np.mean(np.diff(convergence_data)) if len(convergence_data) > 1 else 0,
    }

get_params

get_params(deep: bool = True) -> dict[str, Any]

Get quantum PCA parameters.

Source code in src/superquantx/algorithms/quantum_pca.py
def get_params(self, deep: bool = True) -> dict[str, Any]:
    """Get quantum PCA parameters."""
    params = super().get_params(deep)
    params.update({
        'n_components': self.n_components,
        'method': self.method,
        'encoding': self.encoding,
        'max_iterations': self.max_iterations,
        'tolerance': self.tolerance,
        'classical_fallback': self.classical_fallback,
        'normalize_data': self.normalize_data,
    })
    return params

set_params

set_params(**params) -> QuantumPCA

Set quantum PCA parameters.

Source code in src/superquantx/algorithms/quantum_pca.py
def set_params(self, **params) -> 'QuantumPCA':
    """Set quantum PCA parameters."""
    if self.is_fitted and any(key in params for key in
                             ['n_components', 'method', 'encoding']):
        logger.warning("Changing core parameters requires refitting the model")
        self.is_fitted = False

    return super().set_params(**params)

Quantum K-Means

superquantx.algorithms.QuantumKMeans

QuantumKMeans(backend: str | Any, n_clusters: int = 3, method: str = 'distance', distance_metric: str = 'euclidean', encoding: str = 'amplitude', max_iterations: int = 300, tolerance: float = 0.0001, init_method: str = 'k-means++', shots: int = 1024, classical_fallback: bool = True, normalize_data: bool = True, random_state: int | None = None, **kwargs)

Bases: UnsupervisedQuantumAlgorithm

Quantum K-Means clustering algorithm.

This implementation uses quantum algorithms to perform K-means clustering, potentially offering speedup for high-dimensional data through quantum distance calculations and amplitude estimation.

The algorithm can use different quantum approaches (a classical sketch of the distance idea follows the list):

- Quantum Distance Calculation: Use quantum circuits to compute distances
- Quantum Amplitude Estimation: For probabilistic distance measurements
- Variational Quantum Clustering: Use VQC for cluster optimization
- Quantum Annealing: For global cluster optimization
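
The sketch below gives a classical emulation of the overlap-based distance idea behind the 'distance' method: for normalized, amplitude-encoded vectors a swap-test-style measurement estimates the fidelity |⟨x|c⟩|², from which a distance can be derived. This is illustrative NumPy only, not the library's internal quantum routine.

import numpy as np

def overlap_distance(x, c):
    """Distance derived from the squared overlap of amplitude-encoded states.

    Illustrative: a real swap test estimates |<x|c>|^2 from measurement
    statistics instead of computing the inner product exactly.
    """
    x = x / np.linalg.norm(x)
    c = c / np.linalg.norm(c)
    fidelity = np.abs(np.dot(x, c)) ** 2       # |<x|c>|^2 in [0, 1]
    return np.sqrt(max(0.0, 1.0 - fidelity))   # small overlap -> large distance

print(overlap_distance(np.array([1.0, 0.2, 0.1]), np.array([0.9, 0.3, 0.0])))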

Parameters:

Name Type Description Default
backend str | Any

Quantum backend for circuit execution

required
n_clusters int

Number of clusters (k)

3
method str

Quantum method ('distance', 'amplitude', 'variational', 'annealing')

'distance'
distance_metric str

Distance metric ('euclidean', 'manhattan', 'quantum')

'euclidean'
encoding str

Data encoding method ('amplitude', 'angle', 'dense')

'amplitude'
max_iterations int

Maximum iterations for clustering

300
tolerance float

Convergence tolerance

0.0001
init_method str

Centroid initialization ('random', 'k-means++', 'quantum')

'k-means++'
shots int

Number of measurement shots

1024
classical_fallback bool

Use classical K-means if quantum fails

True
**kwargs

Additional parameters

{}
Example

qkmeans = QuantumKMeans(backend='pennylane', n_clusters=3, method='distance')
qkmeans.fit(X_train)
labels = qkmeans.predict(X_test)
centroids = qkmeans.cluster_centers_

Source code in src/superquantx/algorithms/quantum_kmeans.py
def __init__(
    self,
    backend: str | Any,
    n_clusters: int = 3,
    method: str = 'distance',
    distance_metric: str = 'euclidean',
    encoding: str = 'amplitude',
    max_iterations: int = 300,
    tolerance: float = 1e-4,
    init_method: str = 'k-means++',
    shots: int = 1024,
    classical_fallback: bool = True,
    normalize_data: bool = True,
    random_state: int | None = None,
    **kwargs
) -> None:
    super().__init__(backend=backend, shots=shots, **kwargs)

    self.n_clusters = n_clusters
    self.method = method
    self.distance_metric = distance_metric
    self.encoding = encoding
    self.max_iterations = max_iterations
    self.tolerance = tolerance
    self.init_method = init_method
    self.classical_fallback = classical_fallback
    self.normalize_data = normalize_data
    self.random_state = random_state

    # Clustering results
    self.cluster_centers_ = None
    self.labels_ = None
    self.inertia_ = None
    self.n_iter_ = None

    # Quantum-specific attributes
    self.n_qubits = None
    self.quantum_distances_ = None
    self.convergence_history = []

    # Classical components
    self.scaler = StandardScaler() if normalize_data else None
    self.classical_kmeans = KMeans(
        n_clusters=n_clusters,
        max_iter=max_iterations,
        tol=tolerance,
        random_state=random_state
    )

    # Set random seed
    if random_state is not None:
        np.random.seed(random_state)

    logger.info(f"Initialized QuantumKMeans with method={method}, n_clusters={n_clusters}")

Functions

fit

fit(X: ndarray, y: ndarray | None = None, **kwargs) -> QuantumKMeans

Fit quantum K-means to the data.

Parameters:

Name Type Description Default
X ndarray

Training data

required
y ndarray | None

Ignored (unsupervised learning)

None
**kwargs

Additional fitting parameters

{}

Returns:

Type Description
QuantumKMeans

Self for method chaining

Source code in src/superquantx/algorithms/quantum_kmeans.py
def fit(self, X: np.ndarray, y: np.ndarray | None = None, **kwargs) -> 'QuantumKMeans':
    """Fit quantum K-means to the data.

    Args:
        X: Training data
        y: Ignored (unsupervised learning)
        **kwargs: Additional fitting parameters

    Returns:
        Self for method chaining

    """
    logger.info(f"Fitting QuantumKMeans to data of shape {X.shape}")

    # Validate and preprocess data
    super().fit(X, y, **kwargs)

    # Normalize data if specified
    if self.normalize_data:
        X = self.scaler.fit_transform(X)

    # Determine quantum circuit requirements
    self.n_qubits = self._determine_qubits(X.shape[1])

    # Initialize centroids
    centroids = self._initialize_centroids(X)

    # Reset convergence history
    self.convergence_history = []

    # Main K-means iteration loop
    for iteration in range(self.max_iterations):
        # Store old centroids for convergence check
        old_centroids = centroids.copy()

        # Compute distances and assign clusters
        distances = self._compute_distances_batch(X, centroids)
        labels = self._assign_clusters(distances)

        # Update centroids
        centroids = self._update_centroids(X, labels)

        # Compute inertia
        inertia = self._compute_inertia(X, labels, centroids)
        self.convergence_history.append(inertia)

        # Check convergence
        if self._check_convergence(old_centroids, centroids):
            logger.info(f"Converged after {iteration + 1} iterations")
            break

        if iteration % 10 == 0:
            logger.info(f"Iteration {iteration}: Inertia = {inertia:.6f}")

    # Store final results
    self.cluster_centers_ = centroids
    self.labels_ = labels
    self.inertia_ = inertia
    self.n_iter_ = iteration + 1

    # Fit classical K-means for comparison
    if self.classical_fallback:
        try:
            self.classical_kmeans.fit(X)
        except Exception as e:
            logger.warning(f"Classical K-means fitting failed: {e}")

    self.is_fitted = True

    logger.info(f"Quantum K-means completed. Final inertia: {self.inertia_:.6f}")

    return self

predict

predict(X: ndarray, **kwargs) -> np.ndarray

Predict cluster labels for new data.

Parameters:

Name Type Description Default
X ndarray

Data to cluster

required
**kwargs

Additional parameters

{}

Returns:

Type Description
ndarray

Cluster labels

Source code in src/superquantx/algorithms/quantum_kmeans.py
def predict(self, X: np.ndarray, **kwargs) -> np.ndarray:
    """Predict cluster labels for new data.

    Args:
        X: Data to cluster
        **kwargs: Additional parameters

    Returns:
        Cluster labels

    """
    if not self.is_fitted:
        raise ValueError("QuantumKMeans must be fitted before prediction")

    # Normalize data if specified
    if self.normalize_data:
        X = self.scaler.transform(X)

    # Compute distances to centroids
    distances = self._compute_distances_batch(X, self.cluster_centers_)

    # Assign to nearest cluster
    return self._assign_clusters(distances)

fit_predict

fit_predict(X: ndarray, y: ndarray | None = None) -> np.ndarray

Fit K-means and return cluster labels.

Source code in src/superquantx/algorithms/quantum_kmeans.py
def fit_predict(self, X: np.ndarray, y: np.ndarray | None = None) -> np.ndarray:
    """Fit K-means and return cluster labels."""
    return self.fit(X, y).labels_

transform

transform(X: ndarray) -> np.ndarray

Transform data to cluster-distance space.

Source code in src/superquantx/algorithms/quantum_kmeans.py
def transform(self, X: np.ndarray) -> np.ndarray:
    """Transform data to cluster-distance space."""
    if not self.is_fitted:
        raise ValueError("QuantumKMeans must be fitted before transform")

    # Normalize data if specified
    if self.normalize_data:
        X = self.scaler.transform(X)

    # Return distances to all centroids
    return self._compute_distances_batch(X, self.cluster_centers_)

get_quantum_advantage_metrics

get_quantum_advantage_metrics() -> dict[str, Any]

Analyze potential quantum advantage.

Source code in src/superquantx/algorithms/quantum_kmeans.py
def get_quantum_advantage_metrics(self) -> dict[str, Any]:
    """Analyze potential quantum advantage."""
    if not self.is_fitted:
        raise ValueError("Must fit model first")

    n_features = self.cluster_centers_.shape[1]
    n_samples = self.n_samples_

    metrics = {
        'data_dimension': n_features,
        'n_samples': n_samples,
        'n_clusters': self.n_clusters,
        'quantum_circuit_qubits': self.n_qubits,
        'encoding_efficiency': n_features / self.n_qubits if self.n_qubits > 0 else 1,
    }

    # Complexity estimates
    classical_complexity = n_samples * self.n_clusters * n_features * self.n_iter_
    quantum_complexity = n_samples * self.n_clusters * self.n_qubits * self.shots * self.n_iter_

    metrics.update({
        'classical_complexity_estimate': classical_complexity,
        'quantum_complexity_estimate': quantum_complexity,
        'theoretical_speedup': classical_complexity / quantum_complexity if quantum_complexity > 0 else 1,
    })

    return metrics

compare_with_classical

compare_with_classical(X: ndarray, y_true: ndarray | None = None) -> dict[str, Any]

Compare quantum K-means results with classical K-means.

Source code in src/superquantx/algorithms/quantum_kmeans.py
def compare_with_classical(self, X: np.ndarray, y_true: np.ndarray | None = None) -> dict[str, Any]:
    """Compare quantum K-means results with classical K-means."""
    if not self.is_fitted or not hasattr(self.classical_kmeans, 'cluster_centers_'):
        raise ValueError("Both quantum and classical K-means must be fitted")

    # Get predictions from both methods
    quantum_labels = self.predict(X)
    classical_labels = self.classical_kmeans.predict(X if not self.normalize_data
                                                    else self.scaler.transform(X))

    comparison = {
        'quantum_inertia': self.inertia_,
        'classical_inertia': self.classical_kmeans.inertia_,
        'inertia_ratio': self.inertia_ / self.classical_kmeans.inertia_ if self.classical_kmeans.inertia_ > 0 else float('inf'),
        'quantum_iterations': self.n_iter_,
        'classical_iterations': self.classical_kmeans.n_iter_,
    }

    # Compute silhouette scores
    try:
        if len(np.unique(quantum_labels)) > 1:
            quantum_silhouette = silhouette_score(X, quantum_labels)
            comparison['quantum_silhouette'] = quantum_silhouette

        if len(np.unique(classical_labels)) > 1:
            classical_silhouette = silhouette_score(X, classical_labels)
            comparison['classical_silhouette'] = classical_silhouette

        if 'quantum_silhouette' in comparison and 'classical_silhouette' in comparison:
            comparison['silhouette_ratio'] = quantum_silhouette / classical_silhouette

    except Exception as e:
        logger.warning(f"Silhouette score computation failed: {e}")

    # Compare with ground truth if available
    if y_true is not None:
        try:
            quantum_ari = adjusted_rand_score(y_true, quantum_labels)
            classical_ari = adjusted_rand_score(y_true, classical_labels)

            comparison.update({
                'quantum_adjusted_rand_score': quantum_ari,
                'classical_adjusted_rand_score': classical_ari,
                'ari_ratio': quantum_ari / classical_ari if classical_ari != 0 else float('inf'),
            })

        except Exception as e:
            logger.warning(f"Adjusted rand score computation failed: {e}")

    # Compare centroid similarities
    centroid_distances = []
    for i in range(min(len(self.cluster_centers_), len(self.classical_kmeans.cluster_centers_))):
        dist = np.linalg.norm(self.cluster_centers_[i] - self.classical_kmeans.cluster_centers_[i])
        centroid_distances.append(dist)

    if centroid_distances:
        comparison.update({
            'centroid_distances': centroid_distances,
            'mean_centroid_distance': np.mean(centroid_distances),
            'max_centroid_distance': np.max(centroid_distances),
        })

    return comparison
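
A minimal usage sketch for this comparison on labelled synthetic data (the 'simulator' backend string is an assumption; any configured backend works the same way):

import numpy as np
from sklearn.datasets import make_blobs
from superquantx.algorithms import QuantumKMeans

X, y_true = make_blobs(n_samples=120, centers=3, n_features=4, random_state=0)

qkmeans = QuantumKMeans(backend='simulator', n_clusters=3, random_state=0)
qkmeans.fit(X)  # also fits the classical fallback used for comparison

report = qkmeans.compare_with_classical(X, y_true=y_true)
print(report['inertia_ratio'], report.get('quantum_adjusted_rand_score'))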

analyze_convergence

analyze_convergence() -> dict[str, Any]

Analyze convergence properties.

Source code in src/superquantx/algorithms/quantum_kmeans.py
def analyze_convergence(self) -> dict[str, Any]:
    """Analyze convergence properties."""
    if not self.convergence_history:
        return {'message': 'No convergence history available'}

    inertias = np.array(self.convergence_history)

    return {
        'total_iterations': len(inertias),
        'final_inertia': inertias[-1],
        'initial_inertia': inertias[0],
        'inertia_reduction': inertias[0] - inertias[-1] if len(inertias) > 0 else 0,
        'convergence_rate': np.mean(np.diff(inertias)) if len(inertias) > 1 else 0,
        'converged': len(inertias) < self.max_iterations,
        'inertia_variance': np.var(inertias[-10:]) if len(inertias) >= 10 else np.var(inertias),
    }

get_params

get_params(deep: bool = True) -> dict[str, Any]

Get quantum K-means parameters.

Source code in src/superquantx/algorithms/quantum_kmeans.py
def get_params(self, deep: bool = True) -> dict[str, Any]:
    """Get quantum K-means parameters."""
    params = super().get_params(deep)
    params.update({
        'n_clusters': self.n_clusters,
        'method': self.method,
        'distance_metric': self.distance_metric,
        'encoding': self.encoding,
        'max_iterations': self.max_iterations,
        'tolerance': self.tolerance,
        'init_method': self.init_method,
        'classical_fallback': self.classical_fallback,
        'normalize_data': self.normalize_data,
        'random_state': self.random_state,
    })
    return params

set_params

set_params(**params) -> QuantumKMeans

Set quantum K-means parameters.

Source code in src/superquantx/algorithms/quantum_kmeans.py
def set_params(self, **params) -> 'QuantumKMeans':
    """Set quantum K-means parameters."""
    if self.is_fitted and any(key in params for key in
                             ['n_clusters', 'method', 'distance_metric', 'encoding']):
        logger.warning("Changing core parameters requires refitting the model")
        self.is_fitted = False

    return super().set_params(**params)

Optimization Algorithms

Variational Quantum Eigensolver

superquantx.algorithms.VQE

VQE(hamiltonian: ndarray | Any, ansatz: str | Callable = 'RealAmplitudes', backend: str | Any = 'simulator', optimizer: str = 'COBYLA', shots: int = 1024, maxiter: int = 1000, initial_params: ndarray | None = None, include_custom_gates: bool = False, client=None, **kwargs)

Bases: OptimizationQuantumAlgorithm

Variational Quantum Eigensolver for finding ground states.

VQE is a hybrid quantum-classical algorithm that uses a parameterized quantum circuit (ansatz) to find the ground state energy of a given Hamiltonian by minimizing the expectation value.

The algorithm works by:

1. Preparing a parameterized quantum state |ψ(θ)⟩
2. Measuring the expectation value ⟨ψ(θ)|H|ψ(θ)⟩
3. Classically optimizing the parameters θ to minimize the energy
4. Iterating until convergence
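
A generic sketch of that loop on a toy single-qubit Hamiltonian, with an exact expectation value standing in for the quantum measurement step (illustrative only; the library's VQE runs the same loop through the configured backend and ansatz):

import numpy as np
from scipy.optimize import minimize

H = np.array([[1.0, 0.0], [0.0, -1.0]])  # toy Hamiltonian H = Z, ground energy -1

def ansatz_state(theta):
    """Single-qubit RY ansatz: cos(theta/2)|0> + sin(theta/2)|1>."""
    return np.array([np.cos(theta[0] / 2), np.sin(theta[0] / 2)])

def energy(theta):
    psi = ansatz_state(theta)
    return float(psi @ H @ psi)  # <psi(theta)|H|psi(theta)>

result = minimize(energy, x0=np.array([0.1]), method='COBYLA')
print(result.fun)  # approaches -1.0, the exact ground-state energy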

Parameters:

Name Type Description Default
backend str | Any

Quantum backend for circuit execution

'simulator'
hamiltonian ndarray | Any

Target Hamiltonian (matrix or operator)

required
ansatz str | Callable

Parameterized circuit ansatz ('UCCSD', 'RealAmplitudes', etc.)

'RealAmplitudes'
optimizer str

Classical optimizer ('COBYLA', 'L-BFGS-B', etc.)

'COBYLA'
shots int

Number of measurement shots

1024
maxiter int

Maximum optimization iterations

1000
initial_params ndarray | None

Initial parameter values

None
**kwargs

Additional parameters

{}
Example

Define H2 molecule Hamiltonian:

H2_hamiltonian = create_h2_hamiltonian(bond_distance=0.74)
vqe = VQE(backend='pennylane', hamiltonian=H2_hamiltonian, ansatz='UCCSD')
result = vqe.optimize()
ground_energy = result.result['optimal_value']

Source code in src/superquantx/algorithms/vqe.py
def __init__(
    self,
    hamiltonian: np.ndarray | Any,
    ansatz: str | Callable = 'RealAmplitudes',
    backend: str | Any = 'simulator',
    optimizer: str = 'COBYLA',
    shots: int = 1024,
    maxiter: int = 1000,
    initial_params: np.ndarray | None = None,
    include_custom_gates: bool = False,
    client = None,
    **kwargs
) -> None:
    super().__init__(backend=backend, shots=shots, **kwargs)

    self.hamiltonian = hamiltonian
    self.ansatz = ansatz
    self.optimizer = optimizer
    self.maxiter = maxiter
    self.initial_params = initial_params
    self.include_custom_gates = include_custom_gates
    self.client = client

    # VQE-specific attributes
    self.n_qubits = None
    self.n_params = None
    self.ansatz_circuit = None
    self.hamiltonian_terms = None

    # Convergence tracking
    self.energy_history = []
    self.gradient_history = []
    self.convergence_threshold = 1e-6

    self._initialize_hamiltonian()

    logger.info(f"Initialized VQE with ansatz={ansatz}, optimizer={optimizer}")

Functions

fit

fit(X: ndarray | None = None, y: ndarray | None = None, **kwargs) -> VQE

Fit VQE (setup for optimization).

Parameters:

Name Type Description Default
X ndarray | None

Not used in VQE

None
y ndarray | None

Not used in VQE

None
**kwargs

Additional parameters

{}

Returns:

Type Description
VQE

Self for method chaining

Source code in src/superquantx/algorithms/vqe.py
def fit(self, X: np.ndarray | None = None, y: np.ndarray | None = None, **kwargs) -> 'VQE':
    """Fit VQE (setup for optimization).

    Args:
        X: Not used in VQE
        y: Not used in VQE
        **kwargs: Additional parameters

    Returns:
        Self for method chaining

    """
    logger.info(f"Setting up VQE for {self.n_qubits} qubits")

    # Determine number of parameters based on ansatz
    self.n_params = self._get_ansatz_param_count()

    # Initialize parameters if not provided
    if self.initial_params is None:
        self.initial_params = self._generate_initial_params()

    # Reset histories
    self.energy_history = []
    self.gradient_history = []
    self.optimization_history_ = []

    self.is_fitted = True
    return self

predict

predict(X: ndarray | None = None, **kwargs) -> np.ndarray

Get ground state wavefunction coefficients.

Parameters:

Name Type Description Default
X ndarray | None

Not used

None
**kwargs

Additional parameters

{}

Returns:

Type Description
ndarray

Ground state wavefunction

Source code in src/superquantx/algorithms/vqe.py
def predict(self, X: np.ndarray | None = None, **kwargs) -> np.ndarray:
    """Get ground state wavefunction coefficients.

    Args:
        X: Not used
        **kwargs: Additional parameters

    Returns:
        Ground state wavefunction

    """
    if self.optimal_params_ is None:
        raise ValueError("VQE must be optimized before prediction")

    # Create circuit with optimal parameters
    circuit = self._create_ansatz_circuit(self.optimal_params_)

    # Get state vector
    if hasattr(self.backend, 'get_statevector'):
        statevector = self.backend.get_statevector(circuit)
    else:
        # Fallback: return random normalized state
        statevector = np.random.random(2**self.n_qubits) + 1j * np.random.random(2**self.n_qubits)
        statevector /= np.linalg.norm(statevector)

    return np.array(statevector)

get_energy_landscape

get_energy_landscape(param_indices: list[int], param_ranges: list[tuple[float, float]], resolution: int = 20) -> dict[str, Any]

Compute energy landscape for visualization.

Parameters:

Name Type Description Default
param_indices list[int]

Indices of parameters to vary

required
param_ranges list[tuple[float, float]]

Ranges for each parameter

required
resolution int

Number of points per dimension

20

Returns:

Type Description
dict[str, Any]

Dictionary with landscape data

Source code in src/superquantx/algorithms/vqe.py
def get_energy_landscape(self, param_indices: list[int], param_ranges: list[tuple[float, float]],
                       resolution: int = 20) -> dict[str, Any]:
    """Compute energy landscape for visualization.

    Args:
        param_indices: Indices of parameters to vary
        param_ranges: Ranges for each parameter
        resolution: Number of points per dimension

    Returns:
        Dictionary with landscape data

    """
    if len(param_indices) != 2:
        raise ValueError("Energy landscape visualization supports only 2 parameters")

    if self.optimal_params_ is None:
        raise ValueError("VQE must be optimized to compute landscape")

    param1_range = np.linspace(*param_ranges[0], resolution)
    param2_range = np.linspace(*param_ranges[1], resolution)

    landscape = np.zeros((resolution, resolution))
    base_params = self.optimal_params_.copy()

    for i, p1 in enumerate(param1_range):
        for j, p2 in enumerate(param2_range):
            params = base_params.copy()
            params[param_indices[0]] = p1
            params[param_indices[1]] = p2
            landscape[i, j] = self._compute_expectation_value(params)

    return {
        'param1_range': param1_range,
        'param2_range': param2_range,
        'landscape': landscape,
        'optimal_params': self.optimal_params_[param_indices],
        'param_indices': param_indices
    }

analyze_convergence

analyze_convergence() -> dict[str, Any]

Analyze VQE convergence properties.

Returns:

Type Description
dict[str, Any]

Convergence analysis results

Source code in src/superquantx/algorithms/vqe.py
def analyze_convergence(self) -> dict[str, Any]:
    """Analyze VQE convergence properties.

    Returns:
        Convergence analysis results

    """
    if not self.energy_history:
        raise ValueError("No optimization history available")

    energies = np.array(self.energy_history)
    gradients = np.array(self.gradient_history) if self.gradient_history else None

    # Basic convergence metrics
    analysis = {
        'final_energy': energies[-1],
        'energy_variance': np.var(energies[-10:]) if len(energies) >= 10 else np.var(energies),
        'total_iterations': len(energies),
        'energy_change': abs(energies[-1] - energies[0]) if len(energies) > 1 else 0,
    }

    # Convergence detection
    if len(energies) >= 10:
        recent_change = abs(energies[-1] - energies[-10])
        analysis['converged'] = recent_change < self.convergence_threshold
    else:
        analysis['converged'] = False

    # Gradient analysis
    if gradients is not None and len(gradients) > 0:
        analysis.update({
            'final_gradient_norm': gradients[-1],
            'gradient_trend': 'decreasing' if gradients[-1] < gradients[0] else 'increasing',
            'min_gradient_norm': np.min(gradients),
        })

    # Identify plateaus and oscillations
    if len(energies) >= 20:
        # Check for plateaus (little change over many iterations)
        plateau_threshold = self.convergence_threshold * 10
        recent_energies = energies[-20:]
        energy_std = np.std(recent_energies)
        analysis['plateau_detected'] = energy_std < plateau_threshold

        # Check for oscillations
        energy_diff = np.diff(energies[-20:])
        sign_changes = np.sum(np.diff(np.sign(energy_diff)) != 0)
        analysis['oscillation_detected'] = sign_changes > len(energy_diff) * 0.7

    return analysis

compare_with_exact

compare_with_exact(exact_ground_energy: float) -> dict[str, Any]

Compare VQE result with exact ground state energy.

Parameters:

Name Type Description Default
exact_ground_energy float

Known exact ground state energy

required

Returns:

Type Description
dict[str, Any]

Comparison analysis

Source code in src/superquantx/algorithms/vqe.py
def compare_with_exact(self, exact_ground_energy: float) -> dict[str, Any]:
    """Compare VQE result with exact ground state energy.

    Args:
        exact_ground_energy: Known exact ground state energy

    Returns:
        Comparison analysis

    """
    if self.optimal_value_ is None:
        raise ValueError("VQE must be optimized for comparison")

    error = abs(self.optimal_value_ - exact_ground_energy)
    relative_error = error / abs(exact_ground_energy) if exact_ground_energy != 0 else float('inf')

    return {
        'vqe_energy': self.optimal_value_,
        'exact_energy': exact_ground_energy,
        'absolute_error': error,
        'relative_error': relative_error,
        'chemical_accuracy': error < 1.6e-3,  # 1 kcal/mol in Hartree
        'energy_above_ground': max(0, self.optimal_value_ - exact_ground_energy)
    }
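
The chemical-accuracy check above is plain arithmetic. For example, with a hypothetical VQE energy of -1.1361 Ha against an exact value of -1.1373 Ha:

exact = -1.1373        # example exact ground-state energy (Hartree)
vqe_energy = -1.1361   # example VQE result

error = abs(vqe_energy - exact)  # 0.0012 Ha
print(error < 1.6e-3)            # True: within chemical accuracy (~1 kcal/mol)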

get_params

get_params(deep: bool = True) -> dict[str, Any]

Get VQE parameters.

Source code in src/superquantx/algorithms/vqe.py
def get_params(self, deep: bool = True) -> dict[str, Any]:
    """Get VQE parameters."""
    params = super().get_params(deep)
    params.update({
        'ansatz': self.ansatz,
        'optimizer': self.optimizer,
        'maxiter': self.maxiter,
        'include_custom_gates': self.include_custom_gates,
        'convergence_threshold': self.convergence_threshold,
    })
    return params

set_params

set_params(**params) -> VQE

Set VQE parameters.

Source code in src/superquantx/algorithms/vqe.py
def set_params(self, **params) -> 'VQE':
    """Set VQE parameters."""
    if self.is_fitted and any(key in params for key in ['ansatz', 'hamiltonian']):
        logger.warning("Changing core parameters requires refitting the model")
        self.is_fitted = False

    return super().set_params(**params)

find_ground_state

find_ground_state() -> float

Find the ground state energy of the Hamiltonian.

This is a convenience method that combines fit() and _run_optimization() to find the ground state energy in a single call.

Returns:

Type Description
float

Ground state energy

Example

vqe = VQE(backend='simulator', hamiltonian=hamiltonian)
ground_energy = vqe.find_ground_state()

Source code in src/superquantx/algorithms/vqe.py
def find_ground_state(self) -> float:
    """Find the ground state energy of the Hamiltonian.

    This is a convenience method that combines fit() and _run_optimization()
    to find the ground state energy in a single call.

    Returns:
        Ground state energy

    Example:
        >>> vqe = VQE(backend='simulator', hamiltonian=hamiltonian)
        >>> ground_energy = vqe.find_ground_state()
    """
    # Fit if not already fitted
    if not self.is_fitted:
        self.fit()

    # Run optimization
    self._run_optimization()

    # Return the optimal energy
    return self.optimal_value_

superquantx.algorithms.create_vqe_for_molecule

create_vqe_for_molecule(molecule_name: str, bond_distance: float | None = None, backend: str = 'simulator', ansatz: str = 'UCCSD', optimizer: str = 'COBYLA', client=None) -> VQE

Create a VQE instance pre-configured for molecular simulation.

Parameters:

Name Type Description Default
molecule_name str

Name of the molecule (e.g., 'H2', 'LiH')

required
bond_distance float | None

Bond distance for the molecule (uses default if None)

None
backend str

Quantum backend to use

'simulator'
ansatz str

Ansatz circuit type

'UCCSD'
optimizer str

Classical optimizer

'COBYLA'
client

Optional client for quantum execution

None

Returns:

Type Description
VQE

Configured VQE instance

Source code in src/superquantx/algorithms/vqe.py
def create_vqe_for_molecule(
    molecule_name: str,
    bond_distance: float | None = None,
    backend: str = 'simulator',
    ansatz: str = 'UCCSD',
    optimizer: str = 'COBYLA',
    client = None
) -> VQE:
    """Create a VQE instance pre-configured for molecular simulation.

    Args:
        molecule_name: Name of the molecule (e.g., 'H2', 'LiH')
        bond_distance: Bond distance for the molecule (uses default if None)
        backend: Quantum backend to use
        ansatz: Ansatz circuit type
        optimizer: Classical optimizer
        client: Optional client for quantum execution

    Returns:
        Configured VQE instance

    """
    # Import molecular data utilities
    try:
        from ..datasets.molecular import get_molecular_hamiltonian
        hamiltonian = get_molecular_hamiltonian(molecule_name, bond_distance)
    except ImportError:
        # Fallback: create simple hamiltonian for testing
        from ..gates import Hamiltonian

        if molecule_name.upper() == 'H2':
            # Simple H2 Hamiltonian approximation as Pauli strings
            hamiltonian_dict = {
                "ZZ": -1.0523732,
                "ZI": -0.39793742,
                "IZ": -0.39793742,
                "XX": -0.01128010,
                "YY": 0.01128010
            }
            hamiltonian = Hamiltonian.from_dict(hamiltonian_dict)
        elif molecule_name.upper() in ['LIH', 'H2O', 'NH3']:
            # Generic 2-qubit Hamiltonian for other molecules
            hamiltonian_dict = {
                "ZI": -1.0,
                "IZ": 0.5,
                "XX": 0.2
            }
            hamiltonian = Hamiltonian.from_dict(hamiltonian_dict)
        else:
            raise ValueError(f"Unknown molecule: {molecule_name}")

    return VQE(
        hamiltonian=hamiltonian,
        ansatz=ansatz,
        backend=backend,
        optimizer=optimizer,
        client=client
    )
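
A minimal usage sketch for this helper; when the molecular dataset utilities are unavailable it falls back to the approximate Pauli-string Hamiltonians shown above:

from superquantx.algorithms import create_vqe_for_molecule

vqe = create_vqe_for_molecule('H2', bond_distance=0.74, backend='simulator')
ground_energy = vqe.find_ground_state()
print(f"Estimated H2 ground-state energy: {ground_energy:.4f}")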

Quantum Approximate Optimization Algorithm

superquantx.algorithms.QAOA

QAOA(backend: str | Any, p: int = 1, problem_hamiltonian: Callable | None = None, mixer_hamiltonian: Callable | None = None, initial_state: str = 'uniform_superposition', optimizer: str = 'COBYLA', shots: int = 1024, maxiter: int = 1000, **kwargs)

Bases: OptimizationQuantumAlgorithm

Quantum Approximate Optimization Algorithm for combinatorial optimization.

QAOA is a hybrid quantum-classical algorithm that alternates between quantum evolution and classical parameter optimization to find approximate solutions to combinatorial optimization problems.

The algorithm works by:

1. Preparing an initial superposition state
2. Applying alternating problem and mixer Hamiltonians
3. Measuring the quantum state
4. Classically optimizing the parameters
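
Each layer contributes one (gamma, beta) pair, so a depth-p circuit has 2p variational parameters. A small sketch of one way to lay out such a parameter vector within the bounds this class uses (illustrative; the library manages its parameters internally):

import numpy as np

p = 2                                   # number of QAOA layers
rng = np.random.default_rng(0)

gammas = rng.uniform(0, 2 * np.pi, p)   # problem-Hamiltonian angles
betas = rng.uniform(0, np.pi, p)        # mixer-Hamiltonian angles

params = np.concatenate([gammas, betas])  # 2p parameters in total
print(params.shape)                       # (4,)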

Parameters:

Name Type Description Default
backend str | Any

Quantum backend for circuit execution

required
p int

Number of QAOA layers (depth)

1
problem_hamiltonian Callable | None

Problem Hamiltonian function

None
mixer_hamiltonian Callable | None

Mixer Hamiltonian function

None
initial_state str

Initial quantum state preparation

'uniform_superposition'
optimizer str

Classical optimizer ('COBYLA', 'L-BFGS-B', etc.)

'COBYLA'
shots int

Number of measurement shots

1024
maxiter int

Maximum optimization iterations

1000
**kwargs

Additional parameters

{}
Example

Define Max-Cut problem:

def problem_ham(gamma, graph):
    return create_maxcut_hamiltonian(gamma, graph)

qaoa = QAOA(backend='pennylane', p=2, problem_hamiltonian=problem_ham)
result = qaoa.optimize(graph_data)

Source code in src/superquantx/algorithms/qaoa.py
def __init__(
    self,
    backend: str | Any,
    p: int = 1,
    problem_hamiltonian: Callable | None = None,
    mixer_hamiltonian: Callable | None = None,
    initial_state: str = 'uniform_superposition',
    optimizer: str = 'COBYLA',
    shots: int = 1024,
    maxiter: int = 1000,
    **kwargs
) -> None:
    super().__init__(backend=backend, shots=shots, **kwargs)

    self.p = p
    self.problem_hamiltonian = problem_hamiltonian
    self.mixer_hamiltonian = mixer_hamiltonian or self._default_mixer
    self.initial_state = initial_state
    self.optimizer = optimizer
    self.maxiter = maxiter

    # QAOA-specific parameters
    self.n_qubits = None
    self.problem_instance = None
    self.circuit = None

    # Parameter bounds
    self.gamma_bounds = (0, 2*np.pi)
    self.beta_bounds = (0, np.pi)

    logger.info(f"Initialized QAOA with p={p}, optimizer={optimizer}")

Functions

fit

fit(X: ndarray, y: ndarray | None = None, **kwargs) -> QAOA

Fit QAOA to problem instance.

Parameters:

Name Type Description Default
X ndarray

Problem instance data (e.g., adjacency matrix for Max-Cut)

required
y ndarray | None

Not used in QAOA

None
**kwargs

Additional parameters

{}

Returns:

Type Description
QAOA

Self for method chaining

Source code in src/superquantx/algorithms/qaoa.py
def fit(self, X: np.ndarray, y: np.ndarray | None = None, **kwargs) -> 'QAOA':
    """Fit QAOA to problem instance.

    Args:
        X: Problem instance data (e.g., adjacency matrix for Max-Cut)
        y: Not used in QAOA
        **kwargs: Additional parameters

    Returns:
        Self for method chaining

    """
    logger.info(f"Fitting QAOA to problem instance of shape {X.shape}")

    self.problem_instance = X
    self.n_qubits = self._infer_qubits(X)

    # Reset optimization history
    self.optimization_history_ = []

    self.is_fitted = True
    return self

predict

predict(X: ndarray, **kwargs) -> np.ndarray

Get optimal solution from QAOA results.

Parameters:

Name Type Description Default
X ndarray

Problem instance (not used if same as training)

required
**kwargs

Additional parameters

{}

Returns:

Type Description
ndarray

Optimal bit string solution

Source code in src/superquantx/algorithms/qaoa.py
def predict(self, X: np.ndarray, **kwargs) -> np.ndarray:
    """Get optimal solution from QAOA results.

    Args:
        X: Problem instance (not used if same as training)
        **kwargs: Additional parameters

    Returns:
        Optimal bit string solution

    """
    if not self.is_fitted or self.optimal_params_ is None:
        raise ValueError("QAOA must be fitted and optimized before prediction")

    # Create circuit with optimal parameters
    circuit = self._create_qaoa_circuit(self.optimal_params_)

    # Sample from the optimized quantum state
    if hasattr(self.backend, 'sample_circuit'):
        samples = self.backend.sample_circuit(circuit, shots=self.shots)
        # Return most frequent bit string
        unique, counts = np.unique(samples, axis=0, return_counts=True)
        best_solution = unique[np.argmax(counts)]
    else:
        # Fallback: return random solution
        best_solution = np.random.randint(0, 2, self.n_qubits)

    return best_solution

get_optimization_landscape

get_optimization_landscape(param_range: tuple[float, float], resolution: int = 50) -> dict[str, Any]

Compute optimization landscape for visualization.

Parameters:

Name Type Description Default
param_range tuple[float, float]

Range of parameters to explore

required
resolution int

Number of points per dimension

50

Returns:

Type Description
dict[str, Any]

Dictionary with landscape data

Source code in src/superquantx/algorithms/qaoa.py
def get_optimization_landscape(self, param_range: tuple[float, float], resolution: int = 50) -> dict[str, Any]:
    """Compute optimization landscape for visualization.

    Args:
        param_range: Range of parameters to explore
        resolution: Number of points per dimension

    Returns:
        Dictionary with landscape data

    """
    if self.p != 1:
        logger.warning("Landscape visualization only supported for p=1")
        return {}

    gamma_range = np.linspace(*param_range, resolution)
    beta_range = np.linspace(*param_range, resolution)

    landscape = np.zeros((resolution, resolution))

    for i, gamma in enumerate(gamma_range):
        for j, beta in enumerate(beta_range):
            params = np.array([gamma, beta])
            landscape[i, j] = -self._objective_function(params)

    return {
        'gamma_range': gamma_range,
        'beta_range': beta_range,
        'landscape': landscape,
        'optimal_params': self.optimal_params_ if hasattr(self, 'optimal_params_') else None
    }

analyze_solution_quality

analyze_solution_quality(true_optimum: float | None = None) -> dict[str, Any]

Analyze quality of QAOA solution.

Parameters:

Name Type Description Default
true_optimum float | None

Known optimal value for comparison

None

Returns:

Type Description
dict[str, Any]

Analysis results

Source code in src/superquantx/algorithms/qaoa.py
def analyze_solution_quality(self, true_optimum: float | None = None) -> dict[str, Any]:
    """Analyze quality of QAOA solution.

    Args:
        true_optimum: Known optimal value for comparison

    Returns:
        Analysis results

    """
    if self.optimal_value_ is None:
        raise ValueError("No optimal solution available")

    analysis = {
        'qaoa_value': self.optimal_value_,
        'n_layers': self.p,
        'n_parameters': 2 * self.p,
        'optimization_iterations': len(self.optimization_history_),
    }

    if true_optimum is not None:
        approximation_ratio = self.optimal_value_ / true_optimum
        analysis.update({
            'true_optimum': true_optimum,
            'approximation_ratio': approximation_ratio,
            'relative_error': abs(1 - approximation_ratio),
        })

    # Analyze convergence
    if len(self.optimization_history_) > 1:
        costs = [-entry['cost'] for entry in self.optimization_history_]
        analysis.update({
            'convergence_achieved': costs[-1] == max(costs),
            'improvement_over_random': self.optimal_value_ - np.mean(costs[:5]) if len(costs) >= 5 else 0,
            'final_cost_variance': np.var(costs[-10:]) if len(costs) >= 10 else 0,
        })

    return analysis

get_params

get_params(deep: bool = True) -> dict[str, Any]

Get QAOA parameters.

Source code in src/superquantx/algorithms/qaoa.py
def get_params(self, deep: bool = True) -> dict[str, Any]:
    """Get QAOA parameters."""
    params = super().get_params(deep)
    params.update({
        'p': self.p,
        'optimizer': self.optimizer,
        'initial_state': self.initial_state,
        'maxiter': self.maxiter,
        'gamma_bounds': self.gamma_bounds,
        'beta_bounds': self.beta_bounds,
    })
    return params

set_params

set_params(**params) -> QAOA

Set QAOA parameters.

Source code in src/superquantx/algorithms/qaoa.py
def set_params(self, **params) -> 'QAOA':
    """Set QAOA parameters."""
    if self.is_fitted and any(key in params for key in ['p', 'problem_hamiltonian', 'mixer_hamiltonian']):
        logger.warning("Changing core parameters requires refitting the model")
        self.is_fitted = False

    return super().set_params(**params)

Quantum Agents

Base Quantum Agent

superquantx.algorithms.quantum_agents.QuantumAgent

QuantumAgent(backend: str | Any, agent_config: dict[str, Any] | None = None, shots: int = 1024, **kwargs)

Bases: BaseQuantumAlgorithm, ABC

Base class for quantum agents.

Quantum agents are specialized combinations of quantum algorithms designed for specific problem domains. They provide high-level interfaces for complex quantum machine learning workflows.

Parameters:

Name Type Description Default
backend str | Any

Quantum backend for circuit execution

required
agent_config dict[str, Any] | None

Configuration dictionary for the agent

None
shots int

Number of measurement shots

1024
**kwargs

Additional parameters

{}
Source code in src/superquantx/algorithms/quantum_agents.py
def __init__(
    self,
    backend: str | Any,
    agent_config: dict[str, Any] | None = None,
    shots: int = 1024,
    **kwargs
) -> None:
    super().__init__(backend=backend, shots=shots, **kwargs)

    self.agent_config = agent_config or {}
    self.algorithms = {}
    self.results_history = []
    self.performance_metrics = {}

    self._initialize_agent()

    logger.info(f"Initialized {self.__class__.__name__}")

Functions

solve abstractmethod

solve(problem_instance: Any, **kwargs) -> QuantumResult

Solve a problem using the quantum agent.

Parameters:

Name Type Description Default
problem_instance Any

Problem data/specification

required
**kwargs

Additional solving parameters

{}

Returns:

Type Description
QuantumResult

Solution result

Source code in src/superquantx/algorithms/quantum_agents.py
@abstractmethod
def solve(self, problem_instance: Any, **kwargs) -> QuantumResult:
    """Solve a problem using the quantum agent.

    Args:
        problem_instance: Problem data/specification
        **kwargs: Additional solving parameters

    Returns:
        Solution result

    """
    pass

get_agent_info

get_agent_info() -> dict[str, Any]

Get information about the agent and its algorithms.

Source code in src/superquantx/algorithms/quantum_agents.py
def get_agent_info(self) -> dict[str, Any]:
    """Get information about the agent and its algorithms."""
    return {
        'agent_type': self.__class__.__name__,
        'algorithms': list(self.algorithms.keys()),
        'config': self.agent_config,
        'backend': type(self.backend).__name__,
        'performance_metrics': self.performance_metrics,
    }

Specialized Agents

superquantx.algorithms.quantum_agents.QuantumPortfolioAgent

QuantumPortfolioAgent(backend: str | Any, risk_model: str = 'mean_variance', optimization_objective: str = 'sharpe', constraints: list[dict] | None = None, rebalancing_frequency: str = 'monthly', **kwargs)

Bases: QuantumAgent

Quantum agent for portfolio optimization.

This agent combines QAOA and VQE algorithms to solve portfolio optimization problems including mean-variance optimization, risk parity, and constrained optimization.

Parameters:

Name Type Description Default
backend str | Any

Quantum backend

required
risk_model str

Risk model to use ('mean_variance', 'black_litterman', 'factor')

'mean_variance'
optimization_objective str

Objective function ('return', 'sharpe', 'risk_parity')

'sharpe'
constraints list[dict] | None

List of constraint specifications

None
rebalancing_frequency str

How often to rebalance

'monthly'
**kwargs

Additional parameters

{}
Example

agent = QuantumPortfolioAgent(
    backend='pennylane',
    risk_model='mean_variance',
    optimization_objective='sharpe',
)
result = agent.solve(portfolio_data)
optimal_weights = result.result['weights']

Source code in src/superquantx/algorithms/quantum_agents.py
def __init__(
    self,
    backend: str | Any,
    risk_model: str = 'mean_variance',
    optimization_objective: str = 'sharpe',
    constraints: list[dict] | None = None,
    rebalancing_frequency: str = 'monthly',
    **kwargs
) -> None:
    agent_config = {
        'risk_model': risk_model,
        'optimization_objective': optimization_objective,
        'constraints': constraints or [],
        'rebalancing_frequency': rebalancing_frequency,
    }

    super().__init__(backend=backend, agent_config=agent_config, **kwargs)

    self.risk_model = risk_model
    self.optimization_objective = optimization_objective
    self.constraints = constraints or []
    self.rebalancing_frequency = rebalancing_frequency

    # Portfolio-specific data
    self.returns_data = None
    self.covariance_matrix = None
    self.expected_returns = None
    self.optimal_weights = None

Functions

fit

fit(X: ndarray, y: ndarray | None = None, **kwargs) -> QuantumPortfolioAgent

Fit the portfolio agent to historical data.

Parameters:

Name Type Description Default
X ndarray

Historical returns data (samples x assets)

required
y ndarray | None

Not used

None
**kwargs

Additional parameters

{}

Returns:

Type Description
QuantumPortfolioAgent

Self

Source code in src/superquantx/algorithms/quantum_agents.py
def fit(self, X: np.ndarray, y: np.ndarray | None = None, **kwargs) -> 'QuantumPortfolioAgent':
    """Fit the portfolio agent to historical data.

    Args:
        X: Historical returns data (samples x assets)
        y: Not used
        **kwargs: Additional parameters

    Returns:
        Self

    """
    logger.info(f"Fitting portfolio agent to data with {X.shape[1]} assets")

    self.returns_data = X

    # Compute statistics
    self.expected_returns = np.mean(X, axis=0)
    self.covariance_matrix = np.cov(X.T)

    # Prepare optimization problem
    risk_aversion = kwargs.get('risk_aversion', 1.0)
    hamiltonian = self._prepare_portfolio_hamiltonian(
        self.expected_returns, self.covariance_matrix, risk_aversion
    )

    # Setup VQE with portfolio Hamiltonian
    self.algorithms['vqe'].hamiltonian = hamiltonian
    self.algorithms['vqe'].fit()

    # Setup QAOA for discrete version
    self.algorithms['qaoa'].fit(X)

    self.is_fitted = True
    return self

predict

predict(X: ndarray, **kwargs) -> np.ndarray

Predict optimal portfolio weights.

Source code in src/superquantx/algorithms/quantum_agents.py
def predict(self, X: np.ndarray, **kwargs) -> np.ndarray:
    """Predict optimal portfolio weights."""
    if not self.is_fitted:
        raise ValueError("Agent must be fitted before prediction")

    # Use the trained algorithms to find optimal weights
    result = self.solve(X, **kwargs)
    return result.result.get('weights', np.ones(X.shape[1]) / X.shape[1])

solve

solve(problem_instance: ndarray, **kwargs) -> QuantumResult

Solve portfolio optimization problem.

Parameters:

Name Type Description Default
problem_instance ndarray

Returns data or problem specification

required
**kwargs

Solving parameters

{}

Returns:

Type Description
QuantumResult

Portfolio optimization result

Source code in src/superquantx/algorithms/quantum_agents.py
def solve(self, problem_instance: np.ndarray, **kwargs) -> QuantumResult:
    """Solve portfolio optimization problem.

    Args:
        problem_instance: Returns data or problem specification
        **kwargs: Solving parameters

    Returns:
        Portfolio optimization result

    """
    start_time = time.time()

    try:
        method = kwargs.get('method', 'vqe')

        if method == 'vqe':
            # Use VQE for continuous optimization
            vqe_result = self.algorithms['vqe'].optimize()
            optimal_params = vqe_result['optimal_params']

            # Convert quantum parameters to portfolio weights
            n_assets = len(self.expected_returns)
            weights = self._params_to_weights(optimal_params, n_assets)

        elif method == 'qaoa':
            # Use QAOA for discrete optimization
            self.algorithms['qaoa'].optimize(lambda x: self._portfolio_objective(x))
            optimal_solution = self.algorithms['qaoa'].predict(problem_instance)

            # Convert binary solution to weights
            weights = self._binary_to_weights(optimal_solution)

        else:
            raise ValueError(f"Unknown optimization method: {method}")

        # Apply constraints
        weights = self._apply_constraints(weights)

        # Compute portfolio metrics
        expected_return = np.dot(weights, self.expected_returns)
        portfolio_risk = np.sqrt(np.dot(weights, np.dot(self.covariance_matrix, weights)))
        sharpe_ratio = expected_return / portfolio_risk if portfolio_risk > 0 else 0

        result = {
            'weights': weights,
            'expected_return': expected_return,
            'risk': portfolio_risk,
            'sharpe_ratio': sharpe_ratio,
            'method': method,
        }

        return QuantumResult(
            result=result,
            metadata={
                'n_assets': len(weights),
                'optimization_method': method,
                'constraints_applied': len(self.constraints),
            },
            execution_time=time.time() - start_time,
            backend_info=self.get_circuit_info(),
        )

    except Exception as e:
        logger.error(f"Portfolio optimization failed: {e}")
        return QuantumResult(
            result=None,
            metadata={'error': str(e)},
            execution_time=time.time() - start_time,
            backend_info=self.get_circuit_info(),
            error=str(e),
        )
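
The portfolio metrics returned above follow the standard mean-variance formulas (expected return w·mu, risk sqrt(w^T Sigma w), Sharpe ratio return/risk). A self-contained NumPy illustration on a toy three-asset problem:

import numpy as np

weights = np.array([0.5, 0.3, 0.2])
expected_returns = np.array([0.08, 0.05, 0.12])
covariance = np.array([
    [0.04, 0.01, 0.00],
    [0.01, 0.02, 0.00],
    [0.00, 0.00, 0.09],
])

expected_return = weights @ expected_returns    # w . mu
risk = np.sqrt(weights @ covariance @ weights)  # sqrt(w^T Sigma w)
sharpe = expected_return / risk if risk > 0 else 0.0

print(expected_return, risk, sharpe)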

superquantx.algorithms.quantum_agents.QuantumOptimizationAgent

QuantumOptimizationAgent(backend: str | Any, problem_type: str = 'combinatorial', algorithms: list[str] | None = None, **kwargs)

Bases: QuantumAgent

Quantum agent for optimization problems.

This agent provides a unified interface for solving various optimization problems using QAOA, VQE, and other quantum optimization algorithms.

Parameters:

Name Type Description Default
backend str | Any

Quantum backend

required
problem_type str

Type of optimization ('combinatorial', 'continuous', 'mixed')

'combinatorial'
algorithms list[str] | None

List of algorithms to use

None
**kwargs

Additional parameters

{}
Source code in src/superquantx/algorithms/quantum_agents.py
def __init__(
    self,
    backend: str | Any,
    problem_type: str = 'combinatorial',
    algorithms: list[str] | None = None,
    **kwargs
) -> None:
    agent_config = {
        'problem_type': problem_type,
        'algorithms': algorithms or ['qaoa', 'vqe'],
    }

    self.problem_type = problem_type
    self.available_algorithms = algorithms or ['qaoa', 'vqe']

    super().__init__(backend=backend, agent_config=agent_config, **kwargs)

Functions

fit

fit(X: ndarray, y: ndarray | None = None, **kwargs) -> QuantumOptimizationAgent

Fit optimization algorithms to problem.

Source code in src/superquantx/algorithms/quantum_agents.py
def fit(self, X: np.ndarray, y: np.ndarray | None = None, **kwargs) -> 'QuantumOptimizationAgent':
    """Fit optimization algorithms to problem."""
    for name, algorithm in self.algorithms.items():
        algorithm.fit(X, y)

    self.is_fitted = True
    return self

predict

predict(X: ndarray, **kwargs) -> np.ndarray

Predict optimal solution.

Source code in src/superquantx/algorithms/quantum_agents.py
def predict(self, X: np.ndarray, **kwargs) -> np.ndarray:
    """Predict optimal solution."""
    if not self.is_fitted:
        raise ValueError("Agent must be fitted before prediction")

    # Use the algorithm most suitable for the problem type
    if self.problem_type == 'combinatorial' and 'qaoa' in self.algorithms:
        return self.algorithms['qaoa'].predict(X)
    elif self.problem_type == 'continuous' and 'vqe' in self.algorithms:
        return self.algorithms['vqe'].predict(X)
    else:
        # Use first available algorithm
        first_algo = next(iter(self.algorithms.values()))
        return first_algo.predict(X)

solve

solve(problem_instance: Any, **kwargs) -> QuantumResult

Solve optimization problem.

Source code in src/superquantx/algorithms/quantum_agents.py
def solve(self, problem_instance: Any, **kwargs) -> QuantumResult:
    """Solve optimization problem."""
    import time

    start_time = time.time()

    try:
        # Choose algorithm based on problem type
        if self.problem_type == 'combinatorial' and 'qaoa' in self.algorithms:
            algorithm = self.algorithms['qaoa']
            result = algorithm.optimize(problem_instance, **kwargs)
        elif self.problem_type == 'continuous' and 'vqe' in self.algorithms:
            algorithm = self.algorithms['vqe']
            result = algorithm.optimize(**kwargs)
        else:
            # Use first available algorithm
            algorithm = next(iter(self.algorithms.values()))
            if hasattr(algorithm, 'optimize'):
                result = algorithm.optimize(problem_instance, **kwargs)
            else:
                raise ValueError("No suitable optimization algorithm available")

        return QuantumResult(
            result=result,
            metadata={
                'problem_type': self.problem_type,
                'algorithm_used': algorithm.__class__.__name__,
            },
            execution_time=time.time() - start_time,
            backend_info=self.get_circuit_info(),
        )

    except Exception as e:
        logger.error(f"Optimization failed: {e}")
        return QuantumResult(
            result=None,
            metadata={'error': str(e)},
            execution_time=time.time() - start_time,
            backend_info=self.get_circuit_info(),
            error=str(e),
        )

superquantx.algorithms.quantum_agents.QuantumClassificationAgent

QuantumClassificationAgent(backend: str | Any, algorithms: list[str] | None = None, ensemble_method: str = 'voting', auto_tune: bool = False, **kwargs)

Bases: QuantumAgent

Quantum agent for classification tasks.

This agent combines multiple quantum classifiers and provides automatic model selection, hyperparameter optimization, and ensemble methods for robust classification.

Parameters:

Name Type Description Default
backend str | Any

Quantum backend

required
algorithms list[str] | None

List of algorithms to include ('quantum_svm', 'quantum_nn', 'hybrid')

None
ensemble_method str

How to combine predictions ('voting', 'weighted', 'stacking')

'voting'
auto_tune bool

Whether to automatically tune hyperparameters

False
**kwargs

Additional parameters

{}
Source code in src/superquantx/algorithms/quantum_agents.py
def __init__(
    self,
    backend: str | Any,
    algorithms: list[str] | None = None,
    ensemble_method: str = 'voting',
    auto_tune: bool = False,
    **kwargs
) -> None:
    agent_config = {
        'algorithms': algorithms or ['quantum_svm', 'quantum_nn'],
        'ensemble_method': ensemble_method,
        'auto_tune': auto_tune,
    }

    super().__init__(backend=backend, agent_config=agent_config, **kwargs)

    self.ensemble_method = ensemble_method
    self.auto_tune = auto_tune
    self.available_algorithms = algorithms or ['quantum_svm', 'quantum_nn']

Functions

fit

fit(X: ndarray, y: ndarray, **kwargs) -> QuantumClassificationAgent

Fit all classification algorithms.

Source code in src/superquantx/algorithms/quantum_agents.py
def fit(self, X: np.ndarray, y: np.ndarray, **kwargs) -> 'QuantumClassificationAgent':
    """Fit all classification algorithms."""
    logger.info(f"Fitting classification agent with {len(self.algorithms)} algorithms")

    for name, algorithm in self.algorithms.items():
        try:
            logger.info(f"Training {name}")
            algorithm.fit(X, y)

            # Evaluate performance
            predictions = algorithm.predict(X)
            accuracy = accuracy_score(y, predictions)
            self.performance_metrics[name] = accuracy

            logger.info(f"{name} training accuracy: {accuracy:.4f}")

        except Exception as e:
            logger.error(f"Failed to train {name}: {e}")
            self.performance_metrics[name] = 0.0

    self.is_fitted = True
    return self

predict

predict(X: ndarray, **kwargs) -> np.ndarray

Make ensemble predictions.

Source code in src/superquantx/algorithms/quantum_agents.py
def predict(self, X: np.ndarray, **kwargs) -> np.ndarray:
    """Make ensemble predictions."""
    if not self.is_fitted:
        raise ValueError("Agent must be fitted before prediction")

    predictions = {}

    # Get predictions from all algorithms
    for name, algorithm in self.algorithms.items():
        try:
            predictions[name] = algorithm.predict(X)
        except Exception as e:
            logger.error(f"Failed to get predictions from {name}: {e}")

    if not predictions:
        raise ValueError("No successful predictions from any algorithm")

    # Combine predictions based on ensemble method
    if self.ensemble_method == 'voting':
        return self._majority_voting(predictions)
    elif self.ensemble_method == 'weighted':
        return self._weighted_voting(predictions)
    else:
        # Return best performing algorithm's predictions
        best_algo = max(self.performance_metrics.items(), key=lambda x: x[1])[0]
        return predictions.get(best_algo, list(predictions.values())[0])

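For reference, per-sample majority voting over the collected predictions can be sketched as below. The _majority_voting helper referenced in predict is private and its implementation is not reproduced here, so this is only an illustrative stand-in, not the library's code.

import numpy as np

def majority_vote(predictions: dict) -> np.ndarray:
    """Illustrative stand-in: per-sample majority vote across algorithms."""
    # Stack per-algorithm predictions into shape (n_algorithms, n_samples)
    stacked = np.stack(list(predictions.values()))
    n_samples = stacked.shape[1]
    voted = np.empty(n_samples, dtype=stacked.dtype)
    for i in range(n_samples):
        labels, counts = np.unique(stacked[:, i], return_counts=True)
        voted[i] = labels[np.argmax(counts)]  # ties resolve to the smallest label
    return voted
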
solve

solve(problem_instance: tuple[ndarray, ndarray], **kwargs) -> QuantumResult

Solve classification problem.

Source code in src/superquantx/algorithms/quantum_agents.py
def solve(self, problem_instance: tuple[np.ndarray, np.ndarray], **kwargs) -> QuantumResult:
    """Solve classification problem."""
    import time

    X, y = problem_instance
    start_time = time.time()

    try:
        # Fit and predict
        self.fit(X, y)
        predictions = self.predict(X)

        # Compute metrics
        accuracy = accuracy_score(y, predictions)

        result = {
            'predictions': predictions,
            'accuracy': accuracy,
            'individual_performances': self.performance_metrics.copy(),
            'ensemble_method': self.ensemble_method,
        }

        return QuantumResult(
            result=result,
            metadata={
                'n_samples': len(X),
                'n_features': X.shape[1],
                'n_classes': len(np.unique(y)),
                'algorithms_used': list(self.algorithms.keys()),
            },
            execution_time=time.time() - start_time,
            backend_info=self.get_circuit_info(),
        )

    except Exception as e:
        logger.error(f"Classification failed: {e}")
        return QuantumResult(
            result=None,
            metadata={'error': str(e)},
            execution_time=time.time() - start_time,
            backend_info=self.get_circuit_info(),
            error=str(e),
        )

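A short end-to-end sketch of the classification agent on a synthetic dataset. The result fields follow the solve implementation above; the dataset is purely illustrative, and the agent is assumed to be exported at the package top level like the other agents.

import numpy as np
from sklearn.datasets import make_classification
import superquantx as sqx

# Toy dataset for illustration
X, y = make_classification(n_samples=60, n_features=4, n_classes=2, random_state=0)

clf_agent = sqx.QuantumClassificationAgent(
    backend='simulator',
    algorithms=['quantum_svm', 'quantum_nn'],
    ensemble_method='voting',
)

result = clf_agent.solve((X, y))  # solve expects a (features, labels) tuple
if result.result is not None:
    print(f"Ensemble accuracy: {result.result['accuracy']:.3f}")
    print(f"Per-algorithm accuracy: {result.result['individual_performances']}")
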
Examples and Usage Patterns

Machine Learning Example

import numpy as np
from sklearn.datasets import make_classification
import superquantx as sqx

# Generate sample data
X, y = make_classification(
    n_samples=100, 
    n_features=4, 
    n_classes=2, 
    random_state=42
)

# Split data
X_train, X_test = X[:80], X[80:]
y_train, y_test = y[:80], y[80:]

# Create and train Quantum SVM
qsvm = sqx.QuantumSVM(
    backend='simulator',
    feature_map='ZFeatureMap',
    num_features=4
)

qsvm.fit(X_train, y_train)
predictions = qsvm.predict(X_test)
accuracy = qsvm.score(X_test, y_test)

print(f"Quantum SVM Accuracy: {accuracy:.3f}")

VQE Molecule Example

import superquantx as sqx

# Create VQE for H2 molecule
vqe = sqx.create_vqe_for_molecule(
    molecule='H2',
    bond_length=0.735,  # Angstroms
    backend='simulator'
)

# Find ground state
ground_energy = vqe.find_ground_state()
print(f"H2 Ground State Energy: {ground_energy:.6f} Ha")

# Get optimized parameters
optimal_params = vqe.get_optimal_parameters()
print(f"Optimal parameters: {optimal_params}")

QAOA Optimization Example

import superquantx as sqx

# Define Max-Cut problem
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

qaoa = sqx.QAOA(
    problem_type='max_cut',
    graph_edges=edges,
    layers=3,
    backend='simulator'
)

# Solve optimization problem
solution = qaoa.solve()
optimal_value = qaoa.get_optimal_value()

print(f"Max-Cut Solution: {solution}")
print(f"Cut Value: {optimal_value}")

Quantum Agent Example

import superquantx as sqx

# Create quantum portfolio optimization agent
portfolio_agent = sqx.QuantumPortfolioAgent(
    backend='simulator',
    risk_tolerance=0.1,
    optimization_method='qaoa'
)

# Sample portfolio data
assets = ['AAPL', 'GOOGL', 'MSFT', 'TSLA']
returns = [0.12, 0.15, 0.08, 0.20]
risks = [0.05, 0.10, 0.03, 0.15]

# Optimize portfolio
optimal_weights = portfolio_agent.optimize_portfolio(
    assets=assets,
    expected_returns=returns,
    risk_estimates=risks,
    budget=10000
)

print("Optimal Portfolio:")
for asset, weight in zip(assets, optimal_weights):
    print(f"{asset}: {weight:.2%}")

Best Practices

Algorithm Selection

  1. For Classification: Use QuantumSVM for small datasets, QuantumNN for complex patterns
  2. For Optimization: Use VQE for chemistry problems, QAOA for combinatorial optimization
  3. For Autonomous Tasks: Use specialized QuantumAgent implementations (a minimal selection sketch follows this list)

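A minimal sketch of these selection rules as a dispatch table. The task labels and the select_algorithm helper are hypothetical conveniences, not part of the SuperQuantX API, and exposing VQE as sqx.VQE is assumed here in line with how QAOA is used above.

import superquantx as sqx

# Hypothetical mapping from a task label to a documented algorithm class
TASK_TO_ALGORITHM = {
    'small_classification': sqx.QuantumSVM,
    'complex_classification': sqx.QuantumNN,
    'chemistry': sqx.VQE,
    'combinatorial': sqx.QAOA,
}

def select_algorithm(task: str, **kwargs):
    """Return an algorithm instance for the given task label."""
    algorithm_class = TASK_TO_ALGORITHM[task]
    return algorithm_class(backend='simulator', **kwargs)

qsvm = select_algorithm('small_classification', feature_map='ZFeatureMap')
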
Backend Optimization

# Choose appropriate backend for algorithm
algorithms_backends = {
    'QuantumSVM': 'pennylane',      # Best autodiff support
    'VQE': 'qiskit',                # Good hardware access
    'QAOA': 'cirq',                 # Flexible circuit construction
    'QuantumAgents': 'simulator',    # Fast prototyping
}

# Use backend-specific optimizations
qsvm = sqx.QuantumSVM(
    backend='pennylane',
    feature_map='ZZFeatureMap',     # More expressive for PennyLane
    optimization_level=2
)

Performance Monitoring

import time
import superquantx as sqx

# Benchmark different algorithms
def benchmark_algorithm(algorithm_class, X, y, **kwargs):
    start_time = time.time()

    algorithm = algorithm_class(**kwargs)
    algorithm.fit(X, y)
    predictions = algorithm.predict(X)

    execution_time = time.time() - start_time
    accuracy = algorithm.score(X, y)

    return {
        'accuracy': accuracy,
        'time': execution_time,
        'algorithm': algorithm_class.__name__
    }

# Compare algorithms
algorithms = [sqx.QuantumSVM, sqx.QuantumNN, sqx.HybridClassifier]
results = []

for algo in algorithms:
    result = benchmark_algorithm(algo, X_train, y_train, backend='simulator')
    results.append(result)
    print(f"{result['algorithm']}: {result['accuracy']:.3f} accuracy in {result['time']:.2f}s")

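Beyond wall-clock timing, algorithms derived from BaseQuantumAlgorithm also carry execution_times, training_history, and backend_stats attributes. The sketch below inspects them after training, assuming the concrete algorithm populates these fields during fit; their contents vary by algorithm and backend.

import superquantx as sqx
from sklearn.datasets import make_classification

# Small synthetic dataset, as in the Machine Learning Example above
X_train, y_train = make_classification(n_samples=80, n_features=4, random_state=42)

qsvm = sqx.QuantumSVM(backend='simulator', feature_map='ZFeatureMap', num_features=4)
qsvm.fit(X_train, y_train)

# Built-in tracking from BaseQuantumAlgorithm
print(f"Recorded executions: {len(qsvm.execution_times)}")
print(f"Training history entries: {len(qsvm.training_history)}")
print(f"Backend stats: {qsvm.backend_stats}")
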
For complete algorithm implementations and advanced usage patterns, see:

  - Backend Integration Guide
  - Tutorial Examples
  - User Guide