madminer.ml module

class madminer.ml.ConditionalEstimator(features=None, n_hidden=(100, ), activation='tanh', dropout_prob=0.0)[source]

Bases: madminer.ml.Estimator

Abstract class for estimator that is conditional on theta. Subclassed by ParameterizedRatioEstimator, DoubleParameterizedRatioEstimator, and LikelihoodEstimator (but not ScoreEstimator).

Adds functionality to rescale parameters.

Methods

evaluate_log_likelihood(self, *args, **kwargs) Log likelihood estimation.
evaluate_log_likelihood_ratio(self, *args, …) Log likelihood ratio estimation.
evaluate_score(self, *args, **kwargs) Score estimation.
load(self, filename) Loads a trained model from files.
save(self, filename[, save_model]) Saves the trained model to four files: a JSON file with the settings, a pickled PyTorch state dict file, and numpy files for the mean and variance of the inputs (used for input scaling).
calculate_fisher_information  
evaluate  
initialize_input_transform  
initialize_parameter_transform  
train  
initialize_parameter_transform(self, theta, transform=True, overwrite=True)[source]
load(self, filename)[source]

Loads a trained model from files.

Parameters:
filename : str

Path to the files. ‘_settings.json’ and ‘_state_dict.pt’ will be added.

Returns:
None
save(self, filename, save_model=False)[source]

Saves the trained model to four files: a JSON file with the settings, a pickled PyTorch state dict file, and numpy files for the mean and variance of the inputs (used for input scaling).

Parameters:
filename : str

Path to the files. ‘_settings.json’ and ‘_state_dict.pt’ will be added.

save_model : bool, optional

If True, the whole model is saved in addition to the state dict. This is not necessary for loading it again with Estimator.load(), but can be useful for debugging, for instance to plot the computational graph.

Returns:
None
class madminer.ml.DoubleParameterizedRatioEstimator(features=None, n_hidden=(100, ), activation='tanh', dropout_prob=0.0)[source]

Bases: madminer.ml.ConditionalEstimator

A neural estimator of the likelihood ratio as a function of the observation x, the numerator hypothesis theta0, and the denominator hypothesis theta1.

Parameters:
features : list of int or None, optional

Indices of observables (features) that are used as input to the neural networks. If None, all observables are used. Default value: None.

n_hidden : tuple of int, optional

Units in each hidden layer in the neural networks. If method is ‘nde’ or ‘scandal’, this refers to the setup of each individual MADE layer. Default value: (100,).

activation : {‘tanh’, ‘sigmoid’, ‘relu’}, optional

Activation function. Default value: ‘tanh’.

dropout_prob : float, optional

Dropout probability. Default value: 0.0.

Methods

evaluate_log_likelihood(self, *args, **kwargs) Log likelihood estimation.
evaluate_log_likelihood_ratio(self, x, …) Evaluates the log likelihood ratio as a function of the observation x, the numerator hypothesis theta0, and the denominator hypothesis theta1.
evaluate_score(self, *args, **kwargs) Score estimation.
load(self, filename) Loads a trained model from files.
save(self, filename[, save_model]) Saves the trained model to four files: a JSON file with the settings, a pickled PyTorch state dict file, and numpy files for the mean and variance of the inputs (used for input scaling).
train(self, method, x, y, theta0, theta1[, …]) Trains the network.
calculate_fisher_information  
evaluate  
initialize_input_transform  
initialize_parameter_transform  
calculate_fisher_information(self, *args, **kwargs)[source]
evaluate(self, *args, **kwargs)[source]
evaluate_log_likelihood(self, *args, **kwargs)[source]

Log likelihood estimation. Signature depends on the type of estimator. The first returned value is the log likelihood with shape (n_thetas, n_x).

evaluate_log_likelihood_ratio(self, x, theta0, theta1, test_all_combinations=True, evaluate_score=False)[source]

Evaluates the log likelihood ratio as a function of the observation x, the numerator hypothesis theta0, and the denominator hypothesis theta1.

Parameters:
x : str or ndarray

Observations or filename of a pickled numpy array.

theta0 : ndarray or str

Numerator parameter points or filename of a pickled numpy array.

theta1 : ndarray or str

Denominator parameter points or filename of a pickled numpy array.

test_all_combinations : bool, optional

If False, the number of samples in the observable and theta files has to match, and the likelihood ratio is evaluated only for the combinations r(x_i | theta0_i, theta1_i). If True, r(x_i | theta0_j, theta1_j) for all pairwise combinations i, j is evaluated. Default value: True.

evaluate_score : bool, optional

Sets whether in addition to the likelihood ratio the score is evaluated. Default value: False.

Returns:
log_likelihood_ratio : ndarray

The estimated log likelihood ratio. If test_all_combinations is True, the result has shape (n_thetas, n_x). Otherwise, it has shape (n_samples,).

score0 : ndarray or None

None if evaluate_score is False. Otherwise the derived estimated score at theta0. If test_all_combinations is True, the result has shape (n_thetas, n_x, n_parameters). Otherwise, it has shape (n_samples, n_parameters).

score1 : ndarray or None

None if evaluate_score is False. Otherwise the derived estimated score at theta1. If test_all_combinations is True, the result has shape (n_thetas, n_x, n_parameters). Otherwise, it has shape (n_samples, n_parameters).
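
For illustration, a short evaluation sketch, assuming a trained DoubleParameterizedRatioEstimator named estimator and hypothetical input files:

    import numpy as np

    x = np.load("x_test.npy")            # shape (n_x, n_observables)
    theta0 = np.load("theta0_grid.npy")  # shape (n_thetas, n_parameters)
    theta1 = np.load("theta1_grid.npy")  # shape (n_thetas, n_parameters)

    # With test_all_combinations=True, log_r has shape (n_thetas, n_x) and the
    # scores have shape (n_thetas, n_x, n_parameters).
    log_r, score0, score1 = estimator.evaluate_log_likelihood_ratio(
        x=x,
        theta0=theta0,
        theta1=theta1,
        test_all_combinations=True,
        evaluate_score=True,
    )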

evaluate_score(self, *args, **kwargs)[source]

Score estimation. Signature depends on the type of estimator. The only returned value is the score with shape (n_x).

train(self, method, x, y, theta0, theta1, r_xz=None, t_xz0=None, t_xz1=None, x_val=None, y_val=None, theta0_val=None, theta1_val=None, r_xz_val=None, t_xz0_val=None, t_xz1_val=None, alpha=1.0, optimizer='amsgrad', n_epochs=50, batch_size=128, initial_lr=0.001, final_lr=0.0001, nesterov_momentum=None, validation_split=0.25, early_stopping=True, scale_inputs=True, shuffle_labels=False, limit_samplesize=None, memmap=False, verbose='some', scale_parameters=True, n_workers=8, clip_gradient=None)[source]

Trains the network.

Parameters:
method : str

The inference method used for training. Allowed values are ‘alice’, ‘alices’, ‘carl’, ‘cascal’, ‘rascal’, and ‘rolr’.

x : ndarray or str

Observations, or filename of a pickled numpy array.

y : ndarray or str

Class labels (0 = numerator, 1 = denominator), or filename of a pickled numpy array.

theta0 : ndarray or str

Numerator parameter point, or filename of a pickled numpy array.

theta1 : ndarray or str

Denominator parameter point, or filename of a pickled numpy array.

r_xz : ndarray or str or None, optional

Joint likelihood ratio, or filename of a pickled numpy array. Default value: None.

t_xz0 : ndarray or str or None, optional

Joint scores at theta0, or filename of a pickled numpy array. Default value: None.

t_xz1 : ndarray or str or None, optional

Joint scores at theta1, or filename of a pickled numpy array. Default value: None.

x_val : ndarray or str or None, optional

Validation observations, or filename of a pickled numpy array. If None and validation_split > 0, validation data will be randomly selected from the training data. Default value: None.

y_val : ndarray or str or None, optional

Validation labels (0 = numerator, 1 = denominator), or filename of a pickled numpy array. If None and validation_split > 0, validation data will be randomly selected from the training data. Default value: None.

theta0_val : ndarray or str or None, optional

Validation numerator parameter points, or filename of a pickled numpy array. If None and validation_split > 0, validation data will be randomly selected from the training data. Default value: None.

theta1_val : ndarray or str or None, optional

Validation denominator parameter points, or filename of a pickled numpy array. If None and validation_split > 0, validation data will be randomly selected from the training data. Default value: None.

r_xz_val : ndarray or str or None, optional

Validation joint likelihood ratio, or filename of a pickled numpy array. If None and validation_split > 0, validation data will be randomly selected from the training data. Default value: None.

t_xz0_val : ndarray or str or None, optional

Validation joint scores at theta0, or filename of a pickled numpy array. If None and validation_split > 0, validation data will be randomly selected from the training data. Default value: None.

t_xz1_val : ndarray or str or None, optional

Validation joint scores at theta1, or filename of a pickled numpy array. If None and validation_split > 0, validation data will be randomly selected from the training data. Default value: None.

alpha : float, optional

Hyperparameter weighting the score error in the loss function of the ‘alices’, ‘rascal’, and ‘cascal’ methods. Default value: 1.

optimizer : {“adam”, “amsgrad”, “sgd”}, optional

Optimization algorithm. Default value: “amsgrad”.

n_epochs : int, optional

Number of epochs. Default value: 50.

batch_size : int, optional

Batch size. Default value: 128.

initial_lr : float, optional

Learning rate during the first epoch, after which it exponentially decays to final_lr. Default value: 0.001.

final_lr : float, optional

Learning rate during the last epoch. Default value: 0.0001.

nesterov_momentum : float or None, optional

If trainer is “sgd”, sets the Nesterov momentum. Default value: None.

validation_split : float or None, optional

Fraction of samples used for validation and early stopping (if early_stopping is True). If None, the entire sample is used for training and early stopping is deactivated. Default value: 0.25.

early_stopping : bool, optional

Activates early stopping based on the validation loss (only if validation_split is not None). Default value: True.

scale_inputs : bool, optional

Scale the observables to zero mean and unit variance. Default value: True.

shuffle_labels : bool, optional

If True, the labels (y, r_xz, t_xz) are shuffled, while the observations (x) remain in their normal order. This serves as a closure test, in particular as a cross-check against overfitting: an estimator trained with shuffle_labels=True should predict likelihood ratios around 1 and scores around 0. Default value: False.

limit_samplesize : int or None, optional

If not None, only this number of samples (events) is used to train the estimator. Default value: None.

memmap : bool, optional

If True, training files larger than 1 GB will not be loaded into memory at once. Default value: False.

verbose : {“all”, “many”, “some”, “few”, “none”}, optional

Determines verbosity of training. Default value: “some”.

scale_parameters : bool, optional

Whether parameters are rescaled to mean zero and unit variance before going into the neural network. Default value: True.

Returns:
None
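
A minimal training sketch, assuming training samples produced with madminer.sampling and stored under hypothetical filenames:

    from madminer.ml import DoubleParameterizedRatioEstimator

    estimator = DoubleParameterizedRatioEstimator(n_hidden=(100,), activation="tanh")
    estimator.train(
        method="alices",
        x="x_train.npy",
        y="y_train.npy",
        theta0="theta0_train.npy",
        theta1="theta1_train.npy",
        r_xz="r_xz_train.npy",    # joint likelihood ratio, used by 'alices'
        t_xz0="t_xz0_train.npy",  # joint scores at theta0
        t_xz1="t_xz1_train.npy",  # joint scores at theta1
        alpha=1.0,
        n_epochs=50,
        batch_size=128,
    )
    estimator.save("models/alices2")
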
class madminer.ml.Ensemble(estimators=None)[source]

Bases: object

Ensemble methods for likelihood, likelihood ratio, and score estimation.

Generally, Ensemble instances can be used very similarly to Estimator instances:

  • The initialization of Ensemble takes a list of (trained or untrained) Estimator instances.
  • The methods Ensemble.train_one() and Ensemble.train_all() train the estimators (this can also be done outside of Ensemble).
  • Ensemble.calculate_expectation() can be used to calculate the expectation of the estimated likelihood ratio or the expected estimated score over a validation sample. Ideally (and assuming the correct sampling), these expectation values should be close to zero. Deviations from zero therefore indicate that the estimator is probably inaccurate.
  • Ensemble.evaluate_log_likelihood(), Ensemble.evaluate_log_likelihood_ratio(), Ensemble.evaluate_score(), and Ensemble.calculate_fisher_information() can then be used to calculate ensemble predictions.
  • Ensemble.save() and Ensemble.load() can store all estimators in one folder.

The individual estimators in the ensemble can be trained with different methods, but they have to be of the same type: either all estimators are ParameterizedRatioEstimator instances, or all estimators are DoubleParameterizedRatioEstimator instances, or all estimators are ScoreEstimator instances, or all estimators are LikelihoodEstimator instances.
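
A minimal sketch of this workflow for an ensemble of ParameterizedRatioEstimator instances (the filenames are hypothetical placeholders):

    from madminer.ml import Ensemble, ParameterizedRatioEstimator

    # All members have to be of the same estimator type.
    ensemble = Ensemble([ParameterizedRatioEstimator(n_hidden=(100,)) for _ in range(3)])

    # Train every member; per-estimator settings can also be passed as lists,
    # see train_all() below.
    ensemble.train_all(
        method="alices",
        x="x_train.npy",
        y="y_train.npy",
        theta="theta_train.npy",
        r_xz="r_xz_train.npy",
        t_xz="t_xz_train.npy",
    )

    # Ensemble prediction: mean over the members, plus their covariance.
    mean_log_r, covariance = ensemble.evaluate_log_likelihood_ratio(
        x="x_test.npy", theta="theta_grid.npy", calculate_covariance=True
    )

    ensemble.save("models/ensemble")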

Parameters:
estimators : None or list of Estimator, optional

If list, sets the estimators directly, either from Estimator instances or filenames (which are then loaded with Estimator.load()). If None, the ensemble is initialized without estimators. Note that the estimators have to be consistent: either all of them are trained with a local score method (‘sally’ or ‘sallino’); or all of them are trained with a single-parameterized method (‘carl’, ‘rolr’, ‘rascal’, ‘scandal’, ‘alice’, or ‘alices’); or all of them are trained with a doubly parameterized method (‘carl2’, ‘rolr2’, ‘rascal2’, ‘alice2’, or ‘alices2’). Mixing estimators of different types within one of these three categories is supported, but mixing estimators from different categories is not and will raise a RuntimeError. Default value: None.

Attributes:
estimators : list of Estimator

The estimators in the form of Estimator instances.

Methods

add_estimator(self, estimator) Adds an estimator to the ensemble.
calculate_fisher_information(self, x[, …]) Calculates expected Fisher information matrices for an ensemble of ScoreEstimator instances.
evaluate_log_likelihood(self[, …]) Estimates the log likelihood from each estimator and returns the ensemble mean (and, if calculate_covariance is True, the covariance between them).
evaluate_log_likelihood_ratio(self[, …]) Estimates the log likelihood ratio from each estimator and returns the ensemble mean (and, if calculate_covariance is True, the covariance between them).
evaluate_score(self[, estimator_weights, …]) Estimates the score from each estimator and returns the ensemble mean (and, if calculate_covariance is True, the covariance between them).
load(self, folder) Loads the estimator ensemble from a folder.
save(self, folder[, save_model]) Saves the estimator ensemble to a folder.
train_all(self, **kwargs) Trains all estimators.
train_one(self, i, **kwargs) Trains an individual estimator.
add_estimator(self, estimator)[source]

Adds an estimator to the ensemble.

Parameters:
estimator : Estimator

The estimator.

Returns:
None
calculate_fisher_information(self, x, obs_weights=None, estimator_weights=None, n_events=1, mode='score', calculate_covariance=True, sum_events=True, epsilon_shift=0.001)[source]

Calculates expected Fisher information matrices for an ensemble of ScoreEstimator instances.

There are two ways of calculating the ensemble average. In the default “score” mode, the ensemble average for the score is calculated for each event, and the Fisher information is calculated based on these mean scores. In the “information” mode, the Fisher information is calculated for each estimator separately and the ensemble mean is calculated only for the final Fisher information matrix. The “score” mode is generally assumed to be more precise and is the default.

In the “score” mode, the covariance matrix of the final result is calculated in the following way:

  • For each event x and each estimator a, the “shifted” predicted score is calculated as t_a’(x) = t(x) + 1/sqrt(n) * (t_a(x) - t(x)). Here t(x) is the mean score (averaged over the ensemble) for this event, t_a(x) is the prediction of estimator a for this event, and n is the number of estimators. The ensemble variance of these shifted score predictions is equal to the uncertainty on the mean of the ensemble of original predictions.
  • For each estimator a, the shifted Fisher information matrix I_a’ is calculated from the shifted predicted scores.
  • The ensemble covariance between all shifted Fisher information matrices I_a’ is calculated and taken as the measure of uncertainty on the Fisher information calculated from the mean scores.

In the “information” mode, the user has the option to treat all estimators equally (the “committee method”) or to give those with an expected score close to zero (as calculated by calculate_expectation()) a higher weight. In this case, the ensemble mean I is calculated as I = sum_i w_i I_i with weights w_i = exp(-vote_expectation_weight |E[t_i]|) / sum_j exp(-vote_expectation_weight |E[t_j]|). Here the I_i are the individual Fisher information matrices and E[t_i] is the expectation value calculated by calculate_expectation().
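
For concreteness, the following standalone numpy sketch mimics the “score” mode error propagation described above. It assumes a SALLY-style Fisher information I = n_events * E_x[t(x) t(x)^T] and toy score predictions; it is an illustration, not MadMiner’s internal implementation:

    import numpy as np

    rng = np.random.default_rng(0)
    n_estimators, n_events, n_parameters = 5, 1000, 2

    # Toy ensemble score predictions, shape (n_estimators, n_events, n_parameters).
    t = rng.normal(size=(n_estimators, n_events, n_parameters))

    # Mean score t(x) per event, and shifted scores t_a'(x) = t(x) + (t_a(x) - t(x)) / sqrt(n).
    t_mean = t.mean(axis=0)
    t_shifted = t_mean[np.newaxis] + (t - t_mean[np.newaxis]) / np.sqrt(n_estimators)

    # Fisher information from the mean scores and from each set of shifted scores.
    n_expected = 10000.0
    info_mean = n_expected * np.einsum("ei,ej->ij", t_mean, t_mean) / n_events
    info_shifted = n_expected * np.einsum("aei,aej->aij", t_shifted, t_shifted) / n_events

    # Ensemble covariance of the shifted informations, indices ordered (i, j, i', j').
    flat = info_shifted.reshape(n_estimators, -1)
    covariance = np.cov(flat, rowvar=False).reshape(
        n_parameters, n_parameters, n_parameters, n_parameters
    )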

Parameters:
x : str or ndarray

Sample of observations, or path to numpy file with observations, as saved by the madminer.sampling.SampleAugmenter functions. Note that this sample has to be sampled from the reference parameter where the score is estimated with the SALLY / SALLINO estimator!

obs_weights : None or ndarray, optional

Weights for the observations. If None, all events are taken to have equal weight. Default value: None.

estimator_weights : ndarray or None, optional

Weights for each estimator in the ensemble. If None, all estimators have an equal vote. Default value: None.

n_events : float, optional

Expected number of events for which the kinematic Fisher information should be calculated. Default value: 1.

mode : {“score”, “information”}, optional

If mode is “information”, the Fisher information for each estimator is calculated individually and only then are the sample mean and covariance calculated. If mode is “score”, the sample mean is calculated for the score for each event. Default value: “score”.

calculate_covariance : bool, optional

If True, the covariance between the different estimators is calculated. Default value: True.

sum_events : bool, optional

If True or mode is “information”, the expected Fisher information summed over the events x is calculated. If False and mode is “score”, the per-event Fisher information for each event is returned. Default value: True.

epsilon_shift : float, optional

Small numerical factor in the error propagation. Default value: 0.001.

Returns:
mean_prediction : ndarray

Expected kinematic Fisher information matrix with shape (n_events, n_parameters, n_parameters) if sum_events is False and mode is “score”, or (n_parameters, n_parameters) in any other case.

covariance : ndarray or None

The covariance of the estimated Fisher information matrix. This object has four indices, cov_(ij)(i’j’), ordered as i j i’ j’. It has shape (n_parameters, n_parameters, n_parameters, n_parameters).

evaluate_log_likelihood(self, estimator_weights=None, calculate_covariance=False, **kwargs)[source]

Estimates the log likelihood from each estimator and returns the ensemble mean (and, if calculate_covariance is True, the covariance between them).

Parameters:
estimator_weights : ndarray or None, optional

Weights for each estimator in the ensemble. If None, all estimators have an equal vote. Default value: None.

calculate_covariance : bool, optional

If True, the covariance between the different estimators is calculated. Default value: False.

kwargs

Arguments for the evaluation. See the documentation of the relevant Estimator class.

Returns:
log_likelihood : ndarray

Mean prediction for the log likelihood.

covariance : ndarray or None

If calculate_covariance is True, the covariance matrix between the estimators. Otherwise None.

evaluate_log_likelihood_ratio(self, estimator_weights=None, calculate_covariance=False, **kwargs)[source]

Estimates the log likelihood ratio from each estimator and returns the ensemble mean (and, if calculate_covariance is True, the covariance between them).

Parameters:
estimator_weights : ndarray or None, optional

Weights for each estimator in the ensemble. If None, all estimators have an equal vote. Default value: None.

calculate_covariance : bool, optional

If True, the covariance between the different estimators is calculated. Default value: False.

kwargs

Arguments for the evaluation. See the documentation of the relevant Estimator class.

Returns:
log_likelihood_ratio : ndarray

Mean prediction for the log likelihood ratio.

covariance : ndarray or None

If calculate_covariance is True, the covariance matrix between the estimators. Otherwise None.

evaluate_score(self, estimator_weights=None, calculate_covariance=False, **kwargs)[source]

Estimates the score from each estimator and returns the ensemble mean (and, if calculate_covariance is True, the covariance between them).

Parameters:
estimator_weights : ndarray or None, optional

Weights for each estimator in the ensemble. If None, all estimators have an equal vote. Default value: None.

calculate_covariance : bool, optional

If True, the covariance between the different estimators is calculated. Default value: False.

kwargs

Arguments for the evaluation. See the documentation of the relevant Estimator class.

Returns:
score : ndarray

Mean prediction for the score.

covariance : ndarray or None

If calculate_covariance is True, the covariance matrix between the estimators. Otherwise None.

load(self, folder)[source]

Loads the estimator ensemble from a folder.

Parameters:
folder : str

Path to the folder.

Returns:
None
save(self, folder, save_model=False)[source]

Saves the estimator ensemble to a folder.

Parameters:
folder : str

Path to the folder.

save_model : bool, optional

If True, the whole model is saved in addition to the state dict. This is not necessary for loading it again with Ensemble.load(), but can be useful for debugging, for instance to plot the computational graph.

Returns:
None
train_all(self, **kwargs)[source]

Trains all estimators. See Estimator.train().

Parameters:
kwargs : dict

Parameters for Estimator.train(). If a value in this dict is a list, it has to have length n_estimators and contain one value of this parameter for each of the estimators. Otherwise the value is used as parameter for the training of all the estimators.

Returns:
None
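
For example, to train three ensemble members that differ only in their alpha hyperparameter (assuming an Ensemble named ensemble with three members and hypothetical filenames):

    ensemble.train_all(
        method="alices",
        x="x_train.npy",
        y="y_train.npy",
        theta="theta_train.npy",
        r_xz="r_xz_train.npy",
        t_xz="t_xz_train.npy",
        alpha=[0.1, 1.0, 10.0],  # list: one value per estimator
        n_epochs=50,             # scalar: shared by all estimators
    )
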
train_one(self, i, **kwargs)[source]

Trains an individual estimator. See Estimator.train().

Parameters:
i : int

The index 0 <= i < n_estimators of the estimator to be trained.

kwargs : dict

Parameters for Estimator.train().

Returns:
None
class madminer.ml.Estimator(features=None, n_hidden=(100, ), activation='tanh', dropout_prob=0.0)[source]

Bases: object

Abstract class for any ML estimator. Subclassed by ParameterizedRatioEstimator, DoubleParameterizedRatioEstimator, ScoreEstimator, and LikelihoodEstimator.

Each instance of this class represents one neural estimator. The most important functions are:

  • Estimator.train() to train an estimator. The keyword method determines the inference technique and whether a class instance represents a single-parameterized likelihood ratio estimator, a doubly-parameterized likelihood ratio estimator, or a local score estimator.
  • Estimator.evaluate() to evaluate the estimator.
  • Estimator.save() to save the trained model to files.
  • Estimator.load() to load the trained model from files.

Please see the tutorial for a detailed walk-through.
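
A sketch of this life cycle with a concrete subclass; the filenames are hypothetical:

    from madminer.ml import ParameterizedRatioEstimator

    estimator = ParameterizedRatioEstimator(n_hidden=(100, 100), activation="relu")
    estimator.train(
        method="alices",
        x="x_train.npy",
        y="y_train.npy",
        theta="theta_train.npy",
        r_xz="r_xz_train.npy",
        t_xz="t_xz_train.npy",
    )

    # Writes the settings JSON, the state dict, and the input scaling files.
    estimator.save("models/my_estimator")

    restored = ParameterizedRatioEstimator()
    restored.load("models/my_estimator")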

Methods

evaluate_log_likelihood(self, *args, **kwargs) Log likelihood estimation.
evaluate_log_likelihood_ratio(self, *args, …) Log likelihood ratio estimation.
evaluate_score(self, *args, **kwargs) Score estimation.
load(self, filename) Loads a trained model from files.
save(self, filename[, save_model]) Saves the trained model to four files: a JSON file with the settings, a pickled PyTorch state dict file, and numpy files for the mean and variance of the inputs (used for input scaling).
calculate_fisher_information  
evaluate  
initialize_input_transform  
train  
calculate_fisher_information(self, *args, **kwargs)[source]
evaluate(self, *args, **kwargs)[source]
evaluate_log_likelihood(self, *args, **kwargs)[source]

Log likelihood estimation. Signature depends on the type of estimator. The first returned value is the log likelihood with shape (n_thetas, n_x).

evaluate_log_likelihood_ratio(self, *args, **kwargs)[source]

Log likelihood ratio estimation. Signature depends on the type of estimator. The first returned value is the log likelihood ratio with shape (n_thetas, n_x) or (n_x).

evaluate_score(self, *args, **kwargs)[source]

Score estimation. Signature depends on the type of estimator. The only returned value is the score with shape (n_x).

initialize_input_transform(self, x, transform=True, overwrite=True)[source]
load(self, filename)[source]

Loads a trained model from files.

Parameters:
filename : str

Path to the files. ‘_settings.json’ and ‘_state_dict.pt’ will be added.

Returns:
None
save(self, filename, save_model=False)[source]

Saves the trained model to four files: a JSON file with the settings, a pickled PyTorch state dict file, and numpy files for the mean and variance of the inputs (used for input scaling).

Parameters:
filename : str

Path to the files. ‘_settings.json’ and ‘_state_dict.pt’ will be added.

save_model : bool, optional

If True, the whole model is saved in addition to the state dict. This is not necessary for loading it again with Estimator.load(), but can be useful for debugging, for instance to plot the computational graph.

Returns:
None
train(self, *args, **kwargs)[source]
class madminer.ml.LikelihoodEstimator(features=None, n_components=1, n_mades=5, n_hidden=(100, ), activation='tanh', batch_norm=None)[source]

Bases: madminer.ml.ConditionalEstimator

A neural estimator of the density or likelihood evaluated at a reference hypothesis as a function of the observation x.

Parameters:
features : list of int or None, optional

Indices of observables (features) that are used as input to the neural networks. If None, all observables are used. Default value: None.

n_components : int, optional

The number of Gaussian base components in a MADE MoG. If 1, a plain MADE is used. Default value: 1.

n_mades : int, optional

The number of MADE layers. Default value: 5.

n_hidden : tuple of int, optional

Units in each hidden layer in the neural networks. If method is ‘nde’ or ‘scandal’, this refers to the setup of each individual MADE layer. Default value: (100,).

activation : {‘tanh’, ‘sigmoid’, ‘relu’}, optional

Activation function. Default value: ‘tanh’.

batch_norm : None or float, optional

If not None, batch normalization is used, where this value sets the alpha parameter in the calculation of the running average of the mean and variance. Default value: None.

Methods

evaluate_log_likelihood(self, x, theta[, …]) Evaluates the log likelihood as a function of the observation x and the parameter point theta.
evaluate_log_likelihood_ratio(self, x, …) Evaluates the log likelihood ratio as a function of the observation x, the numerator parameter point theta0, and the denominator parameter point theta1.
evaluate_score(self, *args, **kwargs) Score estimation.
load(self, filename) Loads a trained model from files.
save(self, filename[, save_model]) Saves the trained model to four files: a JSON file with the settings, a pickled PyTorch state dict file, and numpy files for the mean and variance of the inputs (used for input scaling).
train(self, method, x, theta[, t_xz, x_val, …]) Trains the network.
calculate_fisher_information  
evaluate  
initialize_input_transform  
initialize_parameter_transform  
calculate_fisher_information(self, *args, **kwargs)[source]
evaluate(self, *args, **kwargs)[source]
evaluate_log_likelihood(self, x, theta, test_all_combinations=True, evaluate_score=False)[source]

Evaluates the log likelihood as a function of the observation x and the parameter point theta.

Parameters:
x : ndarray or str

Sample of observations, or path to numpy file with observations.

theta : ndarray or str

Parameter points, or path to numpy file with parameter points.

test_all_combinations : bool, optional

If False, the number of samples in the observable and theta files has to match, and the likelihood is evaluated only for the combinations p(x_i | theta_i). If True, p(x_i | theta_j) for all pairwise combinations i, j is evaluated. Default value: True.

evaluate_score : bool, optional

Sets whether in addition to the likelihood the score is evaluated. Default value: False.

Returns:
log_likelihood : ndarray

The estimated log likelihood. If test_all_combinations is True, the result has shape (n_thetas, n_x). Otherwise, it has shape (n_samples,).

score : ndarray or None

None if evaluate_score is False. Otherwise the derived estimated score at theta. If test_all_combinations is True, the result has shape (n_thetas, n_x, n_parameters). Otherwise, it has shape (n_samples, n_parameters).

evaluate_log_likelihood_ratio(self, x, theta0, theta1, test_all_combinations, evaluate_score=False)[source]

Evaluates the log likelihood ratio as a function of the observation x, the numerator parameter point theta0, and the denominator parameter point theta1.

Parameters:
x : ndarray or str

Sample of observations, or path to numpy file with observations.

theta0 : ndarray or str

Numerator parameters, or path to numpy file.

theta1 : ndarray or str

Denominator parameters, or path to numpy file.

test_all_combinations : bool, optional

If False, the number of samples in the observable and theta files has to match, and the likelihood ratio is evaluated only for the combinations r(x_i | theta0_i, theta1_i). If True, r(x_i | theta0_j, theta1_j) for all pairwise combinations i, j is evaluated. Default value: True.

evaluate_score : bool, optional

Sets whether in addition to the likelihood ratio the score is evaluated. Default value: False.

Returns:
log_likelihood_ratio : ndarray

The estimated log likelihood ratio. If test_all_combinations is True, the result has shape (n_thetas, n_x). Otherwise, it has shape (n_samples,).

score : ndarray or None

None if evaluate_score is False. Otherwise the derived estimated score at theta. If test_all_combinations is True, the result has shape (n_thetas, n_x, n_parameters). Otherwise, it has shape (n_samples, n_parameters).

evaluate_score(self, *args, **kwargs)[source]

Score estimation. Signature depends on the type of estimator. The only returned value is the score with shape (n_x).

train(self, method, x, theta, t_xz=None, x_val=None, theta_val=None, t_xz_val=None, alpha=1.0, optimizer='amsgrad', n_epochs=50, batch_size=128, initial_lr=0.001, final_lr=0.0001, nesterov_momentum=None, validation_split=0.25, early_stopping=True, scale_inputs=True, shuffle_labels=False, limit_samplesize=None, memmap=False, verbose='some', scale_parameters=True, n_workers=8, clip_gradient=None)[source]

Trains the network.

Parameters:
method : str

The inference method used for training. Allowed values are ‘nde’ and ‘scandal’.

x : ndarray or str

Observations, or filename of a pickled numpy array.

theta : ndarray or str

Numerator parameter point, or filename of a pickled numpy array.

t_xz : ndarray or str or None, optional

Joint scores at theta, or filename of a pickled numpy array. Default value: None.

x_val : ndarray or str or None, optional

Validation observations, or filename of a pickled numpy array. If None and validation_split > 0, validation data will be randomly selected from the training data. Default value: None.

theta_val : ndarray or str or None, optional

Validation numerator parameter points, or filename of a pickled numpy array. If None and validation_split > 0, validation data will be randomly selected from the training data. Default value: None.

t_xz_val : ndarray or str or None, optional

Validation joint scores at theta, or filename of a pickled numpy array. If None and validation_split > 0, validation data will be randomly selected from the training data. Default value: None.

alpha : float, optional

Hyperparameter weighting the score error in the loss function of the ‘scandal’ method. Default value: 1.

optimizer : {“adam”, “amsgrad”, “sgd”}, optional

Optimization algorithm. Default value: “amsgrad”.

n_epochs : int, optional

Number of epochs. Default value: 50.

batch_size : int, optional

Batch size. Default value: 128.

initial_lr : float, optional

Learning rate during the first epoch, after which it exponentially decays to final_lr. Default value: 0.001.

final_lr : float, optional

Learning rate during the last epoch. Default value: 0.0001.

nesterov_momentum : float or None, optional

If trainer is “sgd”, sets the Nesterov momentum. Default value: None.

validation_split : float or None, optional

Fraction of samples used for validation and early stopping (if early_stopping is True). If None, the entire sample is used for training and early stopping is deactivated. Default value: 0.25.

early_stopping : bool, optional

Activates early stopping based on the validation loss (only if validation_split is not None). Default value: True.

scale_inputs : bool, optional

Scale the observables to zero mean and unit variance. Default value: True.

shuffle_labels : bool, optional

If True, the labels (y, r_xz, t_xz) are shuffled, while the observations (x) remain in their normal order. This serves as a closure test, in particular as a cross-check against overfitting: an estimator trained with shuffle_labels=True should predict likelihood ratios around 1 and scores around 0. Default value: False.

limit_samplesize : int or None, optional

If not None, only this number of samples (events) is used to train the estimator. Default value: None.

memmap : bool, optional

If True, training files larger than 1 GB will not be loaded into memory at once. Default value: False.

verbose : {“all”, “many”, “some”, “few”, “none”}, optional

Determines verbosity of training. Default value: “some”.

scale_parameters : bool, optional

Whether parameters are rescaled to mean zero and unit variance before going into the neural network. Default value: True.

Returns:
None
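
A minimal SCANDAL training sketch with hypothetical filenames (‘nde’ works the same way but needs no joint scores):

    from madminer.ml import LikelihoodEstimator

    estimator = LikelihoodEstimator(n_components=10, n_mades=5, n_hidden=(100,))
    estimator.train(
        method="scandal",
        x="x_train.npy",
        theta="theta_train.npy",
        t_xz="t_xz_train.npy",  # joint scores, required for 'scandal'
        alpha=1.0,
    )

    # The trained density estimator then provides the log likelihood.
    log_p, _ = estimator.evaluate_log_likelihood(
        x="x_test.npy", theta="theta_grid.npy", test_all_combinations=True
    )
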
class madminer.ml.MorphingAwareRatioEstimator(morphing_setup_filename, optimize_morphing_basis=False, features=None, n_hidden=(100, ), activation='tanh', dropout_prob=0.0)[source]

Bases: madminer.ml.ParameterizedRatioEstimator

Methods

evaluate_log_likelihood(self, *args, **kwargs) Log likelihood estimation.
evaluate_log_likelihood_ratio(self, x, theta) Evaluates the log likelihood ratio for given observations x between the given parameter point theta and the reference hypothesis.
evaluate_score(self, *args, **kwargs) Score estimation.
load(self, filename) Loads a trained model from files.
save(self, filename[, save_model]) Saves the trained model to four files: a JSON file with the settings, a pickled PyTorch state dict file, and numpy files for the mean and variance of the inputs (used for input scaling).
train(self, *args, **kwargs) Trains the network.
calculate_fisher_information  
evaluate  
initialize_input_transform  
initialize_parameter_transform  
train(self, *args, **kwargs)[source]

Trains the network.

Parameters:
method : str

The inference method used for training. Allowed values are ‘alice’, ‘alices’, ‘carl’, ‘cascal’, ‘rascal’, and ‘rolr’.

x : ndarray or str

Observations, or filename of a pickled numpy array.

y : ndarray or str

Class labels (0 = numerator, 1 = denominator), or filename of a pickled numpy array.

theta : ndarray or str

Numerator parameter point, or filename of a pickled numpy array.

r_xz : ndarray or str or None, optional

Joint likelihood ratio, or filename of a pickled numpy array. Default value: None.

t_xz : ndarray or str or None, optional

Joint scores at theta, or filename of a pickled numpy array. Default value: None.

x_val : ndarray or str or None, optional

Validation observations, or filename of a pickled numpy array. If None and validation_split > 0, validation data will be randomly selected from the training data. Default value: None.

y_val : ndarray or str or None, optional

Validation labels (0 = numerator, 1 = denominator), or filename of a pickled numpy array. If None and validation_split > 0, validation data will be randomly selected from the training data. Default value: None.

theta_val : ndarray or str or None, optional

Validation numerator parameter points, or filename of a pickled numpy array. If None and validation_split > 0, validation data will be randomly selected from the training data. Default value: None.

r_xz_val : ndarray or str or None, optional

Validation joint likelihood ratio, or filename of a pickled numpy array. If None and validation_split > 0, validation data will be randomly selected from the training data. Default value: None.

t_xz_val : ndarray or str or None, optional

Validation joint scores at theta, or filename of a pickled numpy array. If None and validation_split > 0, validation data will be randomly selected from the training data. Default value: None.

alpha : float, optional

Hyperparameter weighting the score error in the loss function of the ‘alices’, ‘rascal’, and ‘cascal’ methods. Default value: 1.

optimizer : {“adam”, “amsgrad”, “sgd”}, optional

Optimization algorithm. Default value: “amsgrad”.

n_epochs : int, optional

Number of epochs. Default value: 50.

batch_size : int, optional

Batch size. Default value: 128.

initial_lr : float, optional

Learning rate during the first epoch, after which it exponentially decays to final_lr. Default value: 0.001.

final_lr : float, optional

Learning rate during the last epoch. Default value: 0.0001.

nesterov_momentum : float or None, optional

If trainer is “sgd”, sets the Nesterov momentum. Default value: None.

validation_split : float or None, optional

Fraction of samples used for validation and early stopping (if early_stopping is True). If None, the entire sample is used for training and early stopping is deactivated. Default value: 0.25.

early_stopping : bool, optional

Activates early stopping based on the validation loss (only if validation_split is not None). Default value: True.

scale_inputs : bool, optional

Scale the observables to zero mean and unit variance. Default value: True.

shuffle_labels : bool, optional

If True, the labels (y, r_xz, t_xz) are shuffled, while the observations (x) remain in their normal order. This serves as a closure test, in particular as a cross-check against overfitting: an estimator trained with shuffle_labels=True should predict likelihood ratios around 1 and scores around 0. Default value: False.

limit_samplesize : int or None, optional

If not None, only this number of samples (events) is used to train the estimator. Default value: None.

memmap : bool, optional

If True, training files larger than 1 GB will not be loaded into memory at once. Default value: False.

verbose : {“all”, “many”, “some”, “few”, “none”}, optional

Determines verbosity of training. Default value: “some”.

scale_parameters : bool, optional

Whether parameters are rescaled to mean zero and unit variance before going into the neural network. Default value: True.

Returns:
results : ndarray

Results from the underlying trainer, for example SingleParameterizedRatioTrainer.train or DoubleParameterizedRatioTrainer.train.

class madminer.ml.ParameterizedRatioEstimator(features=None, n_hidden=(100, ), activation='tanh', dropout_prob=0.0)[source]

Bases: madminer.ml.ConditionalEstimator

A neural estimator of the likelihood ratio as a function of the observation x as well as the numerator hypothesis theta. The reference (denominator) hypothesis is kept fixed at some reference value and NOT modeled by the network.

Parameters:
features : list of int or None, optional

Indices of observables (features) that are used as input to the neural networks. If None, all observables are used. Default value: None.

n_hidden : tuple of int, optional

Units in each hidden layer in the neural networks. If method is ‘nde’ or ‘scandal’, this refers to the setup of each individual MADE layer. Default value: (100,).

activation : {‘tanh’, ‘sigmoid’, ‘relu’}, optional

Activation function. Default value: ‘tanh’.

dropout_prob : float, optional

Dropout probability. Default value: 0.0.

Methods

evaluate_log_likelihood(self, *args, **kwargs) Log likelihood estimation.
evaluate_log_likelihood_ratio(self, x, theta) Evaluates the log likelihood ratio for given observations x between the given parameter point theta and the reference hypothesis.
evaluate_score(self, *args, **kwargs) Score estimation.
load(self, filename) Loads a trained model from files.
save(self, filename[, save_model]) Saves the trained model to four files: a JSON file with the settings, a pickled PyTorch state dict file, and numpy files for the mean and variance of the inputs (used for input scaling).
train(self, method, x, y, theta[, r_xz, …]) Trains the network.
calculate_fisher_information  
evaluate  
initialize_input_transform  
initialize_parameter_transform  
calculate_fisher_information(self, *args, **kwargs)[source]
evaluate(self, *args, **kwargs)[source]
evaluate_log_likelihood(self, *args, **kwargs)[source]

Log likelihood estimation. Signature depends on the type of estimator. The first returned value is the log likelihood with shape (n_thetas, n_x).

evaluate_log_likelihood_ratio(self, x, theta, test_all_combinations=True, evaluate_score=False)[source]

Evaluates the log likelihood ratio for given observations x between the given parameter point theta and the reference hypothesis.

Parameters:
x : str or ndarray

Observations or filename of a pickled numpy array.

theta : ndarray or str

Parameter points or filename of a pickled numpy array.

test_all_combinations : bool, optional

If False, the number of samples in the observable and theta files has to match, and the likelihood ratio is evaluated only for the combinations r(x_i | theta_i). If True, r(x_i | theta_j) for all pairwise combinations i, j is evaluated. Default value: True.

evaluate_score : bool, optional

Sets whether in addition to the likelihood ratio the score is evaluated. Default value: False.

Returns:
log_likelihood_ratio : ndarray

The estimated log likelihood ratio. If test_all_combinations is True, the result has shape (n_thetas, n_x). Otherwise, it has shape (n_samples,).

score : ndarray or None

None if evaluate_score is False. Otherwise the derived estimated score at theta. If test_all_combinations is True, the result has shape (n_thetas, n_x, n_parameters). Otherwise, it has shape (n_samples, n_parameters).
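
For example, a one-dimensional likelihood ratio scan, assuming a trained estimator and a hypothetical observation file:

    import numpy as np

    theta_grid = np.linspace(-1.0, 1.0, 21).reshape(-1, 1)  # (n_thetas, n_parameters)

    # log_r has shape (n_thetas, n_x).
    log_r, _ = estimator.evaluate_log_likelihood_ratio(
        x="x_test.npy",
        theta=theta_grid,
        test_all_combinations=True,
    )
    expected_log_r = np.mean(log_r, axis=1)  # one value per grid point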

evaluate_score(self, *args, **kwargs)[source]

Score estimation. Signature depends on the type of estimator. The only returned value is the score with shape (n_x).

train(self, method, x, y, theta, r_xz=None, t_xz=None, x_val=None, y_val=None, theta_val=None, r_xz_val=None, t_xz_val=None, alpha=1.0, optimizer='amsgrad', n_epochs=50, batch_size=128, initial_lr=0.001, final_lr=0.0001, nesterov_momentum=None, validation_split=0.25, early_stopping=True, scale_inputs=True, shuffle_labels=False, limit_samplesize=None, memmap=False, verbose='some', scale_parameters=True, n_workers=8, clip_gradient=None)[source]

Trains the network.

Parameters:
method : str

The inference method used for training. Allowed values are ‘alice’, ‘alices’, ‘carl’, ‘cascal’, ‘rascal’, and ‘rolr’.

x : ndarray or str

Observations, or filename of a pickled numpy array.

y : ndarray or str

Class labels (0 = numerator, 1 = denominator), or filename of a pickled numpy array.

theta : ndarray or str

Numerator parameter point, or filename of a pickled numpy array.

r_xz : ndarray or str or None, optional

Joint likelihood ratio, or filename of a pickled numpy array. Default value: None.

t_xz : ndarray or str or None, optional

Joint scores at theta, or filename of a pickled numpy array. Default value: None.

x_val : ndarray or str or None, optional

Validation observations, or filename of a pickled numpy array. If None and validation_split > 0, validation data will be randomly selected from the training data. Default value: None.

y_val : ndarray or str or None, optional

Validation labels (0 = numerator, 1 = denominator), or filename of a pickled numpy array. If None and validation_split > 0, validation data will be randomly selected from the training data. Default value: None.

theta_val : ndarray or str or None, optional

Validation numerator parameter points, or filename of a pickled numpy array. If None and validation_split > 0, validation data will be randomly selected from the training data. Default value: None.

r_xz_val : ndarray or str or None, optional

Validation joint likelihood ratio, or filename of a pickled numpy array. If None and validation_split > 0, validation data will be randomly selected from the training data. Default value: None.

t_xz_val : ndarray or str or None, optional

Validation joint scores at theta, or filename of a pickled numpy array. If None and validation_split > 0, validation data will be randomly selected from the training data. Default value: None.

alpha : float, optional

Hyperparameter weighting the score error in the loss function of the ‘alices’, ‘rascal’, and ‘cascal’ methods. Default value: 1.

optimizer : {“adam”, “amsgrad”, “sgd”}, optional

Optimization algorithm. Default value: “amsgrad”.

n_epochs : int, optional

Number of epochs. Default value: 50.

batch_size : int, optional

Batch size. Default value: 128.

initial_lr : float, optional

Learning rate during the first epoch, after which it exponentially decays to final_lr. Default value: 0.001.

final_lr : float, optional

Learning rate during the last epoch. Default value: 0.0001.

nesterov_momentum : float or None, optional

If trainer is “sgd”, sets the Nesterov momentum. Default value: None.

validation_split : float or None, optional

Fraction of samples used for validation and early stopping (if early_stopping is True). If None, the entire sample is used for training and early stopping is deactivated. Default value: 0.25.

early_stopping : bool, optional

Activates early stopping based on the validation loss (only if validation_split is not None). Default value: True.

scale_inputs : bool, optional

Scale the observables to zero mean and unit variance. Default value: True.

shuffle_labels : bool, optional

If True, the labels (y, r_xz, t_xz) are shuffled, while the observations (x) remain in their normal order. This serves as a closure test, in particular as a cross-check against overfitting: an estimator trained with shuffle_labels=True should predict likelihood ratios around 1 and scores around 0. Default value: False.

limit_samplesize : int or None, optional

If not None, only this number of samples (events) is used to train the estimator. Default value: None.

memmap : bool, optional

If True, training files larger than 1 GB will not be loaded into memory at once. Default value: False.

verbose : {“all”, “many”, “some”, “few”, “none”}, optional

Determines verbosity of training. Default value: “some”.

scale_parameters : bool, optional

Whether parameters are rescaled to mean zero and unit variance before going into the neural network. Default value: True.

Returns:
results : ndarray

Results from the underlying trainer, for example SingleParameterizedRatioTrainer.train or DoubleParameterizedRatioTrainer.train.
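
A minimal training sketch; x, y, theta, r_xz, and t_xz are assumed to be arrays produced with the madminer.sampling tools:

    from madminer.ml import ParameterizedRatioEstimator

    estimator = ParameterizedRatioEstimator(features=[0, 1, 4], n_hidden=(100, 100))
    estimator.train(
        method="rascal",
        x=x,
        y=y,
        theta=theta,
        r_xz=r_xz,  # joint likelihood ratio
        t_xz=t_xz,  # joint scores at theta
        alpha=5.0,  # weight of the score term in the 'rascal' loss
        validation_split=0.25,
        early_stopping=True,
    )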

class madminer.ml.ScoreEstimator(features=None, n_hidden=(100, ), activation='tanh', dropout_prob=0.0)[source]

Bases: madminer.ml.Estimator

A neural estimator of the score evaluated at a fixed reference hypothesis as a function of the observation x.

Parameters:
features : list of int or None, optional

Indices of observables (features) that are used as input to the neural networks. If None, all observables are used. Default value: None.

n_hidden : tuple of int, optional

Units in each hidden layer in the neural networks. If method is ‘nde’ or ‘scandal’, this refers to the setup of each individual MADE layer. Default value: (100,).

activation : {‘tanh’, ‘sigmoid’, ‘relu’}, optional

Activation function. Default value: ‘tanh’.

dropout_prob : float, optional

Dropout probability. Default value: 0.0.

Methods

calculate_fisher_information(self, x[, …]) Calculates the expected Fisher information matrix based on the kinematic information in a given number of events.
evaluate_log_likelihood(self, *args, **kwargs) Log likelihood estimation.
evaluate_log_likelihood_ratio(self, *args, …) Log likelihood ratio estimation.
evaluate_score(self, x[, nuisance_mode]) Evaluates the score.
load(self, filename) Loads a trained model from files.
save(self, filename[, save_model]) Saves the trained model to four files: a JSON file with the settings, a pickled PyTorch state dict file, and numpy files for the mean and variance of the inputs (used for input scaling).
set_nuisance(self, fisher_information, …) Prepares the calculation of profiled scores, see https://arxiv.org/pdf/1903.01473.pdf.
train(self, method, x, t_xz[, x_val, …]) Trains the network.
evaluate  
initialize_input_transform  
calculate_fisher_information(self, x, weights=None, n_events=1, sum_events=True)[source]

Calculates the expected Fisher information matrix based on the kinematic information in a given number of events.

Parameters:
x : str or ndarray

Sample of observations, or path to numpy file with observations. Note that this sample has to be sampled from the reference parameter where the score is estimated with the SALLY / SALLINO estimator.

weights : None or ndarray, optional

Weights for the observations. If None, all events are taken to have equal weight. Default value: None.

n_events : float, optional

Expected number of events for which the kinematic Fisher information should be calculated. Default value: 1.

sum_events : bool, optional

If True, the expected Fisher information summed over the events x is calculated. If False, the per-event Fisher information for each event is returned. Default value: True.

Returns:
fisher_information : ndarray

Expected kinematic Fisher information matrix with shape (n_events, n_parameters, n_parameters) if sum_events is False or (n_parameters, n_parameters) if sum_events is True.
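
For example, with a trained ScoreEstimator named estimator and a hypothetical sample generated at the reference parameter point:

    fisher_information = estimator.calculate_fisher_information(
        x="x_reference.npy",
        n_events=10000,   # expected number of events the information is scaled to
        sum_events=True,  # return shape (n_parameters, n_parameters)
    )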

evaluate(self, *args, **kwargs)[source]
evaluate_log_likelihood(self, *args, **kwargs)[source]

Log likelihood estimation. Signature depends on the type of estimator. The first returned value is the log likelihood with shape (n_thetas, n_x).

evaluate_log_likelihood_ratio(self, *args, **kwargs)[source]

Log likelihood ratio estimation. Signature depends on the type of estimator. The first returned value is the log likelihood ratio with shape (n_thetas, n_x) or (n_x).

evaluate_score(self, x, nuisance_mode='auto')[source]

Evaluates the score.

Parameters:
x : str or ndarray

Observations, or filename of a pickled numpy array.

nuisance_mode : {“auto”, “keep”, “profile”, “project”}, optional

Decides how nuisance parameters are treated. If nuisance_mode is “auto”, the returned score is the (n+k)-dimensional score in the space of n parameters of interest and k nuisance parameters if set_nuisance() has not been called, and the n-dimensional profiled score in the space of the parameters of interest if it has been called. For “keep”, the returned score is always (n+k)-dimensional. For “profile”, it is the n-dimensional profiled score. For “project”, it is the n-dimensional projected score, i.e. ignoring the nuisance parameters.

Returns:
score : ndarray

Estimated score with shape (n_observations, n_parameters).

load(self, filename)[source]

Loads a trained model from files.

Parameters:
filename : str

Path to the files. ‘_settings.json’ and ‘_state_dict.pt’ will be added.

Returns:
None
save(self, filename, save_model=False)[source]

Saves the trained model to four files: a JSON file with the settings, a pickled PyTorch state dict file, and numpy files for the mean and variance of the inputs (used for input scaling).

Parameters:
filename : str

Path to the files. ‘_settings.json’ and ‘_state_dict.pt’ will be added.

save_model : bool, optional

If True, the whole model is saved in addition to the state dict. This is not necessary for loading it again with Estimator.load(), but can be useful for debugging, for instance to plot the computational graph.

Returns:
None
set_nuisance(self, fisher_information, parameters_of_interest)[source]

Prepares the calculation of profiled scores, see https://arxiv.org/pdf/1903.01473.pdf.

Parameters:
fisher_information : ndarray

Fisher information with shape (n_parameters, n_parameters).

parameters_of_interest : list of int

List of int, with 0 <= parameters_of_interest[i] < n_parameters. Denotes which parameters are kept in the profiling, and their new order.

Returns:
None
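
A sketch of the profiling workflow, assuming a trained ScoreEstimator named estimator and hypothetical inputs:

    # Fisher information in the full space of parameters of interest plus
    # nuisance parameters.
    fisher_information = estimator.calculate_fisher_information(x="x_reference.npy")

    # Keep the first two parameters as parameters of interest.
    estimator.set_nuisance(fisher_information, parameters_of_interest=[0, 1])

    # Profiled score in the reduced space.
    t_profiled = estimator.evaluate_score(x="x_test.npy", nuisance_mode="profile")
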
train(self, method, x, t_xz, x_val=None, t_xz_val=None, optimizer='amsgrad', n_epochs=50, batch_size=128, initial_lr=0.001, final_lr=0.0001, nesterov_momentum=None, validation_split=0.25, early_stopping=True, scale_inputs=True, shuffle_labels=False, limit_samplesize=None, memmap=False, verbose='some', n_workers=8, clip_gradient=None)[source]

Trains the network.

Parameters:
method : str

The inference method used for training. Allowed values are ‘sally’ and ‘sallino’; at the training stage the two are identical, so it does not matter which one is used.

x : ndarray or str

Path to an unweighted sample of observations, as saved by the madminer.sampling.SampleAugmenter functions. Required for all inference methods.

t_xz : ndarray or str

Joint scores at the reference hypothesis, or filename of a pickled numpy array.

optimizer : {“adam”, “amsgrad”, “sgd”}, optional

Optimization algorithm. Default value: “amsgrad”.

n_epochs : int, optional

Number of epochs. Default value: 50.

batch_size : int, optional

Batch size. Default value: 128.

initial_lr : float, optional

Learning rate during the first epoch, after which it exponentially decays to final_lr. Default value: 0.001.

final_lr : float, optional

Learning rate during the last epoch. Default value: 0.0001.

nesterov_momentum : float or None, optional

If trainer is “sgd”, sets the Nesterov momentum. Default value: None.

validation_split : float or None, optional

Fraction of samples used for validation and early stopping (if early_stopping is True). If None, the entire sample is used for training and early stopping is deactivated. Default value: 0.25.

early_stopping : bool, optional

Activates early stopping based on the validation loss (only if validation_split is not None). Default value: True.

scale_inputs : bool, optional

Scale the observables to zero mean and unit variance. Default value: True.

shuffle_labels : bool, optional

If True, the labels (y, r_xz, t_xz) are shuffled, while the observations (x) remain in their normal order. This serves as a closure test, in particular as a cross-check against overfitting: an estimator trained with shuffle_labels=True should predict likelihood ratios around 1 and scores around 0. Default value: False.

limit_samplesize : int or None, optional

If not None, only this number of samples (events) is used to train the estimator. Default value: None.

memmap : bool, optional

If True, training files larger than 1 GB will not be loaded into memory at once. Default value: False.

verbose : {“all”, “many”, “some”, “few”, “none”}, optional

Determines verbosity of training. Default value: “some”.

Returns:
None
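
A minimal SALLY training sketch with hypothetical filenames:

    from madminer.ml import ScoreEstimator

    estimator = ScoreEstimator(n_hidden=(100,))
    estimator.train(
        method="sally",
        x="x_train.npy",
        t_xz="t_xz_train.npy",  # joint scores at the reference hypothesis
    )
    estimator.save("models/sally")
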
exception madminer.ml.TheresAGoodReasonThisDoesntWork[source]

Bases: Exception

madminer.ml.load_estimator(filename)[source]
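
No description is provided here; judging from the file layout used by Estimator.save(), load_estimator presumably reads the saved ‘_settings.json’ to determine the estimator type and returns the matching Estimator instance. A hedged usage sketch:

    from madminer.ml import load_estimator

    # Assumption: the estimator type is inferred from "models/my_estimator_settings.json".
    estimator = load_estimator("models/my_estimator")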