rating_gp.models.gpytorch#

Classes

ExactGPModel(train_x, train_y, likelihood)

PowerLawTransform()

RatingGPMarginalGPyTorch([model_config])

    Gaussian Process implementation of the LOAD ESTimation (LOADEST) model

class rating_gp.models.gpytorch.ExactGPModel(train_x, train_y, likelihood)#
cov_base(eta_prior=None)#

Smooth, time-independent base rating curve using a Matern kernel on stage.
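
A minimal GPyTorch sketch of the kind of kernel this describes; the stage column index, the ScaleKernel wrapper, and the prior are assumptions rather than the actual implementation:

```python
import gpytorch

# Hypothetical sketch: a Matern kernel acting only on the stage column
# (assumed to be column 1 of the input), wrapped in a ScaleKernel whose
# outputscale prior plays the role of eta_prior.
base_kernel = gpytorch.kernels.ScaleKernel(
    gpytorch.kernels.MaternKernel(nu=2.5, active_dims=(1,)),
    outputscale_prior=gpytorch.priors.GammaPrior(2.0, 1.0),
)
```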

cov_bend(eta_prior=None)#

Smooth, time-dependent bending kernel for the switchpoint.
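
A hedged sketch of one way such a time-dependent component can be built in GPyTorch; the column ordering and kernel choices are assumptions, and the actual switchpoint parameterization may differ:

```python
import gpytorch

# Hypothetical sketch: multiplying a Matern kernel on time (column 0)
# with a Matern kernel on stage (column 1) lets the rating curve bend
# differently at different times.
bend_kernel = gpytorch.kernels.ScaleKernel(
    gpytorch.kernels.MaternKernel(nu=2.5, active_dims=(0,))
    * gpytorch.kernels.MaternKernel(nu=2.5, active_dims=(1,))
)
```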

cov_periodic(ls_prior=None, eta_prior=None)#

Smooth, time-dependent periodic kernel for seasonal effects.
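
A hedged GPyTorch sketch of a seasonal component in this spirit; the time column index, the priors standing in for ls_prior and eta_prior, and the Matern damping term are assumptions:

```python
import gpytorch

# Hypothetical sketch: a periodic kernel on time (column 0) captures the
# seasonal cycle; multiplying by a Matern kernel on time lets the seasonal
# pattern drift slowly from year to year.
periodic_kernel = gpytorch.kernels.ScaleKernel(
    gpytorch.kernels.PeriodicKernel(
        active_dims=(0,),
        lengthscale_prior=gpytorch.priors.GammaPrior(2.0, 1.0),  # ls_prior analogue
    )
    * gpytorch.kernels.MaternKernel(nu=2.5, active_dims=(0,)),
    outputscale_prior=gpytorch.priors.GammaPrior(2.0, 1.0),      # eta_prior analogue
)
```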

forward(x)#

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
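
For example, prediction should go through the instance call rather than forward(); `model`, `likelihood`, and `test_x` below are placeholders, not names defined by this module:

```python
import torch
import gpytorch

# Placeholders: `model` is a fitted ExactGPModel, `likelihood` its Gaussian
# likelihood, and `test_x` new (time, stage) inputs.
model.eval()
likelihood.eval()
with torch.no_grad(), gpytorch.settings.fast_pred_var():
    pred = likelihood(model(test_x))  # call the instance, not model.forward(test_x)
    mean = pred.mean
    lower, upper = pred.confidence_region()
```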

class rating_gp.models.gpytorch.PowerLawTransform#
forward(x)#

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
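
As a rough illustration of what a power-law transform module can look like (the classic rating-curve form Q = a·(stage − b)^c); the parameter names, initial values, and softplus guard are assumptions, not this class's actual parameterization:

```python
import torch

class PowerLawSketch(torch.nn.Module):
    """Hypothetical stand-in for PowerLawTransform: q = a * (stage - b)**c."""

    def __init__(self):
        super().__init__()
        self.a = torch.nn.Parameter(torch.tensor(1.0))  # scale
        self.b = torch.nn.Parameter(torch.tensor(0.0))  # stage of zero flow
        self.c = torch.nn.Parameter(torch.tensor(2.0))  # power-law exponent

    def forward(self, stage):
        # Softplus keeps the base positive so the fractional power is defined.
        return self.a * torch.nn.functional.softplus(stage - self.b) ** self.c
```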

class rating_gp.models.gpytorch.RatingGPMarginalGPyTorch(model_config: ModelConfig = ModelConfig(transform='log'))#

Gaussian Process implementation of the LOAD ESTimation (LOADEST) model

This model currently uses the marginal likelihood implementation, which is fast but does not account for censored data. Censored data require a slower latent-variable implementation.

build_model(X, y, y_unc=None) → ExactGP#

Build the marginal-likelihood version of RatingGP

fit(covariates, target, target_unc=None, iterations=100, optimizer=None, learning_rate=None, early_stopping=False, patience=60, scheduler=True, resume=False, monotonic_penalty_weight: float = 0.0, grid_size: int = 64, monotonic_penalty_interval: int = 1)#

Override fit to inject a monotonicity penalty on the rating curve.

resume: if True, continue training from the last saved iteration until the
total number of iterations is reached. If False, start from iteration 0.

monotonic_penalty_weight: strength of the penalty on negative dQ/dStage.
1.0 works well in practice.

grid_size: number of random points to sample over the time-stage grid.

monotonic_penalty_interval: compute the penalty every k iterations (k >= 1).
When k > 1, the penalty is applied every k-th iteration and scaled by k to
maintain the same expected regularization strength, reducing compute.
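
A conceptual sketch of such a penalty, assuming (time, stage) input columns and that the model can be queried at sampled points during training (e.g., with GPyTorch's debug setting disabled); this illustrates the idea, not the library's internal code:

```python
import torch
import gpytorch

def monotonicity_penalty(model, time_bounds, stage_bounds,
                         grid_size=64, weight=1.0):
    """Hinge penalty on negative dQ/dStage at random time-stage points."""
    t = torch.empty(grid_size).uniform_(*time_bounds)
    s = torch.empty(grid_size).uniform_(*stage_bounds).requires_grad_(True)
    x = torch.stack([t, s], dim=-1)  # assumed column order: (time, stage)
    # debug(False) lets an ExactGP in training mode be evaluated away from
    # its training inputs; the mean must depend on stage (e.g., through a
    # power-law mean function) for the gradient to be informative.
    with gpytorch.settings.debug(False):
        q_mean = model(x).mean
    (dq_ds,) = torch.autograd.grad(q_mean.sum(), s, create_graph=True)
    # Penalize only where discharge decreases with stage.
    return weight * torch.clamp(-dq_ds, min=0.0).mean()
```

A hedged call using the documented signature (data names and iteration count are illustrative):

```python
gp = RatingGPMarginalGPyTorch()
gp.fit(
    covariates=train_x,             # e.g., columns (time, stage)
    target=train_y,                 # observed discharge
    iterations=500,
    monotonic_penalty_weight=1.0,   # "1.0 works well in practice"
    grid_size=64,
    monotonic_penalty_interval=4,   # penalty every 4th step, scaled by 4
)
```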