rating_gp.models.kernels#
Classes
- InvertedSigmoidKernel
- LogWarp: Logarithmic Warp
- LogWarpKernel: Wraps a base kernel and applies torch.log(x + eps) to a specified input dimension to avoid log(0).
- PowerLawKernel: Power Law Kernel
- PowerLawWarpKernel: Wraps a base kernel and applies a PowerLawTransform to the stage input.
- SigmoidKernel: Sigmoid Kernel
- StageTimeKernel: A time RBF kernel with a stage-variable length scale.
- TanhWarp
- class rating_gp.models.kernels.InvertedSigmoidKernel(sigmoid_kernel, active_dims=None, b_constraint=None)#
- property b#
Delegate b parameter to the original SigmoidKernel.
- forward(x1, x2, last_dim_is_batch=False, diag=False, **params)#
Computes the covariance between \(\mathbf x_1\) and \(\mathbf x_2\). This method should be implemented by all Kernel subclasses.
- Parameters:
x1 – First set of data (… x N x D).
x2 – Second set of data (… x M x D).
diag – Should the Kernel compute the whole kernel, or just the diag? If True, it must be the case that x1 == x2. (Default: False.)
last_dim_is_batch – If True, treat the last dimension of x1 and x2 as another batch dimension. (Useful for additive structure over the dimensions). (Default: False.)
- Returns:
The kernel matrix or vector. The shape depends on the kernel’s evaluation mode:
full_covar: … x N x M
full_covar with last_dim_is_batch=True: … x K x N x M
diag: … x N
diag with last_dim_is_batch=True: … x K x N
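The entry above only shows the delegated b parameter; the underlying idea can be sketched as follows. This is a hypothetical illustration (the function names, the gate form, and the steepness value are assumptions, not the package's API): an inverted sigmoid gate is one minus the sigmoid gate, sharing the same breakpoint parameter b, so the two gates partition unity.

```python
import math

def sigmoid_gate(t, b, steepness=30.0):
    """Hypothetical smooth gate that turns on past the breakpoint b.

    The steep fixed slope mirrors the note on SigmoidKernel below.
    """
    return 1.0 / (1.0 + math.exp(-steepness * (t - b)))

def inverted_sigmoid_gate(t, b, steepness=30.0):
    """Complementary gate, active before the breakpoint; it shares b with
    the sigmoid gate, just as InvertedSigmoidKernel delegates b to the
    original SigmoidKernel.
    """
    return 1.0 - sigmoid_gate(t, b, steepness)

# The two gates sum to one everywhere, so kernels weighted by them
# hand over smoothly at the breakpoint:
t, b = 0.4, 0.5
total = sigmoid_gate(t, b) + inverted_sigmoid_gate(t, b)
```

Multiplying one kernel by the sigmoid gate and another by the inverted gate would then yield a covariance that switches regimes at b.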
- class rating_gp.models.kernels.LogWarp#
Logarithmic Warp
Note: this warp works well as a smoother.
- forward(x)#
Define the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
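The forward body is not shown above; here is a minimal dependency-free sketch of a plausible log warp (the eps shift is borrowed from LogWarpKernel below, and the exact form here is an assumption). It also illustrates why a log warp acts as a smoother: equal input steps shrink at larger x, compressing inputs that span orders of magnitude.

```python
import math

def log_warp(x, eps=1e-6):
    """Hypothetical log warp: shift by eps so x = 0 stays finite."""
    return math.log(x + eps)

# Equal input steps shrink at larger x:
step_low = log_warp(2.0) - log_warp(1.0)
step_high = log_warp(101.0) - log_warp(100.0)
```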
- class rating_gp.models.kernels.LogWarpKernel(base_kernel, dim, eps=1e-06)#
Wraps a base kernel and applies torch.log(x + eps) to a specified input dimension to avoid log(0).
- forward(x1, x2=None, **params)#
Computes the covariance between \(\mathbf x_1\) and \(\mathbf x_2\). This method should be implemented by all Kernel subclasses.
- Parameters:
x1 – First set of data (… x N x D).
x2 – Second set of data (… x M x D).
diag – Should the Kernel compute the whole kernel, or just the diag? If True, it must be the case that x1 == x2. (Default: False.)
last_dim_is_batch – If True, treat the last dimension of x1 and x2 as another batch dimension. (Useful for additive structure over the dimensions). (Default: False.)
- Returns:
The kernel matrix or vector. The shape depends on the kernel’s evaluation mode:
full_covar: … x N x M
full_covar with last_dim_is_batch=True: … x K x N x M
diag: … x N
diag with last_dim_is_batch=True: … x K x N
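To make the wrapping behaviour concrete, here is a dependency-free sketch of the same pattern (the helper names and the toy base kernel are illustrative, not the package's implementation): the wrapper warps the selected column of both inputs and then defers to the base kernel unchanged.

```python
import math

def log_warp_column(rows, dim, eps=1e-6):
    """Apply log(x + eps) to one column of an N x D list of feature
    vectors, mirroring the `dim` and `eps` arguments of LogWarpKernel."""
    return [
        [math.log(v + eps) if j == dim else v for j, v in enumerate(row)]
        for row in rows
    ]

def log_warp_kernel(base_kernel, x1, x2, dim=0, eps=1e-6):
    """Sketch of the forward pass: warp both inputs, then defer to the
    base kernel."""
    return base_kernel(log_warp_column(x1, dim, eps),
                       log_warp_column(x2, dim, eps))

# Toy base kernel: an RBF on the (warped) first column of single points.
def rbf_on_dim0(x1, x2, lengthscale=1.0):
    d = x1[0][0] - x2[0][0]
    return math.exp(-0.5 * (d / lengthscale) ** 2)

# Warped inputs 1.0 and e map to roughly 0 and 1 in log space:
k = log_warp_kernel(rbf_on_dim0, [[1.0, 0.3]], [[math.e, 0.3]], dim=0)
```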
- class rating_gp.models.kernels.PowerLawKernel(a_prior=None, a_constraint=None, b_prior=None, b_constraint=Positive(), c_prior=None, c_constraint=Positive(), **kwargs)#
Power Law Kernel
This kernel is the rating curve power law equivalent to the linear kernel. The power law equation is given by:
f(x) = a + (b * ln(x - c))
- forward(x1, x2, diag=False, **params)#
Computes the covariance between \(\mathbf x_1\) and \(\mathbf x_2\). This method should be implemented by all Kernel subclasses.
- Parameters:
x1 – First set of data (… x N x D).
x2 – Second set of data (… x M x D).
diag – Should the Kernel compute the whole kernel, or just the diag? If True, it must be the case that x1 == x2. (Default: False.)
last_dim_is_batch – If True, treat the last dimension of x1 and x2 as another batch dimension. (Useful for additive structure over the dimensions). (Default: False.)
- Returns:
The kernel matrix or vector. The shape depends on the kernel’s evaluation mode:
full_covar: … x N x M
full_covar with last_dim_is_batch=True: … x K x N x M
diag: … x N
diag with last_dim_is_batch=True: … x K x N
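Reading f(x) = a + b * ln(x - c) as log-discharge against stage, exponentiating recovers the familiar rating-curve power law Q = C * (h - c)**b with C = e**a. A quick numerical check (the parameter values are arbitrary and the hydrological reading is an interpretation of "rating curve power law"):

```python
import math

def power_law_mean(x, a, b, c):
    """f(x) = a + b * ln(x - c): the power-law equation above, read as
    log-discharge at stage x."""
    return a + b * math.log(x - c)

# Exponentiating recovers Q = C * (h - c)**b with C = e**a
# (b and c positive, matching the Positive() constraints above):
a, b, c, h = 1.0, 1.7, 0.2, 3.0
q_from_log = math.exp(power_law_mean(h, a, b, c))
q_direct = math.exp(a) * (h - c) ** b
```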
- class rating_gp.models.kernels.PowerLawWarpKernel(base_kernel, powerlaw_transform, stage_dim)#
Wraps a base kernel and applies a PowerLawTransform to the stage input.
- forward(x1, x2=None, **params)#
Computes the covariance between \(\mathbf x_1\) and \(\mathbf x_2\). This method should be implemented by all Kernel subclasses.
- Parameters:
x1 – First set of data (… x N x D).
x2 – Second set of data (… x M x D).
diag – Should the Kernel compute the whole kernel, or just the diag? If True, it must be the case that x1 == x2. (Default: False.)
last_dim_is_batch – If True, treat the last dimension of x1 and x2 as another batch dimension. (Useful for additive structure over the dimensions). (Default: False.)
- Returns:
The kernel matrix or vector. The shape depends on the kernel’s evaluation mode:
full_covar: … x N x M
full_covar with last_dim_is_batch=True: … x K x N x M
diag: … x N
diag with last_dim_is_batch=True: … x K x N
- class rating_gp.models.kernels.SigmoidKernel(b_constraint, b_prior=None, **kwargs)#
Sigmoid Kernel
This kernel can be multiplied by another kernel to introduce a breakpoint in the data using a sigmoid.
Note: The steepness of the sigmoid's slope is currently fixed to a steep value. It could be turned back into a parameter, but doing so causes numerical instabilities during fitting.
- forward(x1, x2, last_dim_is_batch=False, diag=False, **params)#
Computes the covariance between \(\mathbf x_1\) and \(\mathbf x_2\). This method should be implemented by all Kernel subclasses.
- Parameters:
x1 – First set of data (… x N x D).
x2 – Second set of data (… x M x D).
diag – Should the Kernel compute the whole kernel, or just the diag? If True, it must be the case that x1 == x2. (Default: False.)
last_dim_is_batch – If True, treat the last dimension of x1 and x2 as another batch dimension. (Useful for additive structure over the dimensions). (Default: False.)
- Returns:
The kernel matrix or vector. The shape depends on the kernel’s evaluation mode:
full_covar: … x N x M
full_covar with last_dim_is_batch=True: … x K x N x M
diag: … x N
diag with last_dim_is_batch=True: … x K x N
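The breakpoint mechanism can be sketched without gpytorch (the function names, the fixed steepness value, and the gate-times-RBF form are assumptions for illustration): multiplying an RBF by sigmoid gates on both arguments suppresses covariance between points on opposite sides of the breakpoint.

```python
import math

STEEPNESS = 30.0  # fixed steep slope, per the note above

def sigmoid(t, b):
    return 1.0 / (1.0 + math.exp(-STEEPNESS * (t - b)))

def rbf(t1, t2, lengthscale=1.0):
    return math.exp(-0.5 * ((t1 - t2) / lengthscale) ** 2)

def gated_kernel(t1, t2, b):
    """Product kernel: the RBF only contributes covariance when both
    inputs lie past the breakpoint b."""
    return sigmoid(t1, b) * rbf(t1, t2) * sigmoid(t2, b)

# Points straddling the breakpoint decorrelate:
b = 0.5
k_same_side = gated_kernel(0.6, 0.7, b)
k_across = gated_kernel(0.3, 0.7, b)
```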
- class rating_gp.models.kernels.StageTimeKernel(a_prior=None, a_constraint=Positive(), **kwargs)#
A time RBF kernel with a stage-variable length scale.
A scalar length scale is multiplied by the stage, which is raised to a variable power, to impose stage variability.
- forward(x1, x2, **params)#
Computes the covariance between \(\mathbf x_1\) and \(\mathbf x_2\). This method should be implemented by all Kernel subclasses.
- Parameters:
x1 – First set of data (… x N x D).
x2 – Second set of data (… x M x D).
diag – Should the Kernel compute the whole kernel, or just the diag? If True, it must be the case that x1 == x2. (Default: False.)
last_dim_is_batch – If True, treat the last dimension of x1 and x2 as another batch dimension. (Useful for additive structure over the dimensions). (Default: False.)
- Returns:
The kernel matrix or vector. The shape depends on the kernel’s evaluation mode:
full_covar: … x N x M
full_covar with last_dim_is_batch=True: … x K x N x M
diag: … x N
diag with last_dim_is_batch=True: … x K x N
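A simplified, dependency-free sketch of the idea (the power-law form of the lengthscale and the direction of its effect are assumptions, and this toy version ignores the normalisation a proper nonstationary Gibbs-style kernel needs to stay positive semi-definite):

```python
import math

def stage_time_rbf(t1, t2, h, base_lengthscale=1.0, a=0.5):
    """Toy time RBF whose lengthscale scales with stage h raised to the
    power a (a is constrained positive in StageTimeKernel).

    Hypothetical form: ell(h) = base_lengthscale * h**a.
    """
    ell = base_lengthscale * h ** a
    return math.exp(-0.5 * ((t1 - t2) / ell) ** 2)

# The same time separation decays differently at different stages:
k_high = stage_time_rbf(0.0, 1.0, h=4.0)
k_low = stage_time_rbf(0.0, 1.0, h=1.0)
```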
- class rating_gp.models.kernels.TanhWarp#
- forward(x)#
Define the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
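The class carries no docstring; given the name, the warp presumably applies a tanh squashing. A minimal hypothetical sketch, by analogy with LogWarp above:

```python
import math

def tanh_warp(x):
    """Hypothetical tanh warp: near-linear around zero, compressing
    extremes into (-1, 1), a bounded alternative to the log warp."""
    return math.tanh(x)

# Equal input steps shrink in the tails, as with the log warp:
step_center = tanh_warp(1.0) - tanh_warp(0.0)
step_tail = tanh_warp(5.0) - tanh_warp(4.0)
```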