pyemma.coordinates.transform.VAMP¶
-
class pyemma.coordinates.transform.VAMP(lag, dim=None, scaling=None, right=False, epsilon=1e-06, stride=1, skip=0, ncov_max=inf)¶
Variational approach for Markov processes (VAMP)
-
__init__(lag, dim=None, scaling=None, right=False, epsilon=1e-06, stride=1, skip=0, ncov_max=inf)¶ Variational approach for Markov processes (VAMP) [1]_.
Parameters: - lag (int) – lag time
- dim (float or int, default=None) – Number of dimensions to keep:
  - if dim is not set (None), all available ranks are kept: n_components == min(n_samples, n_uncorrelated_features)
  - if dim is an integer >= 1, this number specifies the number of dimensions to keep.
  - if dim is a float with 0 < dim < 1, select the number of dimensions such that the amount of kinetic variance that needs to be explained is greater than the percentage specified by dim.
- scaling (None or string) –
Scaling to be applied to the VAMP order parameters upon transformation
- None: no scaling will be applied, variance of the order parameters is 1
- ’kinetic map’ or ‘km’: order parameters are scaled by singular value. Only the left singular functions induce a kinetic map wrt the conventional forward propagator. The right singular functions induce a kinetic map wrt the backward propagator.
- right (boolean) – Whether to compute the right singular functions. If right==True, get_output() will return the right singular functions. Otherwise, get_output() will return the left singular functions. Beware that only frames[tau:, :] of each trajectory returned by get_output() contain valid values of the right singular functions. Conversely, only frames[0:-tau, :] of each trajectory returned by get_output() contain valid values of the left singular functions. The remaining frames might possibly be interpreted as some extrapolation.
- epsilon (float) – eigenvalue cutoff. Eigenvalues of \(C_{00}\) and \(C_{11}\) with norms <= epsilon will be cut off. The remaining number of eigenvalues together with the value of dim define the size of the output.
- stride (int, optional, default = 1) – Use only every stride-th time step. By default, every time step is used.
- skip (int, default=0) – skip the first initial n frames per trajectory.
- ncov_max (int, default=infinity) – limit the memory usage of the algorithm from [3] to an amount that corresponds to ncov_max additional copies of each correlation matrix
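Example
A minimal usage sketch (not from the original docstring; the data list below is hypothetical featurized trajectories of shape (T_i, n_features)):
>>> import numpy as np
>>> from pyemma.coordinates.transform import VAMP
>>> data = [np.random.random((1000, 10)) for _ in range(3)]   # hypothetical feature trajectories
>>> vamp = VAMP(lag=10, dim=0.95).estimate(data)              # estimate() returns the fitted estimator
>>> Y = vamp.get_output()                                     # per-trajectory arrays of the (left) singular functions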
Notes
VAMP is a method for dimensionality reduction of Markov processes.
The Koopman operator \(\mathcal{K}\) is an integral operator that describes conditional future expectation values. Let \(p(\mathbf{x},\,\mathbf{y})\) be the conditional probability density of visiting an infinitesimal phase space volume around point \(\mathbf{y}\) at time \(t+\tau\) given that the phase space point \(\mathbf{x}\) was visited at the earlier time \(t\). Then the action of the Koopman operator on a function \(f\) can be written as follows:
\[\mathcal{K}f=\int p(\mathbf{x},\,\mathbf{y})f(\mathbf{y})\,\mathrm{dy}=\mathbb{E}\left[f(\mathbf{x}_{t+\tau})\mid\mathbf{x}_{t}=\mathbf{x}\right]\]The Koopman operator is defined without any reference to an equilibrium distribution. Therefore it is well-defined in situations where the dynamics is irreversible and/or non-stationary such that no equilibrium distribution exists.
If we approximate \(f\) by a linear superposition of ansatz functions \(\boldsymbol{\chi}\) of the conformational degrees of freedom (features), the operator \(\mathcal{K}\) can be approximated by a (finite-dimensional) matrix \(\mathbf{K}\).
The approximation is computed as follows: From the time-dependent input features \(\boldsymbol{\chi}(t)\), we compute the mean \(\boldsymbol{\mu}_{0}\) (\(\boldsymbol{\mu}_{1}\)) from all data excluding the last (first) \(\tau\) steps of every trajectory as follows:
\[ \begin{align}\begin{aligned}\boldsymbol{\mu}_{0} :=\frac{1}{T-\tau}\sum_{t=0}^{T-\tau}\boldsymbol{\chi}(t)\\\boldsymbol{\mu}_{1} :=\frac{1}{T-\tau}\sum_{t=\tau}^{T}\boldsymbol{\chi}(t)\end{aligned}\end{align} \]Next, we compute the instantaneous covariance matrices \(\mathbf{C}_{00}\) and \(\mathbf{C}_{11}\) and the time-lagged covariance matrix \(\mathbf{C}_{01}\) as follows:
\[ \begin{align}\begin{aligned}\mathbf{C}_{00} :=\frac{1}{T-\tau}\sum_{t=0}^{T-\tau}\left[\boldsymbol{\chi}(t)-\boldsymbol{\mu}_{0}\right]\left[\boldsymbol{\chi}(t)-\boldsymbol{\mu}_{0}\right]^{\top}\\\mathbf{C}_{11} :=\frac{1}{T-\tau}\sum_{t=\tau}^{T}\left[\boldsymbol{\chi}(t)-\boldsymbol{\mu}_{1}\right]\left[\boldsymbol{\chi}(t)-\boldsymbol{\mu}_{1}\right]^{\top}\\\mathbf{C}_{01} :=\frac{1}{T-\tau}\sum_{t=0}^{T-\tau}\left[\boldsymbol{\chi}(t)-\boldsymbol{\mu}_{0}\right]\left[\boldsymbol{\chi}(t+\tau)-\boldsymbol{\mu}_{1}\right]^{\top}\end{aligned}\end{align} \]The Koopman matrix is then computed as follows:
\[\mathbf{K}=\mathbf{C}_{00}^{-1}\mathbf{C}_{01}\]It can be shown [1]_ that the leading singular functions of the half-weighted Koopman matrix
\[\bar{\mathbf{K}}:=\mathbf{C}_{00}^{-\frac{1}{2}}\mathbf{C}_{01}\mathbf{C}_{11}^{-\frac{1}{2}}\]encode the best reduced dynamical model for the time series.
The singular functions can be computed by first performing the singular value decomposition
\[\bar{\mathbf{K}}=\mathbf{U}^{\prime}\mathbf{S}\mathbf{V}^{\prime\top}\]and then mapping the input conformation to the left singular functions \(\boldsymbol{\psi}\) and right singular functions \(\boldsymbol{\phi}\) as follows:
\[ \begin{align}\begin{aligned}\boldsymbol{\psi}(t):=\mathbf{U}^{\prime\top}\mathbf{C}_{00}^{-\frac{1}{2}}\left[\boldsymbol{\chi}(t)-\boldsymbol{\mu}_{0}\right]\\\boldsymbol{\phi}(t):=\mathbf{V}^{\prime\top}\mathbf{C}_{11}^{-\frac{1}{2}}\left[\boldsymbol{\chi}(t)-\boldsymbol{\mu}_{1}\right]\end{aligned}\end{align} \]
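The following NumPy sketch mirrors these estimation steps for in-memory data (illustrative only; chi and tau are hypothetical inputs, and the actual estimator accumulates the covariances in a streaming fashion [3] and truncates small eigenvalues with epsilon):
>>> import numpy as np
>>> # chi: ndarray of shape (T, n) with featurized data; tau: lag time in frames (hypothetical inputs)
>>> chi0, chi1 = chi[:-tau], chi[tau:]
>>> mu0, mu1 = chi0.mean(axis=0), chi1.mean(axis=0)
>>> X, Y = chi0 - mu0, chi1 - mu1
>>> C00, C11, C01 = X.T @ X / len(X), Y.T @ Y / len(Y), X.T @ Y / len(X)
>>> s0, W0 = np.linalg.eigh(C00)
>>> s1, W1 = np.linalg.eigh(C11)
>>> C00_inv_sqrt = W0 @ np.diag(s0 ** -0.5) @ W0.T   # inverse square root (no epsilon truncation in this sketch)
>>> C11_inv_sqrt = W1 @ np.diag(s1 ** -0.5) @ W1.T
>>> Kbar = C00_inv_sqrt @ C01 @ C11_inv_sqrt         # half-weighted Koopman matrix
>>> U, S, Vt = np.linalg.svd(Kbar)
>>> psi = X @ C00_inv_sqrt @ U                       # left singular functions evaluated on the data
>>> phi = Y @ C11_inv_sqrt @ Vt.T                    # right singular functions evaluated on the data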
References
[1] Wu, H. and Noe, F. 2017. Variational approach for learning Markov processes from time series data. arXiv:1707.04659v1
[2] Noe, F. and Clementi, C. 2015. Kinetic distance and kinetic maps from molecular dynamics simulation. J. Chem. Theory Comput. doi:10.1021/acs.jctc.5b00553
[3] Chan, T. F., Golub, G. H., LeVeque, R. J. 1979. Updating formulae and pairwise algorithms for computing sample variances. Technical Report STAN-CS-79-773, Department of Computer Science, Stanford University.
Methods
- __init__(lag[, dim, scaling, right, …]) – Variational approach for Markov processes (VAMP) [1]_.
- cktest([n_observables, observables, …]) – Do the Chapman-Kolmogorov test by computing predictions for higher lag times and by performing estimations at higher lag times.
- describe() – Get a descriptive string representation of this class.
- dimension() – real output dimension after low-rank approximation.
- estimate(X, **kwargs) – Estimates the model given the data X
- expectation(observables, statistics[, …]) – Compute future expectation of observable or covariance using the approximated Koopman operator.
- fit(X[, y]) – Estimates parameters - for compatibility with sklearn.
- fit_transform(X[, y]) – Fit to data, then transform it.
- get_output([dimensions, stride, skip, chunk]) – Maps all input data of this transformer and returns it as an array or list of arrays
- get_params([deep]) – Get parameters for this estimator.
- iterator([stride, lag, chunk, …]) – creates an iterator to stream over the (transformed) data.
- load(file_name[, model_name]) – Loads a previously saved PyEMMA object from disk.
- n_chunks(chunksize[, stride, skip]) – how many chunks an iterator of this source will output, starting (e.g. after calling reset())
- n_frames_total([stride, skip]) – Returns total number of frames.
- number_of_trajectories([stride]) – Returns the number of trajectories.
- output_type() – By default transformers return single precision floats.
- partial_fit(X) – incrementally update the covariances and mean.
- save(file_name[, model_name, overwrite, …]) – saves the current state of this object to given file and name.
- score([test_data, score_method]) – Compute the VAMP score for this model or the cross-validation score between self and a second model estimated from different data.
- set_params(**params) – Set the parameters of this estimator.
- trajectory_length(itraj[, stride, skip]) – Returns the length of trajectory of the requested index.
- trajectory_lengths([stride, skip]) – Returns the length of each trajectory.
- transform(X) – Maps the input data through the transformer to correspondingly shaped output data array/list.
- write_to_csv([filename, extension, …]) – write all data to csv with numpy.savetxt
- write_to_hdf5(filename[, group, …]) – writes all data of this Iterable to a given HDF5 file.
Attributes
- chunksize – chunksize defines how much data is being processed at once.
- cumvar – Cumulative sum of the squared and normalized singular values
- data_producer – The data producer for this data source object (can be another data source object).
- default_chunksize – How much data will be processed at once, in case no chunksize has been provided.
- dim – Number of dimensions to keep
- epsilon – singular value cutoff.
- filenames – list of file names the data is originally being read from.
- in_memory – are results stored in memory?
- is_random_accessible – Check if self._is_random_accessible is set to true and if all the random access strategies are implemented.
- is_reader – Property telling if this data source is a reader or not.
- logger – The logger for this class instance
- model – The model estimated by this Estimator
- name – The name of this instance
- ndim
- ntraj
- ra_itraj_cuboid – Implementation of random access with slicing that can be up to 3-dimensional, where the first dimension corresponds to the trajectory index, the second dimension corresponds to the frames and the third dimension corresponds to the dimensions of the frames.
- ra_itraj_jagged – Behaves like ra_itraj_cuboid just that the trajectories are not truncated and returned as a list.
- ra_itraj_linear – Implementation of random access that takes arguments as the default random access (i.e., up to three dimensions with trajs, frames and dims, respectively), but which considers the frame indexing to be contiguous.
- ra_linear – Implementation of random access that takes a (maximal) two-dimensional slice where the first component corresponds to the frames and the second component corresponds to the dimensions.
- scaling – Scaling to be applied to the VAMP order parameters upon transformation
- show_progress
- singular_values – Singular values of the half-weighted Koopman matrix (usually denoted \(\sigma\))
- singular_vectors_left – Transformation matrix that represents the linear map from feature space to the space of left singular functions.
- singular_vectors_right – Transformation matrix that represents the linear map from feature space to the space of right singular functions.
-
chunksize
¶ chunksize defines how much data is being processed at once.
-
cktest
(n_observables=None, observables='phi', statistics='psi', mlags=10, n_jobs=None, show_progress=True, iterable=None)¶ Do the Chapman-Kolmogorov test by computing predictions for higher lag times and by performing estimations at higher lag times.
Notes
This method computes two sets of time-lagged covariance matrices:
- estimates at higher lag times:
\[\left\langle \mathbf{K}(n\tau)g_{i},f_{j}\right\rangle_{\rho_{0}}\]where \(\rho_{0}\) is the empirical distribution implicitly defined by all data points from time steps 0 to T-tau in all trajectories, \(\mathbf{K}(n\tau)\) is a rank-reduced Koopman matrix estimated at the lag-time n*tau and g and f are some functions of the data. Rank-reduction of the Koopman matrix is controlled by the dim parameter of vamp.
- predictions at higher lag times:
\[\left\langle \mathbf{K}^{n}(\tau)g_{i},f_{j}\right\rangle_{\rho_{0}}\]where \(\mathbf{K}^{n}\) is the n’th power of the rank-reduced Koopman matrix contained in self.
The Chapman-Kolmogorov test compares the predictions to the estimates.
Parameters: - n_observables (int, optional, default=None) – Limit the number of default observables (and of default statistics) to this number. Only used if observables are None or statistics are None.
- observables (np.ndarray((input_dimension, n_observables)) or 'phi') – Coefficients that express one or multiple observables \(g\) in the basis of the input features. This parameter can be ‘phi’. In that case, the dominant right singular functions of the Koopman operator estimated at the smallest lag time are used as default observables.
- statistics (np.ndarray((input_dimension, n_statistics)) or 'psi') – Coefficients that express one or multiple statistics \(f\) in the basis of the input features. This parameter can be ‘psi’. In that case, the dominant left singular functions of the Koopman operator estimated at the smallest lag time are used as default statistics.
- mlags (int or int-array, default=10) – multiples of lag times for testing the model, e.g. range(10). A single int will trigger a range, i.e. mlags=10 maps to mlags=range(10). Note that you need to be able to do a model prediction for each of these lag time multiples, e.g. the value 0 only makes sense if model.expectation(lag_multiple=0) will work.
- n_jobs (int, default=None) – how many jobs to use during calculation
- show_progress (bool, default=True) – Show progressbars for calculation?
- iterable (any data format that pyemma.coordinates.vamp() accepts as input, optional) – If iterable is None, the same data source with which VAMP was initialized will be used for all estimations. Otherwise, all estimates (not predictions) will be computed from the data contained in iterable.
Returns: vckv – Contains the estimated and the predicted covariance matrices. The object can be plotted with plot_cktest with the option y01=False.
Return type:
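Example
A usage sketch (assumes a fitted VAMP estimator named vamp; the plotting helper is assumed here to be pyemma.plots.plot_cktest, as referenced above):
>>> from pyemma import plots
>>> ck = vamp.cktest(mlags=5)           # estimates and predictions at lag multiples 0..4
>>> plots.plot_cktest(ck, y01=False)    # compare predictions against estimates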
-
cumvar
¶ Cumulative sum of the squared and normalized singular values
Returns: cumvar Return type: 1D np.array
-
data_producer
¶ The data producer for this data source object (can be another data source object).
Returns: This data source’s data producer.
-
default_chunksize
¶ How much data will be processed at once, in case no chunksize has been provided.
Notes
This variable respects your setting for maximum memory in pyemma.config.default_chunksize
-
describe
()¶ Get a descriptive string representation of this class.
-
dim
¶ Number of dimensions to keep
- if dim is not set (None), all available ranks are kept: n_components == min(n_samples, n_features)
- if dim is an integer >= 1, this number specifies the number of dimensions to keep.
- if dim is a float with 0 < dim < 1, select the number of dimensions such that the amount of kinetic variance that needs to be explained is greater than the percentage specified by dim.
-
dimension
()¶ real output dimension after low-rank approximation.
-
epsilon
¶ singular value cutoff.
Singular values of \(C0\) with norms <= epsilon will be cut off. The remaining number of singular values define the size of the output.
-
estimate
(X, **kwargs)¶ Estimates the model given the data X
Parameters: - X (object) – A reference to the data from which the model will be estimated
- params (dict) – New estimation parameter values. The parameters must have been announced in the __init__ method of this estimator. The present settings will overwrite the settings of parameters given in the __init__ method, i.e. the parameter values after this call will be those that have been used for this estimation. Use this option if only one or a few parameters change with respect to the __init__ settings for this run, and if you don’t need to remember the original settings of these changed parameters.
Returns: estimator – The estimated estimator with the model being available.
Return type: object
-
expectation
(observables, statistics, lag_multiple=1, observables_mean_free=False, statistics_mean_free=False)¶ Compute future expectation of observable or covariance using the approximated Koopman operator.
Parameters: - observables (np.ndarray((input_dimension, n_observables))) – Coefficients that express one or multiple observables in the basis of the input features.
- statistics (np.ndarray((input_dimension, n_statistics)), optional) – Coefficients that express one or multiple statistics in the basis of the input features. This parameter can be None. In that case, this method returns the future expectation value of the observable(s).
- lag_multiple (int) – If > 1, extrapolate to a multiple of the estimator’s lag time by assuming Markovianity of the approximated Koopman operator.
- observables_mean_free (bool, default=False) – If true, coefficients in observables refer to the input features with feature means removed. If false, coefficients in observables refer to the unmodified input features.
- statistics_mean_free (bool, default=False) – If true, coefficients in statistics refer to the input features with feature means removed. If false, coefficients in statistics refer to the unmodified input features.
Notes
A “future expectation” of an observable g is the average of g computed over a time window that has the same total length as the input data from which the Koopman operator was estimated but is shifted by lag_multiple*tau time steps into the future (where tau is the lag time).
It is computed with the equation:
\[\mathbb{E}[g]_{\rho_{n}}=\mathbf{q}^{T}\mathbf{P}^{n-1}\mathbf{e}_{1}\]where
\[P_{ij}=\sigma_{i}\langle\psi_{i},\phi_{j}\rangle_{\rho_{1}}\]and
\[q_{i}=\langle g,\phi_{i}\rangle_{\rho_{1}}\]and \(\mathbf{e}_{1}\) is the first canonical unit vector.
A model prediction of time-lagged covariances between the observable f and the statistic g at a lag-time of lag_multiple*tau is computed with the equation:
\[\mathrm{cov}[g,\,f;n\tau]=\mathbf{q}^{T}\mathbf{P}^{n-1}\boldsymbol{\Sigma}\mathbf{r}\]where \(r_{i}=\langle\psi_{i},f\rangle_{\rho_{0}}\) and \(\boldsymbol{\Sigma}=\mathrm{diag}(\boldsymbol{\sigma})\).
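Example
A brief sketch (hypothetical coefficient arrays; assumes a fitted VAMP estimator named vamp with input dimension n):
>>> import numpy as np
>>> g = np.ones((n, 1))                                # hypothetical observable coefficients in the feature basis
>>> f = np.ones((n, 1))                                # hypothetical statistic coefficients
>>> e_g = vamp.expectation(g, None, lag_multiple=2)    # future expectation of g at 2*tau
>>> cov_gf = vamp.expectation(g, f, lag_multiple=2)    # predicted time-lagged covariance at 2*tau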
-
filenames
¶ list of file names the data is originally being read from.
Returns: names – list of file names at the beginning of the input chain. Return type: list of str
-
fit
(X, y=None)¶ Estimates parameters - for compatibility with sklearn.
Parameters: X (object) – A reference to the data from which the model will be estimated Returns: estimator – The estimator (self) with estimated model. Return type: object
-
fit_transform
(X, y=None, **fit_params)¶ Fit to data, then transform it. Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.
Parameters: - X (numpy array of shape [n_samples, n_features]) – Training set.
- y (numpy array of shape [n_samples]) – Target values.
Returns: X_new – Transformed array. Return type: numpy array of shape [n_samples, n_features_new]
-
get_output
(dimensions=slice(0, None, None), stride=1, skip=0, chunk=None)¶ Maps all input data of this transformer and returns it as an array or list of arrays
Parameters: - dimensions (list-like of indexes or slice, default=all) – indices of dimensions you like to keep.
- stride (int, default=1) – only take every n’th frame.
- skip (int, default=0) – initially skip n frames of each file.
- chunk (int, default=None) – How many frames to process at once. If not given obtain the chunk size from the source.
Returns: output – the mapped data, where T is the number of time steps of the input data, or if stride > 1, floor(T_in / stride). d is the output dimension of this transformer. If the input consists of a list of trajectories, Y will also be a corresponding list of trajectories
Return type: list of ndarray(T_i, d)
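Example
For illustration (assumes a fitted VAMP estimator named vamp):
>>> Y = vamp.get_output(dimensions=slice(0, 2), stride=5)   # first two output dimensions, every 5th frame
>>> Y0 = Y[0]                                               # ndarray of shape (floor(T_0 / 5), 2) for the first trajectory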
-
get_params
(deep=True)¶ Get parameters for this estimator.
Parameters: deep (boolean, optional) – If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns: params – Parameter names mapped to their values. Return type: mapping of string to any
-
in_memory
¶ are results stored in memory?
-
is_random_accessible
¶ Check if self._is_random_accessible is set to true and if all the random access strategies are implemented.
Returns: bool – True if random accessible via strategies and False otherwise.
-
is_reader
¶ Property telling if this data source is a reader or not.
Returns: bool – True if this data source is a reader and False otherwise.
-
iterator
(stride=1, lag=0, chunk=None, return_trajindex=True, cols=None, skip=0)¶ creates an iterator to stream over the (transformed) data.
If your data is too large to fit into memory and you want to incrementally compute some quantities on it, you can create an iterator on a reader or transformer (eg. TICA) to avoid memory overflows.
Parameters: - stride (int, default=1) – Take only every stride’th frame.
- lag (int, default=0) – how many frames to omit for each file.
- chunk (int, default=None) – How many frames to process at once. If not given obtain the chunk size from the source.
- return_trajindex (boolean, default=True) – if True, each iteration yields a tuple of (trajindex, data); if False, only the chunk of data is returned.
- cols (array like, default=None) – return only the given columns.
- skip (int, default=0) – skip ‘n’ first frames of each trajectory.
Returns: iter – an implementation of a DataSourceIterator to stream over the data
Return type: instance of DataSourceIterator
Examples
>>> from pyemma.coordinates import source; import numpy as np
>>> data = [np.arange(3), np.arange(4, 7)]
>>> reader = source(data)
>>> iterator = reader.iterator(chunk=1)
>>> for array_index, chunk in iterator:
...     print(array_index, chunk)
0 [[0]]
0 [[1]]
0 [[2]]
1 [[4]]
1 [[5]]
1 [[6]]
-
classmethod
load
(file_name, model_name='default')¶ Loads a previously saved PyEMMA object from disk.
Parameters: - file_name (str or file-like object (has to provide a read method)) – The file or file-like object from which the serialized object is read.
- model_name (str, default='default') – if multiple models are contained in the file, these can be accessed by their name. Use pyemma.list_models() to get a representation of all stored models.
Returns: obj
Return type: the de-serialized object
-
logger
¶ The logger for this class instance
-
model
¶ The model estimated by this Estimator
-
n_chunks
(chunksize, stride=1, skip=0)¶ how many chunks an iterator of this source will output, starting (e.g. after calling reset())
Parameters: - chunksize –
- stride –
- skip –
-
n_frames_total
(stride=1, skip=0)¶ Returns total number of frames.
Parameters: - stride (int) – return value is the number of frames in trajectories when running through them with a step size of stride.
- skip (int, default=0) – skip the first initial n frames per trajectory.
Returns: n_frames_total – total number of frames.
Return type: int
-
name
¶ The name of this instance
-
number_of_trajectories
(stride=1)¶ Returns the number of trajectories.
Parameters: stride (None (default) or np.ndarray) –
Returns: number of trajectories
Return type: int
-
output_type
()¶ By default transformers return single precision floats.
-
partial_fit
(X)¶ incrementally update the covariances and mean.
Parameters: X (array, list of arrays, PyEMMA reader) – input data. Notes
The projection matrix is calculated upon its first access.
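Example
A sketch of incremental estimation over chunks of data (the chunk arrays are hypothetical):
>>> import numpy as np
>>> from pyemma.coordinates.transform import VAMP
>>> vamp = VAMP(lag=10)
>>> for chunk in [np.random.random((500, 10)) for _ in range(4)]:   # hypothetical data chunks
...     vamp.partial_fit(chunk)
>>> sv = vamp.singular_values   # accessing results triggers computation of the projection matrix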
-
ra_itraj_cuboid
¶ Implementation of random access with slicing that can be up to 3-dimensional, where the first dimension corresponds to the trajectory index, the second dimension corresponds to the frames and the third dimension corresponds to the dimensions of the frames.
The frames selected by the frame slice will be loaded from each of the trajectories selected by the trajectory slice and then sliced with the dimension slice. For example: the data consists of three trajectories with lengths 10, 20, 10, respectively. The slice data[:, :15, :3] returns a 3D array of shape (3, 10, 3), where the first component corresponds to the three trajectories, the second component corresponds to 10 frames (note that the last 5 frames are truncated because two of the trajectories only have 10 frames) and the third component corresponds to the selected first three dimensions.
Returns: Returns an object that allows access by slices in the described manner.
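A sketch of this access pattern (assumes a random-accessible data source named reader):
>>> sub = reader.ra_itraj_cuboid[:, :15, :3]   # all trajectories, first 15 frames, first 3 dimensions
>>> sub.shape                                  # (number of trajectories, truncated frame count, 3)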
-
ra_itraj_jagged
¶ Behaves like ra_itraj_cuboid just that the trajectories are not truncated and returned as a list.
Returns: Returns an object that allows access by slices in the described manner.
-
ra_itraj_linear
¶ Implementation of random access that takes arguments as the default random access (i.e., up to three dimensions with trajs, frames and dims, respectively), but which considers the frame indexing to be contiguous. Therefore, it returns a simple 2D array.
Returns: A 2D array of the sliced data containing [frames, dims].
-
ra_linear
¶ Implementation of random access that takes a (maximal) two-dimensional slice where the first component corresponds to the frames and the second component corresponds to the dimensions. Here it is assumed that the frame indexing is contiguous, i.e., the first frame of the second trajectory has the index of the last frame of the first trajectory plus one.
Returns: Returns an object that allows access by slices in the described manner.
-
save
(file_name, model_name='default', overwrite=False, save_streaming_chain=False)¶ saves the current state of this object to given file and name.
Parameters: - file_name (str) – path to desired output file
- model_name (str, default='default') – creates a group named ‘model_name’ in the given file, which will contain all of the data. If the name already exists, and overwrite is False (default) will raise a RuntimeError.
- overwrite (bool, default=False) – Should overwrite existing model names?
- save_streaming_chain (boolean, default=False) – if True, the data_producer(s) of this object will also be saved in the given file.
Examples
>>> import pyemma, numpy as np
>>> from pyemma.util.contexts import named_temporary_file
>>> m = pyemma.msm.MSM(P=np.array([[0.1, 0.9], [0.9, 0.1]]))
>>> with named_temporary_file() as file: # doctest: +SKIP
...     m.save(file, 'simple') # doctest: +SKIP
...     inst_restored = pyemma.load(file, 'simple') # doctest: +SKIP
>>> np.testing.assert_equal(m.P, inst_restored.P) # doctest: +SKIP
-
scaling
¶ Scaling to be applied to the VAMP order parameters upon transformation
- None: no scaling will be applied, variance of the order parameters is 1
- ‘kinetic map’ or ‘km’: order parameters are scaled by singular value
Only the left singular functions induce a kinetic map. Therefore scaling=’km’ is only effective if right is False.
-
score
(test_data=None, score_method='VAMP2')¶ Compute the VAMP score for this model or the cross-validation score between self and a second model estimated from different data.
Parameters: - test_data (any data format that pyemma.coordinates.vamp() accepts as input) –
If test_data is not None, this method computes the cross-validation score between self and a VAMP model estimated from test_data. It is assumed that self was estimated from the “training” data and test_data is the test data. The score is computed for one realization of self and test_data. Estimation of the average cross-validation score and partitioning of data into test and training part is not performed by this method.
If test_data is None, this method computes the VAMP score for the model contained in self.
The model that is estimated from test_data will inherit all hyperparameters from self.
- score_method (str, optional, default='VAMP2') – Available scores are based on the variational approach for Markov processes [1]_:
  - ’VAMPE’: Approximation error of the estimated Koopman operator with respect to the true Koopman operator up to an additive constant [1]_.
Returns: score – If test_data is not None, returns the cross-validation VAMP score between self and the model estimated from test_data. Otherwise return the selected VAMP-score of self.
Return type: float
References
[1] Wu, H. and Noe, F. 2017. Variational approach for learning Markov processes from time series data. arXiv:1707.04659v1
[2] Noe, F. and Clementi, C. 2015. Kinetic distance and kinetic maps from molecular dynamics simulation. J. Chem. Theory Comput. doi:10.1021/acs.jctc.5b00553
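Example
A cross-validation sketch (train_data and test_data are hypothetical lists of feature trajectories):
>>> from pyemma.coordinates.transform import VAMP
>>> vamp = VAMP(lag=10).estimate(train_data)
>>> cv_score = vamp.score(test_data=test_data, score_method='VAMP2')   # cross-validation score
>>> self_score = vamp.score()                                          # score of the model on its own data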
-
set_params
(**params)¶ Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
Returns: self
-
singular_values
¶ Singular values of the half-weighted Koopman matrix (usually denoted \(\sigma\))
Returns: singular values Return type: 1-D np.array
-
singular_vectors_left
¶ Transformation matrix that represents the linear map from feature space to the space of left singular functions.
Notes
Left “singular vectors” U of the VAMP problem (equation 13 in [1]_), columnwise
Returns: vectors – Coefficients that express the left singular functions in the basis of mean-free input features.
Return type: 2-D ndarray
References
[1] Wu, H. and Noe, F. 2017. Variational approach for learning Markov processes from time series data. arXiv:1707.04659v1
-
singular_vectors_right
¶ Transformation matrix that represents the linear map from feature space to the space of right singular functions.
Notes
Right “singular vectors” V of the VAMP problem (equation 13 in [1]_), columnwise
Returns: vectors – Coefficients that express the right singular functions in the basis of mean-free input features.
Return type: 2-D ndarray
References
[1] Wu, H. and Noe, F. 2017. Variational approach for learning Markov processes from time series data. arXiv:1707.04659v1
-
trajectory_length
(itraj, stride=1, skip=0)¶ Returns the length of trajectory of the requested index.
Parameters: - itraj (int) – trajectory index
- stride (int) – return value is the number of frames in the trajectory when running through it with a step size of stride.
- skip (int or None) – skip n frames.
Returns: length of the trajectory
Return type: int
-
trajectory_lengths
(stride=1, skip=0)¶ Returns the length of each trajectory.
Parameters: - stride (int) – return value is the number of frames of the trajectories when running through them with a step size of stride.
- skip (int) – skip parameter
Returns: array containing the length of each trajectory
Return type: array(dtype=int)
-
transform
(X)¶ Maps the input data through the transformer to correspondingly shaped output data array/list.
Parameters: X (ndarray(T, n) or list of ndarray(T_i, n)) – The input data, where T is the number of time steps and n is the number of dimensions. If a list is provided, the number of time steps is allowed to vary, but the number of dimensions is required to be consistent. Returns: Y – The mapped data, where T is the number of time steps of the input data and d is the output dimension of this transformer. If called with a list of trajectories, Y will also be a corresponding list of trajectories Return type: ndarray(T, d) or list of ndarray(T_i, d)
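Example
For illustration (assumes a fitted VAMP estimator named vamp and a new feature trajectory X_new with the same number of dimensions as the training data):
>>> Y_new = vamp.transform(X_new)   # project new data onto the estimated singular functions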
-
write_to_csv
(filename=None, extension='.dat', overwrite=False, stride=1, chunksize=None, **kw)¶ write all data to csv with numpy.savetxt
Parameters: - filename (str, optional) –
filename string, which may contain placeholders {itraj} and {stride}:
- itraj will be replaced by the trajectory index
- stride is stride argument of this method
If filename is not given, the method tries to obtain the filenames from the data source of this iterator.
- extension (str, optional, default='.dat') – filename extension of created files
- overwrite (bool, optional, default=False) – shall existing files be overwritten? If a file exists, this method will raise.
- stride (int) – omit every n’th frame
- chunksize (int, default=None) – how many frames to process at once
- kw (dict, optional) – named arguments passed into numpy.savetxt (header, separator, etc.)
Example
Assume you want to save features calculated by some FeatureReader to ASCII:
>>> import numpy as np, pyemma
>>> import os
>>> from pyemma.util.files import TemporaryDirectory
>>> from pyemma.util.contexts import settings
>>> data = [np.random.random((10,3))] * 3
>>> reader = pyemma.coordinates.source(data)
>>> filename = "distances_{itraj}.dat"
>>> with TemporaryDirectory() as td, settings(show_progress_bars=False):
...     out = os.path.join(td, filename)
...     reader.write_to_csv(out, header='', delimiter=';')
...     print(sorted(os.listdir(td)))
['distances_0.dat', 'distances_1.dat', 'distances_2.dat']
-
write_to_hdf5
(filename, group='/', data_set_prefix='', overwrite=False, stride=1, chunksize=None, h5_opt=None)¶ writes all data of this Iterable to a given HDF5 file. This is equivalent to writing the result of pyemma.coordinates.data._base.DataSource.get_output to a file.
Parameters: - filename (str) – file name of output HDF5 file
- group (str, default='/') – write all trajectories to this HDF5 group. The group name may not already exist in the file.
- data_set_prefix (str, default=None) – data set name prefix, will be suffixed with the index of the trajectory.
- overwrite (bool, default=False) – if group and data sets already exist, shall we overwrite data?
- stride (int, default=1) – stride argument to iterator
- chunksize (int, default=None) – how many frames to process at once
- h5_opt (dict) – optional parameters for h5py.create_dataset
Notes
You can pass the following via h5_opt to enable compression/filters/shuffling etc:
- chunks: (Tuple) Chunk shape, or True to enable auto-chunking.
- maxshape: (Tuple) Make the dataset resizable up to this shape. Use None for axes you want to be unlimited.
- compression: (String or int) Compression strategy. Legal values are ‘gzip’, ‘szip’, ‘lzf’. If an integer in range(10), this indicates gzip compression level. Otherwise, an integer indicates the number of a dynamically loaded compression filter.
- compression_opts: Compression settings. This is an integer for gzip, 2-tuple for szip, etc. If specifying a dynamically loaded compression filter number, this must be a tuple of values.
- scaleoffset: (Integer) Enable scale/offset filter for (usually) lossy compression of integer or floating-point data. For integer data, the value of scaleoffset is the number of bits to retain (pass 0 to let HDF5 determine the minimum number of bits necessary for lossless compression). For floating point data, scaleoffset is the number of digits after the decimal place to retain; stored values thus have absolute error less than 0.5*10**(-scaleoffset).
- shuffle: (T/F) Enable shuffle filter. Only effective in combination with chunks.
- fletcher32: (T/F) Enable fletcher32 error detection. Not permitted in conjunction with the scale/offset filter.
- fillvalue: (Scalar) Use this value for uninitialized parts of the dataset.
- track_times: (T/F) Enable dataset creation timestamps.
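For example, compressed output could be requested like this (sketch; 'output.h5' and the group name are hypothetical, and reader stands for any Iterable such as this transformer):
>>> reader.write_to_hdf5('output.h5', group='/features',
...                      h5_opt=dict(compression='gzip', compression_opts=4, shuffle=True, chunks=True))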
-