pyemma.coordinates.clustering.MiniBatchKmeansClustering

class pyemma.coordinates.clustering.MiniBatchKmeansClustering(n_clusters, max_iter=5, metric='euclidean', tolerance=1e-05, init_strategy='kmeans++', batch_size=0.2, oom_strategy='memmap', fixed_seed=False, stride=None, n_jobs=None, skip=0, clustercenters=None, keep_data=False)

Mini-batch k-means clustering

__init__(n_clusters, max_iter=5, metric='euclidean', tolerance=1e-05, init_strategy='kmeans++', batch_size=0.2, oom_strategy='memmap', fixed_seed=False, stride=None, n_jobs=None, skip=0, clustercenters=None, keep_data=False)

Mini-batch k-means clustering

Parameters
  • n_clusters (int) – number of cluster centers. When not specified (None), min(sqrt(N), 5000) is chosen as the default, where N denotes the number of data points

  • max_iter (int) – maximum number of iterations before stopping.

  • tolerance (float) –

    stop iteration when the relative change in the cost function

    \[C(S) = \sum_{i=1}^{k} \sum_{\mathbf x \in S_i} \left\| \mathbf x - \boldsymbol\mu_i \right\|^2\]

    is smaller than tolerance.

  • metric (str) – metric to use during clustering (‘euclidean’, ‘minRMSD’)

  • init_strategy (string) – can be either ‘kmeans++’ or ‘uniform’, determining how the initial cluster centers are chosen

  • fixed_seed (bool or int) – if True, the seed is fixed to 42; otherwise a time-based seed is used. If an integer is given, it is used to initialize the random number generator.

  • oom_strategy (string, default='memmap') –

    how to deal with an out-of-memory situation during accumulation of all data.

    • ’memmap’: if not enough memory is available to store all data, a memory-mapped file is created and written to

    • ’raise’: raise an OutOfMemory exception.

  • stride (int) – process only every stride’th frame of the input data when estimating the cluster centers.

  • n_jobs (int or None, default None) – Number of threads to use during assignment of the data. If None, all available CPUs will be used.

  • clustercenters (None or array(k, dim)) – This is used to resume the kmeans iteration. Note that if this is set, the init_strategy is ignored and the centers are directly passed to the kmeans iteration algorithm.

  • keep_data (boolean, default False) – If you intend to resume the kmeans iteration later on, in case it did not converge, this parameter controls whether the input data is kept in memory or not.
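
Example

A minimal usage sketch. The factory function pyemma.coordinates.cluster_mini_batch_kmeans and the toy data are illustrative assumptions; the keyword names follow the parameter list above.

>>> import numpy as np
>>> import pyemma.coordinates as coor  # doctest: +SKIP
>>> data = [np.random.randn(1000, 3), np.random.randn(1000, 3)]  # two toy trajectories
>>> cl = coor.cluster_mini_batch_kmeans(data, k=50, max_iter=10, batch_size=0.2)  # doctest: +SKIP
>>> cl.clustercenters.shape  # doctest: +SKIP
(50, 3)
>>> len(cl.dtrajs)  # one discrete trajectory per input trajectory  # doctest: +SKIP
2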

Methods

__init__(n_clusters[, max_iter, metric, …])

Mini-batch k-means clustering

assign([X, stride])

Assigns the given trajectory or list of trajectories to cluster centers by using the discretization defined by this clustering method (usually a Voronoi tessellation).

describe()

Get a descriptive string representation of this class.

dimension()

output dimension of clustering algorithm (always 1).

estimate(X, **kwargs)

Estimates the model given the data X

fit(X[, y])

Estimates parameters - for compatibility with sklearn.

fit_predict(X[, y])

Performs clustering on X and returns cluster labels.

fit_transform(X[, y])

Fit to data, then transform it.

get_model_params([deep])

Get parameters for this model.

get_output([dimensions, stride, skip, chunk])

Maps all input data of this transformer and returns it as an array or list of arrays

get_params([deep])

Get parameters for this estimator.

iterator([stride, lag, chunk, …])

creates an iterator to stream over the (transformed) data.

load(file_name[, model_name])

Loads a previously saved PyEMMA object from disk.

n_chunks(chunksize[, stride, skip])

how many chunks an iterator of this source will output, starting (e.g. after calling reset())

n_frames_total([stride, skip])

Returns total number of frames.

number_of_trajectories([stride])

Returns the number of trajectories.

output_type()

By default transformers return single precision floats.

sample_indexes_by_cluster(clusters, nsample)

Samples trajectory/time indexes according to the given sequence of states.

save(file_name[, model_name, overwrite, …])

saves the current state of this object to the given file and name.

save_dtrajs([trajfiles, prefix, output_dir, …])

saves calculated discrete trajectories.

set_model_params(clustercenters)

set_params(**params)

Set the parameters of this estimator.

trajectory_length(itraj[, stride, skip])

Returns the length of trajectory of the requested index.

trajectory_lengths([stride, skip])

Returns the length of each trajectory.

transform(X)

Maps the input data through the transformer to correspondingly shaped output data array/list.

update_model_params(**params)

Update the given model parameters if they are set to specific values

write_to_csv([filename, extension, …])

write all data to csv with numpy.savetxt

write_to_hdf5(filename[, group, …])

writes all data of this Iterable to a given HDF5 file.

Attributes

chunksize

chunksize defines how much data is being processed at once.

cluster_centers_

Array containing the coordinates of the calculated cluster centers.

clustercenters

Array containing the coordinates of the calculated cluster centers.

converged

data_producer

The data producer for this data source object (can be another data source object).

default_chunksize

How much data will be processed at once, in case no chunksize has been provided.

dtrajs

Discrete trajectories (assigned data to cluster centers).

filenames

list of file names the data is originally being read from.

fixed_seed

seed for random choice of initial cluster centers.

in_memory

are results stored in memory?

index_clusters

Returns trajectory/time indexes for all the clusters

init_strategy

Strategy to get an initial guess for the centers.

is_random_accessible

Check if self._is_random_accessible is set to true and if all the random access strategies are implemented.

is_reader

Property telling if this data source is a reader or not.

logger

The logger for this class instance

model

The model estimated by this Estimator

n_jobs

Returns number of jobs/threads to use during assignment of data.

name

The name of this instance

ndim

ntraj

overwrite_dtrajs

Should existing dtraj files be overwritten.

ra_itraj_cuboid

Implementation of random access with slicing that can be up to 3-dimensional, where the first dimension corresponds to the trajectory index, the second dimension corresponds to the frames and the third dimension corresponds to the dimensions of the frames.

ra_itraj_jagged

Behaves like ra_itraj_cuboid just that the trajectories are not truncated and returned as a list.

ra_itraj_linear

Implementation of random access that takes arguments as the default random access (i.e., up to three dimensions with trajs, frames and dims, respectively), but which considers the frame indexing to be contiguous.

ra_linear

Implementation of random access that takes a (maximal) two-dimensional slice where the first component corresponds to the frames and the second component corresponds to the dimensions.

show_progress

whether to show the progress of heavy calculations on this object.

assign(X=None, stride=1)

Assigns the given trajectory or list of trajectories to cluster centers by using the discretization defined by this clustering method (usually a Voronoi tessellation).

You can assign multiple times with different strides. The result of the last call to assign will be saved and is available via the dtrajs attribute.

Parameters
  • X (ndarray(T, n) or list of ndarray(T_i, n), optional, default = None) – Optional input data to map, where T is the number of time steps and n is the number of dimensions. When a list is provided, the trajectories may have different numbers of time steps, but the number of dimensions needs to be consistent. When X is not provided, the result of assign is identical to get_output(), i.e. the data used for clustering will be assigned. If X is given, the stride argument is not accepted.

  • stride (int, optional, default = 1) – If set to 1, all frames of the input data will be assigned. Note that this could cause this calculation to be very slow for large data sets. Since molecular dynamics data is usually correlated at short timescales, it is often sufficient to obtain the discretization at a longer stride. Note that the stride option used to conduct the clustering is independent of the assign stride. This argument is only accepted if X is not given.

Returns

Y – The discretized trajectory: int-array with the indexes of the assigned clusters, or list of such int-arrays. If called with a list of trajectories, Y will also be a corresponding list of discrete trajectories

Return type

ndarray(T, dtype=int) or list of ndarray(T_i, dtype=int)
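
Example

A sketch of assigning data that was not part of the clustering input, continuing the construction example above (new_traj is hypothetical data of matching dimensionality):

>>> new_traj = np.random.randn(500, 3)
>>> dtraj = cl.assign(new_traj)  # nearest-center index per frame  # doctest: +SKIP
>>> dtraj.shape  # doctest: +SKIP
(500,)
>>> coarse = cl.assign(stride=10)  # re-assign the clustering input at stride 10  # doctest: +SKIP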

chunksize

chunksize defines how much data is being processed at once.

cluster_centers_

Array containing the coordinates of the calculated cluster centers.

clustercenters

Array containing the coordinates of the calculated cluster centers.

data_producer

The data producer for this data source object (can be another data source object).

Returns

This data source’s data producer.

default_chunksize

How much data will be processed at once, in case no chunksize has been provided.

Notes

This variable respects your setting for maximum memory in pyemma.config.default_chunksize

describe()

Get a descriptive string representation of this class.

dimension()

output dimension of clustering algorithm (always 1).

dtrajs

Discrete trajectories (assigned data to cluster centers).

estimate(X, **kwargs)

Estimates the model given the data X

Parameters
  • X (object) – A reference to the data from which the model will be estimated

  • params (dict) – New estimation parameter values. The parameters must have been announced in the __init__ method of this estimator. The present settings will overwrite the settings of parameters given in the __init__ method, i.e. the parameter values after this call will be those that have been used for this estimation. Use this option if only one or a few parameters change with respect to the __init__ settings for this run, and if you don’t need to remember the original settings of these changed parameters.

Returns

estimator – The estimated estimator with the model being available.

Return type

object
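
Example

A sketch of overriding an announced parameter at estimation time, continuing the example above:

>>> cl.estimate(data, max_iter=20)  # re-run the estimation with more iterations  # doctest: +SKIP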

filenames

list of file names the data is originally being read from.

Returns

names – list of file names at the beginning of the input chain.

Return type

list of str

fit(X, y=None)

Estimates parameters - for compatibility with sklearn.

Parameters

X (object) – A reference to the data from which the model will be estimated

Returns

estimator – The estimator (self) with estimated model.

Return type

object

fit_predict(X, y=None)

Performs clustering on X and returns cluster labels.

Parameters

X (ndarray, shape (n_samples, n_features)) – Input data.

Returns

y – cluster labels

Return type

ndarray, shape (n_samples,)
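
Example

Since the estimator is sklearn-compatible, it can be used like any sklearn clusterer. A sketch (note the single-ndarray input):

>>> X = np.random.randn(1000, 3)
>>> labels = cl.fit_predict(X)  # integer cluster labels, shape (1000,)  # doctest: +SKIP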

fit_transform(X, y=None, **fit_params)

Fit to data, then transform it. Fits the transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters
  • X (numpy array of shape [n_samples, n_features]) – Training set.

  • y (numpy array of shape [n_samples]) – Target values.

Returns

X_new – Transformed array.

Return type

numpy array of shape [n_samples, n_features_new]

fixed_seed

seed for random choice of initial cluster centers. Fix this to get reproducible results.

get_model_params(deep=True)

Get parameters for this model.

Parameters

deep (boolean, optional) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params – Parameter names mapped to their values.

Return type

mapping of string to any

get_output(dimensions=slice(0, None, None), stride=1, skip=0, chunk=None)

Maps all input data of this transformer and returns it as an array or list of arrays

Parameters
  • dimensions (list-like of indexes or slice, default=all) – indices of the dimensions you would like to keep.

  • stride (int, default=1) – only take every n’th frame.

  • skip (int, default=0) – initially skip n frames of each file.

  • chunk (int, default=None) – How many frames to process at once. If not given obtain the chunk size from the source.

Returns

output – the mapped data, where T is the number of time steps of the input data, or if stride > 1, floor(T_in / stride). d is the output dimension of this transformer. If the input consists of a list of trajectories, Y will also be a corresponding list of trajectories

Return type

list of ndarray(T_i, d)
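
Example

A sketch of collecting the discretized output at a coarser stride, continuing the example above (for a clustering, the output has dimension 1):

>>> outs = cl.get_output(stride=5)  # doctest: +SKIP
>>> len(outs), outs[0].shape  # one array per input trajectory  # doctest: +SKIP
(2, (200, 1))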

get_params(deep=True)

Get parameters for this estimator.

Parameters

deep (boolean, optional) – If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns

params – Parameter names mapped to their values.

Return type

mapping of string to any

in_memory

are results stored in memory?

index_clusters

Returns trajectory/time indexes for all the clusters

Returns

indexes – For each state, all trajectory and time indexes where this cluster occurs. Each matrix has a number of rows equal to the number of occurrences of the corresponding state, with rows consisting of a tuple (i, t), where i is the index of the trajectory and t is the time index within the trajectory.

Return type

list of ndarray( (N_i, 2) )

init_strategy

Strategy to get an initial guess for the centers.

is_random_accessible

Check if self._is_random_accessible is set to true and if all the random access strategies are implemented.

Returns

True if random accessible via strategies and False otherwise.

Return type

bool

is_reader

Property telling if this data source is a reader or not.

Returns

True if this data source is a reader and False otherwise.

Return type

bool

iterator(stride=1, lag=0, chunk=None, return_trajindex=True, cols=None, skip=0)

creates an iterator to stream over the (transformed) data.

If your data is too large to fit into memory and you want to incrementally compute some quantities on it, you can create an iterator on a reader or transformer (e.g. TICA) to avoid memory overflows.

Parameters
  • stride (int, default=1) – Take only every stride’th frame.

  • lag (int, default=0) – how many frames to omit for each file.

  • chunk (int, default=None) – How many frames to process at once. If not given obtain the chunk size from the source.

  • return_trajindex (boolean, default=True) – if False, yields only a chunk of data; otherwise, yields a tuple of (trajindex, data).

  • cols (array like, default=None) – return only the given columns.

  • skip (int, default=0) – skip ‘n’ first frames of each trajectory.

Returns

iter – an implementation of a DataSourceIterator to stream over the data

Return type

instance of DataSourceIterator

Examples

>>> from pyemma.coordinates import source; import numpy as np
>>> data = [np.arange(3), np.arange(4, 7)]
>>> reader = source(data)
>>> iterator = reader.iterator(chunk=1)
>>> for array_index, chunk in iterator:
...     print(array_index, chunk)
0 [[0]]
0 [[1]]
0 [[2]]
1 [[4]]
1 [[5]]
1 [[6]]
classmethod load(file_name, model_name='default')

Loads a previously saved PyEMMA object from disk.

Parameters
  • file_name (str or file-like object (has to provide a read method)) – The file name or file-like object from which the serialized object is read.

  • model_name (str, default='default') – if multiple models are contained in the file, these can be accessed by their name. Use pyemma.list_models() to get a representation of all stored models.

Returns

obj

Return type

the de-serialized object

logger

The logger for this class instance

model

The model estimated by this Estimator

n_chunks(chunksize, stride=1, skip=0)

how many chunks an iterator of this source will output, starting (e.g. after calling reset())

Parameters
  • chunksize (int) – how many frames to process at once.

  • stride (int, default=1) – take only every stride’th frame.

  • skip (int, default=0) – skip the first n frames of each trajectory.

n_frames_total(stride=1, skip=0)

Returns total number of frames.

Parameters
  • stride (int) – return value is the number of frames in trajectories when running through them with a step size of stride.

  • skip (int, default=0) – skip the first initial n frames per trajectory.

Returns

n_frames_total – total number of frames.

Return type

int

n_jobs

Returns number of jobs/threads to use during assignment of data.

Returns

If None, the setting of the ‘PYEMMA_NJOBS’ or ‘SLURM_CPUS_ON_NODE’ environment variable is returned. If neither of these environment variables exists, the number of processors/cores is returned.

Notes

This setting will effectively be multiplied by the number of threads used by NumPy for algorithms which use multiple processes. So take care if you choose this manually.
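
Example

A sketch of pinning the thread count through the environment variable named above; it must be set before the assignment is run:

>>> import os
>>> os.environ['PYEMMA_NJOBS'] = '4'  # consulted when n_jobs is None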

name

The name of this instance

number_of_trajectories(stride=1)

Returns the number of trajectories.

Parameters

stride (None (default) or np.ndarray) –

Returns

the number of trajectories

Return type

int

output_type()

By default transformers return single precision floats.

overwrite_dtrajs

Should existing dtraj files be overwritten. Set this property to True to overwrite.

ra_itraj_cuboid

Implementation of random access with slicing that can be up to 3-dimensional, where the first dimension corresponds to the trajectory index, the second dimension corresponds to the frames and the third dimension corresponds to the dimensions of the frames.

The frames selected by the frame slice are loaded from each of the trajectories selected by the trajectory slice and are then sliced with the dimension slice. For example: the data consists of three trajectories with lengths 10, 20, and 10, respectively. The slice data[:, :15, :3] returns a 3D array of shape (3, 10, 3), where the first component corresponds to the three trajectories, the second component corresponds to 10 frames (note that the last 5 frames are truncated, as the other two trajectories only have 10 frames), and the third component corresponds to the selected first three dimensions.

Returns

Returns an object that allows access by slices in the described manner.

ra_itraj_jagged

Behaves like ra_itraj_cuboid just that the trajectories are not truncated and returned as a list.

Returns

Returns an object that allows access by slices in the described manner.

ra_itraj_linear

Implementation of random access that takes arguments as the default random access (i.e., up to three dimensions with trajs, frames and dims, respectively), but which considers the frame indexing to be contiguous. Therefore, it returns a simple 2D array.

Returns

A 2D array of the sliced data containing [frames, dims].

ra_linear

Implementation of random access that takes a (maximal) two-dimensional slice where the first component corresponds to the frames and the second component corresponds to the dimensions. Here it is assumed that the frame indexing is contiguous, i.e., the first frame of the second trajectory has the index of the last frame of the first trajectory plus one.

Returns

Returns an object that allows access by slices in the described manner.

sample_indexes_by_cluster(clusters, nsample, replace=True)

Samples trajectory/time indexes according to the given sequence of states.

Parameters
  • clusters (iterable of integers) – the cluster indexes to be sampled

  • nsample (int) – Number of samples per cluster. If replace = False, the number of returned samples per cluster could be smaller if fewer than nsample indexes are available for a cluster.

  • replace (boolean, optional) – Whether the sample is drawn with or without replacement

Returns

indexes – List of the sampled indices by cluster. Each element is an index array with N=nsample rows (fewer if replace=False and not enough indexes are available), each row consisting of a tuple (i, t), where i is the index of the trajectory and t is the time index within the trajectory.

Return type

list of ndarray( (N, 2) )
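
Example

A sketch of drawing ten (trajectory, frame) index pairs from each of the first three clusters, continuing the example above:

>>> samples = cl.sample_indexes_by_cluster([0, 1, 2], nsample=10)  # doctest: +SKIP
>>> [s.shape for s in samples]  # doctest: +SKIP
[(10, 2), (10, 2), (10, 2)]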

save(file_name, model_name='default', overwrite=False, save_streaming_chain=False)

saves the current state of this object to the given file and name.

Parameters
  • file_name (str) – path to desired output file

  • model_name (str, default='default') – creates a group named ‘model_name’ in the given file, which will contain all of the data. If the name already exists, and overwrite is False (default) will raise a RuntimeError.

  • overwrite (bool, default=False) – whether to overwrite existing model names.

  • save_streaming_chain (boolean, default=False) – if True, the data_producer(s) of this object will also be saved in the given file.

Examples

>>> import pyemma, numpy as np
>>> from pyemma.util.contexts import named_temporary_file
>>> m = pyemma.msm.MSM(P=np.array([[0.1, 0.9], [0.9, 0.1]]))
>>> with named_temporary_file() as file: # doctest: +SKIP
...    m.save(file, 'simple') # doctest: +SKIP
...    inst_restored = pyemma.load(file, 'simple') # doctest: +SKIP
>>> np.testing.assert_equal(m.P, inst_restored.P) # doctest: +SKIP
save_dtrajs(trajfiles=None, prefix='', output_dir='.', output_format='ascii', extension='.dtraj')

saves calculated discrete trajectories. Filenames are taken from the given reader. If the data comes from memory, dtrajs are written to a default filename.

Parameters
  • trajfiles (list of str (optional)) – names of input trajectory files; will be used to generate output file names.

  • prefix (str) – prepend prefix to filenames.

  • output_dir (str) – save files to this directory.

  • output_format (str) – if format is ‘ascii’, dtrajs will be written as csv files; otherwise they will be written as NumPy .npy files.

  • extension (str) – file extension to append (e.g. ‘.itraj’)
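
Example

A sketch of writing one dtraj file per input trajectory to a hypothetical directory; per the parameter description above, any output_format other than ‘ascii’ yields NumPy .npy files:

>>> cl.save_dtrajs(output_dir='./dtrajs', output_format='npy', extension='.npy')  # doctest: +SKIP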

set_params(**params)

Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns

self

show_progress

whether to show the progress of heavy calculations on this object.

trajectory_length(itraj, stride=1, skip=0)

Returns the length of trajectory of the requested index.

Parameters
  • itraj (int) – trajectory index

  • stride (int) – return value is the number of frames in the trajectory when running through it with a step size of stride.

  • skip (int or None) – skip n frames.

Returns

the length of the trajectory

Return type

int

trajectory_lengths(stride=1, skip=0)

Returns the length of each trajectory.

Parameters
  • stride (int) – return value is the number of frames of the trajectories when running through them with a step size of stride.

  • skip (int) – skip parameter

Returns

the length of each trajectory

Return type

array(dtype=int)

transform(X)

Maps the input data through the transformer to correspondingly shaped output data array/list.

Parameters

X (ndarray(T, n) or list of ndarray(T_i, n)) – The input data, where T is the number of time steps and n is the number of dimensions. If a list is provided, the number of time steps is allowed to vary, but the number of dimensions is required to be consistent.

Returns

Y – The mapped data, where T is the number of time steps of the input data and d is the output dimension of this transformer. If called with a list of trajectories, Y will also be a corresponding list of trajectories

Return type

ndarray(T, d) or list of ndarray(T_i, d)

update_model_params(**params)

Update the given model parameters if they are set to specific values

write_to_csv(filename=None, extension='.dat', overwrite=False, stride=1, chunksize=None, **kw)

write all data to csv with numpy.savetxt

Parameters
  • filename (str, optional) –

    filename string, which may contain placeholders {itraj} and {stride}:

    • itraj will be replaced by the trajectory index

    • stride is stride argument of this method

    If filename is not given, the filenames are obtained from the data source of this iterator.

  • extension (str, optional, default='.dat') – filename extension of created files

  • overwrite (bool, optional, default=False) – shall existing files be overwritten? If False and a file exists, this method will raise.

  • stride (int) – write only every stride’th frame

  • chunksize (int, default=None) – how many frames to process at once

  • kw (dict, optional) – named arguments passed to numpy.savetxt (header, delimiter, etc.)

Example

Assume you want to save features calculated by some FeatureReader to ASCII:

>>> import numpy as np, pyemma
>>> import os
>>> from pyemma.util.files import TemporaryDirectory
>>> from pyemma.util.contexts import settings
>>> data = [np.random.random((10,3))] * 3
>>> reader = pyemma.coordinates.source(data)
>>> filename = "distances_{itraj}.dat"
>>> with TemporaryDirectory() as td, settings(show_progress_bars=False):
...    out = os.path.join(td, filename)
...    reader.write_to_csv(out, header='', delimiter=';')
...    print(sorted(os.listdir(td)))
['distances_0.dat', 'distances_1.dat', 'distances_2.dat']
write_to_hdf5(filename, group='/', data_set_prefix='', overwrite=False, stride=1, chunksize=None, h5_opt=None)

writes all data of this Iterable to a given HDF5 file. This is equivalent to writing the result of pyemma.coordinates.data._base.DataSource.get_output() to a file.

Parameters
  • filename (str) – file name of output HDF5 file

  • group (str, default='/') – write all trajectories to this HDF5 group. The group name must not already exist in the file.

  • data_set_prefix (str, default='') – data set name prefix; it will be suffixed with the index of the trajectory.

  • overwrite (bool, default=False) – if group and data sets already exist, shall we overwrite data?

  • stride (int, default=1) – stride argument to iterator

  • chunksize (int, default=None) – how many frames to process at once

  • h5_opt (dict) – optional parameters for h5py.create_dataset

Notes

You can pass the following via h5_opt to enable compression/filters/shuffling, etc.:

chunks

(Tuple) Chunk shape, or True to enable auto-chunking.

maxshape

(Tuple) Make the dataset resizable up to this shape. Use None for axes you want to be unlimited.

compression

(String or int) Compression strategy. Legal values are ‘gzip’, ‘szip’, ‘lzf’. If an integer in range(10), this indicates gzip compression level. Otherwise, an integer indicates the number of a dynamically loaded compression filter.

compression_opts

Compression settings. This is an integer for gzip, 2-tuple for szip, etc. If specifying a dynamically loaded compression filter number, this must be a tuple of values.

scaleoffset

(Integer) Enable scale/offset filter for (usually) lossy compression of integer or floating-point data. For integer data, the value of scaleoffset is the number of bits to retain (pass 0 to let HDF5 determine the minimum number of bits necessary for lossless compression). For floating point data, scaleoffset is the number of digits after the decimal place to retain; stored values thus have absolute error less than 0.5*10**(-scaleoffset).

shuffle

(T/F) Enable shuffle filter. Only effective in combination with chunks.

fletcher32

(T/F) Enable fletcher32 error detection. Not permitted in conjunction with the scale/offset filter.

fillvalue

(Scalar) Use this value for uninitialized parts of the dataset.

track_times

(T/F) Enable dataset creation timestamps.
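
Example

A sketch of enabling gzip compression and byte shuffling through h5_opt; the option names mirror h5py.create_dataset, and shuffle only takes effect together with chunked storage, hence chunks=True:

>>> h5_opt = dict(chunks=True, compression='gzip', compression_opts=4, shuffle=True)
>>> cl.write_to_hdf5('output.h5', group='/clustering', h5_opt=h5_opt)  # doctest: +SKIP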