pyemma.coordinates.data.FeatureReader
class pyemma.coordinates.data.FeatureReader(trajectories, topologyfile=None, chunksize=1000, featurizer=None)

Reads features from MD data.

To select a feature, access the featurizer and call a feature selecting method (e.g. distances).

Parameters:
- trajectories (list of strings) – paths to trajectory files
- topologyfile (string) – path to topology file (e.g. pdb)
- chunksize (int) – how many frames to process in one batch
- featurizer (MDFeaturizer) – a preconstructed featurizer
Examples

>>> from pyemma.datasets import get_bpti_test_data
>>> from pyemma.util.contexts import settings

Iterator access:

>>> reader = FeatureReader(get_bpti_test_data()['trajs'], get_bpti_test_data()['top'])

Optionally set a chunksize:

>>> reader.chunksize = 300

Store chunks by their trajectory index:

>>> chunks = {i: [] for i in range(reader.number_of_trajectories())}
>>> for itraj, X in reader:
...     chunks[itraj].append(X)

Calculate some distances of the protein during feature reading:

>>> reader.featurizer.add_distances([[0, 3], [10, 15]])
>>> with settings(show_progress_bars=False):
...     X = reader.get_output()
__init__(trajectories, topologyfile=None, chunksize=1000, featurizer=None)

Initialize self. See help(type(self)) for accurate signature.
Methods

__init__(trajectories[, topologyfile, …]) – Initialize self.
describe() – Returns a description of this transformer.
dimension() – Returns the number of output dimensions.
get_output([dimensions, stride, skip, chunk]) – Maps all input data of this transformer and returns it as an array or list of arrays.
iterator([stride, lag, chunk, …]) – Creates an iterator to stream over the (transformed) data.
load(file_name[, model_name]) – Loads a previously saved PyEMMA object from disk.
n_chunks(chunksize[, stride, skip]) – How many chunks an iterator of this source will output, starting e.g. after calling reset().
n_frames_total([stride, skip]) – Returns total number of frames.
number_of_trajectories([stride]) – Returns the number of trajectories.
output_type() – By default transformers return single precision floats.
save(file_name[, model_name, overwrite, …]) – Saves the current state of this object to a given file and name.
supports_format(file_name) – Static method that checks whether the extension of the input file name indicates a file type that can potentially be read with a FeatureReader.
trajectory_length(itraj[, stride, skip]) – Returns the length of the trajectory with the requested index.
trajectory_lengths([stride, skip]) – Returns the length of each trajectory.
write_to_csv([filename, extension, …]) – Writes all data to CSV files with numpy.savetxt.
write_to_hdf5(filename[, group, …]) – Writes all data of this Iterable to a given HDF5 file.
Attributes

SUPPORTED_RANDOM_ACCESS_FORMATS
chunksize
data_producer – The data producer for this data source object (can be another data source object).
default_chunksize – How much data will be processed at once, in case no chunksize has been provided.
filenames – List of file names the data is originally being read from.
in_memory – Are results stored in memory?
is_random_accessible – Check if self._is_random_accessible is set to true and if all the random access strategies are implemented.
is_reader – Property telling if this data source is a reader or not.
logger – The logger for this class instance.
name – The name of this instance.
ndim
ntraj
ra_itraj_cuboid – Implementation of random access with slicing that can be up to 3-dimensional, where the first dimension corresponds to the trajectory index, the second dimension corresponds to the frames and the third dimension corresponds to the dimensions of the frames.
ra_itraj_jagged – Behaves like ra_itraj_cuboid, except that the trajectories are not truncated and are returned as a list.
ra_itraj_linear – Implementation of random access that takes arguments as the default random access (i.e., up to three dimensions with trajs, frames and dims, respectively), but which considers the frame indexing to be contiguous.
ra_linear – Implementation of random access that takes a (maximal) two-dimensional slice where the first component corresponds to the frames and the second component corresponds to the dimensions.
trajfiles
data_producer

The data producer for this data source object (can be another data source object).

Returns: this data source's data producer.
default_chunksize

How much data will be processed at once, in case no chunksize has been provided.

Notes

This property respects your setting for maximum memory in pyemma.config.default_chunksize.
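As a quick illustration, the configured default can be inspected via the config module (a sketch; the printed value depends on your local configuration):

>>> import pyemma
>>> pyemma.config.default_chunksize  # doctest: +SKIP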
describe()

Returns a description of this transformer.
dimension()

Returns the number of output dimensions.
filenames

List of file names the data is originally being read from.

Returns: names – list of file names at the beginning of the input chain.
Return type: list of str
get_output(dimensions=slice(0, None, None), stride=1, skip=0, chunk=None)

Maps all input data of this transformer and returns it as an array or list of arrays.

Parameters:
- dimensions (list-like of indexes or slice, default=all) – indices of the dimensions you would like to keep
- stride (int, default=1) – only take every n'th frame
- skip (int, default=0) – initially skip n frames of each file
- chunk (int, default=None) – how many frames to process at once; if not given, the chunk size is obtained from the source

Returns: output – the mapped data, where T is the number of time steps of the input data, or floor(T_in / stride) if stride > 1, and d is the output dimension of this transformer. If the input consists of a list of trajectories, the output will be a corresponding list of trajectories.
Return type: list of ndarray(T_i, d)
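A short sketch of the optional arguments, using an in-memory source (pyemma.coordinates.source, as in the write_to_csv example below) so it runs without trajectory files:

>>> import numpy as np, pyemma
>>> from pyemma.util.contexts import settings
>>> data = [np.random.random((100, 3)), np.random.random((50, 3))]
>>> reader = pyemma.coordinates.source(data)
>>> with settings(show_progress_bars=False):
...     out = reader.get_output(dimensions=[0, 2], stride=10, skip=5)
>>> [o.shape for o in out]
[(10, 2), (5, 2)]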
in_memory

Are results stored in memory?
is_random_accessible

Check if self._is_random_accessible is set to true and if all the random access strategies are implemented.

Returns: True if random accessible via the strategies, False otherwise.
Return type: bool
is_reader

Property telling if this data source is a reader or not.

Returns: True if this data source is a reader, False otherwise.
Return type: bool
iterator(stride=1, lag=0, chunk=None, return_trajindex=True, cols=None, skip=0)

Creates an iterator to stream over the (transformed) data.

If your data is too large to fit into memory and you want to incrementally compute some quantities on it, you can create an iterator on a reader or transformer (e.g. TICA) to avoid memory overflow.

Parameters:
- stride (int, default=1) – take only every stride'th frame
- lag (int, default=0) – how many frames to omit for each file
- chunk (int, default=None) – how many frames to process at once; if not given, the chunk size is obtained from the source
- return_trajindex (boolean, default=True) – if False, the iterator yields a chunk of data; otherwise a tuple of (trajindex, data)
- cols (array like, default=None) – return only the given columns
- skip (int, default=0) – skip the first n frames of each trajectory

Returns: iter – an implementation of a DataSourceIterator to stream over the data
Return type: instance of DataSourceIterator

Examples

>>> from pyemma.coordinates import source; import numpy as np
>>> data = [np.arange(3), np.arange(4, 7)]
>>> reader = source(data)
>>> iterator = reader.iterator(chunk=1)
>>> for array_index, chunk in iterator:
...     print(array_index, chunk)
0 [[0]]
0 [[1]]
0 [[2]]
1 [[4]]
1 [[5]]
1 [[6]]
classmethod load(file_name, model_name='default')

Loads a previously saved PyEMMA object from disk.

Parameters:
- file_name (str or file-like object (has to provide a read method)) – the file from which the serialized object is read
- model_name (str, default='default') – if multiple models are contained in the file, they can be accessed by their name. Use pyemma.list_models() to get a listing of all stored models.

Returns: obj
Return type: the de-serialized object
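A minimal sketch of restoring a reader ('reader.pyemma' is a hypothetical file produced earlier by save()):

>>> from pyemma.coordinates.data import FeatureReader
>>> reader = FeatureReader.load('reader.pyemma')  # doctest: +SKIP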
logger

The logger for this class instance.
n_chunks(chunksize, stride=1, skip=0)

How many chunks an iterator of this source will output, starting e.g. after calling reset().

Parameters:
- chunksize (int) – how many frames to process at once
- stride (int, default=1) – only take every n'th frame
- skip (int, default=0) – initially skip n frames of each trajectory
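A small sketch on an in-memory source, assuming the count rounds up, i.e. ceil(n_frames / chunksize):

>>> import numpy as np, pyemma
>>> reader = pyemma.coordinates.source([np.random.random((100, 2))])
>>> reader.n_chunks(chunksize=30)  # 100 frames in chunks of 30  # doctest: +SKIP
4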
n_frames_total(stride=1, skip=0)

Returns total number of frames.

Parameters:
- stride (int) – return value is the number of frames in the trajectories when running through them with a step size of stride
- skip (int, default=0) – skip the first n frames per trajectory

Returns: n_frames_total – total number of frames
Return type: int
name

The name of this instance.
number_of_trajectories(stride=None)

Returns the number of trajectories.

Parameters: stride (None (default) or np.ndarray)

Returns: number of trajectories
Return type: int
output_type()

By default transformers return single precision floats.
ra_itraj_cuboid

Implementation of random access with slicing that can be up to 3-dimensional, where the first dimension corresponds to the trajectory index, the second dimension corresponds to the frames and the third dimension corresponds to the dimensions of the frames.

The frames selected by the frame slice are loaded from each of the trajectories selected by the trajectory slice, and are then sliced with the dimension slice. For example: the data consists of three trajectories with lengths 10, 20 and 10, respectively. The slice data[:, :15, :3] returns a 3D array of shape (3, 10, 3), where the first component corresponds to the three trajectories, the second component corresponds to 10 frames (note that the last 5 frames are truncated, as the other two trajectories only have 10 frames) and the third component corresponds to the selected first three dimensions.

Returns: an object that allows access by slices in the described manner.
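The worked example above can be reproduced with an in-memory source, which exposes the same random access interface (a sketch):

>>> import numpy as np, pyemma
>>> data = [np.random.random((10, 4)), np.random.random((20, 4)), np.random.random((10, 4))]
>>> reader = pyemma.coordinates.source(data)
>>> reader.ra_itraj_cuboid[:, :15, :3].shape  # doctest: +SKIP
(3, 10, 3)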
ra_itraj_jagged

Behaves like ra_itraj_cuboid, except that the trajectories are not truncated and are returned as a list.

Returns: an object that allows access by slices in the described manner.
ra_itraj_linear

Implementation of random access that takes arguments as the default random access (i.e., up to three dimensions with trajs, frames and dims, respectively), but which considers the frame indexing to be contiguous. Therefore, it returns a simple 2D array.

Returns: a 2D array of the sliced data containing [frames, dims].
ra_linear

Implementation of random access that takes a (maximal) two-dimensional slice where the first component corresponds to the frames and the second component corresponds to the dimensions. Here it is assumed that the frame indexing is contiguous, i.e., the first frame of the second trajectory has the index of the last frame of the first trajectory plus one.

Returns: an object that allows access by slices in the described manner.
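A sketch of the contiguous variant; with two trajectories of 10 and 20 frames, global frame index 10 is the first frame of the second trajectory:

>>> import numpy as np, pyemma
>>> reader = pyemma.coordinates.source([np.random.random((10, 4)), np.random.random((20, 4))])
>>> reader.ra_linear[5:15, :2].shape  # spans the trajectory boundary  # doctest: +SKIP
(10, 2)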
save(file_name, model_name='default', overwrite=False, save_streaming_chain=False)

Saves the current state of this object to the given file and name.

Parameters:
- file_name (str) – path to the desired output file
- model_name (str, default='default') – creates a group named 'model_name' in the given file, which will contain all of the data. If the name already exists and overwrite is False (the default), a RuntimeError is raised.
- overwrite (bool, default=False) – should existing model names be overwritten?
- save_streaming_chain (boolean, default=False) – if True, the data_producer(s) of this object will also be saved in the given file

Examples

>>> import pyemma, numpy as np
>>> from pyemma.util.contexts import named_temporary_file
>>> m = pyemma.msm.MSM(P=np.array([[0.1, 0.9], [0.9, 0.1]]))
>>> with named_temporary_file() as file:  # doctest: +SKIP
...     m.save(file, 'simple')  # doctest: +SKIP
...     inst_restored = pyemma.load(file, 'simple')  # doctest: +SKIP
>>> np.testing.assert_equal(m.P, inst_restored.P)  # doctest: +SKIP
static supports_format(file_name)

Static method that checks whether the extension of the input file name indicates a file type that can potentially be read with a FeatureReader.

Parameters: file_name – the file name or path
Returns: True if the extension indicates a file type that could be read, otherwise False
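A minimal sketch (assuming common MD trajectory extensions such as .xtc are among the supported formats, while plain text is not):

>>> from pyemma.coordinates.data import FeatureReader
>>> FeatureReader.supports_format('traj.xtc')  # doctest: +SKIP
True
>>> FeatureReader.supports_format('data.csv')  # doctest: +SKIP
False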
trajectory_length(itraj, stride=1, skip=0)

Returns the length of the trajectory with the requested index.

Parameters:
- itraj (int) – trajectory index
- stride (int) – return value is the number of frames in the trajectory when running through it with a step size of stride
- skip (int or None) – skip the first n frames

Returns: length of the trajectory
Return type: int
trajectory_lengths(stride=1, skip=0)

Returns the length of each trajectory.

Parameters:
- stride (int) – return value is the number of frames of the trajectories when running through them with a step size of stride
- skip (int) – skip the first n frames per trajectory

Returns: length of each trajectory
Return type: array(dtype=int)
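A short sketch on an in-memory source (.tolist() is only used to make the output stable):

>>> import numpy as np, pyemma
>>> reader = pyemma.coordinates.source([np.zeros((100, 2)), np.zeros((50, 2))])
>>> reader.trajectory_lengths().tolist()
[100, 50]
>>> reader.trajectory_lengths(stride=10).tolist()
[10, 5]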
write_to_csv(filename=None, extension='.dat', overwrite=False, stride=1, chunksize=None, **kw)

Writes all data to CSV files with numpy.savetxt.

Parameters:
- filename (str, optional) – filename string, which may contain the placeholders {itraj} and {stride}:
  - itraj will be replaced by the trajectory index
  - stride is the stride argument of this method
  If filename is not given, the file names are derived from the data source of this iterator.
- extension (str, optional, default='.dat') – filename extension of the created files
- overwrite (bool, optional, default=False) – shall existing files be overwritten? If a file exists and overwrite is False, this method will raise.
- stride (int) – take only every n'th frame
- chunksize (int, default=None) – how many frames to process at once
- kw (dict, optional) – named arguments passed to numpy.savetxt (header, delimiter, etc.)

Example

Assume you want to save features calculated by some FeatureReader to ASCII:

>>> import numpy as np, pyemma
>>> import os
>>> from pyemma.util.files import TemporaryDirectory
>>> from pyemma.util.contexts import settings
>>> data = [np.random.random((10,3))] * 3
>>> reader = pyemma.coordinates.source(data)
>>> filename = "distances_{itraj}.dat"
>>> with TemporaryDirectory() as td, settings(show_progress_bars=False):
...     out = os.path.join(td, filename)
...     reader.write_to_csv(out, header='', delimiter=';')
...     print(sorted(os.listdir(td)))
['distances_0.dat', 'distances_1.dat', 'distances_2.dat']
write_to_hdf5(filename, group='/', data_set_prefix='', overwrite=False, stride=1, chunksize=None, h5_opt=None)

Writes all data of this Iterable to a given HDF5 file. This is equivalent to writing the result of pyemma.coordinates.data._base.DataSource.get_output to a file.

Parameters:
- filename (str) – file name of the output HDF5 file
- group (str, default='/') – write all trajectories to this HDF5 group; the group name may not already exist in the file
- data_set_prefix (str, default='') – data set name prefix, which will be suffixed with the index of the trajectory
- overwrite (bool, default=False) – if the group and data sets already exist, shall we overwrite the data?
- stride (int, default=1) – stride argument passed to the iterator
- chunksize (int, default=None) – how many frames to process at once
- h5_opt (dict) – optional parameters for h5py.create_dataset

Notes

You can pass the following via h5_opt to enable compression, filters, shuffling, etc.:
- chunks (tuple) – chunk shape, or True to enable auto-chunking
- maxshape (tuple) – make the dataset resizable up to this shape; use None for axes you want to be unlimited
- compression (string or int) – compression strategy; legal values are 'gzip', 'szip', 'lzf'. If an integer in range(10), this indicates the gzip compression level. Otherwise, an integer indicates the number of a dynamically loaded compression filter.
- compression_opts – compression settings; this is an integer for gzip, a 2-tuple for szip, etc. If specifying a dynamically loaded compression filter number, this must be a tuple of values.
- scaleoffset (integer) – enable scale/offset filter for (usually) lossy compression of integer or floating-point data. For integer data, the value of scaleoffset is the number of bits to retain (pass 0 to let HDF5 determine the minimum number of bits necessary for lossless compression). For floating-point data, scaleoffset is the number of digits after the decimal place to retain; stored values thus have absolute error less than 0.5*10**(-scaleoffset).
- shuffle (T/F) – enable shuffle filter; only effective in combination with chunks
- fletcher32 (T/F) – enable fletcher32 error detection; not permitted in conjunction with the scale/offset filter
- fillvalue (scalar) – use this value for uninitialized parts of the dataset
- track_times (T/F) – enable dataset creation timestamps
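For instance, gzip compression with auto-chunking could be requested like this (a sketch; 'features.h5' is a hypothetical output path, and the h5_opt keys are standard h5py.create_dataset arguments):

>>> import numpy as np, pyemma
>>> reader = pyemma.coordinates.source([np.random.random((100, 3))])
>>> reader.write_to_hdf5('features.h5', group='/features',
...                      h5_opt=dict(chunks=True, compression='gzip', compression_opts=4))  # doctest: +SKIP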