pyemma.coordinates.source

pyemma.coordinates.source(inp, features=None, top=None, chunksize=None, **kw)

Defines a trajectory data source

This function defines input trajectories without loading them. You can pass the resulting object into transformers such as pyemma.coordinates.tica() or clustering algorithms such as pyemma.coordinates.cluster_kmeans(). Then, the data will be streamed instead of being loaded, thus saving memory.

You can also use this function to construct the first stage of a data processing pipeline().

Parameters:
  • inp (str (file name) or ndarray or list of strings (file names) or list of ndarrays or nested list of str|ndarray (1 level)) –

    The inp file names or input data. Can be given in any of these ways:

    1. File name of a single trajectory. It can have any of the molecular dynamics trajectory formats or raw data formats specified in load().
    2. List of trajectory file names. It can have any of the molecular dynamics trajectory formats or raw data formats specified in load().
    3. Molecular dynamics trajectory in memory as a numpy array of shape (T, N, 3) with T time steps, N atoms each having three (x,y,z) spatial coordinates.
    4. List of molecular dynamics trajectories in memory, each given as a numpy array of shape (T_i, N, 3), where trajectory i has T_i time steps and all trajectories share the same number of atoms N.
    5. Trajectory of some features or order parameters in memory as a numpy array of shape (T, N) with T time steps and N dimensions.
    6. List of trajectories of some features or order parameters in memory, each given as a numpy array of shape (T_i, N), where trajectory i has T_i time steps and all trajectories have N dimensions.
    7. List of NumPy array files (.npy) of shape (T, N). Note that these arrays are not loaded completely, but memory-mapped (read-only).
    8. List of tabulated ASCII files of shape (T, N).
    9. Nested lists (1 level), e.g.:
      [['traj1_0.xtc', 'traj1_1.xtc'], 'traj2_full.xtc', ['traj3_0.xtc', ...]]

      The grouped fragments will be treated as a joint trajectory (see the sketch after the parameter list).

  • features (MDFeaturizer, optional, default = None) – a featurizer object specifying how molecular dynamics files should be read (e.g. intramolecular distances, angles, dihedrals, etc). This parameter only makes sense if the input comes in the form of molecular dynamics trajectories or data, and will otherwise create a warning and have no effect.
  • top (str, mdtraj.Trajectory or mdtraj.Topology, optional, default = None) – A topology file name. This is needed when molecular dynamics trajectories are given and no featurizer is given. In this case, only the Cartesian coordinates will be read. You can also pass an already loaded mdtraj.Topology object. If it is an mdtraj.Trajectory object, the topology will be extracted from it.
  • chunksize (int, default=None) – Number of data frames to process at once. Choose a higher value to optimize thread usage and gain processing speed. If None is passed, the default value of the underlying reader/data source is used. Choose zero to disable chunking entirely.
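
For case 9 (fragmented trajectories), a minimal sketch with placeholder file names; the grouped fragments are streamed back to back as one joint trajectory:

>>> frag_reader = source([['traj1_0.xtc', 'traj1_1.xtc'], 'traj2_full.xtc'], top='my_structure.pdb') # doctest: +SKIP
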
Returns:

reader

Return type:

DataSource object

See also

pyemma.coordinates.load()
If all your features fit into memory, don't bother using source; working in memory is faster!
pyemma.coordinates.pipeline()
The data input is the first stage for your pipeline. Add other stages to it and build a pipeline to analyze big data in streaming mode.

Examples

Create a reader for NumPy files:

>>> import numpy as np
>>> from pyemma.coordinates import source
>>> reader = source(['001.npy', '002.npy']) # doctest: +SKIP

Create a reader for trajectory files and select some distance as feature:

>>> reader = source(['traj01.xtc', 'traj02.xtc'], top='my_structure.pdb') # doctest: +SKIP
>>> reader.featurizer.add_distances([[0, 1], [5, 6]]) # doctest: +SKIP
>>> calculated_features = reader.get_output() # doctest: +SKIP
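
Instead of materializing all features with get_output(), the reader can also be passed directly to a transformer so that the distances are streamed (a sketch; the lag time of 10 is an arbitrary placeholder):

>>> from pyemma.coordinates import tica
>>> tica_obj = tica(reader, lag=10) # doctest: +SKIP
>>> Y = tica_obj.get_output() # doctest: +SKIP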

Create a reader for a CSV file:

>>> reader = source('data.csv') # doctest: +SKIP
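
You can also build a featurizer up front and pass it via the features parameter (a sketch reusing the placeholder file names from above):

>>> from pyemma.coordinates import featurizer
>>> feat = featurizer('my_structure.pdb') # doctest: +SKIP
>>> feat.add_backbone_torsions() # doctest: +SKIP
>>> reader = source(['traj01.xtc', 'traj02.xtc'], features=feat) # doctest: +SKIP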

Create a reader for a huge in-memory NumPy array and process it in chunks to avoid memory issues:

>>> data = np.random.random(int(1e6))
>>> reader = source(data, chunksize=1000)
>>> from pyemma.coordinates import cluster_regspace
>>> regspace = cluster_regspace(reader, dmin=0.1)
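
The same reader can also serve as the first stage of a pipeline (a sketch; run=False defers the computation until parametrize() is called):

>>> from pyemma.coordinates import pipeline
>>> pipe = pipeline([reader, regspace], run=False) # doctest: +SKIP
>>> pipe.parametrize() # doctest: +SKIP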