
Module Reference for Joblib

joblib.Memory

class joblib.Memory(location=None, backend='local', cachedir=None, mmap_mode=None, compress=False, verbose=1, bytes_limit=None, backend_options={})

A context object for caching a function's return value each time it is called with the same input arguments.

All values are cached on the filesystem, in a deep directory structure.

Parameter Name Description
location: str or None The path of the base directory to use as a data store or None. If None is given, no caching is done and the Memory object is completely transparent. This option replaces cachedir since version 0.12.
backend: str, optional Type of store backend for reading/writing cache files. Default: 'local'. The 'local' backend uses regular filesystem operations (open, mv, etc.) to manipulate data in the backend.
cachedir: str or None, optional Deprecated: renamed to location since version 0.12. Use location instead.
mmap_mode: {None, 'r+', 'r', 'w+', 'c'}, optional The memmapping mode used when loading from cache numpy arrays. See numpy.load for the meaning of the arguments.
compress: boolean, or integer, optional Whether to zip the stored data on disk. If an integer is given, it should be between 1 and 9, and sets the amount of compression. Note that compressed arrays cannot be read by memmapping.
verbose: int, optional Verbosity flag, controls the debug messages that are issued as functions are evaluated.
bytes_limit: int, optional Limit in bytes of the size of the cache.
backend_options: dict, optional A dictionary of named parameters used to configure the store backend.

__init__(location=None, backend='local', cachedir=None, mmap_mode=None, compress=False, verbose=1, bytes_limit=None, backend_options={})

Methods

Method Name Description
__init__([location, backend, cachedir, …])  
cache([func, ignore, verbose, mmap_mode]) Decorates the given function func to only compute its return value for input arguments not cached on disk.
clear([warn]) Erase the complete cache directory.
debug(msg)  
eval(func, *args, **kwargs) Eval function func with arguments *args and **kwargs, in the context of the memory.
format(obj[, indent]) Return the formatted representation of the object.
reduce_size() Remove cache elements to make cache size fit in bytes_limit.
warn(msg)  
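
A minimal usage sketch (the temporary cache directory and the cached function below are illustrative, not part of the reference above):

>>> from tempfile import mkdtemp
>>> from joblib import Memory
>>> location = mkdtemp()              # illustrative cache directory
>>> memory = Memory(location, verbose=0)
>>> @memory.cache
... def f(x):
...     print('Running f(%s)' % x)    # only printed when the result is actually computed
...     return x
>>> f(1)                              # first call: computed and written to the cache
Running f(1)
1
>>> f(1)                              # same argument: the result is read back from disk
1
>>> memory.clear(warn=False)          # erase the cache directory when done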

joblib.Parallel

class joblib.Parallel(n_jobs=None, backend=None, verbose=0, timeout=None, pre_dispatch='2 * n_jobs', batch_size='auto', temp_folder=None, max_nbytes='1M', mmap_mode='r', prefer=None, require=None)

Helper class for readable parallel mapping.

Parameter Name Description
n_jobs: int, default: None The maximum number of concurrently running jobs, such as the number of Python worker processes when backend="multiprocessing" or the size of the thread-pool when backend="threading". If -1 all CPUs are used. If 1 is given, no parallel computing code is used at all, which is useful for debugging. For n_jobs below -1, (n_cpus + 1 + n_jobs) are used. Thus for n_jobs = -2, all CPUs but one are used. None is a marker for 'unset' that will be interpreted as n_jobs=1 (sequential execution) unless the call is performed under a parallel_backend context manager that sets another value for n_jobs.
backend: str, ParallelBackendBase instance or None, default: 'loky'

Specify the parallelization backend implementation. Supported backends are:

  • "loky" used by default, can induce some communication and memory overhead when exchanging input and output data with the worker Python processes.
  • "multiprocessing" previous process-based backend based onmultiprocessing.Pool. Less robust than loky.
  • "threading" is a very low-overhead backend but it suffers from the Python Global Interpreter Lock if the called function relies a lot on Python objects. "threading" is mostly useful when the execution bottleneck is a compiled extension that explicitly releases the GIL (for instance a Cython loop wrapped in a "with nogil" block or an expensive call to a library such as NumPy).
  • finally, you can register backends by calling register_parallel_backend. This will allow you to implement a backend of your liking.

It is not recommended to hard-code the backend name in a call to Parallel in a library. Instead it is recommended to set soft hints (prefer) or hard constraints (require) so as to make it possible for library users to change the backend from the outside using the parallel_backend context manager.

prefer: str in {'processes', 'threads'} or None, default: None Soft hint to choose the default backend if no specific backend was selected with the parallel_backend context manager. The default process-based backend is 'loky' and the default thread-based backend is 'threading'.
require: 'sharedmem' or None, default None Hard constraint to select the backend. If set to 'sharedmem', the selected backend will be single-host and thread-based even if the user asked for a non-thread based backend with parallel_backend.
verbose: int, optional The verbosity level: if non-zero, progress messages are printed. Above 50, the output is sent to stdout. The frequency of the messages increases with the verbosity level. If it is more than 10, all iterations are reported.
timeout: float, optional Timeout limit for each task to complete. If any task takes longer, a TimeOutError will be raised. Only applied when n_jobs != 1.
pre_dispatch: {'all', integer, or expression, as in '3*n_jobs'} The number of batches (of tasks) to be pre-dispatched. Default is '2*n_jobs'. When batch_size="auto" this is a reasonable default and the workers should never starve.
batch_size: int or 'auto', default: 'auto' The number of atomic tasks to dispatch at once to each worker. When individual evaluations are very fast, dispatching calls to workers can be slower than sequential computation because of the overhead. Batching fast computations together can mitigate this. The 'auto' strategy keeps track of the time it takes for a batch to complete, and dynamically adjusts the batch size to keep the time on the order of half a second, using a heuristic. The initial batch size is 1. batch_size="auto" with backend="threading" will dispatch batches of a single task at a time as the threading backend has very little overhead and using larger batch size has not proved to bring any gain in that case.
temp_folder: str, optional

Folder to be used by the pool for memmapping large arrays for sharing memory with worker processes. If None, this will try in order:

  • a folder pointed to by the JOBLIB_TEMP_FOLDER environment variable,
  • /dev/shm if the folder exists and is writable: this is a RAM disk filesystem available by default on modern Linux distributions,
  • the default system temporary folder that can be overridden with TMP, TMPDIR or TEMP environment variables, typically /tmp under Unix operating systems.

Only active when backend="loky" or "multiprocessing".

max_nbytes: int, str, or None, optional, 1M by default Threshold on the size of arrays passed to the workers that triggers automated memory mapping in temp_folder. Can be an int in Bytes, or a human-readable string, e.g., '1M' for 1 megabyte. Use None to disable memmapping of large arrays. Only active when backend="loky" or "multiprocessing".
mmap_mode: {None, 'r+', 'r', 'w+', 'c'} Memmapping mode for numpy arrays passed to workers. See 'max_nbytes' parameter documentation for more details.
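
The recommendation above about prefer, require and parallel_backend can be illustrated with a short sketch; the work function below is purely illustrative:

>>> from joblib import Parallel, delayed, parallel_backend
>>> def work(i):
...     return i ** 2
>>> # Library code expresses only a soft preference for a thread-based backend.
>>> Parallel(n_jobs=2, prefer='threads')(delayed(work)(i) for i in range(5))
[0, 1, 4, 9, 16]
>>> # User code can still select the backend from the outside.
>>> with parallel_backend('loky', n_jobs=2):
...     results = Parallel()(delayed(work)(i) for i in range(5))
>>> results
[0, 1, 4, 9, 16]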

Notes

This object uses workers to compute in parallel the application of a function to many different arguments. The main functionality it brings, in addition to using the raw multiprocessing or concurrent.futures API, is (see examples for details):

  • More readable code, in particular since it avoids constructing lists of arguments.
  • Easier debugging:

    • informative tracebacks even when the error happens on the client side
    • using 'n_jobs=1' makes it possible to turn off parallel computing for debugging without changing the codepath
    • early capture of pickling errors
  • An optional progress meter.
  • Interruption of multiprocess jobs with 'Ctrl-C'.
  • Flexible pickling control for the communication to and from the worker processes.
  • Ability to use shared memory efficiently with worker processes for large numpy-based datastructures.

Examples

A simple example:

>>> from math import sqrt
>>> from joblib import Parallel, delayed
>>> Parallel(n_jobs=1)(delayed(sqrt)(i**2) for i in range(10))
[0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0]

Reshaping the output when the function has several return values:

>>> from math import modf
>>> from joblib import Parallel, delayed
>>> r = Parallel(n_jobs=1)(delayed(modf)(i/2.) for i in range(10))
>>> res, i = zip(*r)
>>> res
(0.0, 0.5, 0.0, 0.5, 0.0, 0.5, 0.0, 0.5, 0.0, 0.5)
>>> i
(0.0, 0.0, 1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 4.0, 4.0)

The progress meter: the higher the value of verbose, the more messages:

>>> from time import sleep
>>> from joblib import Parallel, delayed
>>> r = Parallel(n_jobs=2, verbose=10)(delayed(sleep)(.2) for _ in range(10)) 
[Parallel(n_jobs=2)]: Done   1 tasks      | elapsed:    0.6s
[Parallel(n_jobs=2)]: Done   4 tasks      | elapsed:    0.8s
[Parallel(n_jobs=2)]: Done  10 out of  10 | elapsed:    1.4s finished

Traceback example, note how the line of the error is indicated as well as the values of the parameter passed to the function that triggered the exception, even though the traceback happens in the child process:

>>> from heapq import nlargest
>>> from joblib import Parallel, delayed
>>> Parallel(n_jobs=2)(delayed(nlargest)(2, n) for n in (range(4), 'abcde', 3)) 
#...
---------------------------------------------------------------------------
Sub-process traceback:
---------------------------------------------------------------------------
TypeError                                          Mon Nov 12 11:37:46 2012
PID: 12934                                    Python 2.7.3: /usr/bin/python
...........................................................................
/usr/lib/python2.7/heapq.pyc in nlargest(n=2, iterable=3, key=None)
    419         if n >= size:
    420             return sorted(iterable, key=key, reverse=True)[:n]
    421
    422     # When key is none, use simpler decoration
    423     if key is None:
--> 424         it = izip(iterable, count(0,-1))                    # decorate
    425         result = _nlargest(n, it)
    426         return map(itemgetter(0), result)                   # undecorate
    427
    428     # General case, slowest method
 TypeError: izip argument #1 must support iteration
___________________________________________________________________________

Using pre_dispatch in a producer/consumer situation, where the data is generated on the fly. Note how the producer is first called 3 times before the parallel loop is initiated, and then called to generate new data on the fly:

>>> from math import sqrt
>>> from joblib import Parallel, delayed
>>> def producer():
...     for i in range(6):
...         print('Produced %s' % i)
...         yield i
>>> out = Parallel(n_jobs=2, verbose=100, pre_dispatch='1.5*n_jobs')(
...                delayed(sqrt)(i) for i in producer()) 
Produced 0
Produced 1
Produced 2
[Parallel(n_jobs=2)]: Done 1 jobs     | elapsed:  0.0s
Produced 3
[Parallel(n_jobs=2)]: Done 2 jobs     | elapsed:  0.0s
Produced 4
[Parallel(n_jobs=2)]: Done 3 jobs     | elapsed:  0.0s
Produced 5
[Parallel(n_jobs=2)]: Done 4 jobs     | elapsed:  0.0s
[Parallel(n_jobs=2)]: Done 6 out of 6 | elapsed:  0.0s remaining: 0.0s
[Parallel(n_jobs=2)]: Done 6 out of 6 | elapsed:  0.0s finished

__init__(n_jobs=None, backend=None, verbose=0, timeout=None, pre_dispatch='2 * n_jobs', batch_size='auto', temp_folder=None, max_nbytes='1M', mmap_mode='r', prefer=None, require=None)

Methods

Method Name Description
__init__([n_jobs, backend, verbose, …])  
debug(msg)  
dispatch_next() Dispatch more data for parallel processing
dispatch_one_batch(iterator) Prefetch the tasks for the next batch and dispatch them.
format(obj[, indent]) Return the formatted representation of the object.
print_progress() Display the progress of the parallel execution, only a fraction of the time, controlled by self.verbose.
retrieve()  
warn(msg)  

joblib.dump 

joblib.dump(value, filename, compress=0, protocol=None, cache_size=None)

Persist an arbitrary Python object into one file.

Read more in the User Guide.

Parameter Name Description
value: any Python object The object to store to disk.
filename: str, pathlib.Path, or file object. The file object or path of the file in which it is to be stored. The compression method corresponding to one of the supported filename extensions ('.z', '.gz', '.bz2', '.xz' or '.lzma') will be used automatically.
compress: int from 0 to 9 or bool or 2-tuple, optional Optional compression level for the data. 0 or False is no compression. A higher value means more compression, but also slower read and write times. Using a value of 3 is often a good compromise. See the notes for more details. If compress is True, the compression level used is 3. If compress is a 2-tuple, the first element must be a string among the supported compressors ('zlib', 'gzip', 'bz2', 'lzma', 'xz'), and the second element must be an integer from 0 to 9, corresponding to the compression level.
protocol: int, optional Pickle protocol, see pickle.dump documentation for more details.
cache_size: positive int, optional This option is deprecated in 0.10 and has no effect.

Returns: filenames: list of strings

The list of file names in which the data is stored. If compress is false, each array is stored in a different file.

Notes

Memmapping on load cannot be used for compressed files. Thus using compression can significantly slow down loading. In addition, compressed files take extra memory during dump and load.
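
A short sketch of dump with compression (the target path is illustrative):

>>> import numpy as np
>>> from joblib import dump
>>> data = {'weights': np.arange(10.), 'name': 'model'}
>>> dump(data, '/tmp/model.joblib.gz', compress=('gzip', 3))   # gzip, compression level 3
['/tmp/model.joblib.gz']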

joblib.load

joblib.load(filename, mmap_mode=None)

Reconstruct a Python object from a file persisted with joblib.dump.

Read more in the User Guide.

Parameter Name Description
filename: str, pathlib.Path, or file object. The file object or path of the file from which to load the object.
mmap_mode: {None, 'r+', 'r', 'w+', 'c'}, optional If not None, the arrays are memory-mapped from the disk. This mode has no effect for compressed files. Note that in this case the reconstructed object might no longer match exactly the originally pickled object.

Returns: result: any Python object

The object stored in the file.

Notes

This function can load numpy array files saved separately during the dump. If the mmap_mode argument is given, it is passed to np.load and arrays are loaded as memmaps. As a consequence, the reconstructed object might not match the original pickled object. Note that if the file was saved with compression, the arrays cannot be memmapped.
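A matching sketch for load, including memory-mapped loading of an uncompressed file (the paths are illustrative):

>>> import numpy as np
>>> from joblib import dump, load
>>> _ = dump(np.arange(5.), '/tmp/arr.joblib')   # uncompressed, so memmapping is possible
>>> load('/tmp/arr.joblib', mmap_mode='r')       # the array comes back as a read-only memmap
memmap([0., 1., 2., 3., 4.])
>>> load('/tmp/arr.joblib')                      # without mmap_mode, a regular ndarray
array([0., 1., 2., 3., 4.])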

joblib.hash

joblib.hash(obj, hash_name='md5', coerce_mmap=False)

Quick calculation of a hash to uniquely identify Python objects containing numpy arrays.

Parameter Name Description
hash_name: 'md5' or 'sha1' Hashing algorithm used. sha1 is supposedly safer, but md5 is faster.
coerce_mmap: boolean Make no difference between np.memmap and np.ndarray when hashing.
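
For illustration, a sketch of the content-based hashing behaviour (the arrays are arbitrary):

>>> import numpy as np
>>> from joblib import hash as joblib_hash
>>> a = np.arange(10)
>>> joblib_hash(a) == joblib_hash(a.copy())   # equal content, identical digest
True
>>> joblib_hash(a) == joblib_hash(a + 1)      # different content, different digest
False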
