3.5.7. horton.io.lockedh5 – H5 file with lock

class horton.io.lockedh5.LockedH5File(*args, **kwargs)

Bases: h5py._hl.files.File

Open an HDF5 file exclusively using flock (works only with the sec2 driver).
Except for the following two optional arguments, all arguments and keyword arguments are passed on to the h5py.File constructor:

count
    The number of attempts to open the file.
wait
    The maximum number of seconds to wait between two attempts to open the file. [default=10]

Two processes on the same machine typically cannot open the same HDF5 file for writing. The second one will get an IOError because the HDF5 library tries to detect such cases. When the h5py.File constructor raises no IOError, fcntl.flock is used to obtain a non-blocking lock: shared when mode=='r', exclusive in all other cases. This may also raise an IOError when two processes on different machines try to acquire incompatible locks. Whenever an IOError is raised, this constructor waits for some time and tries again, hoping that the other process has finished reading/writing and closed the file. Several attempts are made before finally giving up.
This class guarantees that only one process is writing to the HDF5 file (while no other processes are reading). Multiple processes may still read from the same HDF5 file.
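A minimal usage sketch (the file name, retry count and wait time below are illustrative, not defaults):

>>> from horton.io.lockedh5 import LockedH5File
>>> with LockedH5File('results.h5', mode='w', driver='sec2', count=5, wait=2.0) as f:
...     f['energy'] = -1.5  # written once the exclusive lock has been obtained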

__getitem__()
Open an object in the file.

__init__(*args, **kwargs)
Open an HDF5 file exclusively using flock (works only with the sec2 driver); see the class description above for the count and wait arguments and the locking behavior.

clear() → None. Remove all items from D.

close()
Close the file. All open objects become invalid.

copy(source, dest, name=None, shallow=False, expand_soft=False, expand_external=False, expand_refs=False, without_attrs=False)
Copy an object or group.
The source can be a path, Group, Dataset, or Datatype object. The destination can be either a path or a Group object. The source and destination need not be in the same file.
If the source is a Group object, all objects contained in that group will be copied recursively.
When the destination is a Group object, by default the target will be created in that group with its current name (basename of obj.name). You can override that by setting “name” to a string.
There are various options which all default to “False”:
- shallow: copy only immediate members of a group.
- expand_soft: expand soft links into new objects.
- expand_external: expand external links into new objects.
- expand_refs: copy objects that are pointed to by references.
- without_attrs: copy object without copying attributes.
Example:

>>> f = File('myfile.hdf5')
>>> f.listnames()
['MyGroup']
>>> f.copy('MyGroup', 'MyCopy')
>>> f.listnames()
['MyGroup', 'MyCopy']

create_dataset(name, shape=None, dtype=None, data=None, **kwds)
Create a new HDF5 dataset.
name
    Name of the dataset (absolute or relative). Provide None to make an anonymous dataset.
shape
    Dataset shape. Use "()" for scalar datasets. Required if "data" isn't provided.
dtype
    Numpy dtype or string. If omitted, dtype('f') will be used. Required if "data" isn't provided; otherwise, overrides data array's dtype.
data
    Provide data to initialize the dataset. If used, you can omit shape and dtype arguments.
Keyword-only arguments:
chunks
    (Tuple) Chunk shape, or True to enable auto-chunking.
maxshape
    (Tuple) Make the dataset resizable up to this shape. Use None for axes you want to be unlimited.
compression
    (String or int) Compression strategy. Legal values are 'gzip', 'szip', 'lzf'. If an integer in range(10), this indicates gzip compression level. Otherwise, an integer indicates the number of a dynamically loaded compression filter.
compression_opts
    Compression settings. This is an integer for gzip, 2-tuple for szip, etc. If specifying a dynamically loaded compression filter number, this must be a tuple of values.
scaleoffset
    (Integer) Enable scale/offset filter for (usually) lossy compression of integer or floating-point data. For integer data, the value of scaleoffset is the number of bits to retain (pass 0 to let HDF5 determine the minimum number of bits necessary for lossless compression). For floating point data, scaleoffset is the number of digits after the decimal place to retain; stored values thus have absolute error less than 0.5*10**(-scaleoffset).
shuffle
    (T/F) Enable shuffle filter.
fletcher32
    (T/F) Enable fletcher32 error detection. Not permitted in conjunction with the scale/offset filter.
fillvalue
    (Scalar) Use this value for uninitialized parts of the dataset.
track_times
    (T/F) Enable dataset creation timestamps.
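As an illustration of these keyword arguments, a minimal sketch (the dataset name, shape and compression settings are illustrative, and f is assumed to be a file opened for writing):

>>> import numpy as np
>>> dset = f.create_dataset('coordinates', shape=(100, 3), dtype='f8',
...                         chunks=(10, 3), compression='gzip', compression_opts=4)
>>> dset[:10] = np.zeros((10, 3))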

create_group(name, track_order=False)
Create and return a new subgroup.
Name may be absolute or relative. Fails if the target name already exists.
track_order
    Track dataset/group creation order under this group if True.
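For example (the group names are illustrative, and f is assumed to be a writable file):

>>> grp = f.create_group('wavefunction')        # absolute name, created at the root
>>> orbitals = grp.create_group('orbitals')     # relative name, created inside grp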

flush()
Tell the HDF5 library to flush its buffers.

get(name, default=None, getclass=False, getlink=False)
Retrieve an item or other information.
"name" given only:
    Return the item, or "default" if it doesn't exist.
"getclass" is True:
    Return the class of object (Group, Dataset, etc.), or "default" if nothing with that name exists.
"getlink" is True:
    Return HardLink, SoftLink or ExternalLink instances. Return "default" if nothing with that name exists.
"getlink" and "getclass" are True:
    Return HardLink, SoftLink and ExternalLink classes. Return "default" if nothing with that name exists.
Example:

>>> cls = group.get('foo', getclass=True)
>>> if cls == SoftLink:
...     print '"foo" is a soft link!'

items()
Get a list of tuples containing (name, object) pairs.

iteritems()
Get an iterator over (name, object) pairs.

iterkeys() → an iterator over the keys of D

itervalues()
Get an iterator over member objects.

keys()
Get a list containing member names.

move(source, dest)
Move a link to a new location in the file.
If “source” is a hard link, this effectively renames the object. If “source” is a soft or external link, the link itself is moved, with its value unmodified.
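For example (the link names are illustrative); the object keeps its contents, only the link pointing to it changes:

>>> f.move('tmp_result', 'final_result')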

pop(k[, d]) → v
Remove the specified key and return the corresponding value. If the key is not found, d is returned if given, otherwise KeyError is raised.

popitem() → (k, v)
Remove and return some (key, value) pair as a 2-tuple; raise KeyError if D is empty.

require_dataset(name, shape, dtype, exact=False, **kwds)
Open a dataset, creating it if it doesn't exist.
If keyword “exact” is False (default), an existing dataset must have the same shape and a conversion-compatible dtype to be returned. If True, the shape and dtype must match exactly.
Other dataset keywords (see create_dataset) may be provided, but are only used if a new dataset is to be created.
Raises TypeError if an incompatible object already exists, or if the shape or dtype don’t match according to the above rules.
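A minimal sketch (the name, shape and dtype are illustrative): the first call creates the dataset, the second simply reopens it because the shape and dtype are compatible:

>>> dset = f.require_dataset('grid/weights', shape=(1000,), dtype='f8')
>>> dset = f.require_dataset('grid/weights', shape=(1000,), dtype='f8')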

require_group(name)
Return a group, creating it if it doesn't exist.
TypeError is raised if something with that name already exists that isn’t a group.
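For example (the group name is illustrative); repeated calls are safe:

>>> grp = f.require_group('results/scf')    # created on the first call
>>> grp = f.require_group('results/scf')    # existing group returned afterwards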

setdefault(k[, d]) → D.get(k, d), also set D[k] = d if k not in D

update([E, ]**F) → None
Update D from mapping/iterable E and F. If E is present and has a .keys() method, this does: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method, it does: for (k, v) in E: D[k] = v. In either case, this is followed by: for k, v in F.items(): D[k] = v.

values()
Get a list containing member objects.

visit(func)
Recursively visit all names in this group and subgroups (HDF5 1.8).
You supply a callable (function, method or callable object); it will be called exactly once for each link in this group and every group below it. Your callable must conform to the signature:
func(<member name>) => <None or return value>

Returning None continues iteration; returning anything else stops iteration and immediately returns that value from the visit method. No particular order of iteration within groups is guaranteed.
Example:

>>> # List the entire contents of the file
>>> f = File("foo.hdf5")
>>> list_of_names = []
>>> f.visit(list_of_names.append)

visititems(func)
Recursively visit names and objects in this group (HDF5 1.8).
You supply a callable (function, method or callable object); it will be called exactly once for each link in this group and every group below it. Your callable must conform to the signature:
func(<member name>, <object>) => <None or return value>

Returning None continues iteration; returning anything else stops iteration and immediately returns that value from the visit method. No particular order of iteration within groups is guaranteed.
Example:

>>> # Get a list of all datasets in the file
>>> mylist = []
>>> def func(name, obj):
...     if isinstance(obj, Dataset):
...         mylist.append(name)
...
>>> f = File('foo.hdf5')
>>> f.visititems(func)

attrs
Attributes attached to this object.
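For example (the attribute names and values are illustrative, and f is assumed to be writable):

>>> f.attrs['title'] = 'water dimer'
>>> f.attrs['nelec'] = 20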

driver
Low-level HDF5 file driver used to open file.

fid
File ID (backwards compatibility).

file
Return a File instance associated with this object.

filename
File name on disk.

id
Low-level identifier appropriate for this object.

libver
File format version bounds (2-tuple: low, high).

mode
Python mode used to open file.

name
Return the full name of this object. None if anonymous.

parent
Return the parent group of this object.
This is always equivalent to obj.file[posixpath.dirname(obj.name)]. ValueError if this object is anonymous.

ref
An (opaque) HDF5 reference to this object.

regionref
Create a region reference (Datasets only).
The syntax is regionref[<slices>]. For example, dset.regionref[...] creates a region reference in which the whole dataset is selected.
Can also be used to determine the shape of the referenced dataset (via .shape property), or the shape of the selection (via the .selection property).
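A minimal sketch (the dataset name and selection are illustrative):

>>> dset = f.create_dataset('grid', shape=(100, 3), dtype='f8')
>>> ref = dset.regionref[0:10, :]    # reference to the first ten rows
>>> subset = dset[ref]               # reads back exactly that selection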

userblock_size
User block size (in bytes).