bob.io.HDF5File

class bob.io.HDF5File((object)arg1, (HDF5File)other) → None :

Bases: Boost.Python.instance

An HDF5File allows users to read data from and write data to files containing standard bob binary coded data in HDF5 format. For an introduction to HDF5, please visit http://www.hdfgroup.org/HDF5.

__init__((object)arg1, (HDF5File)other) → None :

Generates a shallow copy of the already opened file.

__init__((object)arg1, (str)filename[, (str)openmode_string='r']) → object :

Opens a new file in one of these supported modes: 'r' (read-only), 'a' (read/write/append), 'w' (read/write/truncate) or 'x' (read/write/exclusive).
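
Example (an illustrative sketch; 'example.hdf5' and the dataset name 'array' are placeholders, and the assumption that data is flushed once the object is deleted is not stated in this reference):

    import numpy
    import bob.io

    # create (or truncate) a file, store an array, then re-open it read-only
    out = bob.io.HDF5File('example.hdf5', 'w')
    out.set('array', numpy.arange(5, dtype='uint8'))
    del out  # assumed to flush and close the underlying file

    inp = bob.io.HDF5File('example.hdf5', 'r')
    print(inp.read('array'))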

Methods

__init__((object)arg1, (HDF5File)other) Generates a shallow copy of the already opened file.
append((HDF5File)self, (str)path, …) Appends a scalar or an array to a dataset.
cd((HDF5File)self, (str)path) Changes the current prefix path.
copy((HDF5File)self, (HDF5File)file) Copies all accessible content to another HDF5 file
create_group((HDF5File)self, (str)path) Creates a new directory inside the file.
delete_attribute((HDF5File)self, …) Deletes a given attribute associated with an (existing) path in the file.
delete_attributes((HDF5File)self [, (str)path=]) Deletes all attributes associated with an (existing) path in the file.
describe((HDF5File)self, (str)key) If a given path to an HDF5 dataset exists inside the file, return a type description of objects recorded in such a dataset, otherwise, raises an exception.
get_attribute((HDF5File)self, (str)name [, …) Returns an object representing an attribute attached to a particular (existing) path in this file.
get_attributes((HDF5File)self [, (str)path=]) Returns a dictionary containing all attributes related to a particular (existing) path in this file.
has_attribute((HDF5File)self, (str)name [, …) Checks if a given attribute exists in a given (existing) path.
has_group((HDF5File)self, (str)path) Checks if a path exists inside a file - does not work for datasets, only for directories.
has_key((HDF5File)self, (str)key) Returns True if the file contains an HDF5 dataset with a given path
keys((HDF5File)self [, (bool)relative=False]) Synonym for ‘paths’
lread((HDF5File)self, (str)key [, (int)pos=-1]) Reads a given position from the dataset.
paths((HDF5File)self [, (bool)relative=False]) Returns all paths to datasets available inside this file, stored under the current working directory.
read((HDF5File)self, (str)key) Reads the whole dataset in a single shot.
rename((HDF5File)self, (str)from, (str)to) If a given path to an HDF5 dataset exists in the file, rename it
replace((HDF5File)self, (str)path, (int)pos, …) Modifies the value of a scalar/array inside a dataset in the file.
set((HDF5File)self, (str)path, …) Sets the scalar or array at position 0 to the given value.
set_attribute((HDF5File)self, (str)name, …) Sets the attribute in a given (existing) path using the value provided.
set_attributes((HDF5File)self, …) Sets attributes in a given (existing) path using a dictionary containing the names (keys) and values of those attributes.
sub_groups((HDF5File)self [, …) Returns all the subgroups (sub-directories) in the current file.
unlink((HDF5File)self, (str)key) If a given path to an HDF5 dataset exists inside the file, unlinks it.

Attributes

cwd
append((HDF5File)self, (str)path, (object)data[, (int)compression=0]) → None :

Appends a scalar or an array to a dataset. If the dataset does not yet exist, one is created with the type characteristics of the given data.

Keyword Parameters:

path
This is the path to the HDF5 dataset to replace data at
data
This is the data that will be set on the position indicated. It may be a simple python or numpy scalar (such as numpy.uint8) or a numpy.ndarray of any of the supported data types. You can also, optionally, set this to a list or tuple of scalars or arrays. This will cause this method to iterate over the elements and add each individually.
compression
This parameter is effective when appending arrays. Set this to a number between 0 (default) and 9 (maximum) to compress the contents of this dataset. This setting is only effective if the dataset does not yet exist; otherwise, the previous setting is respected.
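
Example (an illustrative sketch; the file name and dataset path are placeholders, and the keyword usage follows the signature above):

    import numpy
    import bob.io

    f = bob.io.HDF5File('log.hdf5', 'a')
    # the first call creates the dataset, taking its type from the given array
    f.append('features', numpy.zeros((4,), dtype='float64'), compression=4)
    # later calls grow the same dataset; a list appends each element in turn
    f.append('features', [numpy.ones((4,)), 2.0 * numpy.ones((4,))])
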
cd((HDF5File)self, (str)path) → None :

Changes the current prefix path. When this object is started, the prefix path is empty, which means all following paths to data objects should be given using the full path. If you set this to a different value, it will be used as a prefix to any subsequent operation until you reset it. If path starts with '/', it is treated as an absolute path. '..' and '.' are supported. The path should be a string. If the value is relative, it is added to the current path. If it is absolute, it causes the prefix to be reset. Note that all operations taking a relative path, following a cd(), will be considered relative to the value defined by the 'cwd' property of this object.
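
Example (an illustrative sketch; 'example.hdf5' and the group name are placeholders):

    import bob.io

    f = bob.io.HDF5File('example.hdf5', 'a')
    f.create_group('experiment1')
    f.cd('experiment1')   # relative: appended to the current prefix
    print(f.cwd)          # the current prefix after the change
    f.cd('..')            # move back up
    f.cd('/experiment1')  # absolute: resets the prefix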

copy((HDF5File)self, (HDF5File)file) → None :

Copies all accessible content to another HDF5 file

create_group((HDF5File)self, (str)path) → None :

Creates a new directory inside the file. A relative path is taken w.r.t. the current directory. If the directory already exists (check it with has_group()), an exception will be raised.
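
Example (an illustrative sketch; the file and group names are placeholders):

    import bob.io

    f = bob.io.HDF5File('example.hdf5', 'a')
    # guard against the exception by testing for the group first
    if not f.has_group('/results'):
        f.create_group('/results')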

cwd
delete_attribute((HDF5File)self, (str)name[, (str)path='.']) → None :

Deletes a given attribute associated with an (existing) path in the file. The path may point to a subdirectory or to a particular dataset. If the path does not exist, a RuntimeError is raised.

delete_attributes((HDF5File)self[, (str)path='.']) → None :

Deletes all attributes associated with an (existing) path in the file. The path may point to a subdirectory or to a particular dataset. If the path does not exist, a RuntimeError is raised.

describe((HDF5File)self, (str)key) → tuple :

If a given path to an HDF5 dataset exists inside the file, returns a type description of the objects recorded in that dataset; otherwise, raises an exception. The returned value is a tuple of tuples (HDF5Type, number-of-objects, expandable) describing the capabilities for reading the file using these formats.
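
Example (an illustrative sketch; the dataset path is a placeholder, and the three-way unpacking assumes each entry carries exactly the fields named above):

    import bob.io

    f = bob.io.HDF5File('example.hdf5', 'r')
    for hdf5type, count, expandable in f.describe('array'):
        print(hdf5type, count, expandable)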

get_attribute((HDF5File)self, (str)name[, (str)path='.']) → object :

Returns an object representing an attribute attached to a particular (existing) path in this file. The path may point to a subdirectory or to a particular dataset. If the path does not exist, a RuntimeError is raised.

get_attributes((HDF5File)self[, (str)path='.']) → dict :

Returns a dictionary containing all attributes related to a particular (existing) path in this file. The path may point to a subdirectory or to a particular dataset. If the path does not exist, a RuntimeError is raised.

has_attribute((HDF5File)self, (str)name[, (str)path='.']) → bool :

Checks if a given attribute exists in a given (existing) path. The path may point to a subdirectory or to a particular dataset. If the path does not exist, a RuntimeError is raised.

has_group((HDF5File)self, (str)path) → bool :

Checks if a path exists inside a file - does not work for datasets, only for directories. If the given path is relative, it is taken w.r.t. the current working directory.

has_key((HDF5File)self, (str)key) → bool :

Returns True if the file contains an HDF5 dataset with a given path.

keys((HDF5File)self[, (bool)relative=False]) → list :

Synonym for ‘paths’

lread((HDF5File)self, (str)key[, (int)pos=-1]) → object :

Reads a given position from the dataset. Returns a single object if ‘pos’ >= 0, otherwise a list by reading all objects in sequence.
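
Example (an illustrative sketch; the file and dataset names are placeholders):

    import bob.io

    f = bob.io.HDF5File('log.hdf5', 'r')
    first = f.lread('features', 0)    # a single object, the one at position 0
    everything = f.lread('features')  # pos defaults to -1: a list with all objects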

paths((HDF5File)self[, (bool)relative=False]) → list :

Returns all paths to datasets available inside this file, stored under the current working directory. If relative is set to True, the returned paths are relative to the current working directory, otherwise they are absolute.

read((HDF5File)self, (str)key) → object :

Reads the whole dataset in a single shot. Returns a single object with all contents.
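
Example (an illustrative sketch combining paths() and read(); the file name is a placeholder):

    import bob.io

    f = bob.io.HDF5File('example.hdf5', 'r')
    for p in f.paths():   # absolute paths under the current working directory
        data = f.read(p)
        print(p, type(data))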

rename((HDF5File)self, (str)from, (str)to) → None :

If a given path to an HDF5 dataset exists in the file, rename it.

replace((HDF5File)self, (str)path, (int)pos, (object)data) → None :

Modifies the value of a scalar/array inside a dataset in the file.

Keyword Parameters:

path
This is the path to the HDF5 dataset to replace data at
pos
This is the position we should replace
data
This is the data that will be set on the position indicated
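
Example (an illustrative sketch; the file name, dataset path and position are placeholders):

    import numpy
    import bob.io

    f = bob.io.HDF5File('log.hdf5', 'a')
    # overwrite the object stored at position 1 of an existing dataset
    f.replace('features', 1, 7.0 * numpy.ones((4,)))
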
set((HDF5File)self, (str)path, (object)data[, (int)compression=0]) → None :

Sets the scalar or array at position 0 to the given value. This method is equivalent to checking if the scalar or array at position 0 exists and then replacing it. If the path does not exist, we append the new scalar or array.

Keyword Parameters:

path
This is the path to the HDF5 dataset to replace data at
data
This is the data that will be set on the position indicated. It may be a simple python or numpy scalar (such as numpy.uint8) or a numpy.ndarray of any of the supported data types. You can also, optionally, set this to an iterable of scalars or arrays. This will cause this method to collapse the whole iterable into a numpy.ndarray and set that into the file.
compression
This parameter is effective when setting arrays. Set this to a number between 0 (default) and 9 (maximum) to compress the contents of this dataset. This setting is only effective if the dataset does not yet exist; otherwise, the previous setting is respected.
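
Example (an illustrative sketch; the file name and dataset paths are placeholders, and the keyword usage follows the signature above):

    import numpy
    import bob.io

    f = bob.io.HDF5File('example.hdf5', 'a')
    f.set('threshold', 0.5)  # a scalar at position 0
    f.set('weights', numpy.zeros((10,), dtype='float32'), compression=9)  # a compressed array
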
set_attribute((HDF5File)self, (str)name, (object)value[, (str)path='.']) → None :

Sets the attribute in a given (existing) path using the value provided. The path may point to a subdirectory or to a particular dataset. Only simple scalars (booleans, integers, floats and complex numbers) and arrays of those are supported at the time being. You can use numpy scalars to set values with arbitrary precision (e.g. numpy.uint8). If the path does not exist, a RuntimeError is raised.

set_attributes((HDF5File)self, (dict)attrs[, (str)path='.']) → None :

Sets attributes in a given (existing) path using a dictionary containing the names (keys) and values of those attributes. The path may point to a subdirectory or to a particular dataset. Only simple scalars (booleans, integers, floats and complex numbers) and arrays of those are supported at the time being. You can use numpy scalars to set values with arbitrary precision (e.g. numpy.uint8). If the path does not exist, a RuntimeError is raised.
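
Example (an illustrative sketch; the file name, dataset path and attribute names are placeholders):

    import numpy
    import bob.io

    f = bob.io.HDF5File('example.hdf5', 'a')
    f.set('weights', numpy.zeros((10,)))
    f.set_attribute('version', numpy.uint8(2), '/weights')         # fixed-precision scalar
    f.set_attributes({'rate': 0.01, 'finished': True}, '/weights')
    print(f.get_attributes('/weights'))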

sub_groups((HDF5File)self[, (bool)relative=False[, (bool)recursive=True]]) → list :

Returns all the subgroups (sub-directories) in the current file.

unlink((HDF5File)self, (str)key) → None :

If a given path to an HDF5 dataset exists inside the file, unlinks it. Please note this will not remove the data from the file, just make it inaccessible. If you wish to clean up, save the reachable objects from this file to another HDF5File object using copy(), for example.
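
Example (an illustrative sketch; all file and dataset names are placeholders):

    import bob.io

    src = bob.io.HDF5File('example.hdf5', 'a')
    src.unlink('obsolete')  # the data stays in the file but becomes inaccessible
    # to actually reclaim the space, copy the remaining content into a fresh file
    dst = bob.io.HDF5File('compacted.hdf5', 'w')
    src.copy(dst)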