Pluggable On-Disk Back-end APIs
The internal REST API used between the proxy server and the account, container and object servers is almost identical to the public Swift REST API, but with a few internal extensions (for example, update an account with a new container).
The pluggable back-end APIs for the three REST API servers (account, container, object) abstract the need to service the various REST APIs from the details of how data is laid out and stored on disk.
The APIs are documented in the reference implementations for all three servers. For historical reasons, the object server backend reference implementation module is named diskfile, while the account and container server backend reference implementation modules are named appropriately.
This API is still under development and not yet finalized.
Back-end API for Account Server REST APIs
Pluggable Back-end for Account Server
- class swift.account.backend.AccountBroker(db_file, timeout=25, logger=None, account=None, container=None, pending_timeout=None, stale_reads_ok=False, skip_commits=False)
Encapsulates working with an account database.
- create_account_stat_table(conn, put_timestamp)
Create account_stat table which is specific to the account DB. Not a part of Pluggable Back-ends, internal to the baseline code.
- Parameters:
conn – DB connection object
put_timestamp – put timestamp
- create_container_table(conn)
Create container table which is specific to the account DB.
- Parameters:
conn – DB connection object
- create_policy_stat_table(conn)
Create policy_stat table which is specific to the account DB. Not a part of Pluggable Back-ends, internal to the baseline code.
- Parameters:
conn – DB connection object
- empty()
Check if the account DB is empty.
- Returns:
True if the database has no active containers.
- get_info()
Get global data for the account.
- Returns:
dict with keys: account, created_at, put_timestamp, delete_timestamp, status_changed_at, container_count, object_count, bytes_used, hash, id
- get_policy_stats(do_migrations=False)
Get global policy stats for the account.
- Parameters:
do_migrations – boolean, if True the policy stat dicts will always include the ‘container_count’ key; otherwise it may be omitted on legacy databases until they are migrated.
- Returns:
dict of policy stats where the key is the policy index and the value is a dictionary like {‘object_count’: M, ‘bytes_used’: N, ‘container_count’: L}
- is_status_deleted()
Only returns true if the status field is set to DELETED.
- list_containers_iter(limit, marker, end_marker, prefix, delimiter, reverse=False, allow_reserved=False)
Get a list of containers sorted by name starting at marker onward, up to limit entries. Entries will begin with the prefix and will not have the delimiter after the prefix.
- Parameters:
limit – maximum number of entries to get
marker – marker query
end_marker – end marker query
prefix – prefix query
delimiter – delimiter for query
reverse – reverse the result order.
allow_reserved – if True, include names containing the reserved byte; by default such names are excluded
- Returns:
list of tuples of (name, object_count, bytes_used, put_timestamp, 0)
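The marker/end_marker/prefix/delimiter semantics shared by these listing methods can be sketched in plain Python. This is an illustrative stand-in, not the broker's real SQL-backed implementation; the helper name and the roll-up behavior for names containing the delimiter are simplifications.

```python
def list_names(names, limit, marker="", end_marker="", prefix="", delimiter=""):
    """Illustrative sketch of marker/prefix/delimiter listing semantics."""
    results, seen = [], set()
    for name in sorted(names):
        if marker and name <= marker:
            continue  # listing starts strictly after the marker
        if end_marker and name >= end_marker:
            break  # end_marker is exclusive
        if prefix and not name.startswith(prefix):
            continue
        if delimiter:
            rest = name[len(prefix):]
            if delimiter in rest:
                # roll names sharing a "subdirectory" up into one entry,
                # so entries never contain the delimiter after the prefix
                name = prefix + rest.split(delimiter, 1)[0] + delimiter
                if name in seen:
                    continue
                seen.add(name)
        results.append(name)
        if len(results) >= limit:
            break
    return results
```

For example, listing with prefix="photos/" and delimiter="/" collapses "photos/cats/a" and "photos/cats/b" into the single entry "photos/cats/".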
- make_tuple_for_pickle(record)
Turn this db record dict into the format this service uses for pending pickles.
- merge_items(item_list, source=None)
Merge items into the container table.
- Parameters:
item_list – list of dictionaries of {‘name’, ‘put_timestamp’, ‘delete_timestamp’, ‘object_count’, ‘bytes_used’, ‘deleted’, ‘storage_policy_index’}
source – if defined, update incoming_sync with the source
- put_container(name, put_timestamp, delete_timestamp, object_count, bytes_used, storage_policy_index)
Create a container with the given attributes.
- Parameters:
name – name of the container to create (a native string)
put_timestamp – put_timestamp of the container to create
delete_timestamp – delete_timestamp of the container to create
object_count – number of objects in the container
bytes_used – number of bytes used by the container
storage_policy_index – the storage policy for this container
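Because put_container records both a put and a delete timestamp, deletion is expressed by which timestamp is newer. A loose sketch of the rule, with the function name and exact emptiness check simplified from the reference implementation:

```python
def account_is_deleted(status, put_timestamp, delete_timestamp, container_count):
    """Simplified sketch: an account record counts as deleted when its
    status says so, or when it is empty and its delete timestamp is
    newer than its put timestamp."""
    if status == 'DELETED':
        return True
    return container_count == 0 and delete_timestamp > put_timestamp
```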
Back-end API for Container Server REST APIs
Pluggable Back-ends for Container Server
- class swift.container.backend.ContainerBroker(db_file, timeout=25, logger=None, account=None, container=None, pending_timeout=None, stale_reads_ok=False, skip_commits=False, force_db_file=False)
Encapsulates working with a container database.
Note that this may involve multiple on-disk DB files if the container becomes sharded:
- _db_file is the path to the legacy container DB name, i.e. <hash>.db. This file should exist for an initialised broker that has never been sharded, but will not exist once a container has been sharded.
- db_files is a list of existing db files for the broker. This list should have at least one entry for an initialised broker, and should have two entries while a broker is in SHARDING state.
- db_file is the path to whichever db is currently authoritative for the container. Depending on the container's state, this may not be the same as the db_file argument given to __init__(), unless force_db_file is True in which case db_file is always equal to the db_file argument given to __init__().
- pending_file is always equal to _db_file extended with .pending, i.e. <hash>.db.pending.
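The choice between these paths can be sketched as follows. This is a hypothetical helper illustrating the selection rule only; the real broker derives epochs from the db filenames rather than relying on lexical ordering:

```python
def pick_db_file(db_files, init_db_file, force_db_file=False):
    """Sketch of the db_file choice: pin the constructor argument when
    forced or when nothing exists on disk, otherwise use the db with
    the most recent (here, lexically greatest) epoch."""
    if force_db_file or not db_files:
        return init_db_file
    return sorted(db_files)[-1]

def pending_file_for(legacy_db_file):
    # the pending file always shadows the legacy db name
    return legacy_db_file + '.pending'
```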
- classmethod create_broker(device_path, part, account, container, logger=None, epoch=None, put_timestamp=None, storage_policy_index=None)
Create a ContainerBroker instance. If the db doesn’t exist, initialize the db file.
- Parameters:
device_path – device path
part – partition number
account – account name string
container – container name string
logger – a logger instance
epoch – a timestamp to include in the db filename
put_timestamp – initial timestamp if broker needs to be initialized
storage_policy_index – the storage policy index
- Returns:
a tuple of (broker, initialized) where broker is an instance of swift.container.backend.ContainerBroker and initialized is True if the db file was initialized, False otherwise.
- create_container_info_table(conn, put_timestamp, storage_policy_index)
Create the container_info table which is specific to the container DB. Not a part of Pluggable Back-ends, internal to the baseline code. Also creates the container_stat view.
- Parameters:
conn – DB connection object
put_timestamp – put timestamp
storage_policy_index – storage policy index
- create_object_table(conn)
Create the object table which is specific to the container DB. Not a part of Pluggable Back-ends, internal to the baseline code.
- Parameters:
conn – DB connection object
- create_policy_stat_table(conn, storage_policy_index=0)
Create policy_stat table.
- Parameters:
conn – DB connection object
storage_policy_index – the policy_index the container is being created with
- create_shard_range_table(conn)
Create the shard_range table which is specific to the container DB.
- Parameters:
conn – DB connection object
- property db_file
Get the path to the primary db file for this broker. This is typically the db file for the most recent sharding epoch. However, if no db files exist on disk, or if force_db_file was True when the broker was constructed, then the primary db file is the file passed to the broker constructor.
- Returns:
A path to a db file; the file does not necessarily exist.
- property db_files
Gets the cached list of valid db files that exist on disk for this broker.
The cached list may be refreshed by calling reload_db_files().
- Returns:
A list of paths to db files ordered by ascending epoch; the list may be empty.
- delete_object(name, timestamp, storage_policy_index=0)
Mark an object deleted.
- Parameters:
name – object name to be deleted
timestamp – timestamp when the object was marked as deleted
storage_policy_index – the storage policy index for the object
- empty()
Check if container DB is empty.
This method uses more stringent checks on object count than is_deleted(): this method checks that there are no objects in any policy; if the container is in the process of sharding then both fresh and retiring databases are checked to be empty; if a root container has shard ranges then they are checked to be empty.
- Returns:
True if the database has no active objects, False otherwise
- enable_sharding(epoch)
Updates this broker’s own shard range with the given epoch, sets its state to SHARDING and persists it in the DB.
- Parameters:
epoch – a
Timestamp
- Returns:
the broker’s updated own shard range.
- find_shard_ranges(shard_size, limit=-1, existing_ranges=None, minimum_shard_size=1)
Scans the container db for shard ranges. Scanning will start at the upper bound of any existing_ranges that are given, otherwise at ShardRange.MIN. Scanning will stop when limit shard ranges have been found or when no more shard ranges can be found. In the latter case, the upper bound of the final shard range will be equal to the upper bound of the container namespace.
This method does not modify the state of the db; callers are responsible for persisting any shard range data in the db.
- Parameters:
shard_size – the size of each shard range
limit – the maximum number of shard points to be found; a negative value (default) implies no limit.
existing_ranges – an optional list of existing ShardRanges; if given, this list should be sorted in order of upper bounds; the scan for new shard ranges will start at the upper bound of the last existing ShardRange.
minimum_shard_size – Minimum size of the final shard range. If this is greater than one then the final shard range may be extended to more than shard_size in order to avoid a further shard range with fewer than minimum_shard_size rows.
- Returns:
a tuple; the first value in the tuple is a list of dicts each having keys {‘index’, ‘lower’, ‘upper’, ‘object_count’} in order of ascending ‘upper’; the second value in the tuple is a boolean which is True if the last shard range has been found, False otherwise.
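The scan described above can be sketched over a sorted list of names. This is an in-memory illustration of the cut-point logic, with simplified bounds handling (the real method works against the db and ShardRange objects):

```python
def find_shard_points(names, shard_size, minimum_shard_size=1):
    """Sketch of the scan: cut the sorted namespace every shard_size
    names; if the final remainder would hold fewer than
    minimum_shard_size names, extend the previous range instead. The
    final range's upper bound is the namespace upper bound ('')."""
    names = sorted(names)
    ranges, pos, index = [], 0, 0
    while len(names) - pos > shard_size:
        if len(names) - (pos + shard_size) < minimum_shard_size:
            break  # avoid creating an undersized final shard
        ranges.append({'index': index,
                       'lower': names[pos - 1] if pos else '',
                       'upper': names[pos + shard_size - 1],
                       'object_count': shard_size})
        pos += shard_size
        index += 1
    ranges.append({'index': index,
                   'lower': names[pos - 1] if pos else '',
                   'upper': '',
                   'object_count': len(names) - pos})
    return ranges
```

With ten names and shard_size=4 this yields three ranges; raising minimum_shard_size to 3 merges the would-be two-row tail into the previous range.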
- get_all_shard_range_data()
Returns a list of all shard range data, including own shard range and deleted shard ranges.
- Returns:
A list of dict representations of a ShardRange.
- get_brokers()
Return a list of brokers for component dbs. The list has two entries while the db state is sharding: the first entry is a broker for the retiring db with skip_commits set to True; the second entry is a broker for the fresh db with skip_commits set to False. For any other db state the list has one entry.
- Returns:
a list of ContainerBroker
- get_db_state()
Returns the current state of on disk db files.
- get_info()
Get global data for the container.
- Returns:
dict with keys: account, container, created_at, put_timestamp, delete_timestamp, status, status_changed_at, object_count, bytes_used, reported_put_timestamp, reported_delete_timestamp, reported_object_count, reported_bytes_used, hash, id, x_container_sync_point1, x_container_sync_point2, and storage_policy_index, db_state.
- get_info_is_deleted()
Get the is_deleted status and info for the container.
- Returns:
a tuple, in the form (info, is_deleted) info is a dict as returned by get_info and is_deleted is a boolean.
- get_misplaced_since(start, count)
Get a list of objects which are in a storage policy different from the container’s storage policy.
- Parameters:
start – last reconciler sync point
count – maximum number of entries to get
- Returns:
list of dicts with keys: name, created_at, size, content_type, etag, storage_policy_index
- get_objects(limit=None, marker='', end_marker='', include_deleted=None, since_row=None)
Returns a list of objects, including deleted objects, in all policies. Each object in the list is described by a dict with keys {‘name’, ‘created_at’, ‘size’, ‘content_type’, ‘etag’, ‘deleted’, ‘storage_policy_index’}.
- Parameters:
limit – maximum number of entries to get
marker – if set, objects with names less than or equal to this value will not be included in the list.
end_marker – if set, objects with names greater than or equal to this value will not be included in the list.
include_deleted – if True, include only deleted objects; if False, include only undeleted objects; otherwise (default), include both deleted and undeleted objects.
since_row – include only items whose ROWID is greater than the given row id; by default all rows are included.
- Returns:
a list of dicts, each describing an object.
- get_own_shard_range(no_default=False)
Returns a shard range representing this broker’s own shard range. If no such range has been persisted in the broker’s shard ranges table then a default shard range representing the entire namespace will be returned.
The object_count and bytes_used of the returned shard range are not guaranteed to be up-to-date with the current object stats for this broker. Callers that require up-to-date stats should use the get_info method.
- Parameters:
no_default – if True and the broker’s own shard range is not found in the shard ranges table then None is returned, otherwise a default shard range is returned.
- Returns:
an instance of
ShardRange
- get_replication_info()
Get information about the DB required for replication.
- Returns:
dict containing keys from get_info plus max_row and metadata
Note: get_info's <db_contains_type>_count is translated to just "count" and metadata is the raw string.
- get_shard_ranges(marker=None, end_marker=None, includes=None, reverse=False, include_deleted=False, states=None, include_own=False, exclude_others=False, fill_gaps=False)
Returns a list of persisted shard ranges.
- Parameters:
marker – restricts the returned list to shard ranges whose namespace includes or is greater than the marker value. If reverse=True then marker is treated as end_marker. marker is ignored if includes is specified.
end_marker – restricts the returned list to shard ranges whose namespace includes or is less than the end_marker value. If reverse=True then end_marker is treated as marker. end_marker is ignored if includes is specified.
includes – restricts the returned list to the shard range that includes the given value; if includes is specified then fill_gaps, marker and end_marker are ignored, but other constraints are applied (e.g. exclude_others and include_deleted).
reverse – reverse the result order.
include_deleted – include items that have the delete marker set.
states – if specified, restricts the returned list to shard ranges that have the given state(s); can be a list of ints or a single int.
include_own – boolean that governs whether the row whose name matches the broker's path is included in the returned list. If True, that row is included unless it is excluded by other constraints (e.g. marker, end_marker, includes). If False, that row is not included. Default is False.
exclude_others – boolean that governs whether the rows whose names do not match the broker's path are included in the returned list. If True, those rows are not included, otherwise they are included. Default is False.
fill_gaps – if True, insert a modified copy of own shard range to fill any gap between the end of any found shard ranges and the upper bound of own shard range. Gaps enclosed within the found shard ranges are not filled. fill_gaps is ignored if includes is specified.
- Returns:
a list of instances of swift.common.utils.ShardRange.
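The includes constraint relies on shard ranges owning half-open (lower, upper] intervals of the name space. A minimal sketch of that membership test, using bare (lower, upper) tuples rather than real ShardRange objects:

```python
def find_including(shard_ranges, value):
    """Sketch of the includes constraint: a shard range owns the
    half-open interval (lower, upper], where '' as the upper bound
    means unbounded."""
    for lower, upper in shard_ranges:
        if value > lower and (upper == '' or value <= upper):
            return (lower, upper)
    return None
```

Note that a name equal to a range's upper bound belongs to that range, not the next one.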
- get_shard_usage()
Get the aggregate object stats for all shard ranges in states ACTIVE, SHARDING or SHRINKING.
- Returns:
a dict with keys {bytes_used, object_count}
- get_sharding_sysmeta(key=None)
Returns sharding specific info from the broker’s metadata.
- Parameters:
key – if given the value stored under key in the sharding info will be returned.
- Returns:
either a dict of sharding info or the value stored under key in that dict.
- get_sharding_sysmeta_with_timestamps()
Returns sharding specific info from the broker’s metadata with timestamps.
- Parameters:
key – if given the value stored under key in the sharding info will be returned.
- Returns:
a dict of sharding info with their timestamps.
- has_other_shard_ranges()
Returns True if there is any shard range other than the broker's own shard range that is not marked as deleted, False otherwise.
- Returns:
A boolean value as described above.
- is_reclaimable(now, reclaim_age)
Check if the broker abstraction is empty, and has been marked deleted for at least a reclaim age.
- is_root_container()
Returns True if this container is a root container, False otherwise.
A root container is a container that is not a shard of another container.
- list_objects_iter(limit, marker, end_marker, prefix, delimiter, path=None, storage_policy_index=0, reverse=False, include_deleted=False, since_row=None, transform_func=None, all_policies=False, allow_reserved=False)
Get a list of objects sorted by name starting at marker onward, up to limit entries. Entries will begin with the prefix and will not have the delimiter after the prefix.
- Parameters:
limit – maximum number of entries to get
marker – marker query
end_marker – end marker query
prefix – prefix query
delimiter – delimiter for query
path – if defined, will set the prefix and delimiter based on the path
storage_policy_index – storage policy index for query
reverse – reverse the result order.
include_deleted – if True, include only deleted objects; if False (default), include only undeleted objects; otherwise, include both deleted and undeleted objects.
since_row – include only items whose ROWID is greater than the given row id; by default all rows are included.
transform_func – an optional function that if given will be called for each object to get a transformed version of the object to include in the listing; should have the same signature as _transform_record(); defaults to _transform_record().
all_policies – if True, include objects for all storage policies ignoring any value given for storage_policy_index
allow_reserved – if True, include names containing the reserved byte; by default such names are excluded
- Returns:
list of tuples of (name, created_at, size, content_type, etag, deleted)
- make_tuple_for_pickle(record)
Turn this db record dict into the format this service uses for pending pickles.
- merge_items(item_list, source=None)
Merge items into the object table.
- Parameters:
item_list – list of dictionaries of {‘name’, ‘created_at’, ‘size’, ‘content_type’, ‘etag’, ‘deleted’, ‘storage_policy_index’, ‘ctype_timestamp’, ‘meta_timestamp’}
source – if defined, update incoming_sync with the source
- merge_shard_ranges(shard_ranges)
Merge shard ranges into the shard range table.
- Parameters:
shard_ranges – a shard range or a list of shard ranges; each shard range should be an instance of ShardRange or a dict representation of a shard range having SHARD_RANGE_KEYS.
- put_object(name, timestamp, size, content_type, etag, deleted=0, storage_policy_index=0, ctype_timestamp=None, meta_timestamp=None)
Creates an object in the DB with its metadata.
- Parameters:
name – object name to be created
timestamp – timestamp of when the object was created
size – object size
content_type – object content-type
etag – object etag
deleted – if True, marks the object as deleted and sets the deleted_at timestamp to timestamp
storage_policy_index – the storage policy index for the object
ctype_timestamp – timestamp of when content_type was last updated
meta_timestamp – timestamp of when metadata was last updated
- reload_db_files()
Reloads the cached list of valid on disk db files for this broker.
- remove_objects(lower, upper, max_row=None)
Removes object records in the given namespace range from the object table.
Note that objects are removed regardless of their storage_policy_index.
- Parameters:
lower – defines the lower bound of object names that will be removed; names greater than this value will be removed; names less than or equal to this value will not be removed.
upper – defines the upper bound of object names that will be removed; names less than or equal to this value will be removed; names greater than this value will not be removed. The empty string is interpreted as there being no upper bound.
max_row – if specified only rows less than or equal to max_row will be removed
- reported(put_timestamp, delete_timestamp, object_count, bytes_used)
Update reported stats, available with container’s get_info.
- Parameters:
put_timestamp – put_timestamp to update
delete_timestamp – delete_timestamp to update
object_count – object_count to update
bytes_used – bytes_used to update
- classmethod resolve_shard_range_states(states)
Given a list of values each of which may be the name of a state, the number of a state, or an alias, return the set of state numbers described by the list.
The following alias values are supported: ‘listing’ maps to all states that are considered valid when listing objects; ‘updating’ maps to all states that are considered valid for redirecting an object update; ‘auditing’ maps to all states that are considered valid for a shard container that is updating its own shard range table from a root (this currently maps to all states except FOUND).
- Parameters:
states – a list of values each of which may be the name of a state, the number of a state, or an alias
- Returns:
a set of integer state numbers, or None if no states are given
- Raises:
ValueError – if any value in the given list is neither a valid state nor a valid alias
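The name/number/alias resolution can be sketched as below. The lookup tables are supplied by the caller here and the state numbers are hypothetical stand-ins; the real tables live on ShardRange:

```python
def resolve_states(states, by_name, aliases):
    """Sketch of resolution: aliases expand to sets of ints, names map
    to their int, and ints already in the table pass through; anything
    else raises ValueError. Empty input resolves to None."""
    if not states:
        return None
    resolved = set()
    for state in states:
        if state in aliases:
            resolved.update(aliases[state])
        elif state in by_name:
            resolved.add(by_name[state])
        elif state in by_name.values():
            resolved.add(state)
        else:
            raise ValueError('invalid state %r' % (state,))
    return resolved

# hypothetical numbering and alias table, for illustration only
BY_NAME = {'found': 10, 'created': 20, 'active': 40}
ALIASES = {'listing': [20, 40]}
```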
- set_sharded_state()
Unlinks the broker's retiring DB file.
- Returns:
True if the retiring DB was successfully unlinked, False otherwise.
- set_sharding_state()
Creates and initializes a fresh DB file in preparation for sharding a retiring DB. The broker’s own shard range must have an epoch timestamp for this method to succeed.
- Returns:
True if the fresh DB was successfully created, False otherwise.
- set_sharding_sysmeta(key, value)
Updates the broker’s metadata stored under the given key prefixed with a sharding specific namespace.
- Parameters:
key – metadata key in the sharding metadata namespace.
value – metadata value
- set_storage_policy_index(policy_index, timestamp=None)
Update the container_stat policy_index and status_changed_at.
- sharding_initiated()
Returns True if a broker has shard range state that would be necessary for sharding to have been initiated, False otherwise.
- sharding_required()
Returns True if a broker has shard range state that would be necessary for sharding to have been initiated but has not yet completed sharding, False otherwise.
- swift.container.backend.merge_shards(shard_data, existing)
Compares shard_data with existing and updates shard_data with any items of existing that take precedence over the corresponding item in shard_data.
- Parameters:
shard_data – a dict representation of shard range that may be modified by this method.
existing – a dict representation of shard range.
- Returns:
True if shard_data has any item(s) that are considered to take precedence over the corresponding item in existing
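A simplified sketch of the precedence rules, assuming plain comparable timestamps and covering only the timestamp and stats/meta facets (the real merge_shards also handles state and epoch):

```python
def merge_shard_dicts(shard_data, existing):
    """Simplified precedence sketch: a strictly newer timestamp wins
    the whole record; on a tie, the newer meta_timestamp wins the
    object stats. Returns True if shard_data still carries anything
    that beats existing."""
    if existing is None:
        return True
    if shard_data['timestamp'] > existing['timestamp']:
        return True
    if shard_data['timestamp'] < existing['timestamp']:
        return False
    # equal timestamps: stats decided by meta_timestamp
    if existing['meta_timestamp'] >= shard_data['meta_timestamp']:
        for key in ('object_count', 'bytes_used', 'meta_timestamp'):
            shard_data[key] = existing[key]
        return False
    return True
```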
- swift.container.backend.sift_shard_ranges(new_shard_ranges, existing_shard_ranges)
Compares new and existing shard ranges, updating the new shard ranges with any more recent state from the existing, and returns shard ranges sorted into those that need adding because they contain new or updated state and those that need deleting because their state has been superseded.
- Parameters:
new_shard_ranges – a list of dicts, each of which represents a shard range.
existing_shard_ranges – a dict mapping shard range names to dicts representing a shard range.
- Returns:
a tuple (to_add, to_delete); to_add is a list of dicts, each of which represents a shard range that is to be added to the existing shard ranges; to_delete is a set of shard range names that are to be deleted.
- swift.container.backend.update_new_item_from_existing(new_item, existing)
Compare the data and meta related timestamps of a new object item with the timestamps of an existing object record, and update the new item with data and/or meta related attributes from the existing record if their timestamps are newer.
The multiple timestamps are encoded into a single string for storing in the ‘created_at’ column of the objects db table.
- Parameters:
new_item – A dict of object update attributes
existing – A dict of existing object attributes
- Returns:
True if any attributes of the new item dict were found to be newer than the existing and therefore not updated, otherwise False implying that the updated item is equal to the existing.
Back-end API for Object Server REST APIs
Disk File Interface for the Swift Object Server
The DiskFile, DiskFileWriter and DiskFileReader classes combined define the on-disk abstraction layer for supporting the object server REST API interfaces (excluding REPLICATE). Other implementations wishing to provide an alternative backend for the object server must implement the three classes. An example alternative implementation can be found in the mem_server.py and mem_diskfile.py modules alongside this one.
The DiskFileManager is a reference implementation specific class and is not part of the backend API.
The remaining methods in this module are considered implementation specific and are also not considered part of the backend API.
- class swift.obj.diskfile.AuditLocation(path, device, partition, policy)
Represents an object location to be audited.
Other than being a bucket of data, the only useful thing this does is stringify to a filesystem path so the auditor’s logs look okay.
- class swift.obj.diskfile.BaseDiskFile(mgr, device_path, partition, account=None, container=None, obj=None, _datadir=None, policy=None, use_splice=False, pipe_size=None, open_expired=False, next_part_power=None, **kwargs)
Manage object files.
This specific implementation manages object files on a disk formatted with a POSIX-compliant file system that supports extended attributes as metadata on a file or directory.
Note
The arguments to the constructor are considered implementation specific. The API does not define the constructor arguments.
The following path format is used for data file locations: <devices_path>/<device_dir>/<datadir>/<partdir>/<suffixdir>/<hashdir>/<datafile>.<ext>
- Parameters:
mgr – associated DiskFileManager instance
device_path – path to the target device or drive
partition – partition on the device in which the object lives
account – account name for the object
container – container name for the object
obj – object name for the object
_datadir – override the full datadir otherwise constructed here
policy – the StoragePolicy instance
use_splice – if true, use zero-copy splice() to send data
pipe_size – size of pipe buffer used in zero-copy operations
open_expired – if True, open() will not raise a DiskFileExpired if object is expired
next_part_power – the next partition power to be used
- create(size=None)
Context manager to create a file. We create a temporary file first, and then return a DiskFileWriter object to encapsulate the state.
Note
An implementation is not required to perform on-disk preallocations even if the parameter is specified. But if it does and it fails, it must raise a DiskFileNoSpace exception.
- Parameters:
size – optional initial size of file to explicitly allocate on disk
- Raises:
DiskFileNoSpace – if a size is specified and allocation fails
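The reference implementation's temporary-file-first approach can be sketched as the classic write-then-rename pattern. This is an illustration of the durability idea only, not the DiskFileWriter API (which also handles fragment indexes, metadata and directory fsyncs):

```python
import os
import tempfile

def atomic_put(path, data):
    """Sketch of the create()/put() pattern: write to a temporary file
    on the same filesystem, fsync, then rename into place so readers
    never observe a partially written object."""
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path))
    try:
        os.write(fd, data)
        os.fsync(fd)  # data reaches disk before the rename commits it
    finally:
        os.close(fd)
    os.rename(tmp_path, path)  # atomic within one filesystem
```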
- delete(timestamp)
Delete the object.
This implementation creates a tombstone file using the given timestamp, and removes any older versions of the object file. Any file that has an older timestamp than timestamp will be deleted.
Note
An implementation is free to use or ignore the timestamp parameter.
- Parameters:
timestamp – timestamp to compare with each file
- Raises:
DiskFileError – this implementation will raise the same errors as the create() method.
- property durable_timestamp
Provides the timestamp of the newest data file found in the object directory.
- Returns:
A Timestamp instance, or None if no data file was found.
- Raises:
DiskFileNotOpen – if the open() method has not been previously called on this instance.
- get_datafile_metadata()
Provide the datafile metadata for a previously opened object as a dictionary. This is metadata that was included when the object was first PUT, and does not include metadata set by any subsequent POST.
- Returns:
object’s datafile metadata dictionary
- Raises:
DiskFileNotOpen – if the
swift.obj.diskfile.DiskFile.open()
method was not previously invoked
- get_metadata()
Provide the metadata for a previously opened object as a dictionary.
- Returns:
object’s metadata dictionary
- Raises:
DiskFileNotOpen – if the
swift.obj.diskfile.DiskFile.open()
method was not previously invoked
- get_metafile_metadata()
Provide the metafile metadata for a previously opened object as a dictionary. This is metadata that was written by a POST and does not include any persistent metadata that was set by the original PUT.
- Returns:
object’s .meta file metadata dictionary, or None if there is no .meta file
- Raises:
DiskFileNotOpen – if the
swift.obj.diskfile.DiskFile.open()
method was not previously invoked
- open(modernize=False, current_time=None)
Open the object.
This implementation opens the data file representing the object, reads the associated metadata in the extended attributes, additionally combining metadata from fast-POST .meta files.
- Parameters:
modernize – if set, update this diskfile to the latest format. Currently, this means adding metadata checksums if none are present.
current_time – Unix time used in checking expiration. If not present, the current time will be used.
Note
An implementation is allowed to raise any of the following exceptions, but is only required to raise DiskFileNotExist when the object representation does not exist.
- Raises:
DiskFileCollision – on name mis-match with metadata
DiskFileNotExist – if the object does not exist
DiskFileDeleted – if the object was previously deleted
DiskFileQuarantined – if while reading metadata of the file some data did not pass cross checks
- Returns:
itself for use as a context manager
- read_metadata(current_time=None)
Return the metadata for an object without requiring the caller to open the object first.
- Parameters:
current_time – Unix time used in checking expiration. If not present, the current time will be used.
- Returns:
metadata dictionary for an object
- Raises:
DiskFileError – this implementation will raise the same errors as the open() method.
- reader(keep_cache=False, _quarantine_hook=<function BaseDiskFile.<lambda>>)
Return a swift.common.swob.Response class compatible "app_iter" object as defined by swift.obj.diskfile.DiskFileReader.
For this implementation, the responsibility of closing the open file is passed to the swift.obj.diskfile.DiskFileReader object.
- Parameters:
keep_cache – caller's preference for keeping data read in the OS buffer cache
_quarantine_hook – 1-arg callable called when obj quarantined; the arg is the reason for quarantine. Default is to ignore it. Not needed by the REST layer.
- Returns:
a swift.obj.diskfile.DiskFileReader object
- write_metadata(metadata)
Write a block of metadata to an object without requiring the caller to create the object first. Supports fast-POST behavior semantics.
- Parameters:
metadata – dictionary of metadata to be associated with the object
- Raises:
DiskFileError – this implementation will raise the same errors as the create() method.
- class swift.obj.diskfile.BaseDiskFileManager(conf, logger)
Management class for devices, providing common place for shared parameters and methods not provided by the DiskFile class (which primarily services the object server REST API layer).
The get_diskfile() method is how this implementation creates a DiskFile object.
Note
This class is reference implementation specific and not part of the pluggable on-disk backend API.
Note
TODO(portante): Not sure what the right name to recommend here, as “manager” seemed generic enough, though suggestions are welcome.
- Parameters:
conf – caller provided configuration object
logger – caller provided logger
- cleanup_ondisk_files(hsh_path, **kwargs)
Clean up on-disk files that are obsolete and gather the set of valid on-disk files for an object.
- Parameters:
hsh_path – object hash path
frag_index – if set, search for a specific fragment index .data file, otherwise accept the first valid .data file
- Returns:
a dict that may contain: valid on disk files keyed by their filename extension; a list of obsolete files stored under the key ‘obsolete’; a list of files remaining in the directory, reverse sorted, stored under the key ‘files’.
- static consolidate_hashes(partition_dir)
Take what’s in hashes.pkl and hashes.invalid, combine them, write the result back to hashes.pkl, and clear out hashes.invalid.
- Parameters:
partition_dir – absolute path to partition dir containing hashes.pkl and hashes.invalid
- Returns:
a dict, the suffix hashes (if any), the key ‘valid’ will be False if hashes.pkl is corrupt, cannot be read or does not exist
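A rough sketch of the consolidation step described above, using a plain-text hashes.invalid and a pickled hashes.pkl in a temporary directory. The file names follow the documented convention, but the record format and the helper names are assumptions for illustration, not Swift's actual encoding (and locking is omitted):

```python
import os
import pickle
import tempfile

def invalidate(partition_dir, suffix):
    # cheap append to hashes.invalid (no locking shown in this sketch)
    with open(os.path.join(partition_dir, 'hashes.invalid'), 'a') as f:
        f.write(suffix + '\n')

def consolidate(partition_dir):
    pkl_path = os.path.join(partition_dir, 'hashes.pkl')
    inv_path = os.path.join(partition_dir, 'hashes.invalid')
    try:
        with open(pkl_path, 'rb') as f:
            hashes = pickle.load(f)
        hashes.setdefault('valid', True)
    except (OSError, EOFError, pickle.UnpicklingError):
        # hashes.pkl is corrupt, cannot be read or does not exist
        hashes = {'valid': False}
    if os.path.exists(inv_path):
        with open(inv_path) as f:
            for suffix in filter(None, f.read().split('\n')):
                hashes[suffix] = None      # suffix needs rehashing
        open(inv_path, 'w').close()        # clear out hashes.invalid
    with open(pkl_path, 'wb') as f:        # write the result back
        pickle.dump(hashes, f)
    return hashes

partition_dir = tempfile.mkdtemp()
assert consolidate(partition_dir)['valid'] is False   # no hashes.pkl yet
invalidate(partition_dir, 'a1b')
assert consolidate(partition_dir)['a1b'] is None
```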
- construct_dev_path(device)
Construct the path to a device without checking if it is mounted.
- Parameters:
device – name of target device
- Returns:
full path to the device
- get_dev_path(device, mount_check=None)
Return the path to a device, first checking to see if either it is a proper mount point, or at least a directory depending on the mount_check configuration option.
- Parameters:
device – name of target device
mount_check – whether or not to check mountedness of device. Defaults to bool(self.mount_check).
- Returns:
full path to the device, None if the path to the device is not a proper mount point or directory.
- get_diskfile(device, partition, account, container, obj, policy, **kwargs)
Returns a BaseDiskFile instance for an object based on the object’s partition, path parts and policy.
- Parameters:
device – name of target device
partition – partition on device in which the object lives
account – account name for the object
container – container name for the object
obj – object name for the object
policy – the StoragePolicy instance
- get_diskfile_and_filenames_from_hash(device, partition, object_hash, policy, **kwargs)
Returns a tuple of (a DiskFile instance for an object at the given object_hash, the basenames of the files in the object’s hash dir). Just in case someone thinks of refactoring, be sure DiskFileDeleted is not raised, but the DiskFile instance representing the tombstoned object is returned instead.
- Parameters:
device – name of target device
partition – partition on the device in which the object lives
object_hash – the hash of an object path
policy – the StoragePolicy instance
- Raises:
DiskFileNotExist – if the object does not exist
- Returns:
a tuple comprising (an instance of BaseDiskFile, a list of file basenames)
- get_diskfile_from_audit_location(audit_location)
Returns a BaseDiskFile instance for an object at the given AuditLocation.
- Parameters:
audit_location – object location to be audited
- get_diskfile_from_hash(device, partition, object_hash, policy, **kwargs)
Returns a DiskFile instance for an object at the given object_hash. Just in case someone thinks of refactoring, be sure DiskFileDeleted is not raised, but the DiskFile instance representing the tombstoned object is returned instead.
- Parameters:
device – name of target device
partition – partition on the device in which the object lives
object_hash – the hash of an object path
policy – the StoragePolicy instance
- Raises:
DiskFileNotExist – if the object does not exist
- Returns:
an instance of BaseDiskFile
- get_hashes(device, partition, suffixes, policy, skip_rehash=False)
- Parameters:
device – name of target device
partition – partition name
suffixes – a list of suffix directories to be recalculated
policy – the StoragePolicy instance
skip_rehash – just mark the suffixes dirty; return None
- Returns:
a dictionary that maps suffix directories to their hashes
- get_ondisk_files(files, datadir, verify=True, policy=None, **kwargs)
Given a simple list of file names, determine the files that constitute a valid fileset, i.e. a set of files that defines the state of an object, and determine the files that are obsolete and could be deleted. Note that some files may fall into neither category.
If a file is considered part of a valid fileset then its info dict will be added to the results dict, keyed by <extension>_info. Any files that are no longer required will have their info dicts added to a list stored under the key ‘obsolete’.
The results dict will always contain entries with keys ‘ts_file’, ‘data_file’ and ‘meta_file’. Their values will be the fully qualified path to a file of the corresponding type if there is such a file in the valid fileset, or None.
- Parameters:
files – a list of file names.
datadir – directory name files are from; this is used to construct file paths in the results, but the datadir is not modified by this method.
verify – if True verify that the ondisk file contract has not been violated, otherwise do not verify.
policy – storage policy used to store the files. Used to validate fragment indexes for EC policies.
- Returns:
- a dict that will contain keys:
ts_file -> path to a .ts file or None
data_file -> path to a .data file or None
meta_file -> path to a .meta file or None
ctype_file -> path to a .meta file or None
- and may contain keys:
ts_info -> a file info dict for a .ts file
data_info -> a file info dict for a .data file
meta_info -> a file info dict for a .meta file
ctype_info -> a file info dict for a .meta file which contains the content-type value
unexpected -> a list of file paths for unexpected files
possible_reclaim -> a list of file info dicts for possible reclaimable files
obsolete -> a list of file info dicts for obsolete files
- static invalidate_hash(suffix_dir)
Invalidates the hash for a suffix_dir in the partition’s hashes file.
- Parameters:
suffix_dir – absolute path to suffix dir whose hash needs invalidating
- make_on_disk_filename(timestamp, ext=None, ctype_timestamp=None, *a, **kw)
Returns filename for given timestamp.
- Parameters:
timestamp – the object timestamp, an instance of Timestamp
ext – an optional string representing a file extension to be appended to the returned file name
ctype_timestamp – an optional content-type timestamp, an instance of Timestamp
- Returns:
a file name
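A hedged sketch of the naming round-trip: the file name is essentially the object timestamp plus an extension, with a content-type timestamp encoded into .meta names as an offset. The timestamp rendering and hex-offset encoding below are simplified assumptions for illustration, not Swift's exact on-disk format:

```python
import os

def make_name(timestamp, ext=None, ctype_timestamp=None):
    name = '%.5f' % timestamp          # simplified; real Timestamps differ
    if ext == '.meta' and ctype_timestamp is not None:
        # encode the content-type timestamp as a hex offset (assumption)
        delta = int(round((timestamp - ctype_timestamp) * 100000))
        name += '-%x' % delta
    return name + (ext or '')

def parse_name(filename):
    base, ext = os.path.splitext(filename)
    ctype_timestamp = None
    if '-' in base:
        base, delta = base.split('-', 1)
        ctype_timestamp = float(base) - int(delta, 16) / 100000.0
    return {'timestamp': float(base), 'ext': ext,
            'ctype_timestamp': ctype_timestamp}
```

A .meta name carrying a content-type timestamp round-trips through parse_name back to the two original timestamps.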
- object_audit_location_generator(policy, device_dirs=None, auditor_type='ALL')
Yield an AuditLocation for all objects stored under device_dirs.
- Parameters:
policy – the StoragePolicy instance
device_dirs – directory of target device
auditor_type – either ALL or ZBF
- parse_on_disk_filename(filename, policy)
Parse an on disk file name.
- Parameters:
filename – the file name including extension
policy – storage policy used to store the file
- Returns:
a dict, with keys for timestamp, ext and ctype_timestamp:
timestamp is a Timestamp
ctype_timestamp is a Timestamp or None for .meta files, otherwise None
ext is a string, the file extension including the leading dot or the empty string if the filename has no extension.
Subclasses may override this method to add further keys to the returned dict.
- Raises:
DiskFileError – if any part of the filename is not able to be validated.
- partition_lock(device, policy, partition, name=None, timeout=None)
A context manager that will lock on the partition given.
- Parameters:
device – device targeted by the lock request
policy – policy targeted by the lock request
partition – partition targeted by the lock request
- Raises:
PartitionLockTimeout – If the lock on the partition cannot be granted within the configured timeout.
- pickle_async_update(device, account, container, obj, data, timestamp, policy)
Write data describing a container update notification to a pickle file in the async_pending directory.
- Parameters:
device – name of target device
account – account name for the object
container – container name for the object
obj – object name for the object
data – update data to be written to pickle file
timestamp – a Timestamp
policy – the StoragePolicy instance
- static quarantine_renamer(device_path, corrupted_file_path)
In the case that a file is corrupted, move it to a quarantined area to allow replication to fix it.
- Parameters:
device_path – The path to the device the corrupted file is on.
corrupted_file_path – The path to the file you want quarantined.
- Returns:
path (str) of directory the file was moved to
- Raises:
OSError – re-raises non errno.EEXIST / errno.ENOTEMPTY exceptions from rename
- replication_lock(device, policy, partition)
A context manager that will lock on the partition and, if configured to do so, on the device given.
- Parameters:
device – name of target device
policy – policy targeted by the replication request
partition – partition targeted by the replication request
- Raises:
ReplicationLockTimeout – If the lock on the device cannot be granted within the configured timeout.
- yield_hashes(device, partition, policy, suffixes=None, **kwargs)
Yields tuples of (hash_only, timestamps) for object information stored for the given device, partition, and (optionally) suffixes. If suffixes is None, all stored suffixes will be searched for object hashes. Note that if suffixes is not None but empty, such as [], then nothing will be yielded.
timestamps is a dict which may contain items mapping:
ts_data -> timestamp of data or tombstone file,
ts_meta -> timestamp of meta file, if one exists
ts_ctype -> timestamp of meta file containing most recent content-type value, if one exists
durable -> True if data file at ts_data is durable, False otherwise
where timestamps are instances of Timestamp.
- Parameters:
device – name of target device
partition – partition name
policy – the StoragePolicy instance
suffixes – optional list of suffix directories to be searched
- yield_suffixes(device, partition, policy)
Yields tuples of (full_path, suffix_only) for suffixes stored on the given device and partition.
- Parameters:
device – name of target device
partition – partition name
policy – the StoragePolicy instance
- class swift.obj.diskfile.BaseDiskFileReader(fp, data_file, obj_size, etag, disk_chunk_size, keep_cache_size, device_path, logger, quarantine_hook, use_splice, pipe_size, diskfile, keep_cache=False)
Encapsulation of the WSGI read context for servicing GET REST API requests. Serves as the context manager object for the swift.obj.diskfile.DiskFile class’s swift.obj.diskfile.DiskFile.reader() method.
Note
The quarantining behavior of this method is considered implementation specific, and is not required of the API.
Note
The arguments to the constructor are considered implementation specific. The API does not define the constructor arguments.
- Parameters:
fp – open file object pointer reference
data_file – on-disk data file name for the object
obj_size – verified on-disk size of the object
etag – expected metadata etag value for entire file
disk_chunk_size – size of reads from disk in bytes
keep_cache_size – maximum object size that will be kept in cache
device_path – on-disk device path, used when quarantining an obj
logger – logger caller wants this object to use
quarantine_hook – 1-arg callable called w/reason when quarantined
use_splice – if true, use zero-copy splice() to send data
pipe_size – size of pipe buffer used in zero-copy operations
diskfile – the diskfile creating this DiskFileReader instance
keep_cache – should resulting reads be kept in the buffer cache
- app_iter_range(start, stop)
Returns an iterator over the data file for range (start, stop)
- app_iter_ranges(ranges, content_type, boundary, size)
Returns an iterator over the data file for a set of ranges
- close()
Close the open file handle if present.
For this specific implementation, this method will handle quarantining the file if necessary.
- zero_copy_send(wsockfd)
Does some magic with splice() and tee() to move stuff from disk to network without ever touching userspace.
- Parameters:
wsockfd – file descriptor (integer) of the socket out which to send data
- class swift.obj.diskfile.BaseDiskFileWriter(name, datadir, size, bytes_per_sync, diskfile, next_part_power)
Encapsulation of the write context for servicing PUT REST API requests. Serves as the context manager object for the swift.obj.diskfile.DiskFile class’s swift.obj.diskfile.DiskFile.create() method.
Note
It is the responsibility of the swift.obj.diskfile.DiskFile.create() method context manager to close the open file descriptor.
Note
The arguments to the constructor are considered implementation specific. The API does not define the constructor arguments.
- Parameters:
name – name of object from REST API
datadir – on-disk directory the object will end up in on swift.obj.diskfile.DiskFileWriter.put()
fd – open file descriptor of temporary file to receive data
tmppath – full path name of the opened file descriptor
bytes_per_sync – number bytes written between sync calls
diskfile – the diskfile creating this DiskFileWriter instance
next_part_power – the next partition power to be used
- chunks_finished()
Expose internal stats about written chunks.
- Returns:
a tuple, (upload_size, etag)
- commit(timestamp)
Perform any operations necessary to mark the object as durable. For replication policy type this is a no-op.
- Parameters:
timestamp – object put timestamp, an instance of Timestamp
- put(metadata)
Finalize writing the file on disk.
- Parameters:
metadata – dictionary of metadata to be associated with the object
- write(chunk)
Write a chunk of data to disk. All invocations of this method must come before invoking swift.obj.diskfile.DiskFileWriter.put().
For this implementation, the data is written into a temporary file.
- Parameters:
chunk – the chunk of data to write as a string object
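The write()/put() life cycle can be sketched with a toy writer (a hypothetical class, not Swift's implementation): chunks accumulate in a temporary file created on the same filesystem as the object directory, and put() atomically renames it to '<timestamp>.data':

```python
import hashlib
import os
import tempfile

class ToyWriter:
    """Minimal PUT-path sketch: temp file now, atomic rename on put()."""

    def __init__(self, datadir):
        self.datadir = datadir
        os.makedirs(datadir, exist_ok=True)
        # temp file on the same filesystem so the final rename is atomic
        fd, self.tmppath = tempfile.mkstemp(dir=datadir, suffix='.tmp')
        self._fp = os.fdopen(fd, 'wb')
        self._md5 = hashlib.md5()
        self.upload_size = 0

    def write(self, chunk):
        self._fp.write(chunk)
        self._md5.update(chunk)
        self.upload_size += len(chunk)

    def chunks_finished(self):
        # mirrors the documented (upload_size, etag) tuple
        return self.upload_size, self._md5.hexdigest()

    def put(self, metadata):
        self._fp.close()
        target = os.path.join(self.datadir,
                              '%s.data' % metadata['X-Timestamp'])
        os.rename(self.tmppath, target)   # atomic within one filesystem
        return target
```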
- class swift.obj.diskfile.DiskFile(mgr, device_path, partition, account=None, container=None, obj=None, _datadir=None, policy=None, use_splice=False, pipe_size=None, open_expired=False, next_part_power=None, **kwargs)
- reader_cls
alias of
DiskFileReader
- writer_cls
alias of
DiskFileWriter
- class swift.obj.diskfile.DiskFileManager(conf, logger)
- diskfile_cls
alias of
DiskFile
- class swift.obj.diskfile.DiskFileReader(fp, data_file, obj_size, etag, disk_chunk_size, keep_cache_size, device_path, logger, quarantine_hook, use_splice, pipe_size, diskfile, keep_cache=False)
- class swift.obj.diskfile.DiskFileWriter(name, datadir, size, bytes_per_sync, diskfile, next_part_power)
- put(metadata)
Finalize writing the file on disk.
- Parameters:
metadata – dictionary of metadata to be associated with the object
- class swift.obj.diskfile.ECDiskFile(*args, **kwargs)
- property durable_timestamp
Provides the timestamp of the newest durable file found in the object directory.
- Returns:
A Timestamp instance, or None if no durable file was found.
- Raises:
DiskFileNotOpen – if the open() method has not been previously called on this instance.
- property fragments
Provides information about all fragments that were found in the object directory, including fragments without a matching durable file, and including any fragment chosen to construct the opened diskfile.
- Returns:
A dict mapping <Timestamp instance> -> <list of frag indexes>, or None if the diskfile has not been opened or no fragments were found.
- purge(timestamp, frag_index, nondurable_purge_delay=0, meta_timestamp=None)
Remove a tombstone file matching the specified timestamp or datafile matching the specified timestamp and fragment index from the object directory.
This provides the EC reconstructor/ssync process with a way to remove a tombstone or fragment from a handoff node after reverting it to its primary node.
The hash will be invalidated, and if empty the hsh_path will be removed immediately.
- Parameters:
timestamp – the object timestamp, an instance of Timestamp
frag_index – fragment archive index, must be a whole number or None.
nondurable_purge_delay – only remove a non-durable data file if it’s been on disk longer than this many seconds.
meta_timestamp – if not None then remove any meta file with this timestamp
- reader_cls
alias of
ECDiskFileReader
- writer_cls
alias of
ECDiskFileWriter
- class swift.obj.diskfile.ECDiskFileManager(conf, logger)
- diskfile_cls
alias of
ECDiskFile
- make_on_disk_filename(timestamp, ext=None, frag_index=None, ctype_timestamp=None, durable=False, *a, **kw)
Returns the EC specific filename for given timestamp.
- Parameters:
timestamp – the object timestamp, an instance of Timestamp
ext – an optional string representing a file extension to be appended to the returned file name
frag_index – a fragment archive index, used with .data extension only, must be a whole number.
ctype_timestamp – an optional content-type timestamp, an instance of Timestamp
durable – if True then include a durable marker in data filename.
- Returns:
a file name
- Raises:
DiskFileError – if ext==’.data’ and the kwarg frag_index is not a whole number
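A sketch of the EC naming rule: the fragment index, and optionally a durable marker, are folded into the .data name. The '#<frag>[#d]' layout follows the scheme described here; the timestamp rendering is simplified, and a plain ValueError stands in for DiskFileError so the snippet is self-contained:

```python
def make_ec_name(timestamp, ext=None, frag_index=None, durable=False):
    name = '%.5f' % timestamp                  # simplified rendering
    if ext == '.data':
        if not isinstance(frag_index, int):
            # the real code raises DiskFileError here
            raise ValueError('frag_index must be a whole number for .data')
        name += '#%d' % frag_index             # fragment archive index
        if durable:
            name += '#d'                       # durable marker
    return name + (ext or '')
```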
- parse_on_disk_filename(filename, policy)
Returns timestamp(s) and other info extracted from a policy specific file name. For EC policy the data file name includes a fragment index and possibly a durable marker, both of which must be stripped off to retrieve the timestamp.
- Parameters:
filename – the file name including extension
- Returns:
- a dict, with keys for timestamp, frag_index, durable, ext and ctype_timestamp:
timestamp is a Timestamp
frag_index is an int or None
ctype_timestamp is a Timestamp or None for .meta files, otherwise None
ext is a string, the file extension including the leading dot or the empty string if the filename has no extension
durable is a boolean that is True if the filename is a data file that includes a durable marker
- Raises:
DiskFileError – if any part of the filename is not able to be validated.
- validate_fragment_index(frag_index, policy=None)
Return int representation of frag_index, or raise a DiskFileError if frag_index is not a whole number.
- Parameters:
frag_index – a fragment archive index
policy – storage policy used to validate the index against
- class swift.obj.diskfile.ECDiskFileReader(fp, data_file, obj_size, etag, disk_chunk_size, keep_cache_size, device_path, logger, quarantine_hook, use_splice, pipe_size, diskfile, keep_cache=False)
- class swift.obj.diskfile.ECDiskFileWriter(name, datadir, size, bytes_per_sync, diskfile, next_part_power)
- commit(timestamp)
Finalize put by renaming the object data file to include a durable marker. We do this for EC policy because it requires a 2-phase put commit confirmation.
- Parameters:
timestamp – object put timestamp, an instance of Timestamp
- Raises:
DiskFileError – if the diskfile frag_index has not been set (either during initialisation or a call to put())
- put(metadata)
The only difference between this method and the replication policy DiskFileWriter method is adding the frag index to the metadata.
- Parameters:
metadata – dictionary of metadata to be associated with object
- swift.obj.diskfile.consolidate_hashes(partition_dir)
Take what’s in hashes.pkl and hashes.invalid, combine them, write the result back to hashes.pkl, and clear out hashes.invalid.
- Parameters:
partition_dir – absolute path to partition dir containing hashes.pkl and hashes.invalid
- Returns:
a dict, the suffix hashes (if any), the key ‘valid’ will be False if hashes.pkl is corrupt, cannot be read or does not exist
- swift.obj.diskfile.extract_policy(obj_path)
Extracts the policy for an object (based on the name of the objects directory) given the device-relative path to the object. Returns None in the event that the path is malformed in some way.
The device-relative path is everything after the mount point; for example:
/srv/node/d42/objects-5/30/179/485dc017205a81df3af616d917c90179/1401811134.873649.data
would have device-relative path:
objects-5/30/179/485dc017205a81df3af616d917c90179/1401811134.873649.data
- Parameters:
obj_path – device-relative path of an object, or the full path
- Returns:
a BaseStoragePolicy or None
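The path-parsing rule can be sketched as follows. This hypothetical helper returns a policy index rather than a BaseStoragePolicy instance, since it does not consult a policy registry; a leading 'objects' component maps to the legacy policy 0 and 'objects-<N>' to index N:

```python
def extract_policy_index(obj_path):
    """Return the policy index from a device-relative object path."""
    top = obj_path.lstrip('/').split('/', 1)[0]
    if top == 'objects':
        return 0                       # legacy Policy-0 directory
    if top.startswith('objects-'):
        suffix = top[len('objects-'):]
        if suffix.isdigit():
            return int(suffix)
    return None                        # malformed path
```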
- swift.obj.diskfile.get_async_dir(policy_or_index)
Get the async dir for the given policy.
- Parameters:
policy_or_index – StoragePolicy instance, or an index (string or int); if None, the legacy Policy-0 is assumed.
- Returns:
async_pending or async_pending-<N> as appropriate
- swift.obj.diskfile.get_data_dir(policy_or_index)
Get the data dir for the given policy.
- Parameters:
policy_or_index – StoragePolicy instance, or an index (string or int); if None, the legacy Policy-0 is assumed.
- Returns:
objects or objects-<N> as appropriate
- swift.obj.diskfile.get_part_path(dev_path, policy, partition)
Given the device path, policy, and partition, returns the full path to the partition
- swift.obj.diskfile.get_tmp_dir(policy_or_index)
Get the temp dir for the given policy.
- Parameters:
policy_or_index – StoragePolicy instance, or an index (string or int); if None, the legacy Policy-0 is assumed.
- Returns:
tmp or tmp-<N> as appropriate
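get_data_dir, get_async_dir and get_tmp_dir all share one mapping, which can be sketched with a single hypothetical helper: index 0 (or None) keeps the legacy bare name, any other index gets a '-<N>' suffix:

```python
def policy_dir(base, policy_or_index):
    """Map a base dir name plus policy index to its on-disk directory."""
    index = 0 if policy_or_index is None else int(policy_or_index)
    if index < 0:
        raise ValueError('policy index must be non-negative')
    return base if index == 0 else '%s-%d' % (base, index)
```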
- swift.obj.diskfile.invalidate_hash(suffix_dir)
Invalidates the hash for a suffix_dir in the partition’s hashes file.
- Parameters:
suffix_dir – absolute path to suffix dir whose hash needs invalidating
- swift.obj.diskfile.object_audit_location_generator(devices, datadir, mount_check=True, logger=None, device_dirs=None, auditor_type='ALL')
Given a devices path (e.g. “/srv/node”), yield an AuditLocation for all objects stored under that directory for the given datadir (policy), if device_dirs isn’t set. If device_dirs is set, only yield AuditLocation for the objects under the entries in device_dirs. The AuditLocation only knows the path to the hash directory, not to the .data file therein (if any). This is to avoid a double listdir(hash_dir); the DiskFile object will always do one, so we don’t.
- Parameters:
devices – parent directory of the devices to be audited
datadir – objects directory
mount_check – flag to check if a mount check should be performed on devices
logger – a logger object
device_dirs – a list of directories under devices to traverse
auditor_type – either ALL or ZBF
- swift.obj.diskfile.quarantine_renamer(device_path, corrupted_file_path)
In the case that a file is corrupted, move it to a quarantined area to allow replication to fix it.
- Parameters:
device_path – The path to the device the corrupted file is on.
corrupted_file_path – The path to the file you want quarantined.
- Returns:
path (str) of directory the file was moved to
- Raises:
OSError – re-raises non errno.EEXIST / errno.ENOTEMPTY exceptions from rename
- swift.obj.diskfile.read_hashes(partition_dir)
Read the existing hashes.pkl
- Returns:
a dict, the suffix hashes (if any), the key ‘valid’ will be False if hashes.pkl is corrupt, cannot be read or does not exist
- swift.obj.diskfile.read_metadata(fd, add_missing_checksum=False)
Helper function to read the pickled metadata from an object file.
- Parameters:
fd – file descriptor or filename to load the metadata from
add_missing_checksum – if set and checksum is missing, add it
- Returns:
dictionary of metadata
- swift.obj.diskfile.relink_paths(target_path, new_target_path, ignore_missing=True)
Hard-links a file located in target_path using the second path new_target_path. Creates intermediate directories if required.
- Parameters:
target_path – current absolute filename
new_target_path – new absolute filename for the hardlink
ignore_missing – if True then no exception is raised if the link could not be made because target_path did not exist, otherwise an OSError will be raised.
- Raises:
OSError – if the hard link could not be created, unless the intended hard link already exists or target_path does not exist and ignore_missing is True.
- Returns:
True if the link was created by the call to this method, False otherwise.
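A minimal sketch of the described behaviour (a stand-in, not the reference implementation): hard-link the file, creating intermediate directories, and treat a missing source (when ignore_missing is True) or an already-existing link as a no-op:

```python
import errno
import os

def relink(target_path, new_target_path, ignore_missing=True):
    os.makedirs(os.path.dirname(new_target_path), exist_ok=True)
    try:
        os.link(target_path, new_target_path)
    except OSError as err:
        if err.errno == errno.ENOENT and ignore_missing:
            return False    # source vanished; treated as a no-op
        if err.errno == errno.EEXIST:
            return False    # intended hard link already exists
        raise
    return True             # the link was created by this call
```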
- swift.obj.diskfile.write_hashes(partition_dir, hashes)
Write hashes to hashes.pkl
The updated key is added to hashes before it is written.
- swift.obj.diskfile.write_metadata(fd, metadata, xattr_size=65536)
Helper function to write pickled metadata for an object file.
- Parameters:
fd – file descriptor or filename to write the metadata
metadata – metadata to write
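The xattr_size parameter hints at the underlying scheme: pickled metadata larger than one extended-attribute slot is split across numbered keys. The sketch below keeps the chunking logic but stores chunks in a plain dict instead of calling os.setxattr/os.getxattr, so it stays portable; the key names are assumptions for illustration:

```python
import pickle

def write_meta(store, metadata, xattr_size=65536):
    """Split pickled metadata across numbered slots in `store`."""
    blob = pickle.dumps(metadata, protocol=2)
    for i, off in enumerate(range(0, len(blob), xattr_size)):
        # first slot has no numeric suffix, later ones are numbered
        key = 'user.swift.metadata' + ('%d' % i if i else '')
        store[key] = blob[off:off + xattr_size]

def read_meta(store):
    """Reassemble the slots in order and unpickle the result."""
    blob, i = b'', 0
    while True:
        key = 'user.swift.metadata' + ('%d' % i if i else '')
        if key not in store:
            break
        blob += store[key]
        i += 1
    return pickle.loads(blob)
```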