Find an example account server configuration at
etc/account-server.conf-sample
in the source code repository.
The available configuration options are described in the tables below, grouped by configuration file section:
Options for the [DEFAULT] section:

Configuration option = Default value | Description |
---|---|
backlog = 4096 | Maximum number of allowed pending TCP connections |
bind_ip = 0.0.0.0 | IP address for the server to bind to |
bind_port = 6002 | Port for the server to bind to |
bind_timeout = 30 | Seconds to attempt bind before giving up |
db_preallocation = off | If you don't mind the extra disk space usage in overhead, you can turn this on to preallocate disk space with SQLite databases to decrease fragmentation. |
devices = /srv/node | Parent directory of where devices are mounted |
disable_fallocate = false | Disable "fast fail" fallocate checks if the underlying filesystem does not support it. |
eventlet_debug = false | If true, turn on debug logging for eventlet |
fallocate_reserve = 0 | You can set fallocate_reserve to the number of bytes you'd like fallocate to reserve, whether there is space for the given file size or not. This is useful for systems that behave badly when they completely run out of space; you can make the services pretend they're out of space early. |
log_address = /dev/log | Location where syslog sends the logs to |
log_custom_handlers = | Comma-separated list of functions to call to set up custom log handlers |
log_facility = LOG_LOCAL0 | Syslog log facility |
log_level = INFO | Logging level |
log_max_line_length = 0 | Caps the length of log lines to the value given; no limit if set to 0, the default |
log_name = swift | Label used when logging |
log_statsd_default_sample_rate = 1.0 | Defines the probability of sending a sample for any given event or timing measurement |
log_statsd_host = localhost | If not set, the StatsD feature is disabled |
log_statsd_metric_prefix = | Value will be prepended to every metric sent to the StatsD server |
log_statsd_port = 8125 | Port value for the StatsD server |
log_statsd_sample_rate_factor = 1.0 | Not recommended to set this to a value less than 1.0; if the frequency of logging is too high, tune log_statsd_default_sample_rate instead |
log_udp_host = | If not set, the UDP receiver for syslog is disabled |
log_udp_port = 514 | Port value for UDP receiver, if enabled |
max_clients = 1024 | Maximum number of clients one worker can process simultaneously. Lowering the number of clients handled per worker, and raising the number of workers, can lessen the impact that a CPU-intensive or blocking request has on other requests served by the same worker. If set to one, a worker handles only one request at a time and will not accept another request concurrently. |
mount_check = true | Whether or not to check if the devices are mounted, to prevent accidentally writing to the root device |
swift_dir = /etc/swift | Swift configuration directory |
user = swift | User to run as |
workers = auto | Number of pre-forked worker processes that accept connections; auto sizes this to the number of CPU cores. By increasing the number of workers to a much higher value, one can reduce the impact of slow file system operations in one request from negatively impacting other requests. |
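As an illustration of how these [DEFAULT] options combine, the sketch below raises the worker count while lowering clients per worker and reserves a small fallocate margin. All values are assumptions chosen for the example, not defaults or recommendations.

```ini
[DEFAULT]
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true
# Example tuning (not defaults): more workers, fewer clients per worker,
# so one slow filesystem operation affects fewer concurrent requests.
workers = 8
max_clients = 512
# Pretend to be out of space early; the 1% figure is illustrative
# (a percentage is accepted when the value ends with '%').
fallocate_reserve = 1%
# Enable StatsD metrics, assuming a statsd daemon listens on localhost:8125.
log_statsd_host = localhost
log_statsd_port = 8125
```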
Options for the [app:account-server] section:

Configuration option = Default value | Description |
---|---|
auto_create_account_prefix = . | Prefix to use when automatically creating accounts |
replication_server = false | If defined, tells the server how to handle replication verbs in requests. When set to True (or 1), only replication verbs will be accepted. When set to False, replication verbs will be rejected. When undefined, the server will accept any verb in the request. |
set log_address = /dev/log | Location where syslog sends the logs to |
set log_facility = LOG_LOCAL0 | Syslog log facility |
set log_level = INFO | Log level |
set log_name = account-server | Label to use when logging |
set log_requests = true | Whether or not to log requests |
use = egg:swift#account | Entry point of paste.deploy in the server |
Options for the [pipeline:main] section:

Configuration option = Default value | Description |
---|---|
pipeline = healthcheck recon account-server | Pipeline to use for processing operations |
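The pipeline is an ordered list of filter sections ending in the account-server app. As a sketch, the xprofile filter documented later could be inserted after healthcheck, the placement the sample file suggests as the safer option; this assumes a [filter:xprofile] section is defined, as in the sample file below.

```ini
[pipeline:main]
# Sketch only: profile requests by adding xprofile after healthcheck.
pipeline = healthcheck xprofile recon account-server
```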
Options for the [account-replicator] section:

Configuration option = Default value | Description |
---|---|
concurrency = 8 | Number of replication workers to spawn |
conn_timeout = 0.5 | Connection timeout to external services |
interval = 30 | Minimum time for a pass to take |
log_address = /dev/log | Location where syslog sends the logs to |
log_facility = LOG_LOCAL0 | Syslog log facility |
log_level = INFO | Logging level |
log_name = account-replicator | Label used when logging |
max_diffs = 100 | Caps how long the replicator spends trying to sync a database per pass |
node_timeout = 10 | Request timeout to external services |
per_diff = 1000 | Limit number of items to get per diff |
reclaim_age = 604800 | Time elapsed in seconds before an object can be reclaimed |
recon_cache_path = /var/cache/swift | Directory where stats for a few items will be stored |
rsync_compress = no | Allow rsync to compress data which is transmitted to the destination node during sync. However, this is applicable only when the destination node is in a different region than the local one. |
rsync_module = {replication_ip}::account | Format of the rsync module where the replicator will send data. The configuration value can include some variables that will be extracted from the ring. Variables must follow the format {NAME} where NAME is one of: ip, port, replication_ip, replication_port, region, zone, device, meta. See etc/rsyncd.conf-sample for some examples. |
run_pause = 30 | Time in seconds to wait between replication passes (deprecated; use interval instead) |
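Because rsync_module accepts ring-derived variables, replication traffic can be directed at per-device rsync modules. The snippet below is a sketch only: it assumes the destination nodes' rsyncd.conf defines matching per-device account modules (see etc/rsyncd.conf-sample for real examples).

```ini
[account-replicator]
# Assumed per-device module naming; requires corresponding modules in
# rsyncd.conf on the destination nodes.
rsync_module = {replication_ip}::account_{device}
# Compress rsync traffic; only applies when the destination node is in a
# different region than the local one.
rsync_compress = yes
```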
Options for the [account-auditor] section:

Configuration option = Default value | Description |
---|---|
accounts_per_second = 200 | Maximum accounts audited per second. Should be tuned according to individual system specs. 0 is unlimited. |
interval = 1800 | Minimum time for a pass to take |
log_address = /dev/log | Location where syslog sends the logs to |
log_facility = LOG_LOCAL0 | Syslog log facility |
log_level = INFO | Logging level |
log_name = account-auditor | Label used when logging |
recon_cache_path = /var/cache/swift | Directory where stats for a few items will be stored |
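As a hedged example of throttling the auditor on a small or I/O-constrained node, the values below lengthen the pass interval and cap audit speed; both numbers are illustrative, not recommendations.

```ini
[account-auditor]
# Audit each account at most once per hour instead of every 30 minutes.
interval = 3600
# Cap audit speed; 0 would mean unlimited.
accounts_per_second = 50
```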
Options for the [account-reaper] section:

Configuration option = Default value | Description |
---|---|
concurrency = 25 | Number of reaper workers to spawn |
conn_timeout = 0.5 | Connection timeout to external services |
delay_reaping = 0 | Normally, the reaper begins deleting account information for deleted accounts immediately; you can set this to delay its work. The value is in seconds; 2592000 = 30 days, for example. |
interval = 3600 | Minimum time for a pass to take |
log_address = /dev/log | Location where syslog sends the logs to |
log_facility = LOG_LOCAL0 | Syslog log facility |
log_level = INFO | Logging level |
log_name = account-reaper | Label used when logging |
node_timeout = 10 | Request timeout to external services |
reap_warn_after = 2592000 | If the account fails to be reaped due to a persistent error, the account reaper will log a message such as "Account <name> has not been reaped since <date>". You can search logs for this message if space is not being reclaimed after you delete account(s). This is in addition to any time requested by delay_reaping. |
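delay_reaping postpones the point at which the reaper starts deleting data for a deleted account, giving operators a grace window before space is reclaimed. A hypothetical one-week delay combined with the default 30-day warning would look like this:

```ini
[account-reaper]
# Wait 7 days (in seconds) after an account is deleted before reaping it.
delay_reaping = 604800
# Log a warning if an account still has not been reaped after a further
# 30 days (this is in addition to delay_reaping).
reap_warn_after = 2592000
```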
Options for the [filter:healthcheck] section:

Configuration option = Default value | Description |
---|---|
disable_path = | An optional filesystem path which, if present, will cause the healthcheck URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE" |
use = egg:swift#healthcheck | Entry point of paste.deploy in the server |
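Because the healthcheck response flips to "503 Service Unavailable" whenever the file named by disable_path exists, creating or removing that file is a simple way to drain a node behind a load balancer. The path below is an assumption used only for illustration.

```ini
[filter:healthcheck]
use = egg:swift#healthcheck
# If /etc/swift/healthcheck_disable exists, the healthcheck URL returns
# "503 Service Unavailable" with a body of "DISABLED BY FILE".
disable_path = /etc/swift/healthcheck_disable
```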
Options for the [filter:recon] section:

Configuration option = Default value | Description |
---|---|
recon_cache_path = /var/cache/swift | Directory where stats for a few items will be stored |
use = egg:swift#recon | Entry point of paste.deploy in the server |
Options for the [filter:xprofile] section:

Configuration option = Default value | Description |
---|---|
dump_interval = 5.0 | The profile data will be dumped to local disk, based on the naming rule set by log_filename_prefix, at this interval (seconds). |
dump_timestamp = false | Be careful: this option causes the profiler to dump data into files with a timestamp, which means many files will pile up in the directory. |
flush_at_shutdown = false | Clears the data when the wsgi server shuts down. |
log_filename_prefix = /tmp/log/swift/profile/default.profile | This prefix is combined with the process ID and timestamp to name the profile data file. Make sure the executing user has permission to write into this path. Any missing path segments will be created, if necessary. When you enable profiling in more than one type of daemon, you must override it with a unique value such as /var/log/swift/profile/account.profile. |
path = /__profile__ | This is the path of the URL to access the mini web UI. |
profile_module = eventlet.green.profile | This option enables you to switch profilers which inherit from the Python standard profiler. Currently, the supported values include 'cProfile', 'eventlet.green.profile', etc. |
unwind = false | Unwind the iterator of applications |
use = egg:swift#xprofile | Entry point of paste.deploy in the server |

The complete etc/account-server.conf-sample file is reproduced below:
[DEFAULT]
# bind_ip = 0.0.0.0
bind_port = 6202
# bind_timeout = 30
# backlog = 4096
# user = swift
# swift_dir = /etc/swift
# devices = /srv/node
# mount_check = true
# disable_fallocate = false
#
# Use an integer to override the number of pre-forked processes that will
# accept connections.
# workers = auto
#
# Maximum concurrent requests per worker
# max_clients = 1024
#
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
# The following caps the length of log lines to the value given; no limit if
# set to 0, the default.
# log_max_line_length = 0
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
#
# If you don't mind the extra disk space usage in overhead, you can turn this
# on to preallocate disk space with SQLite databases to decrease fragmentation.
# db_preallocation = off
#
# eventlet_debug = false
#
# You can set fallocate_reserve to the number of bytes or percentage of disk
# space you'd like fallocate to reserve, whether there is space for the given
# file size or not. Percentage will be used if the value ends with a '%'.
# fallocate_reserve = 1%
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =
[pipeline:main]
pipeline = healthcheck recon account-server
[app:account-server]
use = egg:swift#account
# You can override the default log routing for this app here:
# set log_name = account-server
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_requests = true
# set log_address = /dev/log
#
# auto_create_account_prefix = .
#
# Configure parameter for creating specific server
# To handle all verbs, including replication verbs, do not specify
# "replication_server" (this is the default). To only handle replication,
# set to a True value (e.g. "True" or "1"). To handle only non-replication
# verbs, set to "False". Unless you have a separate replication network, you
# should not specify any value for "replication_server". Default is empty.
# replication_server = false
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =
[filter:healthcheck]
use = egg:swift#healthcheck
# An optional filesystem path, which if present, will cause the healthcheck
# URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE"
# disable_path =
[filter:recon]
use = egg:swift#recon
# recon_cache_path = /var/cache/swift
[account-replicator]
# You can override the default log routing for this app here (don't use set!):
# log_name = account-replicator
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# Maximum number of database rows that will be sync'd in a single HTTP
# replication request. Databases with less than or equal to this number of
# differing rows will always be sync'd using an HTTP replication request rather
# than using rsync.
# per_diff = 1000
#
# Maximum number of HTTP replication requests attempted on each replication
# pass for any one container. This caps how long the replicator will spend
# trying to sync a given database per pass so the other databases don't get
# starved.
# max_diffs = 100
#
# Number of replication workers to spawn.
# concurrency = 8
#
# Time in seconds to wait between replication passes
# interval = 30
# run_pause is deprecated, use interval instead
# run_pause = 30
#
# node_timeout = 10
# conn_timeout = 0.5
#
# The replicator also performs reclamation
# reclaim_age = 604800
#
# Allow rsync to compress data which is transmitted to destination node
# during sync. However, this is applicable only when destination node is in
# a different region than the local one.
# rsync_compress = no
#
# Format of the rsync module where the replicator will send data. See
# etc/rsyncd.conf-sample for some usage examples.
# rsync_module = {replication_ip}::account
#
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =
[account-auditor]
# You can override the default log routing for this app here (don't use set!):
# log_name = account-auditor
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# Will audit each account at most once per interval
# interval = 1800
#
# accounts_per_second = 200
# recon_cache_path = /var/cache/swift
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =
[account-reaper]
# You can override the default log routing for this app here (don't use set!):
# log_name = account-reaper
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# concurrency = 25
# interval = 3600
# node_timeout = 10
# conn_timeout = 0.5
#
# Normally, the reaper begins deleting account information for deleted accounts
# immediately; you can set this to delay its work however. The value is in
# seconds; 2592000 = 30 days for example.
# delay_reaping = 0
#
# If the account fails to be reaped due to a persistent error, the
# account reaper will log a message such as:
# Account <name> has not been reaped since <date>
# You can search logs for this message if space is not being reclaimed
# after you delete account(s).
# Default is 2592000 seconds (30 days). This is in addition to any time
# requested by delay_reaping.
# reap_warn_after = 2592000
#
# You can set scheduling priority of processes. Niceness values range from -20
# (most favorable to the process) to 19 (least favorable to the process).
# nice_priority =
#
# You can set I/O scheduling class and priority of processes. I/O niceness
# class values are IOPRIO_CLASS_RT (realtime), IOPRIO_CLASS_BE (best-effort) and
# IOPRIO_CLASS_IDLE (idle). I/O niceness priority is a number which goes from
# 0 to 7. The higher the value, the lower the I/O priority of the process.
# Work only with ionice_class.
# ionice_class =
# ionice_priority =
# Note: Put it at the beginning of the pipeline to profile all middleware. But
# it is safer to put this after healthcheck.
[filter:xprofile]
use = egg:swift#xprofile
# This option enables you to switch profilers which should inherit from python
# standard profiler. Currently the supported value can be 'cProfile',
# 'eventlet.green.profile' etc.
# profile_module = eventlet.green.profile
#
# This prefix will be used to combine process ID and timestamp to name the
# profile data file. Make sure the executing user has permission to write
# into this path (missing path segments will be created, if necessary).
# If you enable profiling in more than one type of daemon, you must override
# it with a unique value like: /var/log/swift/profile/account.profile
# log_filename_prefix = /tmp/log/swift/profile/default.profile
#
# the profile data will be dumped to local disk based on above naming rule
# in this interval.
# dump_interval = 5.0
#
# Be careful, this option will enable profiler to dump data into the file with
# time stamp which means there will be lots of files piled up in the directory.
# dump_timestamp = false
#
# This is the path of the URL to access the mini web UI.
# path = /__profile__
#
# Clear the data when the wsgi server shutdown.
# flush_at_shutdown = false
#
# unwind the iterator of applications
# unwind = false