RADOS Dictionary Plugin
The Dovecot dictionaries are a good candidate for implementation on top of the Ceph omap key/value store. They are a building block toward a Dovecot that runs exclusively on Ceph.
Dovecot uses two namespaces for dictionary keys:

- shared/<key>
  These are shared entries. They are stored in a RADOS object named <oid>/shared. <key> is used as the omap key as given.
- priv/<key>
  These are private entries for a user. They are stored in a RADOS object named <oid>/<username>. <key> is used as the omap key as given.
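As an illustration, the resulting objects can be inspected with the rados CLI. This sketch uses the object and pool names from the example configuration later on this page (oid=metadata, pool=mail_dictionary); the username jdoe is hypothetical:

```sh
# List the omap entries of the shared dictionary object
rados -p mail_dictionary listomapvals metadata/shared

# List the omap entries of the private dictionary object of user jdoe
rados -p mail_dictionary listomapvals metadata/jdoe
```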
To load the plugin, add dict_rados to the list of mail plugins. There are several ways to do this; for example, add the plugin to mail_plugins in 10-mail.conf:
mail_plugins = $mail_plugins dict_rados
To enable or disable the plugin per user, you can make your userdb return mail_plugins as an extra field. See UserDatabase/ExtraFields for examples.
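A minimal sketch, assuming a passwd-file userdb; the file path, user, and credentials are hypothetical, and the field layout follows Dovecot's passwd-file format:

```sh
# /etc/dovecot/users: enable the plugin for this user only via an extra field
jdoe:{PLAIN}secret:1000:1000::/home/jdoe::mail_plugins=$mail_plugins dict_rados
```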
The name of the dict driver is rados.
For example, configure it in 10-mail.conf as mail_attribute_dict. See Dovecot Dictionaries for details:
mail_attribute_dict = rados:oid=metadata:pool=mail_dictionary
The configuration parameters are:
- oid
  The RADOS object id to use.
- pool
  The RADOS pool to use for the dictionary objects. The pool name is optional and defaults to mail_dictionary. If the pool does not exist, it will be created.
All key/value pairs are stored as omap key/values of the object <oid>.
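Once configured, the dictionary can be exercised directly with doveadm. A sketch assuming the example configuration above; the key name somekey, the value, and the user jdoe are hypothetical:

```sh
# Write and read back a private dictionary entry for user jdoe
doveadm dict set -u jdoe "rados:oid=metadata:pool=mail_dictionary" priv/somekey somevalue
doveadm dict get -u jdoe "rados:oid=metadata:pool=mail_dictionary" priv/somekey
```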
The plugin uses the default way of configuring a Ceph cluster handle, as described in Step 2: Configuring a Cluster Handle:

- rados_conf_parse_env(): Evaluates the CEPH_ARGS environment variable.
- rados_conf_read_file(): Searches the default locations and uses the first file found. The locations are:
  - $CEPH_CONF (environment variable)
  - /etc/ceph/ceph.conf
  - ~/.ceph/config
  - ceph.conf (in the current working directory)
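For example, specific Ceph settings could be passed to the Dovecot processes via the environment. A hedged sketch; the configuration path and client id are assumptions, not requirements of the plugin:

```sh
# Point librados at a specific configuration file and client name
export CEPH_ARGS="--conf /etc/ceph/ceph.conf --id admin"
```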
The source directory src/dict-rados contains test applications named test-*. They use the configuration files located in the same directory.
The configuration assumes a locally running Ceph cluster without cephx, for example one created using vstart.sh as described in the Developer Guide (quick) or ceph/README.md:
../src/vstart.sh -X -n -l
Any other way to get a Ceph cluster is valid, too.
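A possible end-to-end sketch, assuming a Ceph source checkout built under ceph/build; the plugin checkout path and the make check target are assumptions, not taken from this page:

```sh
# Start a local test cluster without cephx authentication
cd ceph/build
../src/vstart.sh -X -n -l

# Run the dictionary tests from the plugin source tree
cd /path/to/dovecot-ceph-plugin/src/dict-rados
make check   # assumption: the tests are wired into the build system
```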