Merge git://git.kernel.org/pub/scm/linux/kernel/git/steve/linux-dm

* git://git.kernel.org/pub/scm/linux/kernel/git/steve/linux-dm:
  dm: raid fix device status indicator when array initializing
  dm log userspace: add log device dependency
  dm log userspace: fix comment hyphens
  dm: add thin provisioning target
  dm: add persistent data library
  dm: add bufio
  dm: export dm get md
  dm table: add immutable feature
  dm table: add always writeable feature
  dm table: add singleton feature
  dm kcopyd: add dm_kcopyd_zero to zero an area
  dm: remove superfluous smp_mb
  dm: use local printk ratelimit
  dm table: propagate non rotational flag
Linus Torvalds 2011-11-02 17:02:37 -07:00
commit 43672a0784
42 changed files with 12079 additions and 37 deletions

View File

@ -48,7 +48,7 @@ kernel and userspace, 'connector' is used as the interface for
communication.
There are currently two userspace log implementations that leverage this
framework - "clustered_disk" and "clustered_core". These implementations
framework - "clustered-disk" and "clustered-core". These implementations
provide a cluster-coherent log for shared-storage. Device-mapper mirroring
can be used in a shared-storage environment when the cluster log implementations
are employed.

View File

@ -0,0 +1,84 @@
Introduction
============
The more-sophisticated device-mapper targets require complex metadata
that is managed in kernel. In late 2010 we were seeing that various
different targets were rolling their own data structures, for example:
- Mikulas Patocka's multisnap implementation
- Heinz Mauelshagen's thin provisioning target
- Another btree-based caching target posted to dm-devel
- Another multi-snapshot target based on a design of Daniel Phillips
Maintaining these data structures takes a lot of work, so if possible
we'd like to reduce the number.
The persistent-data library is an attempt to provide a re-usable
framework for people who want to store metadata in device-mapper
targets. It's currently used by the thin-provisioning target and an
upcoming hierarchical storage target.
Overview
========
The main documentation is in the header files which can all be found
under drivers/md/persistent-data.
The block manager
-----------------
dm-block-manager.[hc]
This provides access to the data on disk in fixed-sized blocks. There
is a read/write locking interface to prevent concurrent accesses, and
to keep data that is being used in the cache.
Clients of persistent-data are unlikely to use this directly.
The transaction manager
-----------------------
dm-transaction-manager.[hc]
This restricts access to blocks and enforces copy-on-write semantics.
The only way you can get hold of a writable block through the
transaction manager is by shadowing an existing block (ie. doing
copy-on-write) or allocating a fresh one. Shadowing is elided within
the same transaction so performance is reasonable. The commit method
ensures that all data is flushed before it writes the superblock.
On power failure your metadata will be as it was when last committed.
The Space Maps
--------------
dm-space-map.h
dm-space-map-metadata.[hc]
dm-space-map-disk.[hc]
On-disk data structures that keep track of reference counts of blocks.
Also acts as the allocator of new blocks. Currently two
implementations: a simpler one for managing blocks on a different
device (eg. thinly-provisioned data blocks); and one for managing
the metadata space. The latter is complicated by the need to store
its own data within the space it's managing.
The data structures
-------------------
dm-btree.[hc]
dm-btree-remove.c
dm-btree-spine.c
dm-btree-internal.h
Currently there is only one data structure, a hierarchical btree.
There are plans to add more. For example, something with an
array-like interface would see a lot of use.
The btree is 'hierarchical' in that you can define it to be composed
of nested btrees, and take multiple keys. For example, the
thin-provisioning target uses a btree with two levels of nesting.
The first maps a device id to a mapping tree, and that in turn maps a
virtual block to a physical block.
Values stored in the btrees can have arbitrary size. Keys are always
64bits, although nesting allows you to use multiple keys.
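As an illustration of the two-level usage described above, a lookup in the
thin-provisioning mapping tree might look like the sketch below. The btree API
lives in dm-btree.h, which is not quoted in this patch, so the dm_btree_info
and dm_btree_lookup names and signatures here are assumptions rather than a
quotation of the interface.

#include "dm-btree.h"	/* sketch only; assumes the usual kernel headers */

/*
 * Look up the data block backing virtual block 'vblock' of thin device
 * 'dev_id' in a two-level btree (info->levels == 2).
 */
static int example_thin_lookup(struct dm_btree_info *info, dm_block_t root,
			       uint64_t dev_id, uint64_t vblock,
			       uint64_t *data_block)
{
	uint64_t keys[2] = { dev_id, vblock };
	__le64 value_le;
	int r;

	r = dm_btree_lookup(info, root, keys, &value_le);
	if (!r)
		*data_block = le64_to_cpu(value_le);

	return r;	/* e.g. -ENODATA if no mapping exists */
}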

View File

@ -0,0 +1,285 @@
Introduction
============
This document describes a collection of device-mapper targets that
between them implement thin-provisioning and snapshots.
The main highlight of this implementation, compared to the previous
implementation of snapshots, is that it allows many virtual devices to
be stored on the same data volume. This simplifies administration and
allows the sharing of data between volumes, thus reducing disk usage.
Another significant feature is support for an arbitrary depth of
recursive snapshots (snapshots of snapshots of snapshots ...). The
previous implementation of snapshots did this by chaining together
lookup tables, and so performance was O(depth). This new
implementation uses a single data structure to avoid this degradation
with depth. Fragmentation may still be an issue, however, in some
scenarios.
Metadata is stored on a separate device from data, giving the
administrator some freedom, for example to:
- Improve metadata resilience by storing metadata on a mirrored volume
but data on a non-mirrored one.
- Improve performance by storing the metadata on SSD.
Status
======
These targets are very much still in the EXPERIMENTAL state. Please
do not yet rely on them in production. But do experiment and offer us
feedback. Different use cases will have different performance
characteristics, for example due to fragmentation of the data volume.
If you find this software is not performing as expected please mail
dm-devel@redhat.com with details and we'll try our best to improve
things for you.
Userspace tools for checking and repairing the metadata are under
development.
Cookbook
========
This section describes some quick recipes for using thin provisioning.
They use the dmsetup program to control the device-mapper driver
directly. End users will be advised to use a higher-level volume
manager such as LVM2 once support has been added.
Pool device
-----------
The pool device ties together the metadata volume and the data volume.
It maps I/O linearly to the data volume and updates the metadata via
two mechanisms:
- Function calls from the thin targets
- Device-mapper 'messages' from userspace which control the creation of new
virtual devices amongst other things.
Setting up a fresh pool device
------------------------------
Setting up a pool device requires a valid metadata device, and a
data device. If you do not have an existing metadata device you can
make one by zeroing the first 4k to indicate empty metadata.
dd if=/dev/zero of=$metadata_dev bs=4096 count=1
The amount of metadata you need will vary according to how many blocks
are shared between thin devices (i.e. through snapshots). If you have
less sharing than average you'll need a larger-than-average metadata device.
As a guide, we suggest you calculate the number of bytes to use in the
metadata device as 48 * $data_dev_size / $data_block_size but round it up
to 2MB if the answer is smaller. The largest size supported is 16GB.
If you're creating large numbers of snapshots which are recording large
amounts of change, you may find you need to increase this.
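As a sketch (ordinary userspace C, not something shipped with these tools),
the rule of thumb above with the 2MB floor and the 16GB maximum supported
size applied:

#include <stdint.h>

/* Suggested metadata device size: 48 bytes per data block, clamped to [2MB, 16GB]. */
static uint64_t suggested_metadata_bytes(uint64_t data_dev_bytes,
					 uint64_t data_block_bytes)
{
	uint64_t bytes = 48 * data_dev_bytes / data_block_bytes;

	if (bytes < 2ULL * 1024 * 1024)
		bytes = 2ULL * 1024 * 1024;
	if (bytes > 16ULL * 1024 * 1024 * 1024)
		bytes = 16ULL * 1024 * 1024 * 1024;

	return bytes;
}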
Reloading a pool table
----------------------
You may reload a pool's table, indeed this is how the pool is resized
if it runs out of space. (N.B. While specifying a different metadata
device when reloading is not forbidden at the moment, things will go
wrong if it does not route I/O to exactly the same on-disk location as
previously.)
Using an existing pool device
-----------------------------
dmsetup create pool \
--table "0 20971520 thin-pool $metadata_dev $data_dev \
$data_block_size $low_water_mark"
$data_block_size gives the smallest unit of disk space that can be
allocated at a time expressed in units of 512-byte sectors. People
primarily interested in thin provisioning may want to use a value such
as 1024 (512KB). People doing lots of snapshotting may want a smaller value
such as 128 (64KB). If you are not zeroing newly-allocated data,
a larger $data_block_size in the region of 256000 (128MB) is suggested.
$data_block_size must be the same for the lifetime of the
metadata device.
$low_water_mark is expressed in blocks of size $data_block_size. If
free space on the data device drops below this level then a dm event
will be triggered which a userspace daemon should catch allowing it to
extend the pool device. Only one such event will be sent.
Resuming a device with a new table itself triggers an event so the
userspace daemon can use this to detect a situation where a new table
already exceeds the threshold.
Thin provisioning
-----------------
i) Creating a new thinly-provisioned volume.
To create a new thinly-provisioned volume you must send a message to an
active pool device, /dev/mapper/pool in this example.
dmsetup message /dev/mapper/pool 0 "create_thin 0"
Here '0' is an identifier for the volume, a 24-bit number. It's up
to the caller to allocate and manage these identifiers. If the
identifier is already in use, the message will fail with -EEXIST.
ii) Using a thinly-provisioned volume.
Thinly-provisioned volumes are activated using the 'thin' target:
dmsetup create thin --table "0 2097152 thin /dev/mapper/pool 0"
The last parameter is the identifier for the thinp device.
Internal snapshots
------------------
i) Creating an internal snapshot.
Snapshots are created with another message to the pool.
N.B. If the origin device that you wish to snapshot is active, you
must suspend it before creating the snapshot to avoid corruption.
This is NOT enforced at the moment, so please be careful!
dmsetup suspend /dev/mapper/thin
dmsetup message /dev/mapper/pool 0 "create_snap 1 0"
dmsetup resume /dev/mapper/thin
Here '1' is the identifier for the volume, a 24-bit number. '0' is the
identifier for the origin device.
ii) Using an internal snapshot.
Once created, the user doesn't have to worry about any connection
between the origin and the snapshot. Indeed the snapshot is no
different from any other thinly-provisioned device and can be
snapshotted itself via the same method. It's perfectly legal to
have only one of them active, and there's no ordering requirement on
activating or removing them both. (This differs from conventional
device-mapper snapshots.)
Activate it exactly the same way as any other thinly-provisioned volume:
dmsetup create snap --table "0 2097152 thin /dev/mapper/pool 1"
Deactivation
------------
All devices using a pool must be deactivated before the pool itself
can be.
dmsetup remove thin
dmsetup remove snap
dmsetup remove pool
Reference
=========
'thin-pool' target
------------------
i) Constructor
thin-pool <metadata dev> <data dev> <data block size (sectors)> \
<low water mark (blocks)> [<number of feature args> [<arg>]*]
Optional feature arguments:
- 'skip_block_zeroing': skips the zeroing of newly-provisioned blocks.
Data block size must be between 64KB (128 sectors) and 1GB
(2097152 sectors) inclusive.
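The documented bounds, expressed as a quick check over the sector count (a
sketch only; the target may impose further constraints not listed here):

/* 64KB = 128 sectors and 1GB = 2097152 sectors, per the limits stated above. */
static int data_block_size_in_range(unsigned long sectors)
{
	return sectors >= 128 && sectors <= 2097152;
}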
ii) Status
<transaction id> <used metadata blocks>/<total metadata blocks>
<used data blocks>/<total data blocks> <held metadata root>
transaction id:
A 64-bit number used by userspace to help synchronise with metadata
from volume managers.
used data blocks / total data blocks
If the number of free blocks drops below the pool's low water mark a
dm event will be sent to userspace. This event is edge-triggered and
it will occur only once after each resume so volume manager writers
should register for the event and then check the target's status.
held metadata root:
The location, in sectors, of the metadata root that has been
'held' for userspace read access. '-' indicates there is no
held root. This feature is not yet implemented so '-' is
always returned.
iii) Messages
create_thin <dev id>
Create a new thinly-provisioned device.
<dev id> is an arbitrary unique 24-bit identifier chosen by
the caller.
create_snap <dev id> <origin id>
Create a new snapshot of another thinly-provisioned device.
<dev id> is an arbitrary unique 24-bit identifier chosen by
the caller.
<origin id> is the identifier of the thinly-provisioned device
of which the new device will be a snapshot.
delete <dev id>
Deletes a thin device. Irreversible.
trim <dev id> <new size in sectors>
Delete mappings from the end of a thin device. Irreversible.
You might want to use this if you're reducing the size of
your thinly-provisioned device. In many cases, due to the
sharing of blocks between devices, it is not possible to
determine in advance how much space 'trim' will release. (In
future a userspace tool might be able to perform this
calculation.)
set_transaction_id <current id> <new id>
Userland volume managers, such as LVM, need a way to
synchronise their external metadata with the internal metadata of the
pool target. The thin-pool target offers to store an
arbitrary 64-bit transaction id and return it on the target's
status line. To avoid races you must provide what you think
the current transaction id is when you change it with this
compare-and-swap message.
'thin' target
-------------
i) Constructor
thin <pool dev> <dev id>
pool dev:
the thin-pool device, e.g. /dev/mapper/my_pool or 253:0
dev id:
the internal device identifier of the device to be
activated.
The pool doesn't store any size against the thin devices. If you
load a thin target that is smaller than you've been using previously,
then you'll have no access to blocks mapped beyond the end. If you
load a target that is bigger than before, then extra blocks will be
provisioned as and when needed.
If you wish to reduce the size of your thin device and potentially
regain some space then send the 'trim' message to the pool.
ii) Status
<nr mapped sectors> <highest mapped sector>

View File

@ -208,6 +208,16 @@ config DM_DEBUG
If unsure, say N.
config DM_BUFIO
tristate
depends on BLK_DEV_DM && EXPERIMENTAL
---help---
This interface allows you to do buffered I/O on a device and acts
as a cache, holding recently-read blocks in memory and performing
delayed writes.
source "drivers/md/persistent-data/Kconfig"
config DM_CRYPT
tristate "Crypt target support"
depends on BLK_DEV_DM
@ -233,6 +243,32 @@ config DM_SNAPSHOT
---help---
Allow volume managers to take writable snapshots of a device.
config DM_THIN_PROVISIONING
tristate "Thin provisioning target (EXPERIMENTAL)"
depends on BLK_DEV_DM && EXPERIMENTAL
select DM_PERSISTENT_DATA
---help---
Provides thin provisioning and snapshots that share a data store.
config DM_DEBUG_BLOCK_STACK_TRACING
boolean "Keep stack trace of thin provisioning block lock holders"
depends on STACKTRACE_SUPPORT && DM_THIN_PROVISIONING
select STACKTRACE
---help---
Enable this for messages that may help debug problems with the
block manager locking used by thin provisioning.
If unsure, say N.
config DM_DEBUG_SPACE_MAPS
boolean "Extra validation for thin provisioning space maps"
depends on DM_THIN_PROVISIONING
---help---
Enable this for messages that may help debug problems with the
space maps used by thin provisioning.
If unsure, say N.
config DM_MIRROR
tristate "Mirror target"
depends on BLK_DEV_DM

View File

@ -10,6 +10,7 @@ dm-snapshot-y += dm-snap.o dm-exception-store.o dm-snap-transient.o \
dm-mirror-y += dm-raid1.o
dm-log-userspace-y \
+= dm-log-userspace-base.o dm-log-userspace-transfer.o
dm-thin-pool-y += dm-thin.o dm-thin-metadata.o
md-mod-y += md.o bitmap.o
raid456-y += raid5.o
@ -27,6 +28,7 @@ obj-$(CONFIG_MD_MULTIPATH) += multipath.o
obj-$(CONFIG_MD_FAULTY) += faulty.o
obj-$(CONFIG_BLK_DEV_MD) += md-mod.o
obj-$(CONFIG_BLK_DEV_DM) += dm-mod.o
obj-$(CONFIG_DM_BUFIO) += dm-bufio.o
obj-$(CONFIG_DM_CRYPT) += dm-crypt.o
obj-$(CONFIG_DM_DELAY) += dm-delay.o
obj-$(CONFIG_DM_FLAKEY) += dm-flakey.o
@ -34,10 +36,12 @@ obj-$(CONFIG_DM_MULTIPATH) += dm-multipath.o dm-round-robin.o
obj-$(CONFIG_DM_MULTIPATH_QL) += dm-queue-length.o
obj-$(CONFIG_DM_MULTIPATH_ST) += dm-service-time.o
obj-$(CONFIG_DM_SNAPSHOT) += dm-snapshot.o
obj-$(CONFIG_DM_PERSISTENT_DATA) += persistent-data/
obj-$(CONFIG_DM_MIRROR) += dm-mirror.o dm-log.o dm-region-hash.o
obj-$(CONFIG_DM_LOG_USERSPACE) += dm-log-userspace.o
obj-$(CONFIG_DM_ZERO) += dm-zero.o
obj-$(CONFIG_DM_RAID) += dm-raid.o
obj-$(CONFIG_DM_THIN_PROVISIONING) += dm-thin-pool.o
ifeq ($(CONFIG_DM_UEVENT),y)
dm-mod-objs += dm-uevent.o

drivers/md/dm-bufio.c (new file, 1699 lines)

File diff suppressed because it is too large.

drivers/md/dm-bufio.h (new file, 112 lines)
View File

@ -0,0 +1,112 @@
/*
* Copyright (C) 2009-2011 Red Hat, Inc.
*
* Author: Mikulas Patocka <mpatocka@redhat.com>
*
* This file is released under the GPL.
*/
#ifndef DM_BUFIO_H
#define DM_BUFIO_H
#include <linux/blkdev.h>
#include <linux/types.h>
/*----------------------------------------------------------------*/
struct dm_bufio_client;
struct dm_buffer;
/*
* Create a buffered IO cache on a given device
*/
struct dm_bufio_client *
dm_bufio_client_create(struct block_device *bdev, unsigned block_size,
unsigned reserved_buffers, unsigned aux_size,
void (*alloc_callback)(struct dm_buffer *),
void (*write_callback)(struct dm_buffer *));
/*
* Release a buffered IO cache.
*/
void dm_bufio_client_destroy(struct dm_bufio_client *c);
/*
* WARNING: to avoid deadlocks, these conditions are observed:
*
* - At most one thread can hold at most "reserved_buffers" simultaneously.
* - Each other thread can hold at most one buffer.
* - Threads which call only dm_bufio_get can hold an unlimited number of
* buffers.
*/
/*
* Read a given block from disk. Returns pointer to data. Returns a
* pointer to dm_buffer that can be used to release the buffer or to make
* it dirty.
*/
void *dm_bufio_read(struct dm_bufio_client *c, sector_t block,
struct dm_buffer **bp);
/*
* Like dm_bufio_read, but return buffer from cache, don't read
* it. If the buffer is not in the cache, return NULL.
*/
void *dm_bufio_get(struct dm_bufio_client *c, sector_t block,
struct dm_buffer **bp);
/*
* Like dm_bufio_read, but don't read anything from the disk. It is
* expected that the caller initializes the buffer and marks it dirty.
*/
void *dm_bufio_new(struct dm_bufio_client *c, sector_t block,
struct dm_buffer **bp);
/*
* Release a reference obtained with dm_bufio_{read,get,new}. The data
* pointer and dm_buffer pointer are no longer valid after this call.
*/
void dm_bufio_release(struct dm_buffer *b);
/*
* Mark a buffer dirty. It should be called after the buffer is modified.
*
* In case of memory pressure, the buffer may be written after
* dm_bufio_mark_buffer_dirty, but before dm_bufio_write_dirty_buffers. So
* dm_bufio_write_dirty_buffers guarantees that the buffer is on-disk but
* the actual writing may occur earlier.
*/
void dm_bufio_mark_buffer_dirty(struct dm_buffer *b);
/*
* Initiate writing of dirty buffers, without waiting for completion.
*/
void dm_bufio_write_dirty_buffers_async(struct dm_bufio_client *c);
/*
* Write all dirty buffers. Guarantees that all dirty buffers created prior
* to this call are on disk when this call exits.
*/
int dm_bufio_write_dirty_buffers(struct dm_bufio_client *c);
/*
* Send an empty write barrier to the device to flush hardware disk cache.
*/
int dm_bufio_issue_flush(struct dm_bufio_client *c);
/*
* Like dm_bufio_release but also move the buffer to the new
* block. dm_bufio_write_dirty_buffers is needed to commit the new block.
*/
void dm_bufio_release_move(struct dm_buffer *b, sector_t new_block);
unsigned dm_bufio_get_block_size(struct dm_bufio_client *c);
sector_t dm_bufio_get_device_size(struct dm_bufio_client *c);
sector_t dm_bufio_get_block_number(struct dm_buffer *b);
void *dm_bufio_get_block_data(struct dm_buffer *b);
void *dm_bufio_get_aux_data(struct dm_buffer *b);
struct dm_bufio_client *dm_bufio_get_client(struct dm_buffer *b);
/*----------------------------------------------------------------*/
#endif
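To make the lifecycle above concrete, here is a minimal, hypothetical client
of this interface that reads block 0, modifies it and waits for it to reach
disk. It assumes the create/read calls report failure via ERR_PTR(), as the
block manager code later in this series does when wrapping them.

#include "dm-bufio.h"

#include <linux/err.h>
#include <linux/string.h>

static int example_touch_first_block(struct block_device *bdev)
{
	struct dm_bufio_client *c;
	struct dm_buffer *b;
	void *data;
	int r;

	/* 4096-byte blocks, one reserved buffer, no aux data or callbacks */
	c = dm_bufio_client_create(bdev, 4096, 1, 0, NULL, NULL);
	if (IS_ERR(c))
		return PTR_ERR(c);

	data = dm_bufio_read(c, 0, &b);		/* read block 0 and take a reference */
	if (IS_ERR(data)) {
		r = PTR_ERR(data);
		goto out;
	}

	memset(data, 0, 16);			/* modify the cached copy */
	dm_bufio_mark_buffer_dirty(b);		/* queue it for delayed writeback */
	dm_bufio_release(b);			/* drop the reference */

	r = dm_bufio_write_dirty_buffers(c);	/* wait until the write is on disk */
out:
	dm_bufio_client_destroy(c);
	return r;
}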

View File

@ -1215,6 +1215,7 @@ static int table_load(struct dm_ioctl *param, size_t param_size)
struct hash_cell *hc;
struct dm_table *t;
struct mapped_device *md;
struct target_type *immutable_target_type;
md = find_device(param);
if (!md)
@ -1230,6 +1231,16 @@ static int table_load(struct dm_ioctl *param, size_t param_size)
goto out;
}
immutable_target_type = dm_get_immutable_target_type(md);
if (immutable_target_type &&
(immutable_target_type != dm_table_get_immutable_target_type(t))) {
DMWARN("can't replace immutable target type %s",
immutable_target_type->name);
dm_table_destroy(t);
r = -EINVAL;
goto out;
}
/* Protect md->type and md->queue against concurrent table loads. */
dm_lock_md_type(md);
if (dm_get_md_type(md) == DM_TYPE_NONE)

View File

@ -66,6 +66,8 @@ struct dm_kcopyd_client {
struct list_head pages_jobs;
};
static struct page_list zero_page_list;
static void wake(struct dm_kcopyd_client *kc)
{
queue_work(kc->kcopyd_wq, &kc->kcopyd_work);
@ -254,6 +256,9 @@ int __init dm_kcopyd_init(void)
if (!_job_cache)
return -ENOMEM;
zero_page_list.next = &zero_page_list;
zero_page_list.page = ZERO_PAGE(0);
return 0;
}
@ -322,7 +327,7 @@ static int run_complete_job(struct kcopyd_job *job)
dm_kcopyd_notify_fn fn = job->fn;
struct dm_kcopyd_client *kc = job->kc;
if (job->pages)
if (job->pages && job->pages != &zero_page_list)
kcopyd_put_pages(kc, job->pages);
/*
* If this is the master job, the sub jobs have already
@ -484,6 +489,8 @@ static void dispatch_job(struct kcopyd_job *job)
atomic_inc(&kc->nr_jobs);
if (unlikely(!job->source.count))
push(&kc->complete_jobs, job);
else if (job->pages == &zero_page_list)
push(&kc->io_jobs, job);
else
push(&kc->pages_jobs, job);
wake(kc);
@ -592,14 +599,20 @@ int dm_kcopyd_copy(struct dm_kcopyd_client *kc, struct dm_io_region *from,
job->flags = flags;
job->read_err = 0;
job->write_err = 0;
job->rw = READ;
job->source = *from;
job->num_dests = num_dests;
memcpy(&job->dests, dests, sizeof(*dests) * num_dests);
job->pages = NULL;
if (from) {
job->source = *from;
job->pages = NULL;
job->rw = READ;
} else {
memset(&job->source, 0, sizeof job->source);
job->source.count = job->dests[0].count;
job->pages = &zero_page_list;
job->rw = WRITE;
}
job->fn = fn;
job->context = context;
@ -617,6 +630,14 @@ int dm_kcopyd_copy(struct dm_kcopyd_client *kc, struct dm_io_region *from,
}
EXPORT_SYMBOL(dm_kcopyd_copy);
int dm_kcopyd_zero(struct dm_kcopyd_client *kc,
unsigned num_dests, struct dm_io_region *dests,
unsigned flags, dm_kcopyd_notify_fn fn, void *context)
{
return dm_kcopyd_copy(kc, NULL, num_dests, dests, flags, fn, context);
}
EXPORT_SYMBOL(dm_kcopyd_zero);
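/*
 * Illustration only (not part of this patch): a caller with an existing
 * kcopyd client could zero a region and wait for completion roughly like
 * this. The completion plumbing is an assumption and <linux/completion.h>
 * and <linux/dm-io.h> are assumed available; only the dm_kcopyd_zero()
 * signature comes from the code above.
 */
static void example_zero_done(int read_err, unsigned long write_err, void *context)
{
	complete(context);	/* kcopyd calls this when the zeroing I/O finishes */
}

static int example_zero_area(struct dm_kcopyd_client *kc,
			     struct block_device *bdev,
			     sector_t sector, sector_t count)
{
	struct dm_io_region dest = {
		.bdev = bdev,
		.sector = sector,
		.count = count,
	};
	DECLARE_COMPLETION_ONSTACK(done);
	int r;

	r = dm_kcopyd_zero(kc, 1, &dest, 0, example_zero_done, &done);
	if (r)
		return r;

	wait_for_completion(&done);
	return 0;
}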
void *dm_kcopyd_prepare_callback(struct dm_kcopyd_client *kc,
dm_kcopyd_notify_fn fn, void *context)
{

View File

@ -30,6 +30,7 @@ struct flush_entry {
struct log_c {
struct dm_target *ti;
struct dm_dev *log_dev;
uint32_t region_size;
region_t region_count;
uint64_t luid;
@ -146,7 +147,7 @@ static int build_constructor_string(struct dm_target *ti,
* <UUID> <other args>
* Where 'other args' is the userspace implementation specific log
* arguments. An example might be:
* <UUID> clustered_disk <arg count> <log dev> <region_size> [[no]sync]
* <UUID> clustered-disk <arg count> <log dev> <region_size> [[no]sync]
*
* So, this module will strip off the <UUID> for identification purposes
* when communicating with userspace about a log; but will pass on everything
@ -161,13 +162,15 @@ static int userspace_ctr(struct dm_dirty_log *log, struct dm_target *ti,
struct log_c *lc = NULL;
uint64_t rdata;
size_t rdata_size = sizeof(rdata);
char *devices_rdata = NULL;
size_t devices_rdata_size = DM_NAME_LEN;
if (argc < 3) {
DMWARN("Too few arguments to userspace dirty log");
return -EINVAL;
}
lc = kmalloc(sizeof(*lc), GFP_KERNEL);
lc = kzalloc(sizeof(*lc), GFP_KERNEL);
if (!lc) {
DMWARN("Unable to allocate userspace log context.");
return -ENOMEM;
@ -195,9 +198,19 @@ static int userspace_ctr(struct dm_dirty_log *log, struct dm_target *ti,
return str_size;
}
/* Send table string */
devices_rdata = kzalloc(devices_rdata_size, GFP_KERNEL);
if (!devices_rdata) {
DMERR("Failed to allocate memory for device information");
r = -ENOMEM;
goto out;
}
/*
* Send table string and get back any opened device.
*/
r = dm_consult_userspace(lc->uuid, lc->luid, DM_ULOG_CTR,
ctr_str, str_size, NULL, NULL);
ctr_str, str_size,
devices_rdata, &devices_rdata_size);
if (r < 0) {
if (r == -ESRCH)
@ -220,7 +233,20 @@ static int userspace_ctr(struct dm_dirty_log *log, struct dm_target *ti,
lc->region_size = (uint32_t)rdata;
lc->region_count = dm_sector_div_up(ti->len, lc->region_size);
if (devices_rdata_size) {
if (devices_rdata[devices_rdata_size - 1] != '\0') {
DMERR("DM_ULOG_CTR device return string not properly terminated");
r = -EINVAL;
goto out;
}
r = dm_get_device(ti, devices_rdata,
dm_table_get_mode(ti->table), &lc->log_dev);
if (r)
DMERR("Failed to register %s with device-mapper",
devices_rdata);
}
out:
kfree(devices_rdata);
if (r) {
kfree(lc);
kfree(ctr_str);
@ -241,6 +267,9 @@ static void userspace_dtr(struct dm_dirty_log *log)
NULL, 0,
NULL, NULL);
if (lc->log_dev)
dm_put_device(lc->ti, lc->log_dev);
kfree(lc->usr_argv_str);
kfree(lc);

View File

@ -1017,30 +1017,56 @@ static int raid_status(struct dm_target *ti, status_type_t type,
struct raid_set *rs = ti->private;
unsigned raid_param_cnt = 1; /* at least 1 for chunksize */
unsigned sz = 0;
int i;
int i, array_in_sync = 0;
sector_t sync;
switch (type) {
case STATUSTYPE_INFO:
DMEMIT("%s %d ", rs->raid_type->name, rs->md.raid_disks);
for (i = 0; i < rs->md.raid_disks; i++) {
if (test_bit(Faulty, &rs->dev[i].rdev.flags))
DMEMIT("D");
else if (test_bit(In_sync, &rs->dev[i].rdev.flags))
DMEMIT("A");
else
DMEMIT("a");
}
if (test_bit(MD_RECOVERY_RUNNING, &rs->md.recovery))
sync = rs->md.curr_resync_completed;
else
sync = rs->md.recovery_cp;
if (sync > rs->md.resync_max_sectors)
if (sync >= rs->md.resync_max_sectors) {
array_in_sync = 1;
sync = rs->md.resync_max_sectors;
} else {
/*
* The array may be doing an initial sync, or it may
* be rebuilding individual components. If all the
* devices are In_sync, then it is the array that is
* being initialized.
*/
for (i = 0; i < rs->md.raid_disks; i++)
if (!test_bit(In_sync, &rs->dev[i].rdev.flags))
array_in_sync = 1;
}
/*
* Status characters:
* 'D' = Dead/Failed device
* 'a' = Alive but not in-sync
* 'A' = Alive and in-sync
*/
for (i = 0; i < rs->md.raid_disks; i++) {
if (test_bit(Faulty, &rs->dev[i].rdev.flags))
DMEMIT("D");
else if (!array_in_sync ||
!test_bit(In_sync, &rs->dev[i].rdev.flags))
DMEMIT("a");
else
DMEMIT("A");
}
/*
* In-sync ratio:
* The in-sync ratio shows the progress of:
* - Initializing the array
* - Rebuilding a subset of devices of the array
* The user can distinguish between the two by referring
* to the status characters.
*/
DMEMIT(" %llu/%llu",
(unsigned long long) sync,
(unsigned long long) rs->md.resync_max_sectors);

View File

@ -54,7 +54,9 @@ struct dm_table {
sector_t *highs;
struct dm_target *targets;
struct target_type *immutable_target_type;
unsigned integrity_supported:1;
unsigned singleton:1;
/*
* Indicates the rw permissions for the new logical
@ -740,6 +742,12 @@ int dm_table_add_target(struct dm_table *t, const char *type,
char **argv;
struct dm_target *tgt;
if (t->singleton) {
DMERR("%s: target type %s must appear alone in table",
dm_device_name(t->md), t->targets->type->name);
return -EINVAL;
}
if ((r = check_space(t)))
return r;
@ -758,6 +766,36 @@ int dm_table_add_target(struct dm_table *t, const char *type,
return -EINVAL;
}
if (dm_target_needs_singleton(tgt->type)) {
if (t->num_targets) {
DMERR("%s: target type %s must appear alone in table",
dm_device_name(t->md), type);
return -EINVAL;
}
t->singleton = 1;
}
if (dm_target_always_writeable(tgt->type) && !(t->mode & FMODE_WRITE)) {
DMERR("%s: target type %s may not be included in read-only tables",
dm_device_name(t->md), type);
return -EINVAL;
}
if (t->immutable_target_type) {
if (t->immutable_target_type != tgt->type) {
DMERR("%s: immutable target type %s cannot be mixed with other target types",
dm_device_name(t->md), t->immutable_target_type->name);
return -EINVAL;
}
} else if (dm_target_is_immutable(tgt->type)) {
if (t->num_targets) {
DMERR("%s: immutable target type %s cannot be mixed with other target types",
dm_device_name(t->md), tgt->type->name);
return -EINVAL;
}
t->immutable_target_type = tgt->type;
}
tgt->table = t;
tgt->begin = start;
tgt->len = len;
@ -915,6 +953,11 @@ unsigned dm_table_get_type(struct dm_table *t)
return t->type;
}
struct target_type *dm_table_get_immutable_target_type(struct dm_table *t)
{
return t->immutable_target_type;
}
bool dm_table_request_based(struct dm_table *t)
{
return dm_table_get_type(t) == DM_TYPE_REQUEST_BASED;
@ -1299,6 +1342,31 @@ static bool dm_table_discard_zeroes_data(struct dm_table *t)
return 1;
}
static int device_is_nonrot(struct dm_target *ti, struct dm_dev *dev,
sector_t start, sector_t len, void *data)
{
struct request_queue *q = bdev_get_queue(dev->bdev);
return q && blk_queue_nonrot(q);
}
static bool dm_table_is_nonrot(struct dm_table *t)
{
struct dm_target *ti;
unsigned i = 0;
/* Ensure that all underlying devices are non-rotational. */
while (i < dm_table_get_num_targets(t)) {
ti = dm_table_get_target(t, i++);
if (!ti->type->iterate_devices ||
!ti->type->iterate_devices(ti, device_is_nonrot, NULL))
return 0;
}
return 1;
}
void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
struct queue_limits *limits)
{
@ -1324,6 +1392,11 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
if (!dm_table_discard_zeroes_data(t))
q->limits.discard_zeroes_data = 0;
if (dm_table_is_nonrot(t))
queue_flag_set_unlocked(QUEUE_FLAG_NONROT, q);
else
queue_flag_clear_unlocked(QUEUE_FLAG_NONROT, q);
dm_table_set_integrity(t);
/*

File diff suppressed because it is too large.

View File

@ -0,0 +1,156 @@
/*
* Copyright (C) 2010-2011 Red Hat, Inc.
*
* This file is released under the GPL.
*/
#ifndef DM_THIN_METADATA_H
#define DM_THIN_METADATA_H
#include "persistent-data/dm-block-manager.h"
#define THIN_METADATA_BLOCK_SIZE 4096
/*----------------------------------------------------------------*/
struct dm_pool_metadata;
struct dm_thin_device;
/*
* Device identifier
*/
typedef uint64_t dm_thin_id;
/*
* Reopens or creates a new, empty metadata volume.
*/
struct dm_pool_metadata *dm_pool_metadata_open(struct block_device *bdev,
sector_t data_block_size);
int dm_pool_metadata_close(struct dm_pool_metadata *pmd);
/*
* Compat feature flags. Any incompat flags beyond the ones
* specified below will prevent use of the thin metadata.
*/
#define THIN_FEATURE_COMPAT_SUPP 0UL
#define THIN_FEATURE_COMPAT_RO_SUPP 0UL
#define THIN_FEATURE_INCOMPAT_SUPP 0UL
/*
* Device creation/deletion.
*/
int dm_pool_create_thin(struct dm_pool_metadata *pmd, dm_thin_id dev);
/*
* An internal snapshot.
*
* You can only snapshot a quiesced origin i.e. one that is either
* suspended or not instanced at all.
*/
int dm_pool_create_snap(struct dm_pool_metadata *pmd, dm_thin_id dev,
dm_thin_id origin);
/*
* Deletes a virtual device from the metadata. It _is_ safe to call this
* when that device is open. Operations on that device will just start
* failing. You still need to call close() on the device.
*/
int dm_pool_delete_thin_device(struct dm_pool_metadata *pmd,
dm_thin_id dev);
/*
* Commits _all_ metadata changes: device creation, deletion, mapping
* updates.
*/
int dm_pool_commit_metadata(struct dm_pool_metadata *pmd);
/*
* Set/get userspace transaction id.
*/
int dm_pool_set_metadata_transaction_id(struct dm_pool_metadata *pmd,
uint64_t current_id,
uint64_t new_id);
int dm_pool_get_metadata_transaction_id(struct dm_pool_metadata *pmd,
uint64_t *result);
/*
* Hold/get root for userspace transaction.
*/
int dm_pool_hold_metadata_root(struct dm_pool_metadata *pmd);
int dm_pool_get_held_metadata_root(struct dm_pool_metadata *pmd,
dm_block_t *result);
/*
* Actions on a single virtual device.
*/
/*
* Opening the same device more than once will fail with -EBUSY.
*/
int dm_pool_open_thin_device(struct dm_pool_metadata *pmd, dm_thin_id dev,
struct dm_thin_device **td);
int dm_pool_close_thin_device(struct dm_thin_device *td);
dm_thin_id dm_thin_dev_id(struct dm_thin_device *td);
struct dm_thin_lookup_result {
dm_block_t block;
int shared;
};
/*
* Returns:
* -EWOULDBLOCK iff @can_block is set and would block.
* -ENODATA iff that mapping is not present.
* 0 success
*/
int dm_thin_find_block(struct dm_thin_device *td, dm_block_t block,
int can_block, struct dm_thin_lookup_result *result);
/*
* Obtain an unused block.
*/
int dm_pool_alloc_data_block(struct dm_pool_metadata *pmd, dm_block_t *result);
/*
* Insert or remove block.
*/
int dm_thin_insert_block(struct dm_thin_device *td, dm_block_t block,
dm_block_t data_block);
int dm_thin_remove_block(struct dm_thin_device *td, dm_block_t block);
/*
* Queries.
*/
int dm_thin_get_highest_mapped_block(struct dm_thin_device *td,
dm_block_t *highest_mapped);
int dm_thin_get_mapped_count(struct dm_thin_device *td, dm_block_t *result);
int dm_pool_get_free_block_count(struct dm_pool_metadata *pmd,
dm_block_t *result);
int dm_pool_get_free_metadata_block_count(struct dm_pool_metadata *pmd,
dm_block_t *result);
int dm_pool_get_metadata_dev_size(struct dm_pool_metadata *pmd,
dm_block_t *result);
int dm_pool_get_data_block_size(struct dm_pool_metadata *pmd, sector_t *result);
int dm_pool_get_data_dev_size(struct dm_pool_metadata *pmd, dm_block_t *result);
/*
* Returns -ENOSPC if the new size is too small and already allocated
* blocks would be lost.
*/
int dm_pool_resize_data_dev(struct dm_pool_metadata *pmd, dm_block_t new_size);
/*----------------------------------------------------------------*/
#endif
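A minimal sketch of how the pool metadata interface above fits together: open
(or create) a metadata volume, create a thin device and a snapshot of it,
commit, and close. Error handling is abbreviated, the 128-sector data block
size is an arbitrary choice, and the ERR_PTR() convention for
dm_pool_metadata_open() is an assumption.

#include "dm-thin-metadata.h"

#include <linux/err.h>

static int example_create_thin_and_snap(struct block_device *metadata_bdev)
{
	struct dm_pool_metadata *pmd;
	int r;

	pmd = dm_pool_metadata_open(metadata_bdev, 128);
	if (IS_ERR(pmd))
		return PTR_ERR(pmd);

	r = dm_pool_create_thin(pmd, 0);		/* thin device with id 0 */
	if (!r)
		r = dm_pool_create_snap(pmd, 1, 0);	/* device 1 snapshots device 0 */
	if (!r)
		r = dm_pool_commit_metadata(pmd);	/* make both changes persistent */

	dm_pool_metadata_close(pmd);
	return r;
}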

drivers/md/dm-thin.c (new file, 2428 lines)

File diff suppressed because it is too large.

View File

@ -25,6 +25,16 @@
#define DM_MSG_PREFIX "core"
#ifdef CONFIG_PRINTK
/*
* ratelimit state to be used in DMXXX_LIMIT().
*/
DEFINE_RATELIMIT_STATE(dm_ratelimit_state,
DEFAULT_RATELIMIT_INTERVAL,
DEFAULT_RATELIMIT_BURST);
EXPORT_SYMBOL(dm_ratelimit_state);
#endif
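/*
 * Not part of this patch: dm_ratelimit_state is consumed by DMERR_LIMIT()
 * and the other DMXXX_LIMIT() wrappers in <linux/device-mapper.h>, so a
 * target that may log from a hot path can write, for example:
 *
 *	DMERR_LIMIT("%s: metadata operation failed: %d", __func__, r);
 */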
/*
* Cookies are numeric values sent with CHANGE and REMOVE
* uevents while resuming, removing or renaming the device.
@ -130,6 +140,8 @@ struct mapped_device {
/* Protect queue and type against concurrent access. */
struct mutex type_lock;
struct target_type *immutable_target_type;
struct gendisk *disk;
char name[16];
@ -2086,6 +2098,8 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t,
write_lock_irqsave(&md->map_lock, flags);
old_map = md->map;
md->map = t;
md->immutable_target_type = dm_table_get_immutable_target_type(t);
dm_table_set_restrictions(t, q, limits);
if (merge_is_optional)
set_bit(DMF_MERGE_IS_OPTIONAL, &md->flags);
@ -2156,6 +2170,11 @@ unsigned dm_get_md_type(struct mapped_device *md)
return md->type;
}
struct target_type *dm_get_immutable_target_type(struct mapped_device *md)
{
return md->immutable_target_type;
}
/*
* Fully initialize a request-based queue (->elevator, ->request_fn, etc).
*/
@ -2231,6 +2250,7 @@ struct mapped_device *dm_get_md(dev_t dev)
return md;
}
EXPORT_SYMBOL_GPL(dm_get_md);
void *dm_get_mdptr(struct mapped_device *md)
{
@ -2316,7 +2336,6 @@ static int dm_wait_for_completion(struct mapped_device *md, int interruptible)
while (1) {
set_current_state(interruptible);
smp_mb();
if (!md_in_flight(md))
break;

View File

@ -60,6 +60,7 @@ int dm_table_resume_targets(struct dm_table *t);
int dm_table_any_congested(struct dm_table *t, int bdi_bits);
int dm_table_any_busy_target(struct dm_table *t);
unsigned dm_table_get_type(struct dm_table *t);
struct target_type *dm_table_get_immutable_target_type(struct dm_table *t);
bool dm_table_request_based(struct dm_table *t);
bool dm_table_supports_discards(struct dm_table *t);
int dm_table_alloc_md_mempools(struct dm_table *t);
@ -72,6 +73,7 @@ void dm_lock_md_type(struct mapped_device *md);
void dm_unlock_md_type(struct mapped_device *md);
void dm_set_md_type(struct mapped_device *md, unsigned type);
unsigned dm_get_md_type(struct mapped_device *md);
struct target_type *dm_get_immutable_target_type(struct mapped_device *md);
int dm_setup_md_queue(struct mapped_device *md);

View File

@ -0,0 +1,8 @@
config DM_PERSISTENT_DATA
tristate
depends on BLK_DEV_DM && EXPERIMENTAL
select LIBCRC32C
select DM_BUFIO
---help---
Library providing immutable on-disk data structure support for
device-mapper targets such as the thin provisioning target.

View File

@ -0,0 +1,11 @@
obj-$(CONFIG_DM_PERSISTENT_DATA) += dm-persistent-data.o
dm-persistent-data-objs := \
dm-block-manager.o \
dm-space-map-checker.o \
dm-space-map-common.o \
dm-space-map-disk.o \
dm-space-map-metadata.o \
dm-transaction-manager.o \
dm-btree.o \
dm-btree-remove.o \
dm-btree-spine.o

View File

@ -0,0 +1,620 @@
/*
* Copyright (C) 2011 Red Hat, Inc.
*
* This file is released under the GPL.
*/
#include "dm-block-manager.h"
#include "dm-persistent-data-internal.h"
#include "../dm-bufio.h"
#include <linux/crc32c.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/rwsem.h>
#include <linux/device-mapper.h>
#include <linux/stacktrace.h>
#define DM_MSG_PREFIX "block manager"
/*----------------------------------------------------------------*/
/*
* This is a read/write semaphore with a couple of differences.
*
* i) There is a restriction on the number of concurrent read locks that
* may be held at once. This is just an implementation detail.
*
* ii) Recursive locking attempts are detected and return EINVAL. A stack
* trace is also emitted for the previous lock acquisition.
*
* iii) Priority is given to write locks.
*/
#define MAX_HOLDERS 4
#define MAX_STACK 10
typedef unsigned long stack_entries[MAX_STACK];
struct block_lock {
spinlock_t lock;
__s32 count;
struct list_head waiters;
struct task_struct *holders[MAX_HOLDERS];
#ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
struct stack_trace traces[MAX_HOLDERS];
stack_entries entries[MAX_HOLDERS];
#endif
};
struct waiter {
struct list_head list;
struct task_struct *task;
int wants_write;
};
static unsigned __find_holder(struct block_lock *lock,
struct task_struct *task)
{
unsigned i;
for (i = 0; i < MAX_HOLDERS; i++)
if (lock->holders[i] == task)
break;
BUG_ON(i == MAX_HOLDERS);
return i;
}
/* call this *after* you increment lock->count */
static void __add_holder(struct block_lock *lock, struct task_struct *task)
{
unsigned h = __find_holder(lock, NULL);
#ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
struct stack_trace *t;
#endif
get_task_struct(task);
lock->holders[h] = task;
#ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
t = lock->traces + h;
t->nr_entries = 0;
t->max_entries = MAX_STACK;
t->entries = lock->entries[h];
t->skip = 2;
save_stack_trace(t);
#endif
}
/* call this *before* you decrement lock->count */
static void __del_holder(struct block_lock *lock, struct task_struct *task)
{
unsigned h = __find_holder(lock, task);
lock->holders[h] = NULL;
put_task_struct(task);
}
static int __check_holder(struct block_lock *lock)
{
unsigned i;
#ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
static struct stack_trace t;
static stack_entries entries;
#endif
for (i = 0; i < MAX_HOLDERS; i++) {
if (lock->holders[i] == current) {
DMERR("recursive lock detected in pool metadata");
#ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
DMERR("previously held here:");
print_stack_trace(lock->traces + i, 4);
DMERR("subsequent aquisition attempted here:");
t.nr_entries = 0;
t.max_entries = MAX_STACK;
t.entries = entries;
t.skip = 3;
save_stack_trace(&t);
print_stack_trace(&t, 4);
#endif
return -EINVAL;
}
}
return 0;
}
static void __wait(struct waiter *w)
{
for (;;) {
set_task_state(current, TASK_UNINTERRUPTIBLE);
if (!w->task)
break;
schedule();
}
set_task_state(current, TASK_RUNNING);
}
static void __wake_waiter(struct waiter *w)
{
struct task_struct *task;
list_del(&w->list);
task = w->task;
smp_mb();
w->task = NULL;
wake_up_process(task);
}
/*
* We either wake a few readers or a single writer.
*/
static void __wake_many(struct block_lock *lock)
{
struct waiter *w, *tmp;
BUG_ON(lock->count < 0);
list_for_each_entry_safe(w, tmp, &lock->waiters, list) {
if (lock->count >= MAX_HOLDERS)
return;
if (w->wants_write) {
if (lock->count > 0)
return; /* still read locked */
lock->count = -1;
__add_holder(lock, w->task);
__wake_waiter(w);
return;
}
lock->count++;
__add_holder(lock, w->task);
__wake_waiter(w);
}
}
static void bl_init(struct block_lock *lock)
{
int i;
spin_lock_init(&lock->lock);
lock->count = 0;
INIT_LIST_HEAD(&lock->waiters);
for (i = 0; i < MAX_HOLDERS; i++)
lock->holders[i] = NULL;
}
static int __available_for_read(struct block_lock *lock)
{
return lock->count >= 0 &&
lock->count < MAX_HOLDERS &&
list_empty(&lock->waiters);
}
static int bl_down_read(struct block_lock *lock)
{
int r;
struct waiter w;
spin_lock(&lock->lock);
r = __check_holder(lock);
if (r) {
spin_unlock(&lock->lock);
return r;
}
if (__available_for_read(lock)) {
lock->count++;
__add_holder(lock, current);
spin_unlock(&lock->lock);
return 0;
}
get_task_struct(current);
w.task = current;
w.wants_write = 0;
list_add_tail(&w.list, &lock->waiters);
spin_unlock(&lock->lock);
__wait(&w);
put_task_struct(current);
return 0;
}
static int bl_down_read_nonblock(struct block_lock *lock)
{
int r;
spin_lock(&lock->lock);
r = __check_holder(lock);
if (r)
goto out;
if (__available_for_read(lock)) {
lock->count++;
__add_holder(lock, current);
r = 0;
} else
r = -EWOULDBLOCK;
out:
spin_unlock(&lock->lock);
return r;
}
static void bl_up_read(struct block_lock *lock)
{
spin_lock(&lock->lock);
BUG_ON(lock->count <= 0);
__del_holder(lock, current);
--lock->count;
if (!list_empty(&lock->waiters))
__wake_many(lock);
spin_unlock(&lock->lock);
}
static int bl_down_write(struct block_lock *lock)
{
int r;
struct waiter w;
spin_lock(&lock->lock);
r = __check_holder(lock);
if (r) {
spin_unlock(&lock->lock);
return r;
}
if (lock->count == 0 && list_empty(&lock->waiters)) {
lock->count = -1;
__add_holder(lock, current);
spin_unlock(&lock->lock);
return 0;
}
get_task_struct(current);
w.task = current;
w.wants_write = 1;
/*
* Writers given priority. We know there's only one mutator in the
* system, so ignoring the ordering reversal.
*/
list_add(&w.list, &lock->waiters);
spin_unlock(&lock->lock);
__wait(&w);
put_task_struct(current);
return 0;
}
static void bl_up_write(struct block_lock *lock)
{
spin_lock(&lock->lock);
__del_holder(lock, current);
lock->count = 0;
if (!list_empty(&lock->waiters))
__wake_many(lock);
spin_unlock(&lock->lock);
}
static void report_recursive_bug(dm_block_t b, int r)
{
if (r == -EINVAL)
DMERR("recursive acquisition of block %llu requested.",
(unsigned long long) b);
}
/*----------------------------------------------------------------*/
/*
* Block manager is currently implemented using dm-bufio. struct
* dm_block_manager and struct dm_block map directly onto a couple of
* structs in the bufio interface. I want to retain the freedom to move
* away from bufio in the future. So these structs are just cast within
* this .c file, rather than making it through to the public interface.
*/
static struct dm_buffer *to_buffer(struct dm_block *b)
{
return (struct dm_buffer *) b;
}
static struct dm_bufio_client *to_bufio(struct dm_block_manager *bm)
{
return (struct dm_bufio_client *) bm;
}
dm_block_t dm_block_location(struct dm_block *b)
{
return dm_bufio_get_block_number(to_buffer(b));
}
EXPORT_SYMBOL_GPL(dm_block_location);
void *dm_block_data(struct dm_block *b)
{
return dm_bufio_get_block_data(to_buffer(b));
}
EXPORT_SYMBOL_GPL(dm_block_data);
struct buffer_aux {
struct dm_block_validator *validator;
struct block_lock lock;
int write_locked;
};
static void dm_block_manager_alloc_callback(struct dm_buffer *buf)
{
struct buffer_aux *aux = dm_bufio_get_aux_data(buf);
aux->validator = NULL;
bl_init(&aux->lock);
}
static void dm_block_manager_write_callback(struct dm_buffer *buf)
{
struct buffer_aux *aux = dm_bufio_get_aux_data(buf);
if (aux->validator) {
aux->validator->prepare_for_write(aux->validator, (struct dm_block *) buf,
dm_bufio_get_block_size(dm_bufio_get_client(buf)));
}
}
/*----------------------------------------------------------------
* Public interface
*--------------------------------------------------------------*/
struct dm_block_manager *dm_block_manager_create(struct block_device *bdev,
unsigned block_size,
unsigned cache_size,
unsigned max_held_per_thread)
{
return (struct dm_block_manager *)
dm_bufio_client_create(bdev, block_size, max_held_per_thread,
sizeof(struct buffer_aux),
dm_block_manager_alloc_callback,
dm_block_manager_write_callback);
}
EXPORT_SYMBOL_GPL(dm_block_manager_create);
void dm_block_manager_destroy(struct dm_block_manager *bm)
{
return dm_bufio_client_destroy(to_bufio(bm));
}
EXPORT_SYMBOL_GPL(dm_block_manager_destroy);
unsigned dm_bm_block_size(struct dm_block_manager *bm)
{
return dm_bufio_get_block_size(to_bufio(bm));
}
EXPORT_SYMBOL_GPL(dm_bm_block_size);
dm_block_t dm_bm_nr_blocks(struct dm_block_manager *bm)
{
return dm_bufio_get_device_size(to_bufio(bm));
}
static int dm_bm_validate_buffer(struct dm_block_manager *bm,
struct dm_buffer *buf,
struct buffer_aux *aux,
struct dm_block_validator *v)
{
if (unlikely(!aux->validator)) {
int r;
if (!v)
return 0;
r = v->check(v, (struct dm_block *) buf, dm_bufio_get_block_size(to_bufio(bm)));
if (unlikely(r))
return r;
aux->validator = v;
} else {
if (unlikely(aux->validator != v)) {
DMERR("validator mismatch (old=%s vs new=%s) for block %llu",
aux->validator->name, v ? v->name : "NULL",
(unsigned long long)
dm_bufio_get_block_number(buf));
return -EINVAL;
}
}
return 0;
}
int dm_bm_read_lock(struct dm_block_manager *bm, dm_block_t b,
struct dm_block_validator *v,
struct dm_block **result)
{
struct buffer_aux *aux;
void *p;
int r;
p = dm_bufio_read(to_bufio(bm), b, (struct dm_buffer **) result);
if (unlikely(IS_ERR(p)))
return PTR_ERR(p);
aux = dm_bufio_get_aux_data(to_buffer(*result));
r = bl_down_read(&aux->lock);
if (unlikely(r)) {
dm_bufio_release(to_buffer(*result));
report_recursive_bug(b, r);
return r;
}
aux->write_locked = 0;
r = dm_bm_validate_buffer(bm, to_buffer(*result), aux, v);
if (unlikely(r)) {
bl_up_read(&aux->lock);
dm_bufio_release(to_buffer(*result));
return r;
}
return 0;
}
EXPORT_SYMBOL_GPL(dm_bm_read_lock);
int dm_bm_write_lock(struct dm_block_manager *bm,
dm_block_t b, struct dm_block_validator *v,
struct dm_block **result)
{
struct buffer_aux *aux;
void *p;
int r;
p = dm_bufio_read(to_bufio(bm), b, (struct dm_buffer **) result);
if (unlikely(IS_ERR(p)))
return PTR_ERR(p);
aux = dm_bufio_get_aux_data(to_buffer(*result));
r = bl_down_write(&aux->lock);
if (r) {
dm_bufio_release(to_buffer(*result));
report_recursive_bug(b, r);
return r;
}
aux->write_locked = 1;
r = dm_bm_validate_buffer(bm, to_buffer(*result), aux, v);
if (unlikely(r)) {
bl_up_write(&aux->lock);
dm_bufio_release(to_buffer(*result));
return r;
}
return 0;
}
EXPORT_SYMBOL_GPL(dm_bm_write_lock);
int dm_bm_read_try_lock(struct dm_block_manager *bm,
dm_block_t b, struct dm_block_validator *v,
struct dm_block **result)
{
struct buffer_aux *aux;
void *p;
int r;
p = dm_bufio_get(to_bufio(bm), b, (struct dm_buffer **) result);
if (unlikely(IS_ERR(p)))
return PTR_ERR(p);
if (unlikely(!p))
return -EWOULDBLOCK;
aux = dm_bufio_get_aux_data(to_buffer(*result));
r = bl_down_read_nonblock(&aux->lock);
if (r < 0) {
dm_bufio_release(to_buffer(*result));
report_recursive_bug(b, r);
return r;
}
aux->write_locked = 0;
r = dm_bm_validate_buffer(bm, to_buffer(*result), aux, v);
if (unlikely(r)) {
bl_up_read(&aux->lock);
dm_bufio_release(to_buffer(*result));
return r;
}
return 0;
}
int dm_bm_write_lock_zero(struct dm_block_manager *bm,
dm_block_t b, struct dm_block_validator *v,
struct dm_block **result)
{
int r;
struct buffer_aux *aux;
void *p;
p = dm_bufio_new(to_bufio(bm), b, (struct dm_buffer **) result);
if (unlikely(IS_ERR(p)))
return PTR_ERR(p);
memset(p, 0, dm_bm_block_size(bm));
aux = dm_bufio_get_aux_data(to_buffer(*result));
r = bl_down_write(&aux->lock);
if (r) {
dm_bufio_release(to_buffer(*result));
return r;
}
aux->write_locked = 1;
aux->validator = v;
return 0;
}
int dm_bm_unlock(struct dm_block *b)
{
struct buffer_aux *aux;
aux = dm_bufio_get_aux_data(to_buffer(b));
if (aux->write_locked) {
dm_bufio_mark_buffer_dirty(to_buffer(b));
bl_up_write(&aux->lock);
} else
bl_up_read(&aux->lock);
dm_bufio_release(to_buffer(b));
return 0;
}
EXPORT_SYMBOL_GPL(dm_bm_unlock);
int dm_bm_unlock_move(struct dm_block *b, dm_block_t n)
{
struct buffer_aux *aux;
aux = dm_bufio_get_aux_data(to_buffer(b));
if (aux->write_locked) {
dm_bufio_mark_buffer_dirty(to_buffer(b));
bl_up_write(&aux->lock);
} else
bl_up_read(&aux->lock);
dm_bufio_release_move(to_buffer(b), n);
return 0;
}
int dm_bm_flush_and_unlock(struct dm_block_manager *bm,
struct dm_block *superblock)
{
int r;
r = dm_bufio_write_dirty_buffers(to_bufio(bm));
if (unlikely(r))
return r;
r = dm_bufio_issue_flush(to_bufio(bm));
if (unlikely(r))
return r;
dm_bm_unlock(superblock);
r = dm_bufio_write_dirty_buffers(to_bufio(bm));
if (unlikely(r))
return r;
r = dm_bufio_issue_flush(to_bufio(bm));
if (unlikely(r))
return r;
return 0;
}
u32 dm_bm_checksum(const void *data, size_t len, u32 init_xor)
{
return crc32c(~(u32) 0, data, len) ^ init_xor;
}
EXPORT_SYMBOL_GPL(dm_bm_checksum);
/*----------------------------------------------------------------*/
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Joe Thornber <dm-devel@redhat.com>");
MODULE_DESCRIPTION("Immutable metadata library for dm");
/*----------------------------------------------------------------*/

View File

@ -0,0 +1,123 @@
/*
* Copyright (C) 2011 Red Hat, Inc.
*
* This file is released under the GPL.
*/
#ifndef _LINUX_DM_BLOCK_MANAGER_H
#define _LINUX_DM_BLOCK_MANAGER_H
#include <linux/types.h>
#include <linux/blkdev.h>
/*----------------------------------------------------------------*/
/*
* Block number.
*/
typedef uint64_t dm_block_t;
struct dm_block;
dm_block_t dm_block_location(struct dm_block *b);
void *dm_block_data(struct dm_block *b);
/*----------------------------------------------------------------*/
/*
* @name should be a unique identifier for the block manager, no longer
* than 32 chars.
*
* @max_held_per_thread should be the maximum number of locks, read or
* write, that an individual thread holds at any one time.
*/
struct dm_block_manager;
struct dm_block_manager *dm_block_manager_create(
struct block_device *bdev, unsigned block_size,
unsigned cache_size, unsigned max_held_per_thread);
void dm_block_manager_destroy(struct dm_block_manager *bm);
unsigned dm_bm_block_size(struct dm_block_manager *bm);
dm_block_t dm_bm_nr_blocks(struct dm_block_manager *bm);
/*----------------------------------------------------------------*/
/*
* The validator allows the caller to verify newly-read data and modify
* the data just before writing, e.g. to calculate checksums. It's
* important to be consistent with your use of validators. The only time
* you can change validators is if you call dm_bm_write_lock_zero.
*/
struct dm_block_validator {
const char *name;
void (*prepare_for_write)(struct dm_block_validator *v, struct dm_block *b, size_t block_size);
/*
* Return 0 if the checksum is valid or < 0 on error.
*/
int (*check)(struct dm_block_validator *v, struct dm_block *b, size_t block_size);
};
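/*
 * Illustration only (not part of this patch): a block whose on-disk layout
 * begins with a little-endian checksum could be guarded by a validator such
 * as the sketch below, built on the dm_bm_checksum() helper declared at the
 * bottom of this header. The example_header layout is hypothetical.
 */
struct example_header {
	__le32 csum;		/* covers everything after this field */
	/* ... real payload fields would follow ... */
};

static void example_prepare_for_write(struct dm_block_validator *v,
				      struct dm_block *b, size_t block_size)
{
	struct example_header *h = dm_block_data(b);

	h->csum = cpu_to_le32(dm_bm_checksum(&h->csum + 1,
					     block_size - sizeof(__le32), 0));
}

static int example_check(struct dm_block_validator *v,
			 struct dm_block *b, size_t block_size)
{
	struct example_header *h = dm_block_data(b);
	__le32 csum = cpu_to_le32(dm_bm_checksum(&h->csum + 1,
						 block_size - sizeof(__le32), 0));

	return csum == h->csum ? 0 : -EILSEQ;
}

static struct dm_block_validator example_validator = {
	.name = "example",
	.prepare_for_write = example_prepare_for_write,
	.check = example_check,
};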
/*----------------------------------------------------------------*/
/*
* You can have multiple concurrent readers or a single writer holding a
* block lock.
*/
/*
* dm_bm_lock() locks a block and returns through @result a pointer to
* memory that holds a copy of that block. If you have write-locked the
* block then any changes you make to memory pointed to by @result will be
* written back to the disk sometime after dm_bm_unlock is called.
*/
int dm_bm_read_lock(struct dm_block_manager *bm, dm_block_t b,
struct dm_block_validator *v,
struct dm_block **result);
int dm_bm_write_lock(struct dm_block_manager *bm, dm_block_t b,
struct dm_block_validator *v,
struct dm_block **result);
/*
* The *_try_lock variants return -EWOULDBLOCK if the block isn't
* available immediately.
*/
int dm_bm_read_try_lock(struct dm_block_manager *bm, dm_block_t b,
struct dm_block_validator *v,
struct dm_block **result);
/*
* Use dm_bm_write_lock_zero() when you know you're going to
* overwrite the block completely. It saves a disk read.
*/
int dm_bm_write_lock_zero(struct dm_block_manager *bm, dm_block_t b,
struct dm_block_validator *v,
struct dm_block **result);
int dm_bm_unlock(struct dm_block *b);
/*
* An optimisation; we often want to copy a block's contents to a new
* block. eg, as part of the shadowing operation. It's far better for
* bufio to do this move behind the scenes than hold 2 locks and memcpy the
* data.
*/
int dm_bm_unlock_move(struct dm_block *b, dm_block_t n);
/*
* It's a common idiom to have a superblock that should be committed last.
*
* @superblock should be write-locked on entry. It will be unlocked during
* this function. All dirty blocks are guaranteed to be written and flushed
* before the superblock.
*
* This method always blocks.
*/
int dm_bm_flush_and_unlock(struct dm_block_manager *bm,
struct dm_block *superblock);
u32 dm_bm_checksum(const void *data, size_t len, u32 init_xor);
/*----------------------------------------------------------------*/
#endif /* _LINUX_DM_BLOCK_MANAGER_H */
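Putting the pieces of this header together, a minimal, hypothetical caller
that formats a superblock might look like the sketch below. It assumes
dm_block_manager_create() reports failure via ERR_PTR(), mirroring the
dm-bufio calls it wraps.

#include "dm-block-manager.h"

#include <linux/err.h>

static int example_format_superblock(struct block_device *bdev)
{
	struct dm_block_manager *bm;
	struct dm_block *sblock;
	int r;

	/* 4096-byte blocks, a small cache, at most one lock held per thread */
	bm = dm_block_manager_create(bdev, 4096, 16, 1);
	if (IS_ERR(bm))
		return PTR_ERR(bm);

	/* write-lock and zero block 0; no validator in this sketch */
	r = dm_bm_write_lock_zero(bm, 0, NULL, &sblock);
	if (r)
		goto out;

	/* dm_block_data(sblock) now points at a zeroed block; fill in fields here */

	/* flush all other dirty blocks, then write and unlock the superblock */
	r = dm_bm_flush_and_unlock(bm, sblock);
out:
	dm_block_manager_destroy(bm);
	return r;
}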

View File

@ -0,0 +1,137 @@
/*
* Copyright (C) 2011 Red Hat, Inc.
*
* This file is released under the GPL.
*/
#ifndef DM_BTREE_INTERNAL_H
#define DM_BTREE_INTERNAL_H
#include "dm-btree.h"
/*----------------------------------------------------------------*/
/*
* We'll need 2 accessor functions for n->csum and n->blocknr
* to support dm-btree-spine.c in that case.
*/
enum node_flags {
INTERNAL_NODE = 1,
LEAF_NODE = 1 << 1
};
/*
* Every btree node begins with this structure. Make sure it's a multiple
* of 8-bytes in size, otherwise the 64bit keys will be mis-aligned.
*/
struct node_header {
__le32 csum;
__le32 flags;
__le64 blocknr; /* Block this node is supposed to live in. */
__le32 nr_entries;
__le32 max_entries;
__le32 value_size;
__le32 padding;
} __packed;
struct node {
struct node_header header;
__le64 keys[0];
} __packed;
void inc_children(struct dm_transaction_manager *tm, struct node *n,
struct dm_btree_value_type *vt);
int new_block(struct dm_btree_info *info, struct dm_block **result);
int unlock_block(struct dm_btree_info *info, struct dm_block *b);
/*
* Spines keep track of the rolling locks. There are 2 variants, read-only
* and one that uses shadowing. These are separate structs to allow the
* type checker to spot misuse, for example accidentally calling read_lock
* on a shadow spine.
*/
struct ro_spine {
struct dm_btree_info *info;
int count;
struct dm_block *nodes[2];
};
void init_ro_spine(struct ro_spine *s, struct dm_btree_info *info);
int exit_ro_spine(struct ro_spine *s);
int ro_step(struct ro_spine *s, dm_block_t new_child);
struct node *ro_node(struct ro_spine *s);
struct shadow_spine {
struct dm_btree_info *info;
int count;
struct dm_block *nodes[2];
dm_block_t root;
};
void init_shadow_spine(struct shadow_spine *s, struct dm_btree_info *info);
int exit_shadow_spine(struct shadow_spine *s);
int shadow_step(struct shadow_spine *s, dm_block_t b,
struct dm_btree_value_type *vt);
/*
* The spine must have at least one entry before calling this.
*/
struct dm_block *shadow_current(struct shadow_spine *s);
/*
* The spine must have at least two entries before calling this.
*/
struct dm_block *shadow_parent(struct shadow_spine *s);
int shadow_has_parent(struct shadow_spine *s);
int shadow_root(struct shadow_spine *s);
/*
* Some inlines.
*/
static inline __le64 *key_ptr(struct node *n, uint32_t index)
{
return n->keys + index;
}
static inline void *value_base(struct node *n)
{
return &n->keys[le32_to_cpu(n->header.max_entries)];
}
/*
* FIXME: Now that value size is stored in node we don't need the third parm.
*/
static inline void *value_ptr(struct node *n, uint32_t index, size_t value_size)
{
BUG_ON(value_size != le32_to_cpu(n->header.value_size));
return value_base(n) + (value_size * index);
}
/*
* Assumes the values are suitably-aligned and converts to core format.
*/
static inline uint64_t value64(struct node *n, uint32_t index)
{
__le64 *values_le = value_base(n);
return le64_to_cpu(values_le[index]);
}
/*
* Searching for a key within a single node.
*/
int lower_bound(struct node *n, uint64_t key);
extern struct dm_block_validator btree_node_validator;
#endif /* DM_BTREE_INTERNAL_H */

View File

@ -0,0 +1,566 @@
/*
* Copyright (C) 2011 Red Hat, Inc.
*
* This file is released under the GPL.
*/
#include "dm-btree.h"
#include "dm-btree-internal.h"
#include "dm-transaction-manager.h"
#include <linux/module.h>
/*
* Removing an entry from a btree
* ==============================
*
* A very important constraint for our btree is that no node, except the
* root, may have fewer than a certain number of entries.
* (MIN_ENTRIES <= nr_entries <= MAX_ENTRIES).
*
* Ensuring this is complicated by the way we want to only ever hold the
* locks on 2 nodes concurrently, and only change nodes in a top to bottom
* fashion.
*
* Each node may have a left or right sibling. When descending the spine,
* if a node contains only MIN_ENTRIES then we try to increase this to at
* least MIN_ENTRIES + 1. We do this in the following ways:
*
* [A] No siblings => this can only happen if the node is the root, in which
* case we copy the child's contents over the root.
*
* [B] No left sibling
* ==> rebalance(node, right sibling)
*
* [C] No right sibling
* ==> rebalance(left sibling, node)
*
* [D] Both siblings, total_entries(left, node, right) <= DEL_THRESHOLD
* ==> delete the node, adding its contents to left and right
*
* [E] Both siblings, total_entries(left, node, right) > DEL_THRESHOLD
* ==> rebalance(left, node, right)
*
* After these operations it's possible that our original node no
* longer contains the desired subtree. For this reason this rebalancing
* is performed on the children of the current node. This also avoids
* having a special case for the root.
*
* Once this rebalancing has occurred we can then step into the child node
* (for internal nodes), or delete the entry (for leaf nodes).
*/
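/*
* A concrete example of the dispatch above, mirroring rebalance_children()
* below: once the child at index i is found to hold del_threshold() entries
* or fewer,
*
*   the node has a single child   => case [A], copy the child over the root
*   i == 0                        => case [B], rebalance2(s, info, 0)
*   i == nr_entries - 1           => case [C], rebalance2(s, info, i - 1)
*   otherwise                     => rebalance3(s, info, i - 1), which merges
*                                    (case [D]) or redistributes (case [E])
*                                    depending on the total entry count.
*/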
/*
* Some little utilities for moving node data around.
*/
static void node_shift(struct node *n, int shift)
{
uint32_t nr_entries = le32_to_cpu(n->header.nr_entries);
uint32_t value_size = le32_to_cpu(n->header.value_size);
if (shift < 0) {
shift = -shift;
BUG_ON(shift > nr_entries);
BUG_ON((void *) key_ptr(n, shift) >= value_ptr(n, shift, value_size));
memmove(key_ptr(n, 0),
key_ptr(n, shift),
(nr_entries - shift) * sizeof(__le64));
memmove(value_ptr(n, 0, value_size),
value_ptr(n, shift, value_size),
(nr_entries - shift) * value_size);
} else {
BUG_ON(nr_entries + shift > le32_to_cpu(n->header.max_entries));
memmove(key_ptr(n, shift),
key_ptr(n, 0),
nr_entries * sizeof(__le64));
memmove(value_ptr(n, shift, value_size),
value_ptr(n, 0, value_size),
nr_entries * value_size);
}
}
static void node_copy(struct node *left, struct node *right, int shift)
{
uint32_t nr_left = le32_to_cpu(left->header.nr_entries);
uint32_t value_size = le32_to_cpu(left->header.value_size);
BUG_ON(value_size != le32_to_cpu(right->header.value_size));
if (shift < 0) {
shift = -shift;
BUG_ON(nr_left + shift > le32_to_cpu(left->header.max_entries));
memcpy(key_ptr(left, nr_left),
key_ptr(right, 0),
shift * sizeof(__le64));
memcpy(value_ptr(left, nr_left, value_size),
value_ptr(right, 0, value_size),
shift * value_size);
} else {
BUG_ON(shift > le32_to_cpu(right->header.max_entries));
memcpy(key_ptr(right, 0),
key_ptr(left, nr_left - shift),
shift * sizeof(__le64));
memcpy(value_ptr(right, 0, value_size),
value_ptr(left, nr_left - shift, value_size),
shift * value_size);
}
}
/*
* Delete a specific entry from a leaf node.
*/
static void delete_at(struct node *n, unsigned index)
{
unsigned nr_entries = le32_to_cpu(n->header.nr_entries);
unsigned nr_to_copy = nr_entries - (index + 1);
uint32_t value_size = le32_to_cpu(n->header.value_size);
BUG_ON(index >= nr_entries);
if (nr_to_copy) {
memmove(key_ptr(n, index),
key_ptr(n, index + 1),
nr_to_copy * sizeof(__le64));
memmove(value_ptr(n, index, value_size),
value_ptr(n, index + 1, value_size),
nr_to_copy * value_size);
}
n->header.nr_entries = cpu_to_le32(nr_entries - 1);
}
static unsigned del_threshold(struct node *n)
{
return le32_to_cpu(n->header.max_entries) / 3;
}
static unsigned merge_threshold(struct node *n)
{
/*
* The extra one is because we know we're potentially going to
* delete an entry.
*/
return 2 * (le32_to_cpu(n->header.max_entries) / 3) + 1;
}
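/*
* Worked example, assuming a 4096 byte metadata block and 8 byte values:
* calc_max_entries() in dm-btree.c gives max_entries == 252, so
* del_threshold() == 84 and merge_threshold() == 169.  A child is only
* rebalanced once it drops to 84 entries or fewer, and two siblings are
* merged into one node only when they hold at most 169 entries between
* them, which still fits comfortably within the 252 entry limit.
*/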
struct child {
unsigned index;
struct dm_block *block;
struct node *n;
};
static struct dm_btree_value_type le64_type = {
.context = NULL,
.size = sizeof(__le64),
.inc = NULL,
.dec = NULL,
.equal = NULL
};
static int init_child(struct dm_btree_info *info, struct node *parent,
unsigned index, struct child *result)
{
int r, inc;
dm_block_t root;
result->index = index;
root = value64(parent, index);
r = dm_tm_shadow_block(info->tm, root, &btree_node_validator,
&result->block, &inc);
if (r)
return r;
result->n = dm_block_data(result->block);
if (inc)
inc_children(info->tm, result->n, &le64_type);
*((__le64 *) value_ptr(parent, index, sizeof(__le64))) =
cpu_to_le64(dm_block_location(result->block));
return 0;
}
static int exit_child(struct dm_btree_info *info, struct child *c)
{
return dm_tm_unlock(info->tm, c->block);
}
static void shift(struct node *left, struct node *right, int count)
{
if (!count)
return;
if (count > 0) {
node_shift(right, count);
node_copy(left, right, count);
} else {
node_copy(left, right, count);
node_shift(right, count);
}
left->header.nr_entries =
cpu_to_le32(le32_to_cpu(left->header.nr_entries) - count);
BUG_ON(le32_to_cpu(left->header.nr_entries) > le32_to_cpu(left->header.max_entries));
right->header.nr_entries =
cpu_to_le32(le32_to_cpu(right->header.nr_entries) + count);
BUG_ON(le32_to_cpu(right->header.nr_entries) > le32_to_cpu(right->header.max_entries));
}
static void __rebalance2(struct dm_btree_info *info, struct node *parent,
struct child *l, struct child *r)
{
struct node *left = l->n;
struct node *right = r->n;
uint32_t nr_left = le32_to_cpu(left->header.nr_entries);
uint32_t nr_right = le32_to_cpu(right->header.nr_entries);
if (nr_left + nr_right <= merge_threshold(left)) {
/*
* Merge
*/
node_copy(left, right, -nr_right);
left->header.nr_entries = cpu_to_le32(nr_left + nr_right);
delete_at(parent, r->index);
/*
* We need to decrement the right block, but not its
* children, since they're still referenced by left.
*/
dm_tm_dec(info->tm, dm_block_location(r->block));
} else {
/*
* Rebalance.
*/
unsigned target_left = (nr_left + nr_right) / 2;
unsigned shift_ = nr_left - target_left;
BUG_ON(le32_to_cpu(left->header.max_entries) <= nr_left - shift_);
BUG_ON(le32_to_cpu(right->header.max_entries) <= nr_right + shift_);
shift(left, right, nr_left - target_left);
*key_ptr(parent, r->index) = right->keys[0];
}
}
static int rebalance2(struct shadow_spine *s, struct dm_btree_info *info,
unsigned left_index)
{
int r;
struct node *parent;
struct child left, right;
parent = dm_block_data(shadow_current(s));
r = init_child(info, parent, left_index, &left);
if (r)
return r;
r = init_child(info, parent, left_index + 1, &right);
if (r) {
exit_child(info, &left);
return r;
}
__rebalance2(info, parent, &left, &right);
r = exit_child(info, &left);
if (r) {
exit_child(info, &right);
return r;
}
return exit_child(info, &right);
}
static void __rebalance3(struct dm_btree_info *info, struct node *parent,
struct child *l, struct child *c, struct child *r)
{
struct node *left = l->n;
struct node *center = c->n;
struct node *right = r->n;
uint32_t nr_left = le32_to_cpu(left->header.nr_entries);
uint32_t nr_center = le32_to_cpu(center->header.nr_entries);
uint32_t nr_right = le32_to_cpu(right->header.nr_entries);
uint32_t max_entries = le32_to_cpu(left->header.max_entries);
unsigned target;
BUG_ON(left->header.max_entries != center->header.max_entries);
BUG_ON(center->header.max_entries != right->header.max_entries);
if (((nr_left + nr_center + nr_right) / 2) < merge_threshold(center)) {
/*
* Delete center node:
*
* We dump as many entries from center as possible into
* left, then the rest in right, then rebalance2. This
* wastes some cpu, but I want something simple atm.
*/
unsigned shift = min(max_entries - nr_left, nr_center);
BUG_ON(nr_left + shift > max_entries);
node_copy(left, center, -shift);
left->header.nr_entries = cpu_to_le32(nr_left + shift);
if (shift != nr_center) {
shift = nr_center - shift;
BUG_ON((nr_right + shift) >= max_entries);
node_shift(right, shift);
node_copy(center, right, shift);
right->header.nr_entries = cpu_to_le32(nr_right + shift);
}
*key_ptr(parent, r->index) = right->keys[0];
delete_at(parent, c->index);
r->index--;
dm_tm_dec(info->tm, dm_block_location(c->block));
__rebalance2(info, parent, l, r);
return;
}
/*
* Rebalance
*/
target = (nr_left + nr_center + nr_right) / 3;
BUG_ON(target > max_entries);
/*
* Adjust the left node
*/
shift(left, center, nr_left - target);
/*
* Adjust the right node
*/
shift(center, right, target - nr_right);
*key_ptr(parent, c->index) = center->keys[0];
*key_ptr(parent, r->index) = right->keys[0];
}
static int rebalance3(struct shadow_spine *s, struct dm_btree_info *info,
unsigned left_index)
{
int r;
struct node *parent = dm_block_data(shadow_current(s));
struct child left, center, right;
/*
* FIXME: fill out an array?
*/
r = init_child(info, parent, left_index, &left);
if (r)
return r;
r = init_child(info, parent, left_index + 1, &center);
if (r) {
exit_child(info, &left);
return r;
}
r = init_child(info, parent, left_index + 2, &right);
if (r) {
exit_child(info, &left);
exit_child(info, &center);
return r;
}
__rebalance3(info, parent, &left, &center, &right);
r = exit_child(info, &left);
if (r) {
exit_child(info, &center);
exit_child(info, &right);
return r;
}
r = exit_child(info, &center);
if (r) {
exit_child(info, &right);
return r;
}
r = exit_child(info, &right);
if (r)
return r;
return 0;
}
static int get_nr_entries(struct dm_transaction_manager *tm,
dm_block_t b, uint32_t *result)
{
int r;
struct dm_block *block;
struct node *n;
r = dm_tm_read_lock(tm, b, &btree_node_validator, &block);
if (r)
return r;
n = dm_block_data(block);
*result = le32_to_cpu(n->header.nr_entries);
return dm_tm_unlock(tm, block);
}
static int rebalance_children(struct shadow_spine *s,
struct dm_btree_info *info, uint64_t key)
{
int i, r, has_left_sibling, has_right_sibling;
uint32_t child_entries;
struct node *n;
n = dm_block_data(shadow_current(s));
if (le32_to_cpu(n->header.nr_entries) == 1) {
struct dm_block *child;
dm_block_t b = value64(n, 0);
r = dm_tm_read_lock(info->tm, b, &btree_node_validator, &child);
if (r)
return r;
memcpy(n, dm_block_data(child),
dm_bm_block_size(dm_tm_get_bm(info->tm)));
r = dm_tm_unlock(info->tm, child);
if (r)
return r;
dm_tm_dec(info->tm, dm_block_location(child));
return 0;
}
i = lower_bound(n, key);
if (i < 0)
return -ENODATA;
r = get_nr_entries(info->tm, value64(n, i), &child_entries);
if (r)
return r;
if (child_entries > del_threshold(n))
return 0;
has_left_sibling = i > 0;
has_right_sibling = i < (le32_to_cpu(n->header.nr_entries) - 1);
if (!has_left_sibling)
r = rebalance2(s, info, i);
else if (!has_right_sibling)
r = rebalance2(s, info, i - 1);
else
r = rebalance3(s, info, i - 1);
return r;
}
static int do_leaf(struct node *n, uint64_t key, unsigned *index)
{
int i = lower_bound(n, key);
if ((i < 0) ||
(i >= le32_to_cpu(n->header.nr_entries)) ||
(le64_to_cpu(n->keys[i]) != key))
return -ENODATA;
*index = i;
return 0;
}
/*
* Prepares for removal from one level of the hierarchy. The caller must
* call delete_at() to remove the entry at index.
*/
static int remove_raw(struct shadow_spine *s, struct dm_btree_info *info,
struct dm_btree_value_type *vt, dm_block_t root,
uint64_t key, unsigned *index)
{
int i = *index, r;
struct node *n;
for (;;) {
r = shadow_step(s, root, vt);
if (r < 0)
break;
/*
* We have to patch up the parent node.  Ugly, but I don't
* see a way to do this automatically as part of the spine
* op.
*/
if (shadow_has_parent(s)) {
__le64 location = cpu_to_le64(dm_block_location(shadow_current(s)));
memcpy(value_ptr(dm_block_data(shadow_parent(s)), i, sizeof(__le64)),
&location, sizeof(__le64));
}
n = dm_block_data(shadow_current(s));
if (le32_to_cpu(n->header.flags) & LEAF_NODE)
return do_leaf(n, key, index);
r = rebalance_children(s, info, key);
if (r)
break;
n = dm_block_data(shadow_current(s));
if (le32_to_cpu(n->header.flags) & LEAF_NODE)
return do_leaf(n, key, index);
i = lower_bound(n, key);
/*
* We know the key is present, or else
* rebalance_children would have returned
* -ENODATA
*/
root = value64(n, i);
}
return r;
}
int dm_btree_remove(struct dm_btree_info *info, dm_block_t root,
uint64_t *keys, dm_block_t *new_root)
{
unsigned level, last_level = info->levels - 1;
int index = 0, r = 0;
struct shadow_spine spine;
struct node *n;
init_shadow_spine(&spine, info);
for (level = 0; level < info->levels; level++) {
r = remove_raw(&spine, info,
(level == last_level ?
&info->value_type : &le64_type),
root, keys[level], (unsigned *)&index);
if (r < 0)
break;
n = dm_block_data(shadow_current(&spine));
if (level != last_level) {
root = value64(n, index);
continue;
}
BUG_ON(index < 0 || index >= le32_to_cpu(n->header.nr_entries));
if (info->value_type.dec)
info->value_type.dec(info->value_type.context,
value_ptr(n, index, info->value_type.size));
delete_at(n, index);
}
*new_root = shadow_root(&spine);
exit_shadow_spine(&spine);
return r;
}
EXPORT_SYMBOL_GPL(dm_btree_remove);

View File

@ -0,0 +1,244 @@
/*
* Copyright (C) 2011 Red Hat, Inc.
*
* This file is released under the GPL.
*/
#include "dm-btree-internal.h"
#include "dm-transaction-manager.h"
#include <linux/device-mapper.h>
#define DM_MSG_PREFIX "btree spine"
/*----------------------------------------------------------------*/
#define BTREE_CSUM_XOR 121107
static int node_check(struct dm_block_validator *v,
struct dm_block *b,
size_t block_size);
static void node_prepare_for_write(struct dm_block_validator *v,
struct dm_block *b,
size_t block_size)
{
struct node *n = dm_block_data(b);
struct node_header *h = &n->header;
h->blocknr = cpu_to_le64(dm_block_location(b));
h->csum = cpu_to_le32(dm_bm_checksum(&h->flags,
block_size - sizeof(__le32),
BTREE_CSUM_XOR));
BUG_ON(node_check(v, b, block_size));
}
static int node_check(struct dm_block_validator *v,
struct dm_block *b,
size_t block_size)
{
struct node *n = dm_block_data(b);
struct node_header *h = &n->header;
size_t value_size;
__le32 csum_disk;
uint32_t flags;
if (dm_block_location(b) != le64_to_cpu(h->blocknr)) {
DMERR("node_check failed blocknr %llu wanted %llu",
le64_to_cpu(h->blocknr), dm_block_location(b));
return -ENOTBLK;
}
csum_disk = cpu_to_le32(dm_bm_checksum(&h->flags,
block_size - sizeof(__le32),
BTREE_CSUM_XOR));
if (csum_disk != h->csum) {
DMERR("node_check failed csum %u wanted %u",
le32_to_cpu(csum_disk), le32_to_cpu(h->csum));
return -EILSEQ;
}
value_size = le32_to_cpu(h->value_size);
if (sizeof(struct node_header) +
(sizeof(__le64) + value_size) * le32_to_cpu(h->max_entries) > block_size) {
DMERR("node_check failed: max_entries too large");
return -EILSEQ;
}
if (le32_to_cpu(h->nr_entries) > le32_to_cpu(h->max_entries)) {
DMERR("node_check failed, too many entries");
return -EILSEQ;
}
/*
* The node must be either INTERNAL or LEAF.
*/
flags = le32_to_cpu(h->flags);
if (!(flags & INTERNAL_NODE) && !(flags & LEAF_NODE)) {
DMERR("node_check failed, node is neither INTERNAL or LEAF");
return -EILSEQ;
}
return 0;
}
struct dm_block_validator btree_node_validator = {
.name = "btree_node",
.prepare_for_write = node_prepare_for_write,
.check = node_check
};
/*----------------------------------------------------------------*/
static int bn_read_lock(struct dm_btree_info *info, dm_block_t b,
struct dm_block **result)
{
return dm_tm_read_lock(info->tm, b, &btree_node_validator, result);
}
static int bn_shadow(struct dm_btree_info *info, dm_block_t orig,
struct dm_btree_value_type *vt,
struct dm_block **result)
{
int r, inc;
r = dm_tm_shadow_block(info->tm, orig, &btree_node_validator,
result, &inc);
if (!r && inc)
inc_children(info->tm, dm_block_data(*result), vt);
return r;
}
int new_block(struct dm_btree_info *info, struct dm_block **result)
{
return dm_tm_new_block(info->tm, &btree_node_validator, result);
}
int unlock_block(struct dm_btree_info *info, struct dm_block *b)
{
return dm_tm_unlock(info->tm, b);
}
/*----------------------------------------------------------------*/
void init_ro_spine(struct ro_spine *s, struct dm_btree_info *info)
{
s->info = info;
s->count = 0;
s->nodes[0] = NULL;
s->nodes[1] = NULL;
}
int exit_ro_spine(struct ro_spine *s)
{
int r = 0, i;
for (i = 0; i < s->count; i++) {
int r2 = unlock_block(s->info, s->nodes[i]);
if (r2 < 0)
r = r2;
}
return r;
}
int ro_step(struct ro_spine *s, dm_block_t new_child)
{
int r;
if (s->count == 2) {
r = unlock_block(s->info, s->nodes[0]);
if (r < 0)
return r;
s->nodes[0] = s->nodes[1];
s->count--;
}
r = bn_read_lock(s->info, new_child, s->nodes + s->count);
if (!r)
s->count++;
return r;
}
struct node *ro_node(struct ro_spine *s)
{
struct dm_block *block;
BUG_ON(!s->count);
block = s->nodes[s->count - 1];
return dm_block_data(block);
}
/*----------------------------------------------------------------*/
void init_shadow_spine(struct shadow_spine *s, struct dm_btree_info *info)
{
s->info = info;
s->count = 0;
}
int exit_shadow_spine(struct shadow_spine *s)
{
int r = 0, i;
for (i = 0; i < s->count; i++) {
int r2 = unlock_block(s->info, s->nodes[i]);
if (r2 < 0)
r = r2;
}
return r;
}
int shadow_step(struct shadow_spine *s, dm_block_t b,
struct dm_btree_value_type *vt)
{
int r;
if (s->count == 2) {
r = unlock_block(s->info, s->nodes[0]);
if (r < 0)
return r;
s->nodes[0] = s->nodes[1];
s->count--;
}
r = bn_shadow(s->info, b, vt, s->nodes + s->count);
if (!r) {
if (!s->count)
s->root = dm_block_location(s->nodes[0]);
s->count++;
}
return r;
}
struct dm_block *shadow_current(struct shadow_spine *s)
{
BUG_ON(!s->count);
return s->nodes[s->count - 1];
}
struct dm_block *shadow_parent(struct shadow_spine *s)
{
BUG_ON(s->count != 2);
return s->count == 2 ? s->nodes[0] : NULL;
}
int shadow_has_parent(struct shadow_spine *s)
{
return s->count >= 2;
}
dm_block_t shadow_root(struct shadow_spine *s)
{
return s->root;
}

View File

@ -0,0 +1,805 @@
/*
* Copyright (C) 2011 Red Hat, Inc.
*
* This file is released under the GPL.
*/
#include "dm-btree-internal.h"
#include "dm-space-map.h"
#include "dm-transaction-manager.h"
#include <linux/module.h>
#include <linux/device-mapper.h>
#define DM_MSG_PREFIX "btree"
/*----------------------------------------------------------------
* Array manipulation
*--------------------------------------------------------------*/
static void memcpy_disk(void *dest, const void *src, size_t len)
__dm_written_to_disk(src)
{
memcpy(dest, src, len);
__dm_unbless_for_disk(src);
}
static void array_insert(void *base, size_t elt_size, unsigned nr_elts,
unsigned index, void *elt)
__dm_written_to_disk(elt)
{
if (index < nr_elts)
memmove(base + (elt_size * (index + 1)),
base + (elt_size * index),
(nr_elts - index) * elt_size);
memcpy_disk(base + (elt_size * index), elt, elt_size);
}
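/*
* For example, array_insert(base, sz, 3, 1, &X) on the three element
* array [A, B, C] first memmove()s B and C up one slot and then copies
* X into slot 1, giving [A, X, B, C].  The caller blesses @elt and bumps
* nr_entries afterwards, as insert_at() below does.
*/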
/*----------------------------------------------------------------*/
/* makes the assumption that no two keys are the same. */
static int bsearch(struct node *n, uint64_t key, int want_hi)
{
int lo = -1, hi = le32_to_cpu(n->header.nr_entries);
while (hi - lo > 1) {
int mid = lo + ((hi - lo) / 2);
uint64_t mid_key = le64_to_cpu(n->keys[mid]);
if (mid_key == key)
return mid;
if (mid_key < key)
lo = mid;
else
hi = mid;
}
return want_hi ? hi : lo;
}
int lower_bound(struct node *n, uint64_t key)
{
return bsearch(n, key, 0);
}
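/*
* lower_bound() returns the index of the largest key that is <= the key
* searched for, or -1 if every key in the node is greater.  For example,
* with keys [10, 20, 30]:
*
*   lower_bound(n, 20) == 1
*   lower_bound(n, 25) == 1
*   lower_bound(n, 5)  == -1
*/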
void inc_children(struct dm_transaction_manager *tm, struct node *n,
struct dm_btree_value_type *vt)
{
unsigned i;
uint32_t nr_entries = le32_to_cpu(n->header.nr_entries);
if (le32_to_cpu(n->header.flags) & INTERNAL_NODE)
for (i = 0; i < nr_entries; i++)
dm_tm_inc(tm, value64(n, i));
else if (vt->inc)
for (i = 0; i < nr_entries; i++)
vt->inc(vt->context,
value_ptr(n, i, vt->size));
}
static int insert_at(size_t value_size, struct node *node, unsigned index,
uint64_t key, void *value)
__dm_written_to_disk(value)
{
uint32_t nr_entries = le32_to_cpu(node->header.nr_entries);
__le64 key_le = cpu_to_le64(key);
if (index > nr_entries ||
index >= le32_to_cpu(node->header.max_entries)) {
DMERR("too many entries in btree node for insert");
__dm_unbless_for_disk(value);
return -ENOMEM;
}
__dm_bless_for_disk(&key_le);
array_insert(node->keys, sizeof(*node->keys), nr_entries, index, &key_le);
array_insert(value_base(node), value_size, nr_entries, index, value);
node->header.nr_entries = cpu_to_le32(nr_entries + 1);
return 0;
}
/*----------------------------------------------------------------*/
/*
* We want 3n entries (for some n). This works more nicely for repeated
* insert/remove loops than (2n + 1).
*/
static uint32_t calc_max_entries(size_t value_size, size_t block_size)
{
uint32_t total, n;
size_t elt_size = sizeof(uint64_t) + value_size; /* key + value */
block_size -= sizeof(struct node_header);
total = block_size / elt_size;
n = total / 3; /* rounds down */
return 3 * n;
}
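/*
* Worked example, assuming a 4096 byte metadata block: the node header is
* 32 bytes, so for __le64 values elt_size == 16, total == (4096 - 32) / 16
* == 254, n == 84 and therefore max_entries == 252.
*/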
int dm_btree_empty(struct dm_btree_info *info, dm_block_t *root)
{
int r;
struct dm_block *b;
struct node *n;
size_t block_size;
uint32_t max_entries;
r = new_block(info, &b);
if (r < 0)
return r;
block_size = dm_bm_block_size(dm_tm_get_bm(info->tm));
max_entries = calc_max_entries(info->value_type.size, block_size);
n = dm_block_data(b);
memset(n, 0, block_size);
n->header.flags = cpu_to_le32(LEAF_NODE);
n->header.nr_entries = cpu_to_le32(0);
n->header.max_entries = cpu_to_le32(max_entries);
n->header.value_size = cpu_to_le32(info->value_type.size);
*root = dm_block_location(b);
return unlock_block(info, b);
}
EXPORT_SYMBOL_GPL(dm_btree_empty);
/*----------------------------------------------------------------*/
/*
* Deletion uses a recursive algorithm; since we have limited stack space
* we explicitly manage our own stack on the heap.
*/
#define MAX_SPINE_DEPTH 64
struct frame {
struct dm_block *b;
struct node *n;
unsigned level;
unsigned nr_children;
unsigned current_child;
};
struct del_stack {
struct dm_transaction_manager *tm;
int top;
struct frame spine[MAX_SPINE_DEPTH];
};
static int top_frame(struct del_stack *s, struct frame **f)
{
if (s->top < 0) {
DMERR("btree deletion stack empty");
return -EINVAL;
}
*f = s->spine + s->top;
return 0;
}
static int unprocessed_frames(struct del_stack *s)
{
return s->top >= 0;
}
static int push_frame(struct del_stack *s, dm_block_t b, unsigned level)
{
int r;
uint32_t ref_count;
if (s->top >= MAX_SPINE_DEPTH - 1) {
DMERR("btree deletion stack out of memory");
return -ENOMEM;
}
r = dm_tm_ref(s->tm, b, &ref_count);
if (r)
return r;
if (ref_count > 1)
/*
* This is a shared node, so we can just decrement its
* reference counter and leave the children.
*/
dm_tm_dec(s->tm, b);
else {
struct frame *f = s->spine + ++s->top;
r = dm_tm_read_lock(s->tm, b, &btree_node_validator, &f->b);
if (r) {
s->top--;
return r;
}
f->n = dm_block_data(f->b);
f->level = level;
f->nr_children = le32_to_cpu(f->n->header.nr_entries);
f->current_child = 0;
}
return 0;
}
static void pop_frame(struct del_stack *s)
{
struct frame *f = s->spine + s->top--;
dm_tm_dec(s->tm, dm_block_location(f->b));
dm_tm_unlock(s->tm, f->b);
}
int dm_btree_del(struct dm_btree_info *info, dm_block_t root)
{
int r;
struct del_stack *s;
s = kmalloc(sizeof(*s), GFP_KERNEL);
if (!s)
return -ENOMEM;
s->tm = info->tm;
s->top = -1;
r = push_frame(s, root, 1);
if (r)
goto out;
while (unprocessed_frames(s)) {
uint32_t flags;
struct frame *f;
dm_block_t b;
r = top_frame(s, &f);
if (r)
goto out;
if (f->current_child >= f->nr_children) {
pop_frame(s);
continue;
}
flags = le32_to_cpu(f->n->header.flags);
if (flags & INTERNAL_NODE) {
b = value64(f->n, f->current_child);
f->current_child++;
r = push_frame(s, b, f->level);
if (r)
goto out;
} else if (f->level != (info->levels - 1)) {
b = value64(f->n, f->current_child);
f->current_child++;
r = push_frame(s, b, f->level + 1);
if (r)
goto out;
} else {
if (info->value_type.dec) {
unsigned i;
for (i = 0; i < f->nr_children; i++)
info->value_type.dec(info->value_type.context,
value_ptr(f->n, i, info->value_type.size));
}
f->current_child = f->nr_children;
}
}
out:
kfree(s);
return r;
}
EXPORT_SYMBOL_GPL(dm_btree_del);
/*----------------------------------------------------------------*/
static int btree_lookup_raw(struct ro_spine *s, dm_block_t block, uint64_t key,
int (*search_fn)(struct node *, uint64_t),
uint64_t *result_key, void *v, size_t value_size)
{
int i, r;
uint32_t flags, nr_entries;
do {
r = ro_step(s, block);
if (r < 0)
return r;
i = search_fn(ro_node(s), key);
flags = le32_to_cpu(ro_node(s)->header.flags);
nr_entries = le32_to_cpu(ro_node(s)->header.nr_entries);
if (i < 0 || i >= nr_entries)
return -ENODATA;
if (flags & INTERNAL_NODE)
block = value64(ro_node(s), i);
} while (!(flags & LEAF_NODE));
*result_key = le64_to_cpu(ro_node(s)->keys[i]);
memcpy(v, value_ptr(ro_node(s), i, value_size), value_size);
return 0;
}
int dm_btree_lookup(struct dm_btree_info *info, dm_block_t root,
uint64_t *keys, void *value_le)
{
unsigned level, last_level = info->levels - 1;
int r = -ENODATA;
uint64_t rkey;
__le64 internal_value_le;
struct ro_spine spine;
init_ro_spine(&spine, info);
for (level = 0; level < info->levels; level++) {
size_t size;
void *value_p;
if (level == last_level) {
value_p = value_le;
size = info->value_type.size;
} else {
value_p = &internal_value_le;
size = sizeof(uint64_t);
}
r = btree_lookup_raw(&spine, root, keys[level],
lower_bound, &rkey,
value_p, size);
if (!r) {
if (rkey != keys[level]) {
exit_ro_spine(&spine);
return -ENODATA;
}
} else {
exit_ro_spine(&spine);
return r;
}
root = le64_to_cpu(internal_value_le);
}
exit_ro_spine(&spine);
return r;
}
EXPORT_SYMBOL_GPL(dm_btree_lookup);
/*
* Splits a node by creating a sibling node and shifting half the node's
* contents across. Assumes there is a parent node, and it has room for
* another child.
*
* Before:
* +--------+
* | Parent |
* +--------+
* |
* v
* +----------+
* | A ++++++ |
* +----------+
*
*
* After:
* +--------+
* | Parent |
* +--------+
* | |
* v +------+
* +---------+ |
* | A* +++ | v
* +---------+ +-------+
* | B +++ |
* +-------+
*
* Where A* is a shadow of A.
*/
static int btree_split_sibling(struct shadow_spine *s, dm_block_t root,
unsigned parent_index, uint64_t key)
{
int r;
size_t size;
unsigned nr_left, nr_right;
struct dm_block *left, *right, *parent;
struct node *ln, *rn, *pn;
__le64 location;
left = shadow_current(s);
r = new_block(s->info, &right);
if (r < 0)
return r;
ln = dm_block_data(left);
rn = dm_block_data(right);
nr_left = le32_to_cpu(ln->header.nr_entries) / 2;
nr_right = le32_to_cpu(ln->header.nr_entries) - nr_left;
ln->header.nr_entries = cpu_to_le32(nr_left);
rn->header.flags = ln->header.flags;
rn->header.nr_entries = cpu_to_le32(nr_right);
rn->header.max_entries = ln->header.max_entries;
rn->header.value_size = ln->header.value_size;
memcpy(rn->keys, ln->keys + nr_left, nr_right * sizeof(rn->keys[0]));
size = le32_to_cpu(ln->header.flags) & INTERNAL_NODE ?
sizeof(uint64_t) : s->info->value_type.size;
memcpy(value_ptr(rn, 0, size), value_ptr(ln, nr_left, size),
size * nr_right);
/*
* Patch up the parent
*/
parent = shadow_parent(s);
pn = dm_block_data(parent);
location = cpu_to_le64(dm_block_location(left));
__dm_bless_for_disk(&location);
memcpy_disk(value_ptr(pn, parent_index, sizeof(__le64)),
&location, sizeof(__le64));
location = cpu_to_le64(dm_block_location(right));
__dm_bless_for_disk(&location);
r = insert_at(sizeof(__le64), pn, parent_index + 1,
le64_to_cpu(rn->keys[0]), &location);
if (r)
return r;
if (key < le64_to_cpu(rn->keys[0])) {
unlock_block(s->info, right);
s->nodes[1] = left;
} else {
unlock_block(s->info, left);
s->nodes[1] = right;
}
return 0;
}
/*
* Splits a node by creating two new children beneath the given node.
*
* Before:
* +----------+
* | A ++++++ |
* +----------+
*
*
* After:
* +------------+
* | A (shadow) |
* +------------+
* | |
* +------+ +----+
* | |
* v v
* +-------+ +-------+
* | B +++ | | C +++ |
* +-------+ +-------+
*/
static int btree_split_beneath(struct shadow_spine *s, uint64_t key)
{
int r;
size_t size;
unsigned nr_left, nr_right;
struct dm_block *left, *right, *new_parent;
struct node *pn, *ln, *rn;
__le64 val;
new_parent = shadow_current(s);
r = new_block(s->info, &left);
if (r < 0)
return r;
r = new_block(s->info, &right);
if (r < 0) {
/* FIXME: put left */
return r;
}
pn = dm_block_data(new_parent);
ln = dm_block_data(left);
rn = dm_block_data(right);
nr_left = le32_to_cpu(pn->header.nr_entries) / 2;
nr_right = le32_to_cpu(pn->header.nr_entries) - nr_left;
ln->header.flags = pn->header.flags;
ln->header.nr_entries = cpu_to_le32(nr_left);
ln->header.max_entries = pn->header.max_entries;
ln->header.value_size = pn->header.value_size;
rn->header.flags = pn->header.flags;
rn->header.nr_entries = cpu_to_le32(nr_right);
rn->header.max_entries = pn->header.max_entries;
rn->header.value_size = pn->header.value_size;
memcpy(ln->keys, pn->keys, nr_left * sizeof(pn->keys[0]));
memcpy(rn->keys, pn->keys + nr_left, nr_right * sizeof(pn->keys[0]));
size = le32_to_cpu(pn->header.flags) & INTERNAL_NODE ?
sizeof(__le64) : s->info->value_type.size;
memcpy(value_ptr(ln, 0, size), value_ptr(pn, 0, size), nr_left * size);
memcpy(value_ptr(rn, 0, size), value_ptr(pn, nr_left, size),
nr_right * size);
/* new_parent should just point to l and r now */
pn->header.flags = cpu_to_le32(INTERNAL_NODE);
pn->header.nr_entries = cpu_to_le32(2);
pn->header.max_entries = cpu_to_le32(
calc_max_entries(sizeof(__le64),
dm_bm_block_size(
dm_tm_get_bm(s->info->tm))));
pn->header.value_size = cpu_to_le32(sizeof(__le64));
val = cpu_to_le64(dm_block_location(left));
__dm_bless_for_disk(&val);
pn->keys[0] = ln->keys[0];
memcpy_disk(value_ptr(pn, 0, sizeof(__le64)), &val, sizeof(__le64));
val = cpu_to_le64(dm_block_location(right));
__dm_bless_for_disk(&val);
pn->keys[1] = rn->keys[0];
memcpy_disk(value_ptr(pn, 1, sizeof(__le64)), &val, sizeof(__le64));
/*
* Rejig the spine.  This is ugly, since it knows too
* much about the spine.
*/
if (s->nodes[0] != new_parent) {
unlock_block(s->info, s->nodes[0]);
s->nodes[0] = new_parent;
}
if (key < le64_to_cpu(rn->keys[0])) {
unlock_block(s->info, right);
s->nodes[1] = left;
} else {
unlock_block(s->info, left);
s->nodes[1] = right;
}
s->count = 2;
return 0;
}
static int btree_insert_raw(struct shadow_spine *s, dm_block_t root,
struct dm_btree_value_type *vt,
uint64_t key, unsigned *index)
{
int r, i = *index, top = 1;
struct node *node;
for (;;) {
r = shadow_step(s, root, vt);
if (r < 0)
return r;
node = dm_block_data(shadow_current(s));
/*
* We have to patch up the parent node.  Ugly, but I don't
* see a way to do this automatically as part of the spine
* op.
*/
if (shadow_has_parent(s) && i >= 0) { /* FIXME: second clause unness. */
__le64 location = cpu_to_le64(dm_block_location(shadow_current(s)));
__dm_bless_for_disk(&location);
memcpy_disk(value_ptr(dm_block_data(shadow_parent(s)), i, sizeof(uint64_t)),
&location, sizeof(__le64));
}
node = dm_block_data(shadow_current(s));
if (node->header.nr_entries == node->header.max_entries) {
if (top)
r = btree_split_beneath(s, key);
else
r = btree_split_sibling(s, root, i, key);
if (r < 0)
return r;
}
node = dm_block_data(shadow_current(s));
i = lower_bound(node, key);
if (le32_to_cpu(node->header.flags) & LEAF_NODE)
break;
if (i < 0) {
/* change the bounds on the lowest key */
node->keys[0] = cpu_to_le64(key);
i = 0;
}
root = value64(node, i);
top = 0;
}
if (i < 0 || le64_to_cpu(node->keys[i]) != key)
i++;
*index = i;
return 0;
}
static int insert(struct dm_btree_info *info, dm_block_t root,
uint64_t *keys, void *value, dm_block_t *new_root,
int *inserted)
__dm_written_to_disk(value)
{
int r, need_insert;
unsigned level, index = -1, last_level = info->levels - 1;
dm_block_t block = root;
struct shadow_spine spine;
struct node *n;
struct dm_btree_value_type le64_type;
le64_type.context = NULL;
le64_type.size = sizeof(__le64);
le64_type.inc = NULL;
le64_type.dec = NULL;
le64_type.equal = NULL;
init_shadow_spine(&spine, info);
for (level = 0; level < (info->levels - 1); level++) {
r = btree_insert_raw(&spine, block, &le64_type, keys[level], &index);
if (r < 0)
goto bad;
n = dm_block_data(shadow_current(&spine));
need_insert = ((index >= le32_to_cpu(n->header.nr_entries)) ||
(le64_to_cpu(n->keys[index]) != keys[level]));
if (need_insert) {
dm_block_t new_tree;
__le64 new_le;
r = dm_btree_empty(info, &new_tree);
if (r < 0)
goto bad;
new_le = cpu_to_le64(new_tree);
__dm_bless_for_disk(&new_le);
r = insert_at(sizeof(uint64_t), n, index,
keys[level], &new_le);
if (r)
goto bad;
}
if (level < last_level)
block = value64(n, index);
}
r = btree_insert_raw(&spine, block, &info->value_type,
keys[level], &index);
if (r < 0)
goto bad;
n = dm_block_data(shadow_current(&spine));
need_insert = ((index >= le32_to_cpu(n->header.nr_entries)) ||
(le64_to_cpu(n->keys[index]) != keys[level]));
if (need_insert) {
if (inserted)
*inserted = 1;
r = insert_at(info->value_type.size, n, index,
keys[level], value);
if (r)
goto bad_unblessed;
} else {
if (inserted)
*inserted = 0;
if (info->value_type.dec &&
(!info->value_type.equal ||
!info->value_type.equal(
info->value_type.context,
value_ptr(n, index, info->value_type.size),
value))) {
info->value_type.dec(info->value_type.context,
value_ptr(n, index, info->value_type.size));
}
memcpy_disk(value_ptr(n, index, info->value_type.size),
value, info->value_type.size);
}
*new_root = shadow_root(&spine);
exit_shadow_spine(&spine);
return 0;
bad:
__dm_unbless_for_disk(value);
bad_unblessed:
exit_shadow_spine(&spine);
return r;
}
int dm_btree_insert(struct dm_btree_info *info, dm_block_t root,
uint64_t *keys, void *value, dm_block_t *new_root)
__dm_written_to_disk(value)
{
return insert(info, root, keys, value, new_root, NULL);
}
EXPORT_SYMBOL_GPL(dm_btree_insert);
int dm_btree_insert_notify(struct dm_btree_info *info, dm_block_t root,
uint64_t *keys, void *value, dm_block_t *new_root,
int *inserted)
__dm_written_to_disk(value)
{
return insert(info, root, keys, value, new_root, inserted);
}
EXPORT_SYMBOL_GPL(dm_btree_insert_notify);
/*----------------------------------------------------------------*/
static int find_highest_key(struct ro_spine *s, dm_block_t block,
uint64_t *result_key, dm_block_t *next_block)
{
int i, r;
uint32_t flags;
do {
r = ro_step(s, block);
if (r < 0)
return r;
flags = le32_to_cpu(ro_node(s)->header.flags);
i = le32_to_cpu(ro_node(s)->header.nr_entries);
if (!i)
return -ENODATA;
else
i--;
*result_key = le64_to_cpu(ro_node(s)->keys[i]);
if (next_block || flags & INTERNAL_NODE)
block = value64(ro_node(s), i);
} while (flags & INTERNAL_NODE);
if (next_block)
*next_block = block;
return 0;
}
int dm_btree_find_highest_key(struct dm_btree_info *info, dm_block_t root,
uint64_t *result_keys)
{
int r = 0, count = 0, level;
struct ro_spine spine;
init_ro_spine(&spine, info);
for (level = 0; level < info->levels; level++) {
r = find_highest_key(&spine, root, result_keys + level,
level == info->levels - 1 ? NULL : &root);
if (r == -ENODATA) {
r = 0;
break;
} else if (r)
break;
count++;
}
exit_ro_spine(&spine);
return r ? r : count;
}
EXPORT_SYMBOL_GPL(dm_btree_find_highest_key);

View File

@ -0,0 +1,145 @@
/*
* Copyright (C) 2011 Red Hat, Inc.
*
* This file is released under the GPL.
*/
#ifndef _LINUX_DM_BTREE_H
#define _LINUX_DM_BTREE_H
#include "dm-block-manager.h"
struct dm_transaction_manager;
/*----------------------------------------------------------------*/
/*
* Annotations used to check that on-disk metadata is handled as little-endian.
*/
#ifdef __CHECKER__
# define __dm_written_to_disk(x) __releases(x)
# define __dm_reads_from_disk(x) __acquires(x)
# define __dm_bless_for_disk(x) __acquire(x)
# define __dm_unbless_for_disk(x) __release(x)
#else
# define __dm_written_to_disk(x)
# define __dm_reads_from_disk(x)
# define __dm_bless_for_disk(x)
# define __dm_unbless_for_disk(x)
#endif
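/*
* An informal sketch of how a caller hands a little-endian value to the
* btree ('info', 'root', 'keys' and 'new_root' are assumed to exist; this
* mirrors what insert() in dm-btree.c does internally):
*
*   __le64 value_le = cpu_to_le64(value);
*
*   __dm_bless_for_disk(&value_le);
*   r = dm_btree_insert(info, root, keys, &value_le, &new_root);
*
* On error paths that do not consume the value the callee calls
* __dm_unbless_for_disk(), so sparse can check that every blessed value
* is eventually written or unblessed.
*/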
/*----------------------------------------------------------------*/
/*
* Manipulates hierarchical B+ trees with 64-bit keys and arbitrary-sized
* values.
*/
/*
* Information about the values stored within the btree.
*/
struct dm_btree_value_type {
void *context;
/*
* The size in bytes of each value.
*/
uint32_t size;
/*
* Any of these methods can be safely set to NULL if you do not
* need the corresponding feature.
*/
/*
* The btree is making a duplicate of the value, for instance
* because previously-shared btree nodes have now diverged.
* The @value argument is the new copy, which this method may modify.
* (Probably it just wants to increment a reference count
* somewhere.) This method is _not_ called for insertion of a new
* value: It is assumed the ref count is already 1.
*/
void (*inc)(void *context, void *value);
/*
* This value is being deleted. The btree takes care of freeing
* the memory pointed to by @value. Often the dec method just
* needs to decrement a reference count somewhere.
*/
void (*dec)(void *context, void *value);
/*
* A test for equality between two values. When a value is
* overwritten with a new one, the old one has the dec method
* called _unless_ the new and old value are deemed equal.
*/
int (*equal)(void *context, void *value1, void *value2);
};
/*
* The shape and contents of a btree.
*/
struct dm_btree_info {
struct dm_transaction_manager *tm;
/*
* Number of nested btrees. (Not the depth of a single tree.)
*/
unsigned levels;
struct dm_btree_value_type value_type;
};
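/*
* An illustrative sketch, not part of the API proper: a btree holding
* little-endian 64-bit values that need no reference counting could be
* described like this ('tm' is assumed to be an existing transaction
* manager):
*
*   struct dm_btree_info info;
*
*   info.tm = tm;
*   info.levels = 1;
*   info.value_type.context = NULL;
*   info.value_type.size = sizeof(__le64);
*   info.value_type.inc = NULL;
*   info.value_type.dec = NULL;
*   info.value_type.equal = NULL;
*
* This mirrors the le64_type value type that the btree code uses internally
* for the upper levels of nested trees.
*/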
/*
* Set up an empty tree. O(1).
*/
int dm_btree_empty(struct dm_btree_info *info, dm_block_t *root);
/*
* Delete a tree. O(n) - this is the slow one! It can also block, so
* please don't call it on an IO path.
*/
int dm_btree_del(struct dm_btree_info *info, dm_block_t root);
/*
* All the lookup functions return -ENODATA if the key cannot be found.
*/
/*
* Tries to find a key that matches exactly. O(ln(n))
*/
int dm_btree_lookup(struct dm_btree_info *info, dm_block_t root,
uint64_t *keys, void *value_le);
/*
* Insertion (or overwrite an existing value). O(ln(n))
*/
int dm_btree_insert(struct dm_btree_info *info, dm_block_t root,
uint64_t *keys, void *value, dm_block_t *new_root)
__dm_written_to_disk(value);
/*
* A variant of insert that indicates whether it actually inserted or just
* overwrote. Useful if you're keeping track of the number of entries in a
* tree.
*/
int dm_btree_insert_notify(struct dm_btree_info *info, dm_block_t root,
uint64_t *keys, void *value, dm_block_t *new_root,
int *inserted)
__dm_written_to_disk(value);
/*
* Remove a key if present. This doesn't remove empty sub trees. Normally
* subtrees represent a separate entity, like a snapshot map, so this is
* correct behaviour. O(ln(n)).
*/
int dm_btree_remove(struct dm_btree_info *info, dm_block_t root,
uint64_t *keys, dm_block_t *new_root);
/*
* Returns < 0 on failure. Otherwise the number of key entries that have
* been filled out. Remember trees can have zero entries, and as such have
* no highest key.
*/
int dm_btree_find_highest_key(struct dm_btree_info *info, dm_block_t root,
uint64_t *result_keys);
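/*
* Putting the calls together -- an informal sketch for a single-level tree,
* with 'info' set up as in the example above and error handling omitted:
*
*   dm_block_t root, new_root;
*   uint64_t key = 42;
*   __le64 value_le = cpu_to_le64(123), result_le;
*   int r;
*
*   r = dm_btree_empty(&info, &root);
*   __dm_bless_for_disk(&value_le);
*   r = dm_btree_insert(&info, root, &key, &value_le, &new_root);
*   r = dm_btree_lookup(&info, new_root, &key, &result_le);
*   r = dm_btree_remove(&info, new_root, &key, &new_root);
*
* Insert and remove hand back the root of a new, copy-on-write version of
* the tree through @new_root; subsequent operations must use the most
* recent root.
*/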
#endif /* _LINUX_DM_BTREE_H */

View File

@ -0,0 +1,19 @@
/*
* Copyright (C) 2011 Red Hat, Inc.
*
* This file is released under the GPL.
*/
#ifndef _DM_PERSISTENT_DATA_INTERNAL_H
#define _DM_PERSISTENT_DATA_INTERNAL_H
#include "dm-block-manager.h"
static inline unsigned dm_hash_block(dm_block_t b, unsigned hash_mask)
{
const unsigned BIG_PRIME = 4294967291UL;
return (((unsigned) b) * BIG_PRIME) & hash_mask;
}
#endif /* _DM_PERSISTENT_DATA_INTERNAL_H */

View File

@ -0,0 +1,437 @@
/*
* Copyright (C) 2011 Red Hat, Inc.
*
* This file is released under the GPL.
*/
#include "dm-space-map-checker.h"
#include <linux/device-mapper.h>
#ifdef CONFIG_DM_DEBUG_SPACE_MAPS
#define DM_MSG_PREFIX "space map checker"
/*----------------------------------------------------------------*/
struct count_array {
dm_block_t nr;
dm_block_t nr_free;
uint32_t *counts;
};
static int ca_get_count(struct count_array *ca, dm_block_t b, uint32_t *count)
{
if (b >= ca->nr)
return -EINVAL;
*count = ca->counts[b];
return 0;
}
static int ca_count_more_than_one(struct count_array *ca, dm_block_t b, int *r)
{
if (b >= ca->nr)
return -EINVAL;
*r = ca->counts[b] > 1;
return 0;
}
static int ca_set_count(struct count_array *ca, dm_block_t b, uint32_t count)
{
uint32_t old_count;
if (b >= ca->nr)
return -EINVAL;
old_count = ca->counts[b];
if (!count && old_count)
ca->nr_free++;
else if (count && !old_count)
ca->nr_free--;
ca->counts[b] = count;
return 0;
}
static int ca_inc_block(struct count_array *ca, dm_block_t b)
{
if (b >= ca->nr)
return -EINVAL;
ca_set_count(ca, b, ca->counts[b] + 1);
return 0;
}
static int ca_dec_block(struct count_array *ca, dm_block_t b)
{
if (b >= ca->nr)
return -EINVAL;
BUG_ON(ca->counts[b] == 0);
ca_set_count(ca, b, ca->counts[b] - 1);
return 0;
}
static int ca_create(struct count_array *ca, struct dm_space_map *sm)
{
int r;
dm_block_t nr_blocks;
r = dm_sm_get_nr_blocks(sm, &nr_blocks);
if (r)
return r;
ca->nr = nr_blocks;
ca->nr_free = nr_blocks;
ca->counts = kzalloc(sizeof(*ca->counts) * nr_blocks, GFP_KERNEL);
if (!ca->counts)
return -ENOMEM;
return 0;
}
static int ca_load(struct count_array *ca, struct dm_space_map *sm)
{
int r;
uint32_t count;
dm_block_t nr_blocks, i;
r = dm_sm_get_nr_blocks(sm, &nr_blocks);
if (r)
return r;
BUG_ON(ca->nr != nr_blocks);
DMWARN("Loading debug space map from disk. This may take some time");
for (i = 0; i < nr_blocks; i++) {
r = dm_sm_get_count(sm, i, &count);
if (r) {
DMERR("load failed");
return r;
}
ca_set_count(ca, i, count);
}
DMWARN("Load complete");
return 0;
}
static int ca_extend(struct count_array *ca, dm_block_t extra_blocks)
{
dm_block_t nr_blocks = ca->nr + extra_blocks;
uint32_t *counts = kzalloc(sizeof(*counts) * nr_blocks, GFP_KERNEL);
if (!counts)
return -ENOMEM;
memcpy(counts, ca->counts, sizeof(*counts) * ca->nr);
kfree(ca->counts);
ca->nr = nr_blocks;
ca->nr_free += extra_blocks;
ca->counts = counts;
return 0;
}
static int ca_commit(struct count_array *old, struct count_array *new)
{
if (old->nr != new->nr) {
BUG_ON(old->nr > new->nr);
ca_extend(old, new->nr - old->nr);
}
BUG_ON(old->nr != new->nr);
old->nr_free = new->nr_free;
memcpy(old->counts, new->counts, sizeof(*old->counts) * old->nr);
return 0;
}
static void ca_destroy(struct count_array *ca)
{
kfree(ca->counts);
}
/*----------------------------------------------------------------*/
struct sm_checker {
struct dm_space_map sm;
struct count_array old_counts;
struct count_array counts;
struct dm_space_map *real_sm;
};
static void sm_checker_destroy(struct dm_space_map *sm)
{
struct sm_checker *smc = container_of(sm, struct sm_checker, sm);
dm_sm_destroy(smc->real_sm);
ca_destroy(&smc->old_counts);
ca_destroy(&smc->counts);
kfree(smc);
}
static int sm_checker_get_nr_blocks(struct dm_space_map *sm, dm_block_t *count)
{
struct sm_checker *smc = container_of(sm, struct sm_checker, sm);
int r = dm_sm_get_nr_blocks(smc->real_sm, count);
if (!r)
BUG_ON(smc->old_counts.nr != *count);
return r;
}
static int sm_checker_get_nr_free(struct dm_space_map *sm, dm_block_t *count)
{
struct sm_checker *smc = container_of(sm, struct sm_checker, sm);
int r = dm_sm_get_nr_free(smc->real_sm, count);
if (!r) {
/*
* Slow, but we know it's correct.
*/
dm_block_t b, n = 0;
for (b = 0; b < smc->old_counts.nr; b++)
if (smc->old_counts.counts[b] == 0 &&
smc->counts.counts[b] == 0)
n++;
if (n != *count)
DMERR("free block counts differ, checker %u, sm-disk:%u",
(unsigned) n, (unsigned) *count);
}
return r;
}
static int sm_checker_new_block(struct dm_space_map *sm, dm_block_t *b)
{
struct sm_checker *smc = container_of(sm, struct sm_checker, sm);
int r = dm_sm_new_block(smc->real_sm, b);
if (!r) {
BUG_ON(*b >= smc->old_counts.nr);
BUG_ON(smc->old_counts.counts[*b] != 0);
BUG_ON(*b >= smc->counts.nr);
BUG_ON(smc->counts.counts[*b] != 0);
ca_set_count(&smc->counts, *b, 1);
}
return r;
}
static int sm_checker_inc_block(struct dm_space_map *sm, dm_block_t b)
{
struct sm_checker *smc = container_of(sm, struct sm_checker, sm);
int r = dm_sm_inc_block(smc->real_sm, b);
int r2 = ca_inc_block(&smc->counts, b);
BUG_ON(r != r2);
return r;
}
static int sm_checker_dec_block(struct dm_space_map *sm, dm_block_t b)
{
struct sm_checker *smc = container_of(sm, struct sm_checker, sm);
int r = dm_sm_dec_block(smc->real_sm, b);
int r2 = ca_dec_block(&smc->counts, b);
BUG_ON(r != r2);
return r;
}
static int sm_checker_get_count(struct dm_space_map *sm, dm_block_t b, uint32_t *result)
{
struct sm_checker *smc = container_of(sm, struct sm_checker, sm);
uint32_t result2 = 0;
int r = dm_sm_get_count(smc->real_sm, b, result);
int r2 = ca_get_count(&smc->counts, b, &result2);
BUG_ON(r != r2);
if (!r)
BUG_ON(*result != result2);
return r;
}
static int sm_checker_count_more_than_one(struct dm_space_map *sm, dm_block_t b, int *result)
{
struct sm_checker *smc = container_of(sm, struct sm_checker, sm);
int result2 = 0;
int r = dm_sm_count_is_more_than_one(smc->real_sm, b, result);
int r2 = ca_count_more_than_one(&smc->counts, b, &result2);
BUG_ON(r != r2);
if (!r)
BUG_ON(!(*result) && result2);
return r;
}
static int sm_checker_set_count(struct dm_space_map *sm, dm_block_t b, uint32_t count)
{
struct sm_checker *smc = container_of(sm, struct sm_checker, sm);
uint32_t old_rc;
int r = dm_sm_set_count(smc->real_sm, b, count);
int r2;
BUG_ON(b >= smc->counts.nr);
old_rc = smc->counts.counts[b];
r2 = ca_set_count(&smc->counts, b, count);
BUG_ON(r != r2);
return r;
}
static int sm_checker_commit(struct dm_space_map *sm)
{
struct sm_checker *smc = container_of(sm, struct sm_checker, sm);
int r;
r = dm_sm_commit(smc->real_sm);
if (r)
return r;
r = ca_commit(&smc->old_counts, &smc->counts);
if (r)
return r;
return 0;
}
static int sm_checker_extend(struct dm_space_map *sm, dm_block_t extra_blocks)
{
struct sm_checker *smc = container_of(sm, struct sm_checker, sm);
int r = dm_sm_extend(smc->real_sm, extra_blocks);
if (r)
return r;
return ca_extend(&smc->counts, extra_blocks);
}
static int sm_checker_root_size(struct dm_space_map *sm, size_t *result)
{
struct sm_checker *smc = container_of(sm, struct sm_checker, sm);
return dm_sm_root_size(smc->real_sm, result);
}
static int sm_checker_copy_root(struct dm_space_map *sm, void *copy_to_here_le, size_t len)
{
struct sm_checker *smc = container_of(sm, struct sm_checker, sm);
return dm_sm_copy_root(smc->real_sm, copy_to_here_le, len);
}
/*----------------------------------------------------------------*/
static struct dm_space_map ops_ = {
.destroy = sm_checker_destroy,
.get_nr_blocks = sm_checker_get_nr_blocks,
.get_nr_free = sm_checker_get_nr_free,
.inc_block = sm_checker_inc_block,
.dec_block = sm_checker_dec_block,
.new_block = sm_checker_new_block,
.get_count = sm_checker_get_count,
.count_is_more_than_one = sm_checker_count_more_than_one,
.set_count = sm_checker_set_count,
.commit = sm_checker_commit,
.extend = sm_checker_extend,
.root_size = sm_checker_root_size,
.copy_root = sm_checker_copy_root
};
struct dm_space_map *dm_sm_checker_create(struct dm_space_map *sm)
{
int r;
struct sm_checker *smc;
if (!sm)
return NULL;
smc = kmalloc(sizeof(*smc), GFP_KERNEL);
if (!smc)
return NULL;
memcpy(&smc->sm, &ops_, sizeof(smc->sm));
r = ca_create(&smc->old_counts, sm);
if (r) {
kfree(smc);
return NULL;
}
r = ca_create(&smc->counts, sm);
if (r) {
ca_destroy(&smc->old_counts);
kfree(smc);
return NULL;
}
smc->real_sm = sm;
r = ca_load(&smc->counts, sm);
if (r) {
ca_destroy(&smc->counts);
ca_destroy(&smc->old_counts);
kfree(smc);
return NULL;
}
r = ca_commit(&smc->old_counts, &smc->counts);
if (r) {
ca_destroy(&smc->counts);
ca_destroy(&smc->old_counts);
kfree(smc);
return NULL;
}
return &smc->sm;
}
EXPORT_SYMBOL_GPL(dm_sm_checker_create);
struct dm_space_map *dm_sm_checker_create_fresh(struct dm_space_map *sm)
{
int r;
struct sm_checker *smc;
if (!sm)
return NULL;
smc = kmalloc(sizeof(*smc), GFP_KERNEL);
if (!smc)
return NULL;
memcpy(&smc->sm, &ops_, sizeof(smc->sm));
r = ca_create(&smc->old_counts, sm);
if (r) {
kfree(smc);
return NULL;
}
r = ca_create(&smc->counts, sm);
if (r) {
ca_destroy(&smc->old_counts);
kfree(smc);
return NULL;
}
smc->real_sm = sm;
return &smc->sm;
}
EXPORT_SYMBOL_GPL(dm_sm_checker_create_fresh);
/*----------------------------------------------------------------*/
#else
struct dm_space_map *dm_sm_checker_create(struct dm_space_map *sm)
{
return sm;
}
EXPORT_SYMBOL_GPL(dm_sm_checker_create);
struct dm_space_map *dm_sm_checker_create_fresh(struct dm_space_map *sm)
{
return sm;
}
EXPORT_SYMBOL_GPL(dm_sm_checker_create_fresh);
/*----------------------------------------------------------------*/
#endif

View File

@ -0,0 +1,26 @@
/*
* Copyright (C) 2011 Red Hat, Inc.
*
* This file is released under the GPL.
*/
#ifndef SNAPSHOTS_SPACE_MAP_CHECKER_H
#define SNAPSHOTS_SPACE_MAP_CHECKER_H
#include "dm-space-map.h"
/*----------------------------------------------------------------*/
/*
* This space map wraps a real on-disk space map, and verifies all of its
* operations. It uses a lot of memory, so only use it if you have a specific
* problem that you're debugging.
*
* Ownership of @sm passes.
*/
struct dm_space_map *dm_sm_checker_create(struct dm_space_map *sm);
struct dm_space_map *dm_sm_checker_create_fresh(struct dm_space_map *sm);
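/*
* Illustrative use (only the two declarations above are part of the
* interface; the surrounding code is an assumption):
*
*   struct dm_space_map *sm = dm_sm_checker_create(real_sm);
*
* On failure NULL is returned and @sm has not been destroyed; on success
* destroying the checker also destroys the wrapped space map.  When
* CONFIG_DM_DEBUG_SPACE_MAPS is not set both functions simply return the
* space map they were given.
*/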
/*----------------------------------------------------------------*/
#endif

View File

@ -0,0 +1,705 @@
/*
* Copyright (C) 2011 Red Hat, Inc.
*
* This file is released under the GPL.
*/
#include "dm-space-map-common.h"
#include "dm-transaction-manager.h"
#include <linux/bitops.h>
#include <linux/device-mapper.h>
#define DM_MSG_PREFIX "space map common"
/*----------------------------------------------------------------*/
/*
* Index validator.
*/
#define INDEX_CSUM_XOR 160478
static void index_prepare_for_write(struct dm_block_validator *v,
struct dm_block *b,
size_t block_size)
{
struct disk_metadata_index *mi_le = dm_block_data(b);
mi_le->blocknr = cpu_to_le64(dm_block_location(b));
mi_le->csum = cpu_to_le32(dm_bm_checksum(&mi_le->padding,
block_size - sizeof(__le32),
INDEX_CSUM_XOR));
}
static int index_check(struct dm_block_validator *v,
struct dm_block *b,
size_t block_size)
{
struct disk_metadata_index *mi_le = dm_block_data(b);
__le32 csum_disk;
if (dm_block_location(b) != le64_to_cpu(mi_le->blocknr)) {
DMERR("index_check failed blocknr %llu wanted %llu",
le64_to_cpu(mi_le->blocknr), dm_block_location(b));
return -ENOTBLK;
}
csum_disk = cpu_to_le32(dm_bm_checksum(&mi_le->padding,
block_size - sizeof(__le32),
INDEX_CSUM_XOR));
if (csum_disk != mi_le->csum) {
DMERR("index_check failed csum %u wanted %u",
le32_to_cpu(csum_disk), le32_to_cpu(mi_le->csum));
return -EILSEQ;
}
return 0;
}
static struct dm_block_validator index_validator = {
.name = "index",
.prepare_for_write = index_prepare_for_write,
.check = index_check
};
/*----------------------------------------------------------------*/
/*
* Bitmap validator
*/
#define BITMAP_CSUM_XOR 240779
static void bitmap_prepare_for_write(struct dm_block_validator *v,
struct dm_block *b,
size_t block_size)
{
struct disk_bitmap_header *disk_header = dm_block_data(b);
disk_header->blocknr = cpu_to_le64(dm_block_location(b));
disk_header->csum = cpu_to_le32(dm_bm_checksum(&disk_header->not_used,
block_size - sizeof(__le32),
BITMAP_CSUM_XOR));
}
static int bitmap_check(struct dm_block_validator *v,
struct dm_block *b,
size_t block_size)
{
struct disk_bitmap_header *disk_header = dm_block_data(b);
__le32 csum_disk;
if (dm_block_location(b) != le64_to_cpu(disk_header->blocknr)) {
DMERR("bitmap check failed blocknr %llu wanted %llu",
le64_to_cpu(disk_header->blocknr), dm_block_location(b));
return -ENOTBLK;
}
csum_disk = cpu_to_le32(dm_bm_checksum(&disk_header->not_used,
block_size - sizeof(__le32),
BITMAP_CSUM_XOR));
if (csum_disk != disk_header->csum) {
DMERR("bitmap check failed csum %u wanted %u",
le32_to_cpu(csum_disk), le32_to_cpu(disk_header->csum));
return -EILSEQ;
}
return 0;
}
static struct dm_block_validator dm_sm_bitmap_validator = {
.name = "sm_bitmap",
.prepare_for_write = bitmap_prepare_for_write,
.check = bitmap_check
};
/*----------------------------------------------------------------*/
#define ENTRIES_PER_WORD 32
#define ENTRIES_SHIFT 5
static void *dm_bitmap_data(struct dm_block *b)
{
return dm_block_data(b) + sizeof(struct disk_bitmap_header);
}
#define WORD_MASK_HIGH 0xAAAAAAAAAAAAAAAAULL
static unsigned bitmap_word_used(void *addr, unsigned b)
{
__le64 *words_le = addr;
__le64 *w_le = words_le + (b >> ENTRIES_SHIFT);
uint64_t bits = le64_to_cpu(*w_le);
uint64_t mask = (bits + WORD_MASK_HIGH + 1) & WORD_MASK_HIGH;
return !(~bits & mask);
}
static unsigned sm_lookup_bitmap(void *addr, unsigned b)
{
__le64 *words_le = addr;
__le64 *w_le = words_le + (b >> ENTRIES_SHIFT);
unsigned hi, lo;
b = (b & (ENTRIES_PER_WORD - 1)) << 1;
hi = !!test_bit_le(b, (void *) w_le);
lo = !!test_bit_le(b + 1, (void *) w_le);
return (hi << 1) | lo;
}
static void sm_set_bitmap(void *addr, unsigned b, unsigned val)
{
__le64 *words_le = addr;
__le64 *w_le = words_le + (b >> ENTRIES_SHIFT);
b = (b & (ENTRIES_PER_WORD - 1)) << 1;
if (val & 2)
__set_bit_le(b, (void *) w_le);
else
__clear_bit_le(b, (void *) w_le);
if (val & 1)
__set_bit_le(b + 1, (void *) w_le);
else
__clear_bit_le(b + 1, (void *) w_le);
}
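/*
* Each block therefore gets a two bit entry in the bitmap.  As used by
* sm_ll_lookup() and sm_ll_insert() below, the values mean:
*
*   0      block is free
*   1, 2   exact reference count
*   3      count is 3 or more; the real count lives in the ref_count
*          btree, keyed by block number
*
* For example, setting a block's count to 5 stores 3 in its bitmap entry
* and inserts cpu_to_le32(5) into the ref_count tree.
*/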
static int sm_find_free(void *addr, unsigned begin, unsigned end,
unsigned *result)
{
while (begin < end) {
if (!(begin & (ENTRIES_PER_WORD - 1)) &&
bitmap_word_used(addr, begin)) {
begin += ENTRIES_PER_WORD;
continue;
}
if (!sm_lookup_bitmap(addr, begin)) {
*result = begin;
return 0;
}
begin++;
}
return -ENOSPC;
}
/*----------------------------------------------------------------*/
static int sm_ll_init(struct ll_disk *ll, struct dm_transaction_manager *tm)
{
ll->tm = tm;
ll->bitmap_info.tm = tm;
ll->bitmap_info.levels = 1;
/*
* Because the new bitmap blocks are created via a shadow
* operation, the old entry has already had its reference count
* decremented and we don't need the btree to do any bookkeeping.
*/
ll->bitmap_info.value_type.size = sizeof(struct disk_index_entry);
ll->bitmap_info.value_type.inc = NULL;
ll->bitmap_info.value_type.dec = NULL;
ll->bitmap_info.value_type.equal = NULL;
ll->ref_count_info.tm = tm;
ll->ref_count_info.levels = 1;
ll->ref_count_info.value_type.size = sizeof(uint32_t);
ll->ref_count_info.value_type.inc = NULL;
ll->ref_count_info.value_type.dec = NULL;
ll->ref_count_info.value_type.equal = NULL;
ll->block_size = dm_bm_block_size(dm_tm_get_bm(tm));
if (ll->block_size > (1 << 30)) {
DMERR("block size too big to hold bitmaps");
return -EINVAL;
}
ll->entries_per_block = (ll->block_size - sizeof(struct disk_bitmap_header)) *
ENTRIES_PER_BYTE;
ll->nr_blocks = 0;
ll->bitmap_root = 0;
ll->ref_count_root = 0;
return 0;
}
int sm_ll_extend(struct ll_disk *ll, dm_block_t extra_blocks)
{
int r;
dm_block_t i, nr_blocks, nr_indexes;
unsigned old_blocks, blocks;
nr_blocks = ll->nr_blocks + extra_blocks;
old_blocks = dm_sector_div_up(ll->nr_blocks, ll->entries_per_block);
blocks = dm_sector_div_up(nr_blocks, ll->entries_per_block);
nr_indexes = dm_sector_div_up(nr_blocks, ll->entries_per_block);
if (nr_indexes > ll->max_entries(ll)) {
DMERR("space map too large");
return -EINVAL;
}
for (i = old_blocks; i < blocks; i++) {
struct dm_block *b;
struct disk_index_entry idx;
r = dm_tm_new_block(ll->tm, &dm_sm_bitmap_validator, &b);
if (r < 0)
return r;
idx.blocknr = cpu_to_le64(dm_block_location(b));
r = dm_tm_unlock(ll->tm, b);
if (r < 0)
return r;
idx.nr_free = cpu_to_le32(ll->entries_per_block);
idx.none_free_before = 0;
r = ll->save_ie(ll, i, &idx);
if (r < 0)
return r;
}
ll->nr_blocks = nr_blocks;
return 0;
}
int sm_ll_lookup_bitmap(struct ll_disk *ll, dm_block_t b, uint32_t *result)
{
int r;
dm_block_t index = b;
struct disk_index_entry ie_disk;
struct dm_block *blk;
b = do_div(index, ll->entries_per_block);
r = ll->load_ie(ll, index, &ie_disk);
if (r < 0)
return r;
r = dm_tm_read_lock(ll->tm, le64_to_cpu(ie_disk.blocknr),
&dm_sm_bitmap_validator, &blk);
if (r < 0)
return r;
*result = sm_lookup_bitmap(dm_bitmap_data(blk), b);
return dm_tm_unlock(ll->tm, blk);
}
int sm_ll_lookup(struct ll_disk *ll, dm_block_t b, uint32_t *result)
{
__le32 le_rc;
int r = sm_ll_lookup_bitmap(ll, b, result);
if (r)
return r;
if (*result != 3)
return r;
r = dm_btree_lookup(&ll->ref_count_info, ll->ref_count_root, &b, &le_rc);
if (r < 0)
return r;
*result = le32_to_cpu(le_rc);
return r;
}
int sm_ll_find_free_block(struct ll_disk *ll, dm_block_t begin,
dm_block_t end, dm_block_t *result)
{
int r;
struct disk_index_entry ie_disk;
dm_block_t i, index_begin = begin;
dm_block_t index_end = dm_sector_div_up(end, ll->entries_per_block);
/*
* FIXME: Use shifts
*/
begin = do_div(index_begin, ll->entries_per_block);
end = do_div(end, ll->entries_per_block);
for (i = index_begin; i < index_end; i++, begin = 0) {
struct dm_block *blk;
unsigned position;
uint32_t bit_end;
r = ll->load_ie(ll, i, &ie_disk);
if (r < 0)
return r;
if (le32_to_cpu(ie_disk.nr_free) == 0)
continue;
r = dm_tm_read_lock(ll->tm, le64_to_cpu(ie_disk.blocknr),
&dm_sm_bitmap_validator, &blk);
if (r < 0)
return r;
bit_end = (i == index_end - 1) ? end : ll->entries_per_block;
r = sm_find_free(dm_bitmap_data(blk),
max_t(unsigned, begin, le32_to_cpu(ie_disk.none_free_before)),
bit_end, &position);
if (r == -ENOSPC) {
/*
* This might happen because we started searching
* part way through the bitmap.
*/
dm_tm_unlock(ll->tm, blk);
continue;
} else if (r < 0) {
dm_tm_unlock(ll->tm, blk);
return r;
}
r = dm_tm_unlock(ll->tm, blk);
if (r < 0)
return r;
*result = i * ll->entries_per_block + (dm_block_t) position;
return 0;
}
return -ENOSPC;
}
int sm_ll_insert(struct ll_disk *ll, dm_block_t b,
uint32_t ref_count, enum allocation_event *ev)
{
int r;
uint32_t bit, old;
struct dm_block *nb;
dm_block_t index = b;
struct disk_index_entry ie_disk;
void *bm_le;
int inc;
bit = do_div(index, ll->entries_per_block);
r = ll->load_ie(ll, index, &ie_disk);
if (r < 0)
return r;
r = dm_tm_shadow_block(ll->tm, le64_to_cpu(ie_disk.blocknr),
&dm_sm_bitmap_validator, &nb, &inc);
if (r < 0) {
DMERR("dm_tm_shadow_block() failed");
return r;
}
ie_disk.blocknr = cpu_to_le64(dm_block_location(nb));
bm_le = dm_bitmap_data(nb);
old = sm_lookup_bitmap(bm_le, bit);
if (ref_count <= 2) {
sm_set_bitmap(bm_le, bit, ref_count);
r = dm_tm_unlock(ll->tm, nb);
if (r < 0)
return r;
#if 0
/* FIXME: dm_btree_remove doesn't handle this yet */
if (old > 2) {
r = dm_btree_remove(&ll->ref_count_info,
ll->ref_count_root,
&b, &ll->ref_count_root);
if (r)
return r;
}
#endif
} else {
__le32 le_rc = cpu_to_le32(ref_count);
sm_set_bitmap(bm_le, bit, 3);
r = dm_tm_unlock(ll->tm, nb);
if (r < 0)
return r;
__dm_bless_for_disk(&le_rc);
r = dm_btree_insert(&ll->ref_count_info, ll->ref_count_root,
&b, &le_rc, &ll->ref_count_root);
if (r < 0) {
DMERR("ref count insert failed");
return r;
}
}
if (ref_count && !old) {
*ev = SM_ALLOC;
ll->nr_allocated++;
ie_disk.nr_free = cpu_to_le32(le32_to_cpu(ie_disk.nr_free) - 1);
if (le32_to_cpu(ie_disk.none_free_before) == bit)
ie_disk.none_free_before = cpu_to_le32(bit + 1);
} else if (old && !ref_count) {
*ev = SM_FREE;
ll->nr_allocated--;
ie_disk.nr_free = cpu_to_le32(le32_to_cpu(ie_disk.nr_free) + 1);
ie_disk.none_free_before = cpu_to_le32(min(le32_to_cpu(ie_disk.none_free_before), bit));
}
return ll->save_ie(ll, index, &ie_disk);
}
int sm_ll_inc(struct ll_disk *ll, dm_block_t b, enum allocation_event *ev)
{
int r;
uint32_t rc;
r = sm_ll_lookup(ll, b, &rc);
if (r)
return r;
return sm_ll_insert(ll, b, rc + 1, ev);
}
int sm_ll_dec(struct ll_disk *ll, dm_block_t b, enum allocation_event *ev)
{
int r;
uint32_t rc;
r = sm_ll_lookup(ll, b, &rc);
if (r)
return r;
if (!rc)
return -EINVAL;
return sm_ll_insert(ll, b, rc - 1, ev);
}
int sm_ll_commit(struct ll_disk *ll)
{
return ll->commit(ll);
}
/*----------------------------------------------------------------*/
static int metadata_ll_load_ie(struct ll_disk *ll, dm_block_t index,
struct disk_index_entry *ie)
{
memcpy(ie, ll->mi_le.index + index, sizeof(*ie));
return 0;
}
static int metadata_ll_save_ie(struct ll_disk *ll, dm_block_t index,
struct disk_index_entry *ie)
{
memcpy(ll->mi_le.index + index, ie, sizeof(*ie));
return 0;
}
static int metadata_ll_init_index(struct ll_disk *ll)
{
int r;
struct dm_block *b;
r = dm_tm_new_block(ll->tm, &index_validator, &b);
if (r < 0)
return r;
memcpy(dm_block_data(b), &ll->mi_le, sizeof(ll->mi_le));
ll->bitmap_root = dm_block_location(b);
return dm_tm_unlock(ll->tm, b);
}
static int metadata_ll_open(struct ll_disk *ll)
{
int r;
struct dm_block *block;
r = dm_tm_read_lock(ll->tm, ll->bitmap_root,
&index_validator, &block);
if (r)
return r;
memcpy(&ll->mi_le, dm_block_data(block), sizeof(ll->mi_le));
return dm_tm_unlock(ll->tm, block);
}
static dm_block_t metadata_ll_max_entries(struct ll_disk *ll)
{
return MAX_METADATA_BITMAPS;
}
static int metadata_ll_commit(struct ll_disk *ll)
{
int r, inc;
struct dm_block *b;
r = dm_tm_shadow_block(ll->tm, ll->bitmap_root, &index_validator, &b, &inc);
if (r)
return r;
memcpy(dm_block_data(b), &ll->mi_le, sizeof(ll->mi_le));
ll->bitmap_root = dm_block_location(b);
return dm_tm_unlock(ll->tm, b);
}
int sm_ll_new_metadata(struct ll_disk *ll, struct dm_transaction_manager *tm)
{
int r;
r = sm_ll_init(ll, tm);
if (r < 0)
return r;
ll->load_ie = metadata_ll_load_ie;
ll->save_ie = metadata_ll_save_ie;
ll->init_index = metadata_ll_init_index;
ll->open_index = metadata_ll_open;
ll->max_entries = metadata_ll_max_entries;
ll->commit = metadata_ll_commit;
ll->nr_blocks = 0;
ll->nr_allocated = 0;
r = ll->init_index(ll);
if (r < 0)
return r;
r = dm_btree_empty(&ll->ref_count_info, &ll->ref_count_root);
if (r < 0)
return r;
return 0;
}
int sm_ll_open_metadata(struct ll_disk *ll, struct dm_transaction_manager *tm,
void *root_le, size_t len)
{
int r;
struct disk_sm_root *smr = root_le;
if (len < sizeof(struct disk_sm_root)) {
DMERR("sm_metadata root too small");
return -ENOMEM;
}
r = sm_ll_init(ll, tm);
if (r < 0)
return r;
ll->load_ie = metadata_ll_load_ie;
ll->save_ie = metadata_ll_save_ie;
ll->init_index = metadata_ll_init_index;
ll->open_index = metadata_ll_open;
ll->max_entries = metadata_ll_max_entries;
ll->commit = metadata_ll_commit;
ll->nr_blocks = le64_to_cpu(smr->nr_blocks);
ll->nr_allocated = le64_to_cpu(smr->nr_allocated);
ll->bitmap_root = le64_to_cpu(smr->bitmap_root);
ll->ref_count_root = le64_to_cpu(smr->ref_count_root);
return ll->open_index(ll);
}
/*----------------------------------------------------------------*/
static int disk_ll_load_ie(struct ll_disk *ll, dm_block_t index,
struct disk_index_entry *ie)
{
return dm_btree_lookup(&ll->bitmap_info, ll->bitmap_root, &index, ie);
}
static int disk_ll_save_ie(struct ll_disk *ll, dm_block_t index,
struct disk_index_entry *ie)
{
__dm_bless_for_disk(ie);
return dm_btree_insert(&ll->bitmap_info, ll->bitmap_root,
&index, ie, &ll->bitmap_root);
}
static int disk_ll_init_index(struct ll_disk *ll)
{
return dm_btree_empty(&ll->bitmap_info, &ll->bitmap_root);
}
static int disk_ll_open(struct ll_disk *ll)
{
/* nothing to do */
return 0;
}
static dm_block_t disk_ll_max_entries(struct ll_disk *ll)
{
return -1ULL;
}
static int disk_ll_commit(struct ll_disk *ll)
{
return 0;
}
int sm_ll_new_disk(struct ll_disk *ll, struct dm_transaction_manager *tm)
{
int r;
r = sm_ll_init(ll, tm);
if (r < 0)
return r;
ll->load_ie = disk_ll_load_ie;
ll->save_ie = disk_ll_save_ie;
ll->init_index = disk_ll_init_index;
ll->open_index = disk_ll_open;
ll->max_entries = disk_ll_max_entries;
ll->commit = disk_ll_commit;
ll->nr_blocks = 0;
ll->nr_allocated = 0;
r = ll->init_index(ll);
if (r < 0)
return r;
r = dm_btree_empty(&ll->ref_count_info, &ll->ref_count_root);
if (r < 0)
return r;
return 0;
}
int sm_ll_open_disk(struct ll_disk *ll, struct dm_transaction_manager *tm,
void *root_le, size_t len)
{
int r;
struct disk_sm_root *smr = root_le;
if (len < sizeof(struct disk_sm_root)) {
DMERR("sm_metadata root too small");
return -ENOMEM;
}
r = sm_ll_init(ll, tm);
if (r < 0)
return r;
ll->load_ie = disk_ll_load_ie;
ll->save_ie = disk_ll_save_ie;
ll->init_index = disk_ll_init_index;
ll->open_index = disk_ll_open;
ll->max_entries = disk_ll_max_entries;
ll->commit = disk_ll_commit;
ll->nr_blocks = le64_to_cpu(smr->nr_blocks);
ll->nr_allocated = le64_to_cpu(smr->nr_allocated);
ll->bitmap_root = le64_to_cpu(smr->bitmap_root);
ll->ref_count_root = le64_to_cpu(smr->ref_count_root);
return ll->open_index(ll);
}
/*----------------------------------------------------------------*/

View File

@ -0,0 +1,126 @@
/*
* Copyright (C) 2011 Red Hat, Inc.
*
* This file is released under the GPL.
*/
#ifndef DM_SPACE_MAP_COMMON_H
#define DM_SPACE_MAP_COMMON_H
#include "dm-btree.h"
/*----------------------------------------------------------------*/
/*
* Low level disk format
*
* Bitmap btree
* ------------
*
* Each value stored in the btree is an index_entry. This points to a
* block that is used as a bitmap. Within the bitmap we hold 2 bits per
* entry, which represent UNUSED = 0, REF_COUNT = 1, REF_COUNT = 2 and
* REF_COUNT = many.
*
* Refcount btree
* --------------
*
* Any entry that has a ref count higher than 2 gets entered in the ref
* count tree. The leaf values for this tree are the 32-bit ref counts.
*/
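To make the 2-bits-per-entry encoding concrete, here is a minimal sketch of packing four reference counts into each byte. It is purely illustrative: the kernel's own sm_lookup_bitmap()/sm_set_bitmap() helpers live in dm-space-map-common.c (not shown in this hunk) and use their own word-oriented layout. The only point carried over is that a stored value of 3 means "consult the ref-count btree for the real count".

#include <stdint.h>

/* Four 2-bit entries per byte; entry 0 occupies the lowest two bits. */
static unsigned sketch_lookup(const uint8_t *bits, unsigned entry)
{
	return (bits[entry / 4] >> ((entry % 4) * 2)) & 3;
}

static void sketch_set(uint8_t *bits, unsigned entry, unsigned value)
{
	unsigned shift = (entry % 4) * 2;

	bits[entry / 4] &= ~(3u << shift);	/* clear the old 2-bit value */
	bits[entry / 4] |= (value & 3) << shift;	/* 0, 1, 2, or 3 = "many" */
}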
struct disk_index_entry {
__le64 blocknr;
__le32 nr_free;
__le32 none_free_before;
} __packed;
#define MAX_METADATA_BITMAPS 255
struct disk_metadata_index {
__le32 csum;
__le32 padding;
__le64 blocknr;
struct disk_index_entry index[MAX_METADATA_BITMAPS];
} __packed;
struct ll_disk;
typedef int (*load_ie_fn)(struct ll_disk *ll, dm_block_t index, struct disk_index_entry *result);
typedef int (*save_ie_fn)(struct ll_disk *ll, dm_block_t index, struct disk_index_entry *ie);
typedef int (*init_index_fn)(struct ll_disk *ll);
typedef int (*open_index_fn)(struct ll_disk *ll);
typedef dm_block_t (*max_index_entries_fn)(struct ll_disk *ll);
typedef int (*commit_fn)(struct ll_disk *ll);
struct ll_disk {
struct dm_transaction_manager *tm;
struct dm_btree_info bitmap_info;
struct dm_btree_info ref_count_info;
uint32_t block_size;
uint32_t entries_per_block;
dm_block_t nr_blocks;
dm_block_t nr_allocated;
/*
* bitmap_root may be a btree root or a simple index.
*/
dm_block_t bitmap_root;
dm_block_t ref_count_root;
struct disk_metadata_index mi_le;
load_ie_fn load_ie;
save_ie_fn save_ie;
init_index_fn init_index;
open_index_fn open_index;
max_index_entries_fn max_entries;
commit_fn commit;
};
struct disk_sm_root {
__le64 nr_blocks;
__le64 nr_allocated;
__le64 bitmap_root;
__le64 ref_count_root;
} __packed;
#define ENTRIES_PER_BYTE 4
struct disk_bitmap_header {
__le32 csum;
__le32 not_used;
__le64 blocknr;
} __packed;
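As a worked example of what these constants imply, assuming the common 4096-byte metadata block size (the block size is not fixed by this header, so treat the numbers as illustrative):

/*
 * entries_per_block = (block_size - sizeof(struct disk_bitmap_header)) * ENTRIES_PER_BYTE
 *                   = (4096 - 16) * 4
 *                   = 16320 reference counts per bitmap block.
 *
 * The metadata index holds at most MAX_METADATA_BITMAPS (255) such bitmaps,
 * i.e. roughly 255 * 16320 ~= 4.16 million metadata blocks, or about 16 GiB
 * of metadata at a 4 KiB block size.
 */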
enum allocation_event {
SM_NONE,
SM_ALLOC,
SM_FREE,
};
/*----------------------------------------------------------------*/
int sm_ll_extend(struct ll_disk *ll, dm_block_t extra_blocks);
int sm_ll_lookup_bitmap(struct ll_disk *ll, dm_block_t b, uint32_t *result);
int sm_ll_lookup(struct ll_disk *ll, dm_block_t b, uint32_t *result);
int sm_ll_find_free_block(struct ll_disk *ll, dm_block_t begin,
dm_block_t end, dm_block_t *result);
int sm_ll_insert(struct ll_disk *ll, dm_block_t b, uint32_t ref_count, enum allocation_event *ev);
int sm_ll_inc(struct ll_disk *ll, dm_block_t b, enum allocation_event *ev);
int sm_ll_dec(struct ll_disk *ll, dm_block_t b, enum allocation_event *ev);
int sm_ll_commit(struct ll_disk *ll);
int sm_ll_new_metadata(struct ll_disk *ll, struct dm_transaction_manager *tm);
int sm_ll_open_metadata(struct ll_disk *ll, struct dm_transaction_manager *tm,
void *root_le, size_t len);
int sm_ll_new_disk(struct ll_disk *ll, struct dm_transaction_manager *tm);
int sm_ll_open_disk(struct ll_disk *ll, struct dm_transaction_manager *tm,
void *root_le, size_t len);
/*----------------------------------------------------------------*/
#endif /* DM_SPACE_MAP_COMMON_H */

View File

@ -0,0 +1,335 @@
/*
* Copyright (C) 2011 Red Hat, Inc.
*
* This file is released under the GPL.
*/
#include "dm-space-map-checker.h"
#include "dm-space-map-common.h"
#include "dm-space-map-disk.h"
#include "dm-space-map.h"
#include "dm-transaction-manager.h"
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/module.h>
#include <linux/device-mapper.h>
#define DM_MSG_PREFIX "space map disk"
/*----------------------------------------------------------------*/
/*
* Space map interface.
*/
struct sm_disk {
struct dm_space_map sm;
struct ll_disk ll;
struct ll_disk old_ll;
dm_block_t begin;
dm_block_t nr_allocated_this_transaction;
};
static void sm_disk_destroy(struct dm_space_map *sm)
{
struct sm_disk *smd = container_of(sm, struct sm_disk, sm);
kfree(smd);
}
static int sm_disk_extend(struct dm_space_map *sm, dm_block_t extra_blocks)
{
struct sm_disk *smd = container_of(sm, struct sm_disk, sm);
return sm_ll_extend(&smd->ll, extra_blocks);
}
static int sm_disk_get_nr_blocks(struct dm_space_map *sm, dm_block_t *count)
{
struct sm_disk *smd = container_of(sm, struct sm_disk, sm);
*count = smd->old_ll.nr_blocks;
return 0;
}
static int sm_disk_get_nr_free(struct dm_space_map *sm, dm_block_t *count)
{
struct sm_disk *smd = container_of(sm, struct sm_disk, sm);
*count = (smd->old_ll.nr_blocks - smd->old_ll.nr_allocated) - smd->nr_allocated_this_transaction;
return 0;
}
static int sm_disk_get_count(struct dm_space_map *sm, dm_block_t b,
uint32_t *result)
{
struct sm_disk *smd = container_of(sm, struct sm_disk, sm);
return sm_ll_lookup(&smd->ll, b, result);
}
static int sm_disk_count_is_more_than_one(struct dm_space_map *sm, dm_block_t b,
int *result)
{
int r;
uint32_t count;
r = sm_disk_get_count(sm, b, &count);
if (r)
return r;
*result = count > 1;
return 0;
}
static int sm_disk_set_count(struct dm_space_map *sm, dm_block_t b,
uint32_t count)
{
int r;
uint32_t old_count;
enum allocation_event ev;
struct sm_disk *smd = container_of(sm, struct sm_disk, sm);
r = sm_ll_insert(&smd->ll, b, count, &ev);
if (!r) {
switch (ev) {
case SM_NONE:
break;
case SM_ALLOC:
/*
* This _must_ be free in the prior transaction
* otherwise we've lost atomicity.
*/
smd->nr_allocated_this_transaction++;
break;
case SM_FREE:
/*
* It's only free if it's also free in the last
* transaction.
*/
r = sm_ll_lookup(&smd->old_ll, b, &old_count);
if (r)
return r;
if (!old_count)
smd->nr_allocated_this_transaction--;
break;
}
}
return r;
}
static int sm_disk_inc_block(struct dm_space_map *sm, dm_block_t b)
{
int r;
enum allocation_event ev;
struct sm_disk *smd = container_of(sm, struct sm_disk, sm);
r = sm_ll_inc(&smd->ll, b, &ev);
if (!r && (ev == SM_ALLOC))
/*
* This _must_ be free in the prior transaction
* otherwise we've lost atomicity.
*/
smd->nr_allocated_this_transaction++;
return r;
}
static int sm_disk_dec_block(struct dm_space_map *sm, dm_block_t b)
{
int r;
uint32_t old_count;
enum allocation_event ev;
struct sm_disk *smd = container_of(sm, struct sm_disk, sm);
r = sm_ll_dec(&smd->ll, b, &ev);
if (!r && (ev == SM_FREE)) {
/*
* It's only free if it's also free in the last
* transaction.
*/
r = sm_ll_lookup(&smd->old_ll, b, &old_count);
if (r)
return r;
if (!old_count)
smd->nr_allocated_this_transaction--;
}
return r;
}
static int sm_disk_new_block(struct dm_space_map *sm, dm_block_t *b)
{
int r;
enum allocation_event ev;
struct sm_disk *smd = container_of(sm, struct sm_disk, sm);
/* FIXME: we should loop round a couple of times */
r = sm_ll_find_free_block(&smd->old_ll, smd->begin, smd->old_ll.nr_blocks, b);
if (r)
return r;
smd->begin = *b + 1;
r = sm_ll_inc(&smd->ll, *b, &ev);
if (!r) {
BUG_ON(ev != SM_ALLOC);
smd->nr_allocated_this_transaction++;
}
return r;
}
static int sm_disk_commit(struct dm_space_map *sm)
{
int r;
dm_block_t nr_free;
struct sm_disk *smd = container_of(sm, struct sm_disk, sm);
r = sm_disk_get_nr_free(sm, &nr_free);
if (r)
return r;
r = sm_ll_commit(&smd->ll);
if (r)
return r;
memcpy(&smd->old_ll, &smd->ll, sizeof(smd->old_ll));
smd->begin = 0;
smd->nr_allocated_this_transaction = 0;
r = sm_disk_get_nr_free(sm, &nr_free);
if (r)
return r;
return 0;
}
static int sm_disk_root_size(struct dm_space_map *sm, size_t *result)
{
*result = sizeof(struct disk_sm_root);
return 0;
}
static int sm_disk_copy_root(struct dm_space_map *sm, void *where_le, size_t max)
{
struct sm_disk *smd = container_of(sm, struct sm_disk, sm);
struct disk_sm_root root_le;
root_le.nr_blocks = cpu_to_le64(smd->ll.nr_blocks);
root_le.nr_allocated = cpu_to_le64(smd->ll.nr_allocated);
root_le.bitmap_root = cpu_to_le64(smd->ll.bitmap_root);
root_le.ref_count_root = cpu_to_le64(smd->ll.ref_count_root);
if (max < sizeof(root_le))
return -ENOSPC;
memcpy(where_le, &root_le, sizeof(root_le));
return 0;
}
/*----------------------------------------------------------------*/
static struct dm_space_map ops = {
.destroy = sm_disk_destroy,
.extend = sm_disk_extend,
.get_nr_blocks = sm_disk_get_nr_blocks,
.get_nr_free = sm_disk_get_nr_free,
.get_count = sm_disk_get_count,
.count_is_more_than_one = sm_disk_count_is_more_than_one,
.set_count = sm_disk_set_count,
.inc_block = sm_disk_inc_block,
.dec_block = sm_disk_dec_block,
.new_block = sm_disk_new_block,
.commit = sm_disk_commit,
.root_size = sm_disk_root_size,
.copy_root = sm_disk_copy_root
};
static struct dm_space_map *dm_sm_disk_create_real(
struct dm_transaction_manager *tm,
dm_block_t nr_blocks)
{
int r;
struct sm_disk *smd;
smd = kmalloc(sizeof(*smd), GFP_KERNEL);
if (!smd)
return ERR_PTR(-ENOMEM);
smd->begin = 0;
smd->nr_allocated_this_transaction = 0;
memcpy(&smd->sm, &ops, sizeof(smd->sm));
r = sm_ll_new_disk(&smd->ll, tm);
if (r)
goto bad;
r = sm_ll_extend(&smd->ll, nr_blocks);
if (r)
goto bad;
r = sm_disk_commit(&smd->sm);
if (r)
goto bad;
return &smd->sm;
bad:
kfree(smd);
return ERR_PTR(r);
}
struct dm_space_map *dm_sm_disk_create(struct dm_transaction_manager *tm,
dm_block_t nr_blocks)
{
struct dm_space_map *sm = dm_sm_disk_create_real(tm, nr_blocks);
return dm_sm_checker_create_fresh(sm);
}
EXPORT_SYMBOL_GPL(dm_sm_disk_create);
static struct dm_space_map *dm_sm_disk_open_real(
struct dm_transaction_manager *tm,
void *root_le, size_t len)
{
int r;
struct sm_disk *smd;
smd = kmalloc(sizeof(*smd), GFP_KERNEL);
if (!smd)
return ERR_PTR(-ENOMEM);
smd->begin = 0;
smd->nr_allocated_this_transaction = 0;
memcpy(&smd->sm, &ops, sizeof(smd->sm));
r = sm_ll_open_disk(&smd->ll, tm, root_le, len);
if (r)
goto bad;
r = sm_disk_commit(&smd->sm);
if (r)
goto bad;
return &smd->sm;
bad:
kfree(smd);
return ERR_PTR(r);
}
struct dm_space_map *dm_sm_disk_open(struct dm_transaction_manager *tm,
void *root_le, size_t len)
{
return dm_sm_checker_create(
dm_sm_disk_open_real(tm, root_le, len));
}
EXPORT_SYMBOL_GPL(dm_sm_disk_open);
/*----------------------------------------------------------------*/

View File

@ -0,0 +1,25 @@
/*
* Copyright (C) 2011 Red Hat, Inc.
*
* This file is released under the GPL.
*/
#ifndef _LINUX_DM_SPACE_MAP_DISK_H
#define _LINUX_DM_SPACE_MAP_DISK_H
#include "dm-block-manager.h"
struct dm_space_map;
struct dm_transaction_manager;
/*
* Unfortunately we have to use two-phase construction due to the cycle
* between the tm and sm.
*/
struct dm_space_map *dm_sm_disk_create(struct dm_transaction_manager *tm,
dm_block_t nr_blocks);
struct dm_space_map *dm_sm_disk_open(struct dm_transaction_manager *tm,
void *root, size_t len);
#endif /* _LINUX_DM_SPACE_MAP_DISK_H */
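For illustration, a hedged sketch of how a client with an existing transaction manager might create a data-device space map and allocate its first block. The function name is hypothetical, and the error-pointer check is deliberately defensive because the checker wrapper's failure convention is not shown in this hunk:

static int example_create_data_sm(struct dm_transaction_manager *tm,
				  dm_block_t nr_data_blocks,
				  struct dm_space_map **result)
{
	struct dm_space_map *data_sm;
	dm_block_t b;
	int r;

	data_sm = dm_sm_disk_create(tm, nr_data_blocks);
	if (IS_ERR_OR_NULL(data_sm))
		return data_sm ? PTR_ERR(data_sm) : -ENOMEM;

	r = dm_sm_new_block(data_sm, &b);	/* allocates 'b' with a single reference */
	if (r < 0) {
		dm_sm_destroy(data_sm);
		return r;
	}

	*result = data_sm;
	return 0;
}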

View File

@ -0,0 +1,596 @@
/*
* Copyright (C) 2011 Red Hat, Inc.
*
* This file is released under the GPL.
*/
#include "dm-space-map.h"
#include "dm-space-map-common.h"
#include "dm-space-map-metadata.h"
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/device-mapper.h>
#define DM_MSG_PREFIX "space map metadata"
/*----------------------------------------------------------------*/
/*
* Space map interface.
*
* The low level disk format is written using the standard btree and
* transaction manager. This means that performing disk operations may
* cause us to recurse into the space map in order to allocate new blocks.
* For this reason, reference-count changes made while recursing are queued
* as uncommitted block operations and applied once the outermost operation
* completes (see add_bop() and commit_bop() below).
*/
/*
* FIXME: we should calculate this based on the size of the device.
* Only the metadata space map needs this functionality.
*/
#define MAX_RECURSIVE_ALLOCATIONS 1024
enum block_op_type {
BOP_INC,
BOP_DEC
};
struct block_op {
enum block_op_type type;
dm_block_t block;
};
struct sm_metadata {
struct dm_space_map sm;
struct ll_disk ll;
struct ll_disk old_ll;
dm_block_t begin;
unsigned recursion_count;
unsigned allocated_this_transaction;
unsigned nr_uncommitted;
struct block_op uncommitted[MAX_RECURSIVE_ALLOCATIONS];
};
static int add_bop(struct sm_metadata *smm, enum block_op_type type, dm_block_t b)
{
struct block_op *op;
if (smm->nr_uncommitted == MAX_RECURSIVE_ALLOCATIONS) {
DMERR("too many recursive allocations");
return -ENOMEM;
}
op = smm->uncommitted + smm->nr_uncommitted++;
op->type = type;
op->block = b;
return 0;
}
static int commit_bop(struct sm_metadata *smm, struct block_op *op)
{
int r = 0;
enum allocation_event ev;
switch (op->type) {
case BOP_INC:
r = sm_ll_inc(&smm->ll, op->block, &ev);
break;
case BOP_DEC:
r = sm_ll_dec(&smm->ll, op->block, &ev);
break;
}
return r;
}
static void in(struct sm_metadata *smm)
{
smm->recursion_count++;
}
static int out(struct sm_metadata *smm)
{
int r = 0;
/*
* If we're not recursing then very bad things are happening.
*/
if (!smm->recursion_count) {
DMERR("lost track of recursion depth");
return -ENOMEM;
}
if (smm->recursion_count == 1 && smm->nr_uncommitted) {
while (smm->nr_uncommitted && !r) {
smm->nr_uncommitted--;
r = commit_bop(smm, smm->uncommitted +
smm->nr_uncommitted);
if (r)
break;
}
}
smm->recursion_count--;
return r;
}
/*
* When using the out() function above, we often want to combine an error
* code for the operation run in the recursive context with that from
* out().
*/
static int combine_errors(int r1, int r2)
{
return r1 ? r1 : r2;
}
static int recursing(struct sm_metadata *smm)
{
return smm->recursion_count;
}
static void sm_metadata_destroy(struct dm_space_map *sm)
{
struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
kfree(smm);
}
static int sm_metadata_extend(struct dm_space_map *sm, dm_block_t extra_blocks)
{
DMERR("doesn't support extend");
return -EINVAL;
}
static int sm_metadata_get_nr_blocks(struct dm_space_map *sm, dm_block_t *count)
{
struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
*count = smm->ll.nr_blocks;
return 0;
}
static int sm_metadata_get_nr_free(struct dm_space_map *sm, dm_block_t *count)
{
struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
*count = smm->old_ll.nr_blocks - smm->old_ll.nr_allocated -
smm->allocated_this_transaction;
return 0;
}
static int sm_metadata_get_count(struct dm_space_map *sm, dm_block_t b,
uint32_t *result)
{
int r, i;
struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
unsigned adjustment = 0;
/*
* We may have some uncommitted adjustments to add. This list
* should always be really short.
*/
for (i = 0; i < smm->nr_uncommitted; i++) {
struct block_op *op = smm->uncommitted + i;
if (op->block != b)
continue;
switch (op->type) {
case BOP_INC:
adjustment++;
break;
case BOP_DEC:
adjustment--;
break;
}
}
r = sm_ll_lookup(&smm->ll, b, result);
if (r)
return r;
*result += adjustment;
return 0;
}
static int sm_metadata_count_is_more_than_one(struct dm_space_map *sm,
dm_block_t b, int *result)
{
int r, i, adjustment = 0;
struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
uint32_t rc;
/*
* We may have some uncommitted adjustments to add. This list
* should always be really short.
*/
for (i = 0; i < smm->nr_uncommitted; i++) {
struct block_op *op = smm->uncommitted + i;
if (op->block != b)
continue;
switch (op->type) {
case BOP_INC:
adjustment++;
break;
case BOP_DEC:
adjustment--;
break;
}
}
if (adjustment > 1) {
*result = 1;
return 0;
}
r = sm_ll_lookup_bitmap(&smm->ll, b, &rc);
if (r)
return r;
if (rc == 3)
/*
* We err on the side of caution, and always return true.
*/
*result = 1;
else
*result = rc + adjustment > 1;
return 0;
}
static int sm_metadata_set_count(struct dm_space_map *sm, dm_block_t b,
uint32_t count)
{
int r, r2;
enum allocation_event ev;
struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
if (smm->recursion_count) {
DMERR("cannot recurse set_count()");
return -EINVAL;
}
in(smm);
r = sm_ll_insert(&smm->ll, b, count, &ev);
r2 = out(smm);
return combine_errors(r, r2);
}
static int sm_metadata_inc_block(struct dm_space_map *sm, dm_block_t b)
{
int r, r2 = 0;
enum allocation_event ev;
struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
if (recursing(smm))
r = add_bop(smm, BOP_INC, b);
else {
in(smm);
r = sm_ll_inc(&smm->ll, b, &ev);
r2 = out(smm);
}
return combine_errors(r, r2);
}
static int sm_metadata_dec_block(struct dm_space_map *sm, dm_block_t b)
{
int r, r2 = 0;
enum allocation_event ev;
struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
if (recursing(smm))
r = add_bop(smm, BOP_DEC, b);
else {
in(smm);
r = sm_ll_dec(&smm->ll, b, &ev);
r2 = out(smm);
}
return combine_errors(r, r2);
}
static int sm_metadata_new_block_(struct dm_space_map *sm, dm_block_t *b)
{
int r, r2 = 0;
enum allocation_event ev;
struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
r = sm_ll_find_free_block(&smm->old_ll, smm->begin, smm->old_ll.nr_blocks, b);
if (r)
return r;
smm->begin = *b + 1;
if (recursing(smm))
r = add_bop(smm, BOP_INC, *b);
else {
in(smm);
r = sm_ll_inc(&smm->ll, *b, &ev);
r2 = out(smm);
}
if (!r)
smm->allocated_this_transaction++;
return combine_errors(r, r2);
}
static int sm_metadata_new_block(struct dm_space_map *sm, dm_block_t *b)
{
int r = sm_metadata_new_block_(sm, b);
if (r)
DMERR("out of metadata space");
return r;
}
static int sm_metadata_commit(struct dm_space_map *sm)
{
int r;
struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
r = sm_ll_commit(&smm->ll);
if (r)
return r;
memcpy(&smm->old_ll, &smm->ll, sizeof(smm->old_ll));
smm->begin = 0;
smm->allocated_this_transaction = 0;
return 0;
}
static int sm_metadata_root_size(struct dm_space_map *sm, size_t *result)
{
*result = sizeof(struct disk_sm_root);
return 0;
}
static int sm_metadata_copy_root(struct dm_space_map *sm, void *where_le, size_t max)
{
struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
struct disk_sm_root root_le;
root_le.nr_blocks = cpu_to_le64(smm->ll.nr_blocks);
root_le.nr_allocated = cpu_to_le64(smm->ll.nr_allocated);
root_le.bitmap_root = cpu_to_le64(smm->ll.bitmap_root);
root_le.ref_count_root = cpu_to_le64(smm->ll.ref_count_root);
if (max < sizeof(root_le))
return -ENOSPC;
memcpy(where_le, &root_le, sizeof(root_le));
return 0;
}
static struct dm_space_map ops = {
.destroy = sm_metadata_destroy,
.extend = sm_metadata_extend,
.get_nr_blocks = sm_metadata_get_nr_blocks,
.get_nr_free = sm_metadata_get_nr_free,
.get_count = sm_metadata_get_count,
.count_is_more_than_one = sm_metadata_count_is_more_than_one,
.set_count = sm_metadata_set_count,
.inc_block = sm_metadata_inc_block,
.dec_block = sm_metadata_dec_block,
.new_block = sm_metadata_new_block,
.commit = sm_metadata_commit,
.root_size = sm_metadata_root_size,
.copy_root = sm_metadata_copy_root
};
/*----------------------------------------------------------------*/
/*
* When a new space map is created that manages its own space, we use
* this tiny bootstrap allocator.
*/
static void sm_bootstrap_destroy(struct dm_space_map *sm)
{
}
static int sm_bootstrap_extend(struct dm_space_map *sm, dm_block_t extra_blocks)
{
DMERR("boostrap doesn't support extend");
return -EINVAL;
}
static int sm_bootstrap_get_nr_blocks(struct dm_space_map *sm, dm_block_t *count)
{
struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
*count = smm->ll.nr_blocks;
return 0;
}
static int sm_bootstrap_get_nr_free(struct dm_space_map *sm, dm_block_t *count)
{
struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
*count = smm->ll.nr_blocks - smm->begin;
return 0;
}
static int sm_bootstrap_get_count(struct dm_space_map *sm, dm_block_t b,
uint32_t *result)
{
struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
*result = (b < smm->begin) ? 1 : 0;
return 0;
}
static int sm_bootstrap_count_is_more_than_one(struct dm_space_map *sm,
dm_block_t b, int *result)
{
*result = 0;
return 0;
}
static int sm_bootstrap_set_count(struct dm_space_map *sm, dm_block_t b,
uint32_t count)
{
DMERR("boostrap doesn't support set_count");
return -EINVAL;
}
static int sm_bootstrap_new_block(struct dm_space_map *sm, dm_block_t *b)
{
struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
/*
* We know the entire device is unused.
*/
if (smm->begin == smm->ll.nr_blocks)
return -ENOSPC;
*b = smm->begin++;
return 0;
}
static int sm_bootstrap_inc_block(struct dm_space_map *sm, dm_block_t b)
{
struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
return add_bop(smm, BOP_INC, b);
}
static int sm_bootstrap_dec_block(struct dm_space_map *sm, dm_block_t b)
{
struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
return add_bop(smm, BOP_DEC, b);
}
static int sm_bootstrap_commit(struct dm_space_map *sm)
{
return 0;
}
static int sm_bootstrap_root_size(struct dm_space_map *sm, size_t *result)
{
DMERR("boostrap doesn't support root_size");
return -EINVAL;
}
static int sm_bootstrap_copy_root(struct dm_space_map *sm, void *where,
size_t max)
{
DMERR("boostrap doesn't support copy_root");
return -EINVAL;
}
static struct dm_space_map bootstrap_ops = {
.destroy = sm_bootstrap_destroy,
.extend = sm_bootstrap_extend,
.get_nr_blocks = sm_bootstrap_get_nr_blocks,
.get_nr_free = sm_bootstrap_get_nr_free,
.get_count = sm_bootstrap_get_count,
.count_is_more_than_one = sm_bootstrap_count_is_more_than_one,
.set_count = sm_bootstrap_set_count,
.inc_block = sm_bootstrap_inc_block,
.dec_block = sm_bootstrap_dec_block,
.new_block = sm_bootstrap_new_block,
.commit = sm_bootstrap_commit,
.root_size = sm_bootstrap_root_size,
.copy_root = sm_bootstrap_copy_root
};
/*----------------------------------------------------------------*/
struct dm_space_map *dm_sm_metadata_init(void)
{
struct sm_metadata *smm;
smm = kmalloc(sizeof(*smm), GFP_KERNEL);
if (!smm)
return ERR_PTR(-ENOMEM);
memcpy(&smm->sm, &ops, sizeof(smm->sm));
return &smm->sm;
}
int dm_sm_metadata_create(struct dm_space_map *sm,
struct dm_transaction_manager *tm,
dm_block_t nr_blocks,
dm_block_t superblock)
{
int r;
dm_block_t i;
enum allocation_event ev;
struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
smm->begin = superblock + 1;
smm->recursion_count = 0;
smm->allocated_this_transaction = 0;
smm->nr_uncommitted = 0;
memcpy(&smm->sm, &bootstrap_ops, sizeof(smm->sm));
r = sm_ll_new_metadata(&smm->ll, tm);
if (r)
return r;
r = sm_ll_extend(&smm->ll, nr_blocks);
if (r)
return r;
memcpy(&smm->sm, &ops, sizeof(smm->sm));
/*
* Now we need to update the newly created data structures with the
* allocated blocks that they were built from.
*/
for (i = superblock; !r && i < smm->begin; i++)
r = sm_ll_inc(&smm->ll, i, &ev);
if (r)
return r;
return sm_metadata_commit(sm);
}
int dm_sm_metadata_open(struct dm_space_map *sm,
struct dm_transaction_manager *tm,
void *root_le, size_t len)
{
int r;
struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
r = sm_ll_open_metadata(&smm->ll, tm, root_le, len);
if (r)
return r;
smm->begin = 0;
smm->recursion_count = 0;
smm->allocated_this_transaction = 0;
smm->nr_uncommitted = 0;
memcpy(&smm->old_ll, &smm->ll, sizeof(smm->old_ll));
return 0;
}

View File

@ -0,0 +1,33 @@
/*
* Copyright (C) 2011 Red Hat, Inc.
*
* This file is released under the GPL.
*/
#ifndef DM_SPACE_MAP_METADATA_H
#define DM_SPACE_MAP_METADATA_H
#include "dm-transaction-manager.h"
/*
* Unfortunately we have to use two-phase construction due to the cycle
* between the tm and sm.
*/
struct dm_space_map *dm_sm_metadata_init(void);
/*
* Create a fresh space map.
*/
int dm_sm_metadata_create(struct dm_space_map *sm,
struct dm_transaction_manager *tm,
dm_block_t nr_blocks,
dm_block_t superblock);
/*
* Open from a previously-recorded root.
*/
int dm_sm_metadata_open(struct dm_space_map *sm,
struct dm_transaction_manager *tm,
void *root_le, size_t len);
#endif /* DM_SPACE_MAP_METADATA_H */

View File

@ -0,0 +1,134 @@
/*
* Copyright (C) 2011 Red Hat, Inc.
*
* This file is released under the GPL.
*/
#ifndef _LINUX_DM_SPACE_MAP_H
#define _LINUX_DM_SPACE_MAP_H
#include "dm-block-manager.h"
/*
* struct dm_space_map keeps a record of how many times each block in a device
* is referenced. It needs to be fixed on disk as part of the transaction.
*/
struct dm_space_map {
void (*destroy)(struct dm_space_map *sm);
/*
* You must commit before allocating the newly added space.
*/
int (*extend)(struct dm_space_map *sm, dm_block_t extra_blocks);
/*
* Extensions do not appear in this count until after commit has
* been called.
*/
int (*get_nr_blocks)(struct dm_space_map *sm, dm_block_t *count);
/*
* Space maps must never allocate a block from the previous
* transaction, in case we need to rollback. This complicates the
* semantics of get_nr_free(), it should return the number of blocks
* that are available for allocation _now_. For instance you may
* have blocks with a zero reference count that will not be
* available for allocation until after the next commit.
*/
int (*get_nr_free)(struct dm_space_map *sm, dm_block_t *count);
int (*get_count)(struct dm_space_map *sm, dm_block_t b, uint32_t *result);
int (*count_is_more_than_one)(struct dm_space_map *sm, dm_block_t b,
int *result);
int (*set_count)(struct dm_space_map *sm, dm_block_t b, uint32_t count);
int (*commit)(struct dm_space_map *sm);
int (*inc_block)(struct dm_space_map *sm, dm_block_t b);
int (*dec_block)(struct dm_space_map *sm, dm_block_t b);
/*
* new_block will increment the returned block.
*/
int (*new_block)(struct dm_space_map *sm, dm_block_t *b);
/*
* The root contains all the information needed to fix the space map.
* Generally this info is small, so squirrel it away in a disk block
* along with other info.
*/
int (*root_size)(struct dm_space_map *sm, size_t *result);
int (*copy_root)(struct dm_space_map *sm, void *copy_to_here_le, size_t len);
};
/*----------------------------------------------------------------*/
static inline void dm_sm_destroy(struct dm_space_map *sm)
{
sm->destroy(sm);
}
static inline int dm_sm_extend(struct dm_space_map *sm, dm_block_t extra_blocks)
{
return sm->extend(sm, extra_blocks);
}
static inline int dm_sm_get_nr_blocks(struct dm_space_map *sm, dm_block_t *count)
{
return sm->get_nr_blocks(sm, count);
}
static inline int dm_sm_get_nr_free(struct dm_space_map *sm, dm_block_t *count)
{
return sm->get_nr_free(sm, count);
}
static inline int dm_sm_get_count(struct dm_space_map *sm, dm_block_t b,
uint32_t *result)
{
return sm->get_count(sm, b, result);
}
static inline int dm_sm_count_is_more_than_one(struct dm_space_map *sm,
dm_block_t b, int *result)
{
return sm->count_is_more_than_one(sm, b, result);
}
static inline int dm_sm_set_count(struct dm_space_map *sm, dm_block_t b,
uint32_t count)
{
return sm->set_count(sm, b, count);
}
static inline int dm_sm_commit(struct dm_space_map *sm)
{
return sm->commit(sm);
}
static inline int dm_sm_inc_block(struct dm_space_map *sm, dm_block_t b)
{
return sm->inc_block(sm, b);
}
static inline int dm_sm_dec_block(struct dm_space_map *sm, dm_block_t b)
{
return sm->dec_block(sm, b);
}
static inline int dm_sm_new_block(struct dm_space_map *sm, dm_block_t *b)
{
return sm->new_block(sm, b);
}
static inline int dm_sm_root_size(struct dm_space_map *sm, size_t *result)
{
return sm->root_size(sm, result);
}
static inline int dm_sm_copy_root(struct dm_space_map *sm, void *copy_to_here_le, size_t len)
{
return sm->copy_root(sm, copy_to_here_le, len);
}
#endif /* _LINUX_DM_SPACE_MAP_H */

View File

@ -0,0 +1,400 @@
/*
* Copyright (C) 2011 Red Hat, Inc.
*
* This file is released under the GPL.
*/
#include "dm-transaction-manager.h"
#include "dm-space-map.h"
#include "dm-space-map-checker.h"
#include "dm-space-map-disk.h"
#include "dm-space-map-metadata.h"
#include "dm-persistent-data-internal.h"
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/device-mapper.h>
#define DM_MSG_PREFIX "transaction manager"
/*----------------------------------------------------------------*/
struct shadow_info {
struct hlist_node hlist;
dm_block_t where;
};
/*
* It would be nice if we scaled with the size of the transaction.
*/
#define HASH_SIZE 256
#define HASH_MASK (HASH_SIZE - 1)
struct dm_transaction_manager {
int is_clone;
struct dm_transaction_manager *real;
struct dm_block_manager *bm;
struct dm_space_map *sm;
spinlock_t lock;
struct hlist_head buckets[HASH_SIZE];
};
/*----------------------------------------------------------------*/
static int is_shadow(struct dm_transaction_manager *tm, dm_block_t b)
{
int r = 0;
unsigned bucket = dm_hash_block(b, HASH_MASK);
struct shadow_info *si;
struct hlist_node *n;
spin_lock(&tm->lock);
hlist_for_each_entry(si, n, tm->buckets + bucket, hlist)
if (si->where == b) {
r = 1;
break;
}
spin_unlock(&tm->lock);
return r;
}
/*
* This can silently fail if there's no memory. We're ok with this since
* creating redundant shadows causes no harm.
*/
static void insert_shadow(struct dm_transaction_manager *tm, dm_block_t b)
{
unsigned bucket;
struct shadow_info *si;
si = kmalloc(sizeof(*si), GFP_NOIO);
if (si) {
si->where = b;
bucket = dm_hash_block(b, HASH_MASK);
spin_lock(&tm->lock);
hlist_add_head(&si->hlist, tm->buckets + bucket);
spin_unlock(&tm->lock);
}
}
static void wipe_shadow_table(struct dm_transaction_manager *tm)
{
struct shadow_info *si;
struct hlist_node *n, *tmp;
struct hlist_head *bucket;
int i;
spin_lock(&tm->lock);
for (i = 0; i < HASH_SIZE; i++) {
bucket = tm->buckets + i;
hlist_for_each_entry_safe(si, n, tmp, bucket, hlist)
kfree(si);
INIT_HLIST_HEAD(bucket);
}
spin_unlock(&tm->lock);
}
/*----------------------------------------------------------------*/
static struct dm_transaction_manager *dm_tm_create(struct dm_block_manager *bm,
struct dm_space_map *sm)
{
int i;
struct dm_transaction_manager *tm;
tm = kmalloc(sizeof(*tm), GFP_KERNEL);
if (!tm)
return ERR_PTR(-ENOMEM);
tm->is_clone = 0;
tm->real = NULL;
tm->bm = bm;
tm->sm = sm;
spin_lock_init(&tm->lock);
for (i = 0; i < HASH_SIZE; i++)
INIT_HLIST_HEAD(tm->buckets + i);
return tm;
}
struct dm_transaction_manager *dm_tm_create_non_blocking_clone(struct dm_transaction_manager *real)
{
struct dm_transaction_manager *tm;
tm = kmalloc(sizeof(*tm), GFP_KERNEL);
if (tm) {
tm->is_clone = 1;
tm->real = real;
}
return tm;
}
EXPORT_SYMBOL_GPL(dm_tm_create_non_blocking_clone);
void dm_tm_destroy(struct dm_transaction_manager *tm)
{
kfree(tm);
}
EXPORT_SYMBOL_GPL(dm_tm_destroy);
int dm_tm_pre_commit(struct dm_transaction_manager *tm)
{
int r;
if (tm->is_clone)
return -EWOULDBLOCK;
r = dm_sm_commit(tm->sm);
if (r < 0)
return r;
return 0;
}
EXPORT_SYMBOL_GPL(dm_tm_pre_commit);
int dm_tm_commit(struct dm_transaction_manager *tm, struct dm_block *root)
{
if (tm->is_clone)
return -EWOULDBLOCK;
wipe_shadow_table(tm);
return dm_bm_flush_and_unlock(tm->bm, root);
}
EXPORT_SYMBOL_GPL(dm_tm_commit);
int dm_tm_new_block(struct dm_transaction_manager *tm,
struct dm_block_validator *v,
struct dm_block **result)
{
int r;
dm_block_t new_block;
if (tm->is_clone)
return -EWOULDBLOCK;
r = dm_sm_new_block(tm->sm, &new_block);
if (r < 0)
return r;
r = dm_bm_write_lock_zero(tm->bm, new_block, v, result);
if (r < 0) {
dm_sm_dec_block(tm->sm, new_block);
return r;
}
/*
* New blocks count as shadows in that they don't need to be
* shadowed again.
*/
insert_shadow(tm, new_block);
return 0;
}
static int __shadow_block(struct dm_transaction_manager *tm, dm_block_t orig,
struct dm_block_validator *v,
struct dm_block **result)
{
int r;
dm_block_t new;
struct dm_block *orig_block;
r = dm_sm_new_block(tm->sm, &new);
if (r < 0)
return r;
r = dm_sm_dec_block(tm->sm, orig);
if (r < 0)
return r;
r = dm_bm_read_lock(tm->bm, orig, v, &orig_block);
if (r < 0)
return r;
r = dm_bm_unlock_move(orig_block, new);
if (r < 0) {
dm_bm_unlock(orig_block);
return r;
}
return dm_bm_write_lock(tm->bm, new, v, result);
}
int dm_tm_shadow_block(struct dm_transaction_manager *tm, dm_block_t orig,
struct dm_block_validator *v, struct dm_block **result,
int *inc_children)
{
int r;
if (tm->is_clone)
return -EWOULDBLOCK;
r = dm_sm_count_is_more_than_one(tm->sm, orig, inc_children);
if (r < 0)
return r;
if (is_shadow(tm, orig) && !*inc_children)
return dm_bm_write_lock(tm->bm, orig, v, result);
r = __shadow_block(tm, orig, v, result);
if (r < 0)
return r;
insert_shadow(tm, dm_block_location(*result));
return r;
}
int dm_tm_read_lock(struct dm_transaction_manager *tm, dm_block_t b,
struct dm_block_validator *v,
struct dm_block **blk)
{
if (tm->is_clone)
return dm_bm_read_try_lock(tm->real->bm, b, v, blk);
return dm_bm_read_lock(tm->bm, b, v, blk);
}
int dm_tm_unlock(struct dm_transaction_manager *tm, struct dm_block *b)
{
return dm_bm_unlock(b);
}
EXPORT_SYMBOL_GPL(dm_tm_unlock);
void dm_tm_inc(struct dm_transaction_manager *tm, dm_block_t b)
{
/*
* The non-blocking clone doesn't support this.
*/
BUG_ON(tm->is_clone);
dm_sm_inc_block(tm->sm, b);
}
EXPORT_SYMBOL_GPL(dm_tm_inc);
void dm_tm_dec(struct dm_transaction_manager *tm, dm_block_t b)
{
/*
* The non-blocking clone doesn't support this.
*/
BUG_ON(tm->is_clone);
dm_sm_dec_block(tm->sm, b);
}
EXPORT_SYMBOL_GPL(dm_tm_dec);
int dm_tm_ref(struct dm_transaction_manager *tm, dm_block_t b,
uint32_t *result)
{
if (tm->is_clone)
return -EWOULDBLOCK;
return dm_sm_get_count(tm->sm, b, result);
}
struct dm_block_manager *dm_tm_get_bm(struct dm_transaction_manager *tm)
{
return tm->bm;
}
/*----------------------------------------------------------------*/
static int dm_tm_create_internal(struct dm_block_manager *bm,
dm_block_t sb_location,
struct dm_block_validator *sb_validator,
size_t root_offset, size_t root_max_len,
struct dm_transaction_manager **tm,
struct dm_space_map **sm,
struct dm_block **sblock,
int create)
{
int r;
struct dm_space_map *inner;
inner = dm_sm_metadata_init();
if (IS_ERR(inner))
return PTR_ERR(inner);
*tm = dm_tm_create(bm, inner);
if (IS_ERR(*tm)) {
dm_sm_destroy(inner);
return PTR_ERR(*tm);
}
if (create) {
r = dm_bm_write_lock_zero(dm_tm_get_bm(*tm), sb_location,
sb_validator, sblock);
if (r < 0) {
DMERR("couldn't lock superblock");
goto bad1;
}
r = dm_sm_metadata_create(inner, *tm, dm_bm_nr_blocks(bm),
sb_location);
if (r) {
DMERR("couldn't create metadata space map");
goto bad2;
}
*sm = dm_sm_checker_create(inner);
if (!*sm)
goto bad2;
} else {
r = dm_bm_write_lock(dm_tm_get_bm(*tm), sb_location,
sb_validator, sblock);
if (r < 0) {
DMERR("couldn't lock superblock");
goto bad1;
}
r = dm_sm_metadata_open(inner, *tm,
dm_block_data(*sblock) + root_offset,
root_max_len);
if (r) {
DMERR("couldn't open metadata space map");
goto bad2;
}
*sm = dm_sm_checker_create(inner);
if (!*sm)
goto bad2;
}
return 0;
bad2:
dm_tm_unlock(*tm, *sblock);
bad1:
dm_tm_destroy(*tm);
dm_sm_destroy(inner);
return r;
}
int dm_tm_create_with_sm(struct dm_block_manager *bm, dm_block_t sb_location,
struct dm_block_validator *sb_validator,
struct dm_transaction_manager **tm,
struct dm_space_map **sm, struct dm_block **sblock)
{
return dm_tm_create_internal(bm, sb_location, sb_validator,
0, 0, tm, sm, sblock, 1);
}
EXPORT_SYMBOL_GPL(dm_tm_create_with_sm);
int dm_tm_open_with_sm(struct dm_block_manager *bm, dm_block_t sb_location,
struct dm_block_validator *sb_validator,
size_t root_offset, size_t root_max_len,
struct dm_transaction_manager **tm,
struct dm_space_map **sm, struct dm_block **sblock)
{
return dm_tm_create_internal(bm, sb_location, sb_validator, root_offset,
root_max_len, tm, sm, sblock, 0);
}
EXPORT_SYMBOL_GPL(dm_tm_open_with_sm);
/*----------------------------------------------------------------*/

View File

@ -0,0 +1,130 @@
/*
* Copyright (C) 2011 Red Hat, Inc.
*
* This file is released under the GPL.
*/
#ifndef _LINUX_DM_TRANSACTION_MANAGER_H
#define _LINUX_DM_TRANSACTION_MANAGER_H
#include "dm-block-manager.h"
struct dm_transaction_manager;
struct dm_space_map;
/*----------------------------------------------------------------*/
/*
* This manages the scope of a transaction. It also enforces immutability
* of the on-disk data structures by limiting access to writeable blocks.
*
* Clients should not fiddle with the block manager directly.
*/
void dm_tm_destroy(struct dm_transaction_manager *tm);
/*
* The non-blocking version of a transaction manager is intended for use in
* fast path code that needs to do lookups e.g. a dm mapping function.
* You create the non-blocking variant from a normal tm. The interface is
* the same, except that most functions will just return -EWOULDBLOCK.
* Methods that return void yet may block (viz. dm_tm_inc, dm_tm_dec) should
* not be called on a clone. Call dm_tm_destroy() as you would with a normal
* tm when you've finished with it. You may not destroy the original before
* its clones.
*/
struct dm_transaction_manager *dm_tm_create_non_blocking_clone(struct dm_transaction_manager *real);
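For illustration, a hedged sketch of the intended fast-path pattern; 'block' and 'validator' are the caller's, and in real targets the clone is usually created once per transaction rather than per lookup as it is here:

static int fast_path_lookup(struct dm_transaction_manager *tm, dm_block_t block,
			    struct dm_block_validator *validator)
{
	struct dm_transaction_manager *nb_tm;
	struct dm_block *blk;
	int r;

	nb_tm = dm_tm_create_non_blocking_clone(tm);
	if (!nb_tm)
		return -ENOMEM;

	r = dm_tm_read_lock(nb_tm, block, validator, &blk);
	if (r == -EWOULDBLOCK) {
		/* can't service this without sleeping; punt to a worker using 'tm' */
	} else if (!r) {
		/* ... inspect dm_block_data(blk) ... */
		r = dm_tm_unlock(nb_tm, blk);
	}

	dm_tm_destroy(nb_tm);
	return r;
}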
/*
* We use a 2-phase commit here.
*
* i) In the first phase the block manager is told to start flushing, and
* the changes to the space map are written to disk. You should interrogate
* your particular space map to get detail of its root node etc. to be
* included in your superblock.
*
* ii) @root will be committed last. You shouldn't use more than the
* first 512 bytes of @root if you wish the transaction to survive a power
* failure. You *must* have a write lock held on @root for both stage (i)
* and (ii). The commit will drop the write lock.
*/
int dm_tm_pre_commit(struct dm_transaction_manager *tm);
int dm_tm_commit(struct dm_transaction_manager *tm, struct dm_block *root);
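To make the two phases concrete, a hedged sketch of a typical commit sequence. The function name and 'root_offset' (where the space-map root lives inside the superblock) are assumptions of the example; as stated above, the caller must already hold the write lock on the superblock:

static int commit_transaction(struct dm_transaction_manager *tm,
			      struct dm_space_map *sm,
			      struct dm_block *sblock,	/* write-locked superblock */
			      size_t root_offset)
{
	int r;
	size_t len;

	r = dm_tm_pre_commit(tm);	/* phase (i): commit the space map */
	if (r < 0)
		return r;

	r = dm_sm_root_size(sm, &len);
	if (r < 0)
		return r;

	/* squirrel the space-map root away inside the superblock */
	r = dm_sm_copy_root(sm, dm_block_data(sblock) + root_offset, len);
	if (r < 0)
		return r;

	/* phase (ii): @root is written last and its write lock dropped */
	return dm_tm_commit(tm, sblock);
}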
/*
* These methods are the only way to get hold of a writeable block.
*/
/*
* dm_tm_new_block() is pretty self-explanatory. Make sure you do actually
* write to the whole of @data before you unlock, otherwise you could get
* a data leak. (The other option is for dm_tm_new_block() to zero new blocks
* before handing them out, which would be redundant in most, if not all,
* cases.)
* As implemented it does zero the new block (via dm_bm_write_lock_zero())
* and returns with the write lock held.
*/
int dm_tm_new_block(struct dm_transaction_manager *tm,
struct dm_block_validator *v,
struct dm_block **result);
/*
* dm_tm_shadow_block() allocates a new block and copies the data from @orig
* to it. It then decrements the reference count on the original block. Use
* this to update the contents of a block in a data structure; don't
* confuse this with a clone - you shouldn't access the orig block after
* this operation. Because the tm knows the scope of the transaction it
* can optimise requests for a shadow of a shadow to a no-op. Don't forget
* to unlock when you've finished with the shadow.
*
* The @inc_children flag is used to tell the caller whether it needs to
* adjust reference counts for children. (Data in the block may refer to
* other blocks.)
*
* Shadowing implicitly drops a reference on @orig so you must not have
* it locked when you call this.
*/
int dm_tm_shadow_block(struct dm_transaction_manager *tm, dm_block_t orig,
struct dm_block_validator *v,
struct dm_block **result, int *inc_children);
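A hedged sketch of the copy-on-write pattern this enables: updating a block that holds nothing but a little-endian 64-bit counter. 'bref' (wherever the caller records the block's location) and 'v' are the caller's, and the function name is hypothetical; since the block references no children, @inc_children is deliberately ignored:

static int bump_counter(struct dm_transaction_manager *tm,
			struct dm_block_validator *v, dm_block_t *bref)
{
	struct dm_block *blk;
	__le64 *value;
	int r, inc_children;

	r = dm_tm_shadow_block(tm, *bref, v, &blk, &inc_children);
	if (r < 0)
		return r;

	*bref = dm_block_location(blk);	/* the shadow may live at a new location */

	value = dm_block_data(blk);
	*value = cpu_to_le64(le64_to_cpu(*value) + 1);

	return dm_tm_unlock(tm, blk);
}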
/*
* Read access. You can lock any block you want. If there's a write lock
* on it outstanding then it'll block.
*/
int dm_tm_read_lock(struct dm_transaction_manager *tm, dm_block_t b,
struct dm_block_validator *v,
struct dm_block **result);
int dm_tm_unlock(struct dm_transaction_manager *tm, struct dm_block *b);
/*
* Functions for altering the reference count of a block directly.
*/
void dm_tm_inc(struct dm_transaction_manager *tm, dm_block_t b);
void dm_tm_dec(struct dm_transaction_manager *tm, dm_block_t b);
int dm_tm_ref(struct dm_transaction_manager *tm, dm_block_t b,
uint32_t *result);
struct dm_block_manager *dm_tm_get_bm(struct dm_transaction_manager *tm);
/*
* A little utility that ties the knot by producing a transaction manager
* that has a space map managed by the transaction manager...
*
* Returns a tm that has an open transaction to write the new disk sm.
* Caller should store the new sm root and commit.
*/
int dm_tm_create_with_sm(struct dm_block_manager *bm, dm_block_t sb_location,
struct dm_block_validator *sb_validator,
struct dm_transaction_manager **tm,
struct dm_space_map **sm, struct dm_block **sblock);
int dm_tm_open_with_sm(struct dm_block_manager *bm, dm_block_t sb_location,
struct dm_block_validator *sb_validator,
size_t root_offset, size_t root_max_len,
struct dm_transaction_manager **tm,
struct dm_space_map **sm, struct dm_block **sblock);
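A hedged sketch of bringing up a fresh metadata area with the first helper; 'sb_validator' is the caller's, and SUPERBLOCK_LOCATION and the function name are hypothetical:

static int example_format(struct dm_block_manager *bm,
			  struct dm_block_validator *sb_validator)
{
	struct dm_transaction_manager *tm;
	struct dm_space_map *sm;
	struct dm_block *sblock;
	int r;

	r = dm_tm_create_with_sm(bm, SUPERBLOCK_LOCATION, sb_validator,
				 &tm, &sm, &sblock);
	if (r < 0)
		return r;

	/*
	 * 'sblock' comes back write-locked with an open transaction: fill in
	 * the superblock via dm_block_data(sblock), call dm_tm_pre_commit(tm),
	 * copy the space-map root in with dm_sm_copy_root(), then
	 * dm_tm_commit(tm, sblock) - as in the commit sketch above.
	 */
	return 0;
}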
#endif /* _LINUX_DM_TRANSACTION_MANAGER_H */

View File

@ -10,6 +10,7 @@
#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/ratelimit.h>
struct dm_dev;
struct dm_target;
@ -127,10 +128,6 @@ void dm_put_device(struct dm_target *ti, struct dm_dev *d);
* Information about a target type
*/
/*
* Target features
*/
struct target_type {
uint64_t features;
const char *name;
@ -159,6 +156,30 @@ struct target_type {
struct list_head list;
};
/*
* Target features
*/
/*
* Any table that contains an instance of this target must have only one.
*/
#define DM_TARGET_SINGLETON 0x00000001
#define dm_target_needs_singleton(type) ((type)->features & DM_TARGET_SINGLETON)
/*
* Indicates that a target does not support read-only devices.
*/
#define DM_TARGET_ALWAYS_WRITEABLE 0x00000002
#define dm_target_always_writeable(type) \
((type)->features & DM_TARGET_ALWAYS_WRITEABLE)
/*
* Any device that contains a table with an instance of this target may never
* have tables containing any different target type.
*/
#define DM_TARGET_IMMUTABLE 0x00000004
#define dm_target_is_immutable(type) ((type)->features & DM_TARGET_IMMUTABLE)
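For illustration, a hedged sketch of a target type opting into these flags; a real target also fills in .version, .module and the ctr/dtr/map methods omitted here:

static struct target_type example_target = {
	.name     = "example",
	.features = DM_TARGET_SINGLETON | DM_TARGET_ALWAYS_WRITEABLE |
		    DM_TARGET_IMMUTABLE,
	/* .version, .module, .ctr, .dtr, .map, ... */
};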
struct dm_target {
struct dm_table *table;
struct target_type *type;
@ -375,6 +396,14 @@ void *dm_vcalloc(unsigned long nmemb, unsigned long elem_size);
*---------------------------------------------------------------*/
#define DM_NAME "device-mapper"
#ifdef CONFIG_PRINTK
extern struct ratelimit_state dm_ratelimit_state;
#define dm_ratelimit() __ratelimit(&dm_ratelimit_state)
#else
#define dm_ratelimit() 0
#endif
#define DMCRIT(f, arg...) \
printk(KERN_CRIT DM_NAME ": " DM_MSG_PREFIX ": " f "\n", ## arg)
@ -382,7 +411,7 @@ void *dm_vcalloc(unsigned long nmemb, unsigned long elem_size);
printk(KERN_ERR DM_NAME ": " DM_MSG_PREFIX ": " f "\n", ## arg)
#define DMERR_LIMIT(f, arg...) \
do { \
if (printk_ratelimit()) \
if (dm_ratelimit()) \
printk(KERN_ERR DM_NAME ": " DM_MSG_PREFIX ": " \
f "\n", ## arg); \
} while (0)
@ -391,7 +420,7 @@ void *dm_vcalloc(unsigned long nmemb, unsigned long elem_size);
printk(KERN_WARNING DM_NAME ": " DM_MSG_PREFIX ": " f "\n", ## arg)
#define DMWARN_LIMIT(f, arg...) \
do { \
if (printk_ratelimit()) \
if (dm_ratelimit()) \
printk(KERN_WARNING DM_NAME ": " DM_MSG_PREFIX ": " \
f "\n", ## arg); \
} while (0)
@ -400,7 +429,7 @@ void *dm_vcalloc(unsigned long nmemb, unsigned long elem_size);
printk(KERN_INFO DM_NAME ": " DM_MSG_PREFIX ": " f "\n", ## arg)
#define DMINFO_LIMIT(f, arg...) \
do { \
if (printk_ratelimit()) \
if (dm_ratelimit()) \
printk(KERN_INFO DM_NAME ": " DM_MSG_PREFIX ": " f \
"\n", ## arg); \
} while (0)
@ -410,7 +439,7 @@ void *dm_vcalloc(unsigned long nmemb, unsigned long elem_size);
printk(KERN_DEBUG DM_NAME ": " DM_MSG_PREFIX " DEBUG: " f "\n", ## arg)
# define DMDEBUG_LIMIT(f, arg...) \
do { \
if (printk_ratelimit()) \
if (dm_ratelimit()) \
printk(KERN_DEBUG DM_NAME ": " DM_MSG_PREFIX ": " f \
"\n", ## arg); \
} while (0)

View File

@ -267,9 +267,9 @@ enum {
#define DM_DEV_SET_GEOMETRY _IOWR(DM_IOCTL, DM_DEV_SET_GEOMETRY_CMD, struct dm_ioctl)
#define DM_VERSION_MAJOR 4
#define DM_VERSION_MINOR 21
#define DM_VERSION_MINOR 22
#define DM_VERSION_PATCHLEVEL 0
#define DM_VERSION_EXTRA "-ioctl (2011-07-06)"
#define DM_VERSION_EXTRA "-ioctl (2011-10-19)"
/* Status bits */
#define DM_READONLY_FLAG (1 << 0) /* In/Out */

View File

@ -57,5 +57,9 @@ void *dm_kcopyd_prepare_callback(struct dm_kcopyd_client *kc,
dm_kcopyd_notify_fn fn, void *context);
void dm_kcopyd_do_callback(void *job, int read_err, unsigned long write_err);
int dm_kcopyd_zero(struct dm_kcopyd_client *kc,
unsigned num_dests, struct dm_io_region *dests,
unsigned flags, dm_kcopyd_notify_fn fn, void *context);
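A hedged usage sketch: zero a run of sectors on one destination device and get a callback when it completes. The wrapper name is hypothetical, 'dev' is assumed to be a dm_dev obtained via dm_get_device() elsewhere, and the dm_io_region fields follow the usual dm-io convention:

static int zero_extent(struct dm_kcopyd_client *kc, struct dm_dev *dev,
		       sector_t begin, sector_t len,
		       dm_kcopyd_notify_fn fn, void *context)
{
	struct dm_io_region dest = {
		.bdev   = dev->bdev,
		.sector = begin,
		.count  = len,
	};

	/* one destination, default flags; 'fn' runs when the zeroing finishes */
	return dm_kcopyd_zero(kc, 1, &dest, 0, fn, context);
}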
#endif /* __KERNEL__ */
#endif /* _LINUX_DM_KCOPYD_H */

View File

@ -52,15 +52,20 @@
* Payload-to-userspace:
* A single string containing all the argv arguments separated by ' 's
* Payload-to-kernel:
* None. ('data_size' in the dm_ulog_request struct should be 0.)
* A NUL-terminated string that is the name of the device that is used
* as the backing store for the log data. 'dm_get_device' will be called
* on this device. ('dm_put_device' will be called on this device
* automatically after calling DM_ULOG_DTR.) If there is no device needed
* for log data, 'data_size' in the dm_ulog_request struct should be 0.
*
* The UUID contained in the dm_ulog_request structure is the reference that
* will be used by all request types to a specific log. The constructor must
* record this assotiation with instance created.
* record this association with the instance created.
*
* When the request has been processed, user-space must return the
* dm_ulog_request to the kernel - setting the 'error' field and
* 'data_size' appropriately.
* dm_ulog_request to the kernel - setting the 'error' field, filling the
* data field with the log device if necessary, and setting 'data_size'
* appropriately.
*/
#define DM_ULOG_CTR 1
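For illustration, a hedged sketch of how a userspace daemon might fill in its DM_ULOG_CTR reply under the new scheme; only the 'error', 'data' and 'data_size' fields named above are touched, and the device name and function name are hypothetical:

#include <string.h>
#include <linux/dm-log-userspace.h>

static void fill_ctr_reply(struct dm_ulog_request *rq, const char *log_dev)
{
	rq->error = 0;

	if (log_dev) {
		/* e.g. "253:4" or "/dev/vg/cluster_log"; must be NUL-terminated */
		strcpy(rq->data, log_dev);
		rq->data_size = strlen(log_dev) + 1;	/* include the NUL */
	} else {
		rq->data_size = 0;	/* no backing device needed */
	}
}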
@ -377,8 +382,11 @@
* dm_ulog_request or a change in the way requests are
* issued/handled. Changes are outlined here:
* version 1: Initial implementation
* version 2: DM_ULOG_CTR allowed to return a string containing a
* device name that is to be registered with DM via
* 'dm_get_device'.
*/
#define DM_ULOG_REQUEST_VERSION 1
#define DM_ULOG_REQUEST_VERSION 2
struct dm_ulog_request {
/*